A number of years ago, I was involved in an eye-opening debate about data tools (often provided to customers in a Software as a Service, or SaaS, subscription model) and the security that multiple subscriptions offer the supply chain for electronic components. At face value, this is sound logic: the more data sources you have, the better your chance of capturing critical information. But as the debate progressed, the main limiting factor became apparent: people and their capacity to interpret the sheer volume of information. The law of diminishing returns applies very quickly to any method of processing multiple sources, and when decisions rest on people, it's all too easy to overwhelm them with volumes of data.
In a previous blog, I mentioned a statistic from IBM about how much data we actually produce daily, a number that's difficult to wrap our heads around. But IBM also demonstrates that the volume of data is only part of the problem; we also need to consider veracity (correctness), variety (how many types of data) and velocity (how quickly the data changes). Unless companies can contextualise this massive flow of data, analysis paralysis sets in. It's been shown time and again that making no decision is easy, but it's often more costly than making a poorly considered one. So where's the balance?
In another blog post, I outlined a presentation I delivered earlier this year at Electronica in Munich, Germany. I opened the discussion by stating that there are three distinct phases to data acquisition — content, context and control — and that significant obstacles arise if you lack either of the first two.
Content we've previously discussed. It is gathered from external sources: perhaps you have feeds of component obsolescence data or product change notifications (PCNs) from your component distribution partners. This is content precisely because it's external. It exists as data points that don't change regardless of the audience; the data stays the same no matter who you are.
Context is more interesting. Context is the environment inside your company into which you pull this content, together with the market conditions your company operates in. The reason you conduct contextual analysis is to make decisions, and those decisions change for any number of reasons: material availability, suitability of components, longevity of supply, customer considerations, engineering and design specifications — the list goes on. The key point is that context is internally dictated, whereas content is externally generated.
The final pillar of this triumvirate is control. Without the ability to place external content into internal context, you cannot have control over your supply chain. The reason people tasked with using varied data sources make poor decisions is that they still believe the raw data is the most critical aspect of their process, and they fail to contextualise this external content against their current position or their wider needs relative to the marketplace.
External companies — with data, experience, a deep understanding of global markets, and no ties to a single industry vertical — can help. However, they must be part of your overall strategy. Content in context gives control. Successful organisations control their supply chains more than they react to them.
I’m keen to understand your thoughts and discuss further. Contact me at firstname.lastname@example.org and we can talk.