Visibility across the supply chain is a key enabler of supply chain excellence. As industry professionals, we hear this in the seminars we attend, see it in the white papers we read, and are taught it in the academic programs we pursue. Having fewer, deeper provider partner relationships is also often promoted in these same contexts. The two messages may not initially seem related, but they are in fact closely connected. To unlock the full potential of visibility, it is not enough to simply see what has happened, is happening, or will happen in the supply chain – one must also fully understand it. To that end, a digital representation of the supply chain, constructed in a virtual environment that allows for ongoing examination, validation, and projection of future states, can be an invaluable tool. Truly innovative organizations are already leveraging just such a tool to transform their supply chains with confidence, align their supply chain strategy with their business strategy, and turn their supply chains into sources of competitive advantage.
Faced with increasing competition and uncertain demand, today’s global company must make careful, educated choices regarding supply chain strategy. Tradeoffs in speed versus cost, agility versus responsiveness, and obsolescence versus stock-outs present a range of often competing priorities replete with levers to effect change. Like a concert piano, tap the right keys in the right order at the right time and the music comes alive – each note contributing optimally to the success of the whole.
Translated into the vernacular of supply chain management – find the right sources, place inventory in the right amounts and right locations, offer your customers the right choices with regard to speed and routings, and you can create a symphonic supply chain that balances risk and reward with elegance and efficiency. But where the music often never changes, the same cannot be said of the global supply chain. In a perfect world, managers could rapidly adjust their supply chains to correct not only for changes in sourcing strategies, demand patterns, logistics market dynamics, laws and regulations, etc., but also to exploit new opportunities that can appear without warning and just as quickly evaporate if not realized. The reality, however, is that change to a supply chain often entails significant cost and risk. How best to accurately qualify the many options, and quantify the potential results of any proposed change, in order to align and incent internal and external stakeholders toward timely and effective action?
With three basic inputs, a virtual representation of the logistics network underlying the supply chain can be constructed and used to measure a range of supply chain activities. The first basic input is transactions, either historical or forecasted, that represent the activity in the supply chain. The more information that can be included the better, but at a minimum the transactions should include product characteristics to allow for proper demand segmentation, dates, and nodes (i.e. where product moved from, through, and to). The next basic input is supply chain costs, including product value, tax rates, logistics tariffs, etc. Costs should achieve a minimum threshold of completeness and must have a consistent structure to ensure consistent application across the supply chain. The last input is the actual applied costs – i.e. what happened in the real world that the model is meant to replicate and predict – which should associate products with the conveyances on or in which they traveled.
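As a rough illustration only, the three inputs might be captured in records along the following lines. The field names and structures are hypothetical and far simpler than what a production model would carry:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional, Tuple


@dataclass
class Transaction:
    """A historical or forecasted movement of product through the network."""
    sku: str                     # product identifier, used for demand segmentation
    weight_kg: float             # product characteristic that drives logistics cost
    ship_date: date
    origin: str                  # node the product moved from
    via: Optional[str]           # intermediate node, if any
    destination: str             # node the product moved to


@dataclass
class CostInput:
    """A rate or value that can be applied consistently across the network."""
    lane: Tuple[str, str]        # (origin, destination)
    tariff_per_kg: float         # logistics tariff
    duty_rate: float             # tax/duty as a fraction of product value


@dataclass
class AppliedCost:
    """What actually happened: product tied to the conveyance it traveled on or in."""
    transaction_ref: str
    conveyance_id: str           # vessel, flight, or truck identifier
    actual_cost: float
```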
Transactions can then be modeled using the cost inputs and compared to the actual costs that resulted from the activity (in the case of historical activity). The quality of model inputs is critical, and this is the first area where fewer, deeper relationships and visibility correlate: fewer provider partners means more consistency in inputs. Regardless of the medium of transfer, data points must be translated as they move between organizations and systems. Fewer environmental variances and translations promote better data quality, and therefore greater clarity of visibility. At the same time, the model framework must be flexible enough to scale for those cases where more participants in the supply chain are necessary or desirable.
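A minimal sketch of that modeled-versus-actual comparison, assuming simplified lane-based rates and illustrative figures, might look like this:

```python
from typing import Dict, Tuple

Rates = Dict[Tuple[str, str], Dict[str, float]]


def modeled_cost(txn: Dict, rates: Rates) -> float:
    """Apply the cost inputs to a transaction to estimate what it should have cost."""
    rate = rates[(txn["origin"], txn["destination"])]
    freight = txn["weight_kg"] * rate["tariff_per_kg"]
    duty = txn["product_value"] * rate["duty_rate"]
    return freight + duty


def delta_vs_actual(txn: Dict, rates: Rates, actual_cost: float) -> float:
    """Modeled minus actual cost; a large delta flags either a data-quality
    problem in the inputs or an operational exception worth investigating."""
    return modeled_cost(txn, rates) - actual_cost


# One historical shipment checked against what was actually billed.
rates = {("SHA", "LAX"): {"tariff_per_kg": 3.20, "duty_rate": 0.04}}
txn = {"origin": "SHA", "destination": "LAX", "weight_kg": 500.0, "product_value": 20000.0}
print(delta_vs_actual(txn, rates, actual_cost=2450.0))  # modeled 2400.0, delta -50.0
```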
The typical supply chain study involves establishing an accurate baseline model of the existing network, one that is representative of the current state. Making changes to a representative baseline yields alternate states that can be considered predictive, provided the study is correctly structured and captures enough of the resulting supply chain costs. Models can be built in a variety of ways, ranging from spreadsheet-based approaches to those using sophisticated network optimization and simulation software. While these models may be representative and therefore predictive, they often lack a key component of enduring value: repeatability.
Building models that are repeatable is difficult. It has been estimated that up to 80% of the cost reduction potential in the supply chain is constrained by the design of the operating model, so it is not uncommon for these studies to uncover massive cost reduction potential. In addition, the resources required – both highly educated, skilled personnel and advanced software and hardware tools – can be extremely expensive. These two dynamics often converge to create intense pressure to arrive at solutions (alternate state assessments) as quickly as possible. The result is that corners are cut and assumptions made; some degree of accuracy, and often all of the structure necessary to support repeatability, is sacrificed for speed.
Change to supply chains entails risk. Internal and external consensus must be secured, and the implementation of the new design orchestrated across participants in the supply chain, before it can be executed. This is another area where fewer, deeper provider relationships are preferable and can foster a more agile supply chain. Even in single-provider supply chains, there is often latency, sometimes considerable, between when a study is completed and when its recommendations can be implemented. During this time market dynamics continue to evolve, potentially further eroding the predictive accuracy of the original study.
Ideally, studies would be revisited periodically prior to, during, and after implementation to measure the impact of change. In a typical environment that would mean recreating the study with new transactional data (Was it pulled from the same sources in the same way? Does it contain the same data points, with no new values in the data point populations?), potentially different project teams (Will they approach the modeling in the same way? Using the same algorithms and making the same assumptions?), and possibly updated or different tools. All of these factors contribute to potential disagreement between models from different time horizons and can call into question the accuracy of either or both models, potentially eroding stakeholder commitment to change. And this is before considering the differences in models created to examine different supply, demand, business unit, or geographical segments of an enterprise.
A “living model” is a persistent modeling environment. It is created to be flexible and scalable, and if executed correctly, can change the inversely proportional relationship between speed and accuracy in network design and performance measurement studies – solving the latency problem and creating exceptional relational accuracy between studies conducted over different time horizons or focused on different portions of the supply chain.
The difference between the common single-use model and a living model lies in the focus of the model and how it is constructed. A single-use model is built around the circumstances of an individual study, and is heavily focused on alternate state results. A living model is instead focused on the baseline, and built to accept periodic updates to key inputs – the aforementioned transactional data, cost inputs, and actual, resultant supply chain costs. Since the model must accommodate not only the variations in specific data points present at construction, but also any that may be encountered in future feeds, the initial buildout takes more time. In addition, the model must be constructed to cover both the entire supply chain and the smaller portions of it examined in specific studies.
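The sketch below illustrates the idea of a persistent baseline in highly simplified form; the class, its methods, and its cost logic are assumptions for illustration, not a description of any particular tool:

```python
import copy
from typing import Dict, List, Tuple


class LivingModel:
    """A persistent baseline that absorbs periodic input feeds and is copied,
    never discarded, when alternate states are evaluated."""

    def __init__(self) -> None:
        self.transactions: List[Dict] = []              # historical/forecast activity
        self.rates: Dict[Tuple[str, str], float] = {}   # cost inputs, keyed by lane
        self.actuals: Dict[str, float] = {}             # applied costs, keyed by transaction id

    def ingest(self, transactions: List[Dict], rates: Dict, actuals: Dict) -> None:
        """Periodic update: the baseline persists and simply absorbs new feeds."""
        self.transactions.extend(transactions)
        self.rates.update(rates)
        self.actuals.update(actuals)

    def modeled_cost(self, txn: Dict) -> float:
        return txn["weight_kg"] * self.rates[(txn["origin"], txn["destination"])]

    def validation_gap(self) -> float:
        """Total gap between modeled and actual cost; a growing gap means the
        baseline (or the operation) has drifted and needs attention."""
        return sum(abs(self.modeled_cost(t) - self.actuals[t["id"]])
                   for t in self.transactions if t["id"] in self.actuals)

    def scenario(self, rate_overrides: Dict) -> float:
        """Evaluate an alternate state on a copy, leaving the baseline untouched."""
        alt = copy.deepcopy(self)
        alt.rates.update(rate_overrides)
        return sum(alt.modeled_cost(t) for t in alt.transactions)
```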
The Network Solutions Group at Expeditors has completed hundreds of supply chain transformation projects for clients, and has found that the additional time spent up front building an ever-present baseline is repaid many times over in increased speed and accuracy as the model matures. Since the same baseline is used in all scenario modeling, it can be continually refined as assumptions are satisfied or as conditions change. Studies that would take weeks or months in a typical environment can be conducted in days or sometimes hours, with great relational accuracy between studies and across time horizons. And since study results are compared to actual costs for validation, there can be little doubt as to accuracy. In fact, once the model is proven accurate, the lens can be reversed: the model becomes a tool for measuring operational performance.
Many may argue that in order for a model to be useful, it must accurately reflect reality. Yet much of a model’s value lies precisely where it and reality diverge. The model can be measured and optimized, and when reality and the model disagree, therein lies opportunity: the opportunity to correct the model, or to correct reality through operational practice, such as applying efficiencies discovered in the field across the rest of the network. The model in effect creates an environment in which new ideas can be advanced by multiple parties, quickly vetted, proven or disproven, applied to the supply chain, and then compared to what actually happens in order to ensure compliance. In addition, improvements generated in the field can be rapidly identified as positive deltas in expected performance, analyzed until they are understood, and then applied to other relevant areas of the supply chain – creating an environment conducive to innovation and the propagation of best practices across the network. And since studies can be conducted quickly and with audit-grade accuracy, companies with this competency can act quickly to exploit potentially fleeting opportunities, leveraging their supply chains as sources of competitive advantage.
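For example, a simple pass over validated results could surface the lanes where the field is consistently beating the model, flagging candidates whose underlying practices might be propagated elsewhere. The field names and selection rule below are illustrative assumptions:

```python
from collections import defaultdict
from typing import Dict, Iterable, Tuple


def lanes_beating_model(records: Iterable[Dict]) -> Dict[Tuple[str, str], float]:
    """Average (actual - modeled) cost per lane, keeping only lanes where the
    field is undercutting the model; these are candidates for propagating the
    underlying practice to the rest of the network."""
    deltas = defaultdict(list)
    for r in records:
        deltas[(r["origin"], r["destination"])].append(r["actual_cost"] - r["modeled_cost"])
    return {lane: sum(d) / len(d) for lane, d in deltas.items() if sum(d) / len(d) < 0}
```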
Seasoned industry professionals will remember when the data describing product moving through the supply chain became as important as, if not more important than, the product itself. Relationships were reshaped, and in some cases dictated, by provider competencies in EDI and data presentation, until these capabilities ceased to be value-adds and became requirements – infrastructure necessary in order to participate in bids for supply chain service provider business.
Advancing technology and competition will soon demand another shift, one where the ability to accurately and consistently model or contribute to the modeling of the supply chain will begin to influence supply chain decisions. Before long, this competency will become a prerequisite for supply chain excellence – an evolution necessary to realize the full potential of supply chain visibility.