The ongoing debate about where and how MES fits into the new era of digital technologies is raging. It's not surprising, and in fact to be expected in any kind of change: basically the old guard versus the new guard. Of course, you have to believe that the fourth industrial revolution is really a paradigm change, something I clearly align with and have some background to judge, since I have been studying this phenomenon since the 1990s.
As in other paradigm shifts, there will always be a bit of the old that is part of the new. Steam power has not completely disappeared; it is still relevant in specialized applications, but it is not the main source of energy powering industrial operations. This leads us to ISA-95, which I believe is a relic of the current "Industry 3.0" era and not directly relevant in the new digital paradigm. (Note that I am purposefully trying to minimize the use of "Industry 4.0," since it is starting to get a negative connotation with all the hype going on.) That being said, there are elements of ISA-95 and other best practices that may be relevant in the new paradigm, i.e., the old in the new?
If we let history be our teacher, we can probably come up with some predictions, and that is where the data model topic gets interesting. ISA-95 includes a data model, and all the established MOM solutions include data models based on the technologies that seemed appropriate at the time. The question then is: is the quest to achieve the nirvana of one standard monolithic data model for all of manufacturing achievable, and is it still relevant with the new digital technologies? The answer, I think, is clearly no and no. As far as I know there are very few, if any, examples of an organization achieving a real, working standard data repository for all of its operations, and it's not for lack of trying.
The bottom line here is that striving for a single standard data model in a monolithic repository is a fool's errand, regardless of whether we try to implement it with modern digital technologies. That being said, a common, shared, and interpretable view of manufacturing operations is still needed and critical. In fact it is at the core of Industry 4.0: it is the data and information that give us Visibility, Transparency, Predictive Capacity, and Adaptability. This holistic view into manufacturing operations is also at the core of the CIM concept from the 1980s, which advocated a common "shared knowledge" that all operational activities in a plant use in order to streamline the manufacturing of products. That means both paradigms are aligned around the same challenge: to improve manufacturing operations, we all need a common understanding of, and view into, the operation!
The difference is how we achieve this common and shared view (information and knowledge). In the old paradigm it was the notion of a strict and rigidly structured data model; in the new paradigm we have relaxed these requirements to allow for analysis of both structured and unstructured data. I can hear the skeptics already: how can you gain any insights with different solutions each having their own data structures? A few things to consider here: we do need context, and this context should be defined at the source. We need to simplify data structures and get away from the multiple levels of abstraction needed to run monolithic, process-driven solutions. And we should adhere to some simple shared guidelines, using a consistent data dictionary that allows for flexibility within the organization. (I know this sounds overly simplistic; see part II of this blog post.) With these principles we can adopt many of the modern digital tools to curate views into our data on demand, with the flexibility needed for common and personalized views and insights, including, of course, AI.
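To make "context defined at the source" concrete, here is a minimal sketch (all field, site, and tag names are hypothetical, not from any particular product): instead of a bare value landing in a central repository to be interpreted later, the edge publisher attaches the context needed to interpret it at the moment it is produced.

```python
# A minimal sketch of context-at-the-source (names are hypothetical):
# wrap each raw reading with the metadata needed to interpret it downstream.

from datetime import datetime, timezone

def tag_with_context(value, *, site, line, tag_name, unit):
    """Wrap a raw reading with the context needed to interpret it later."""
    return {
        "value": value,
        "tag": tag_name,
        "unit": unit,          # unit travels with the value, not in a manual
        "site": site,
        "line": line,
        "ts": datetime.now(timezone.utc).isoformat(),
    }

# A bare "71.4" is meaningless; a tagged reading is self-describing.
reading = tag_with_context(71.4, site="Plant-A", line="Line-3",
                           tag_name="oven_temp", unit="degC")
print(reading["site"], reading["tag"], reading["value"], reading["unit"])
```

The point is not this particular payload shape; it is that any downstream consumer can interpret the reading without querying a monolithic master model first.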
Let's look at an example where different solutions all represent data about a lot of material and its product code. The material can be referenced as Lot, Batch, Unit, Pack, Kit, etc., and the product code can be referenced as SKU, Item ID, Product, Material Number, etc. We of course immediately recognize these different names as similar because we understand how they are used. In the old paradigm we had to enforce strict rules of structure and semantics in software solutions in order to visualize and analyze this data. That, however, is changing with new digital technologies and modern analytics platforms.
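A lightweight data dictionary, as opposed to a rigid enforced schema, is enough to reconcile these vocabularies at analysis time. A minimal sketch (the dictionary entries and event fields below are illustrative assumptions, not a standard mapping):

```python
# A minimal sketch of a shared data dictionary (entries are hypothetical):
# each system keeps its local vocabulary; a small lookup maps local names
# to canonical terms so views can be curated on demand.

DATA_DICTIONARY = {
    # local name -> canonical term agreed on in the shared guidelines
    "Lot": "material_lot",
    "Batch": "material_lot",
    "SKU": "product_code",
    "Material Number": "product_code",
}

def normalize(event: dict) -> dict:
    """Rename known local fields to canonical names; keep unknowns as-is."""
    return {DATA_DICTIONARY.get(k, k): v for k, v in event.items()}

# Two systems describe the same fact with different local vocabularies...
mes_event = {"Batch": "B-1042", "Material Number": "MN-77", "qty": 500}
wms_event = {"Lot": "B-1042", "SKU": "MN-77", "qty": 500}

# ...but normalize to the same shared view.
print(normalize(mes_event) == normalize(wms_event))  # True
```

Note that neither source system had to change its internal model; the reconciliation lives in a small, flexible mapping rather than a monolithic schema.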
It is also where AI can help; simply put, AI is good at finding patterns. It's not that AI understands the meaning of Item and Material Number. It simply looks for similarities in the relationships to other data structures, and in how the fields are used, to see that Item and Material Number really are very similar. With enough data volume and variety this is easily detectable. Notice I said volume and variety: this is where cloud-based systems are important. Using isolated, traditional monolithic system data sources will never get you to this point, even if they are lifted and shifted to the cloud. You need modern cloud-native operational platforms that provide easy access to their data, so it can be amassed and used to identify these patterns.
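The intuition can be shown with a deliberately simple stand-in for the pattern matching (real platforms use far richer signals, e.g. value distributions, co-occurrence, and learned embeddings; the data and field names here are hypothetical): if two differently named fields keep holding the same values across systems, they are probably the same thing.

```python
# A minimal sketch of "similarity by usage, not by meaning" (hypothetical data):
# compare the sets of values observed for each field across systems. High
# overlap suggests two differently named fields hold the same information.

def jaccard(a, b):
    """Jaccard similarity between two collections of observed values."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Values observed for each field in two separate systems.
mes_item = ["MN-77", "MN-78", "MN-80", "MN-91"]
erp_material_number = ["MN-77", "MN-78", "MN-80", "MN-92"]
erp_order_id = ["WO-1", "WO-2", "WO-3", "WO-4"]

print(jaccard(mes_item, erp_material_number))  # 0.6: likely the same field
print(jaccard(mes_item, erp_order_id))         # 0.0: unrelated
```

With only a handful of records this signal is weak, which is exactly the volume-and-variety argument: the more data that is amassed and accessible, the more confidently such patterns can be detected.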
I know a number of concepts are discussed here, and there may be some lack of depth in the discussion; I have promised a follow-up to this post with more detail. But assuming this is true, just think about it. It means we can relax the strict data type and structure requirements and allow citizen developers to extend template data structures to create solutions that solve operational problems, knowing that we can still gain valuable insights about operations. And again, the more data we have, the more insight we gain. The conclusion here is: prioritize data volume and variety, not monolithic structures.
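The "extend a template, keep the insights" idea can be sketched as follows (the template fields and extensions are illustrative assumptions): a citizen developer adds local fields for a specific problem, while the shared core fields stay intact, so organization-wide analytics keep working.

```python
# A minimal sketch (field names hypothetical): local extensions are additive,
# so analytics over the shared core fields are unaffected by them.

TEMPLATE = {"material_lot", "product_code", "quantity"}

def extend_template(core: dict, **local_fields) -> dict:
    """Add local fields without touching the shared core fields."""
    assert TEMPLATE <= core.keys(), "core template fields are required"
    return {**core, **local_fields}

record = extend_template(
    {"material_lot": "B-1042", "product_code": "MN-77", "quantity": 500},
    oven_zone="Z2",            # local extension for one line's problem
    operator_shift="night",    # another local extension
)

# Shared analytics still see the core fields, regardless of extensions.
print({k: record[k] for k in TEMPLATE})
```

This is the flexibility trade-off in miniature: guidelines guarantee the common view, while everything outside the core is free to vary.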