Sunday, December 14, 2025

Agentic AI in Action: What I Learned Experimenting with Operational and Builder Agents

Over the last several months, I’ve been deepening my exploration of Agentic AI within Tulip, applying the concepts I laid out in my Agentic Framework and testing them in real operational scenarios. What started as curiosity has quickly become something else entirely: a recognition that we are opening a fundamentally new chapter in how manufacturing systems are built, operated, and scaled.

As I’ve experimented with both operational agents—those that support frontline teams in real time—and builder agents—those that help design, generate, and improve digital solutions—I am realizing how deep and wide the impact is going to be. The more I explore, the more use cases reveal themselves, and the more explosive the potential becomes. It’s much more than I initially thought, and I have been thinking in terms of multi-agent systems for manufacturing since the '90s! Agentic AI (multi-agent systems powered by generative AI) is a much bigger step change than I would have imagined in how we think about creating and running manufacturing solutions.

Let's start with a brief recap. Operational agents extend the capability of the production system, realizing digital twin capabilities in ways that introduce reasoning, interpretation, and contextual understanding directly into the work being done on the floor. Builder agents open the door to a multithreaded, parallel engineering process that fundamentally changes the speed and depth at which solutions can be created. It feels less like a “copilot” assisting a developer and more like a coordinated team of SMEs designing solutions - Augmented Lean at hyperspeed!

This combination—augmenting frontline execution while accelerating the design and iteration of digital systems—points to a future where humans orchestrate agent ecosystems rather than manually building every piece of a solution themselves. This brings me to the motivation for writing this post that became clear to me in a recent customer conversation about DCS integration in support of a digital solution for pharmaceutical manufacturing of clinical drugs.

Reimagining Composable Integration with DCS and ISA-88 Through Agentic AI

The question I was asked recently wasn’t the classic “How do you integrate an MES with a DCS?”—that problem has been addressed in many different ways in the traditional architectures. The real question was far more interesting: How do you integrate a Composable MES built on a Frontline Operations Platform with a DCS or other ISA-88 based batch system?



In a traditional MES world this integration immediately triggers a familiar debate about how to partition the recipe across systems, define boundaries of responsibility, and reconcile master data, recipe models and equipment hierarchies. And that debate is almost always constrained—if not outright dominated—by the rigidity of monolithic MES platforms. The architecture drives the discussion more than the operational needs do.

But in a composable environment, the constraints that shaped those historical debates simply don’t apply. Let's look at what happens when you apply a composable, agentic model.

1. Composable Apps Remove the Traditional Constraints

In a composable architecture, apps are not bound to a predetermined master data model or recipe structure. This means that there is no need for recipe model partitioning, no need to replicate equipment hierarchies, no predefined S88 recipe model to map into. 



This flexibility removes the most painful barrier in traditional MES ↔ DCS integration: the structural reconciliation of recipes and equipment models. The DCS can continue using its ISA-88 representations. Tulip apps can represent the process in the most intuitive and useful way. And the integration simply becomes the mapping of meaning and intent between the two worlds. You design the representation of the process that makes sense for your operation—not the one dictated by the systems.

Composable solutions also shift the perspective entirely by taking a human-centric, activity-based approach organized around the physical reality of the shop floor. I fully recognize that, in the traditional monolithic MES world, standard models like ISA-88 were considered essential—they provided structure, discipline, and a shared language for process-centric systems. But composability represents a fundamentally new paradigm.

To democratize operational systems and bring them closer to frontline work, we must prioritize operator-first design rather than forcing every SME to become a master of S88 modeling. ISA-88 remains invaluable for process control, but the surrounding operational systems must be simplified and democratized so they can work hand in hand with the distributed nature of modern manufacturing. Composable platforms do exactly that: they allow process engineers, chemical engineers, and frontline teams to collaborate without being constrained by rigid, expert-only models.

This alone would dramatically simplify integration. But the real breakthrough comes with agents.

2. Builder Agents Enable Multithreaded, Generative Solutioning

Builder agents transform integration work from a linear, manual design activity into a parallel, iterative, and generative process. They don’t just help you “build faster”—they fundamentally change how solutions are conceived and engineered.

I experimented with builder agents that can ingest a full ISA-88 recipe structure and conduct deep introspection on it: understanding the procedural models, identifying phase logic, parsing parameter definitions, and extracting the relationships between equipment, units, and operations. They then suggested mappings, app contexts, and design patterns—not only based on expert interpretation of the ISA-88 standard, but also on what they’ve learned across existing apps, historical integrations, the real-world performance of similar solutions, and, critically, expert knowledge of composable design principles. In other words, these agents combine domain expertise with empirical insight, offering design options that reflect both best practices and operational realities.
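To make this concrete, here is a minimal, hypothetical sketch of the kind of introspection such an agent performs: walking a simplified ISA-88 procedural hierarchy (unit procedures, operations, phases) and proposing a naive app mapping. The class names, the one-app-per-unit-procedure heuristic, and the sample recipe are all illustrative assumptions, not Tulip's or any vendor's actual model.

```python
from dataclasses import dataclass

# Simplified ISA-88 procedural elements: a recipe contains unit
# procedures, which contain operations, which contain phases.
@dataclass
class Phase:
    name: str
    parameters: dict  # e.g. {"temperature_C": 37.0, "duration_min": 30}

@dataclass
class Operation:
    name: str
    phases: list

@dataclass
class UnitProcedure:
    name: str
    equipment_unit: str
    operations: list

def suggest_app_mappings(unit_procedures):
    """Walk the recipe hierarchy and propose one composable app per
    unit procedure, with one app step per phase (a naive heuristic)."""
    suggestions = []
    for up in unit_procedures:
        app = {"app": f"{up.name} App", "unit": up.equipment_unit, "steps": []}
        for op in up.operations:
            for ph in op.phases:
                app["steps"].append({
                    "step": f"{op.name} / {ph.name}",
                    "inputs": sorted(ph.parameters),  # surface parameters as app inputs
                })
        suggestions.append(app)
    return suggestions

recipe = [
    UnitProcedure("Fermentation", "BIO-REACTOR-01", [
        Operation("Inoculate", [Phase("Transfer Seed", {"volume_L": 5.0})]),
        Operation("Grow", [Phase("Hold Temperature",
                                 {"temperature_C": 37.0, "duration_min": 720})]),
    ]),
]

for s in suggest_app_mappings(recipe):
    print(s["app"], "->", [step["step"] for step in s["steps"]])
```

A real builder agent would of course reason far beyond this heuristic, but the shape of the task—introspect the procedural model, then propose a mapping—is the same.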

This alone already feels like having a team of process engineers and MES architects working in hyperspeed. But the true power emerges when operational agents begin contributing dynamic intelligence into that design loop.



Operational agents provide real-time feedback about process variability, material availability, logistics implications, quality status, or unexpected delays. They can accommodate non-optimal or evolving recipes by dynamically dispatching materials, reallocating resources, or bringing the right expertise into the process at the right time. This dramatically increases operational resilience and reduces risk—because the system adapts rather than stalls when confronted with real-world complexity.

And then there’s compliance...

Specialized builder agents trained on GxP principles can support on-the-fly risk assessments, propose mitigation strategies, and generate validation documentation as part of the design cycle. Operational validation agents can take this further, enabling true continuous validation—monitoring execution conditions, evaluating deviations against risk models, and providing traceable explanations for decisions. Compliance becomes embedded, in fact native, in the system rather than layered on top.
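As a rough sketch of what such continuous validation could look like, here is a hypothetical validation agent evaluating runtime readings against a simple risk model and producing a traceable explanation for each finding. The parameters, thresholds, and severities are invented for illustration.

```python
# Illustrative risk model: setpoints, tolerances, and severities are made up.
RISK_MODEL = {
    "temperature_C": {"setpoint": 37.0, "tolerance": 1.5, "severity": "major"},
    "pressure_bar":  {"setpoint": 2.0,  "tolerance": 0.3, "severity": "minor"},
}

def evaluate(readings):
    """Compare execution readings against the risk model and return
    findings, each with a traceable explanation of the decision."""
    findings = []
    for param, value in readings.items():
        rule = RISK_MODEL.get(param)
        if rule is None:
            continue  # no risk rule for this parameter
        delta = abs(value - rule["setpoint"])
        if delta > rule["tolerance"]:
            findings.append({
                "parameter": param,
                "severity": rule["severity"],
                # the traceable explanation behind the deviation call
                "explanation": (f"{param}={value} deviates {delta:.2f} "
                                f"from setpoint {rule['setpoint']} "
                                f"(tolerance {rule['tolerance']})"),
            })
    return findings

print(evaluate({"temperature_C": 39.2, "pressure_bar": 2.1}))
```

An agent running this kind of check continuously, against a much richer risk model, is what turns validation from a one-time gate into an embedded property of execution.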



When you step back and think about the implications, the potential is almost infinite. The combination of builder and operational agents elevates agility and compliance to levels we’ve never imagined in traditional MES architectures and design approaches. It enables systems that are not only faster to build, but continuously improving, self-aware, and aligned with both operational needs and regulatory expectations.

This is the beginning of a new era in how manufacturing solutions are designed, executed, and validated. It feels like a generative design process running at hyperspeed. Not a single assistant helping you code tasks faster — but a team of AI experts collaborating to create a complete solution.

And this unlocks something we have never had before in manufacturing software: the ability to rapidly iterate and explore multiple viable integration architectures before committing to one. This is enormously valuable in an ISA-88 context, where recipes, equipment logic, and operational variability rarely align perfectly.

Seeing the Explosion of Use Cases

If you let the builder and operational agents begin to work together, the number of possibilities just explodes - it's the first step towards a Multi-Agent System (MAS). These agents don’t simply execute tasks—they learn, reason, and collaborate in ways that constantly reinforce and expand what’s possible. Suddenly, problems that used to take months of engineering effort can be tackled in days—or even hours.

Some of the notable and exciting use cases I’ve come across include:
  • Automatically mapping process logic into app structures.
  • Rapidly generating compliant workflows for regulated environments.
  • Exploring recipe variants and operational scenarios through simulation.
  • Using agents to assist in validation and documentation.
  • Dynamically interpreting and adapting recipes at runtime.
  • Applying cross‑system reasoning to catch inconsistencies early.
  • Coordinating multiple agents to design complete production solutions.
Each one opens a new door—where imagination, not technical limitation, becomes the real constraint.

What stands out to me most is the sheer power of these systems and what they make possible. Seeing a builder agent reason through an ISA-88 recipe, or an operational agent adapt to a real-time process disruption, feels less like traditional programming and more like working alongside a tireless, highly capable collaborator. My role has shifted from hands-on integration to guiding and steering intelligent agents—and that shift fundamentally changes how we think about manufacturing systems. The emerging dialogue between human expertise and machine reasoning opens up an entirely new design space, one where adaptability, resilience, and scale are no longer constrained by human bandwidth.

I’m also starting to document and share some of these experiences through AI‑generated videos, another capability I’m learning to use. They’ve turned out to be a surprisingly powerful way to show what agentic systems can do—and to help others visualize these new forms of collaboration on the shop floor. It’s a learning journey in itself, but it feels like the right extension of this exploration: using AI not only to build better systems but to communicate and learn in entirely new ways.

Seeing all of this unfold up close, it’s clear we’re not just evolving automation—we’re watching Holonic concepts come alive, the manifestation of the new digital manufacturing reality.

Crossing the Digital Divide: Human-Centric Manufacturing in a Multi-Agent World

When you step back and look at what emerges from the combination of builder agents and operational agents, it becomes clear that this is not just another productivity boost or architectural evolution. It is a convergence point — one that aligns remarkably well with how manufacturing operations have always been run by humans.

Manufacturing has never been a purely deterministic, rules-based environment. It is adaptive, situational, and deeply human. Engineers design intent. Operators respond to reality. Supervisors balance constraints. Quality professionals manage risk. For decades, our digital systems have struggled to reflect this reality, forcing people to adapt to rigid models and monolithic workflows rather than supporting the way work actually happens.


Multi-agent systems change that equation by enabling digital systems to finally reflect the way manufacturing actually operates—through parallel problem solving, continuous adaptation, and coordinated decision-making across people, processes, and technology.


Builder agents mirror how engineering teams work: exploring options in parallel, iterating designs, learning from past outcomes, and continuously refining solutions. Operational agents mirror how plants operate: responding to variability, adjusting to constraints, coordinating people and materials, and managing risk in real time. Together, they form a digital system that finally behaves the way manufacturing organizations behave — collaborative, contextual, and resilient.

 

This is profoundly human-centric, because it aligns digital systems with how manufacturing teams actually operate — dynamically, collaboratively, and contextually.


It also brings into sharp focus a theme I’ve been writing and speaking about since the late 1990s. For decades, we have tried to digitize manufacturing by automating tasks, enforcing standard models, and embedding rigid logic into systems. That approach delivered value, but it also created the very constraints that now limit agility, scalability, and innovation.


What we are seeing now is the realization of a different paradigm — one where digital systems augment human reasoning instead of replacing it, where composability replaces monoliths, and where intelligence is distributed across agents rather than centralized in static applications. This is the paradigm shift I’ve been pointing to for years, and it is finally reaching a practical, scalable form.

The convergence of composable platforms, agentic AI, and multi-agent collaboration marks a true inflection point. We are no longer just modernizing legacy systems. We are crossing the digital divide — moving from systems that support transactions to systems that participate in operations.

The potential here is vast! Agility, resilience, productivity, and compliance are no longer trade-offs. They become reinforcing outcomes of a system designed around human workflows, continuously learning agents, and real-world context.

This is not the end state — it’s the beginning. But for the first time, the tools, platforms, and paradigms are aligned. And that alignment is what makes this moment different from other transformative eras that came before. And it will not stop - that is why we call it continuous transformation! 


Sunday, November 30, 2025

Experimenting With AI as a Creative Assistant: How I Created My Recent Videos

Over the last few weeks, I have been playing with AI as a creative assistant. Since my multimedia creative skills are, let's say, subpar, I have used AI as a partner, or assistant, in the process. The goal is to enhance content to promote knowledge sharing in manufacturing. Not AI as a replacement for expertise, but AI as a way to translate expertise into formats people actually absorb.

As part of this, I created two videos and I wanted to share the behind-the-scenes story of how I made them, what tools I used, and what I learned along the way.

Digital-First & Composable: The Future of Pharma Manufacturing Design

 

Grandpa Learns AI.


Why I’m Doing This

A few months ago, I was interviewed by a research team connected to the World Economic Forum. They’re studying the future of work and education in the digital age—specifically how people learn and adapt in environments that are changing faster than ever.

That interview got me thinking: Manufacturing is changing. Digital tools are changing. But our learning models haven’t caught up.

And if I’m being honest, my own communication style tends to be direct, dense, and sometimes… too straight to the point. Great for experts, not always great for everyone else.

So I wanted to see what happens when I let AI help me explain the concepts I care about—but in a completely different voice. I leveraged generative AI tools (specifically, I used NotebookLM from Google for no other reason than availability—it's free for now) and I’ll admit: I expected the usual AI fluff, but the results were… surprisingly good.

With some well-thought-out prompting and iteration, NotebookLM didn’t just rewrite my explanations—it transformed them into something more approachable, more story-driven, and, dare I say it, more human. It brought out a teaching style that’s very different from my natural tone.

Transforming the Content

The first video was really just a "let me feed it some content and see what I get…" exercise. I recently wrote a whitepaper titled "Digital-First and Composable—A New Paradigm for Conceptual Facility Design in Pharmaceutical Manufacturing" about why it's critical to take a digital-first approach to the design of pharmaceutical manufacturing facilities. (It's not published publicly yet, but let me know if you are interested in a copy.)

I wanted to test whether NotebookLM could help explain this somewhat deeper and more technical topic in a different way to non-technical people—basically, as if you were explaining it to your grandmother. This is a well-known exercise commonly used to create simplified, easier-to-understand versions of technical content. It was something I typically asked my students to do when defining their research topic, e.g. the Feynman Technique.

Here AI surprised me again. It took my content and created a narrative that felt clear, structured, and less consultant-like, almost a guided tour of the future of manufacturing. It delivered the same intellectual payload—but in a format that's easier to digest for people who aren’t neck-deep in these topics every day.

For the second video I fed it the transcript from my WEF conversation about how people learn, and the AI picked up on a few of the stories I used to exemplify how to explain new digital concepts to the industry. It took my grandpa story and created a story about a grandpa discovering AI for the first time. It turned a complex topic into something relatable and a little emotional.

I shared both the whitepaper and the video with customers and colleagues, and the feedback was that the video is by far more valuable than the whitepaper. The surprising part was that people actually learned from it. They weren’t just “getting the point”; they were experiencing it, maybe even feeling it.

Why Use Personas?

One thing that became clear through this experiment is that who explains something matters just as much as what is being explained.

In manufacturing, we’re all guilty of communicating like… well, manufacturing people. Precise. Direct. Dense. Focused on efficiency. It’s great for experts, but not always for learners who don’t live and breathe MES architectures or Pharma 4.0.

This is where personas come in. Sometimes the most effective way to teach a technical idea is to have it explained by someone who is not you.

  • A grandpa.
  • A mentor.
  • A line worker.
  • A curious newcomer.
  • A future digital assistant.

AI helped generate voices and storytelling styles that I simply wouldn’t have used myself. And that difference matters. It’s disarming. It opens people up. It creates emotional connection. It makes the content stick.

But—and this is important—it didn’t invent anything on its own. It worked because I gave it:

  • the right context
  • the right source material
  • the right stories
  • and a clear intention
  • grounded in my decades of experience

AI can’t fabricate expertise but it can translate expertise into a form that reaches people where they are. The personas made the learning accessible and my context made it accurate. It’s a powerful combination.

What I Learned

In the end, this experiment taught me that AI can significantly expand my creative range—but only when it’s grounded in the right context. AI didn’t magically produce valuable content; it was effective because it worked with my whitepaper, my WEF interview, my research, and my own stories from years in manufacturing. When AI has that depth to draw from, it becomes an amplifier rather than a generator of fluff. 

I also realized how essential storytelling is for real learning. The emotional layer—whether it was explaining a digital-first facility as if to a grandmother or turning my grandpa anecdote into a touching narrative—made the concepts stick in a way traditional technical writing rarely does. And using personas was far more powerful than expected: having someone unlike me tell the story didn’t dilute the expertise; it made it more approachable and meaningful. What this ultimately reinforced is that AI isn’t the expert—it’s the assistant. It can translate, reframe, and humanize ideas, but only when guided by intention and supported by real experience. And that, I think, is exactly how AI will create value: by helping us communicate better, teach more effectively, and unlock new ways to share the knowledge we’ve spent years building.

Saturday, October 4, 2025

Composability, Governance, and the Future of Agentic AI in Manufacturing

We’ve all heard the stories: “Somebody created a solution in just a few hours with no-code or vibe coding…”.

It’s exciting, right? Engineers solving problems in record time, building digital tools with nothing more than intuition, creativity, and a bit of AI support. This is the promise of democratization: Agentic AI empowering everyone to innovate and improve.

And in many contexts, that speed is a superpower. But in manufacturing, the story is more complicated. Operations are inherently complex: machines, materials, people, processes, schedules, quality checks, and compliance requirements all interact in a dynamic web that is nonlinear, a complex adaptive system. In such environments, even small changes can cascade unpredictably, amplifying into disruptions far greater than their cause—an inherent feature of systems where interdependencies drive emergent outcomes.

This is where the risk lies. If an unguided agent makes the wrong decision in such an environment, things can go wrong very quickly—and often in ways that are difficult to anticipate or trace. The result might be downtime, compromised product quality, production delays, and, at worst, safety issues. But the bigger danger is cultural: when something fails spectacularly, it doesn’t just cause operational damage—it can scare the organization away from using the technology at all.

Instead of unlocking incredible productivity, one misstep can trigger a mindset of “we don’t dare do this again…”. That’s not just a lost opportunity; it’s a setback that can stall digital transformation for years.

Digital Maturity Gaps

With the incredible pace of innovation of digital technologies and specifically Agentic AI, we have to acknowledge a hard truth: most manufacturers are still not fully ready to capitalize on this technology. The foundations are often shaky, and if we introduce solutions built by and with agents into this environment without addressing the gaps, the risks multiply.

Some of the most common issues include:

  • Fragmented and siloed data across systems.
  • Uneven organizational readiness and cultural skepticism.
  • Rigid, monolithic architectures that resist incremental change.

These are not new problems. I discuss them in detail in this blog. Our traditional models, both in technical architecture and in the deployment of operational solutions for manufacturing, were never designed for the agile, composable, and connected environments we’re striving to build today. In addition, we can’t ignore that in any given manufacturing facility there are machines and systems of varying ages and legacy status.

Consider a Scenario

An enthusiastic engineer uses vibe coding to create an agent that dispatches materials to keep machines and operators as busy as possible. At first, everything looks good: machines are running at full capacity, operators always have work, and parts move quickly into assembly.

But no one realized the agent doesn’t properly recognize WIP limits or downstream capacity. It keeps dispatching jobs even when assembly stations and testers can’t keep up. The result: piles of excess material stack up between stations, overflowing into walkways and creating unsafe conditions for operators.

In a matter of hours, what was meant to boost productivity causes overproduction, flow disruption, and a serious safety hazard. The line is forced to stop because the system created too much material, in the wrong place, at the wrong time.
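The failure mode in this scenario boils down to a missing guardrail. A toy sketch of the difference, with made-up station names and WIP limits:

```python
# Hypothetical stations with WIP limits; names and numbers are illustrative.
stations = {
    "machining": {"wip": 0, "wip_limit": 3},
    "assembly":  {"wip": 0, "wip_limit": 2},
}

def dispatch_ungoverned(station, jobs):
    """What the scenario's agent did: push every job, ignoring limits."""
    stations[station]["wip"] += jobs
    return stations[station]["wip"]

def dispatch_governed(station, jobs):
    """A guardrail: release jobs only up to the station's WIP limit,
    leaving the rest queued upstream instead of piling up on the floor."""
    s = stations[station]
    accepted = min(jobs, s["wip_limit"] - s["wip"])
    s["wip"] += accepted
    return accepted  # dispatched now; the remainder waits

print(dispatch_ungoverned("machining", 10))  # WIP balloons to 10; the limit was 3
print(dispatch_governed("assembly", 10))     # only 2 released; the limit holds
```

The governed version is deliberately trivial—real governance also covers downstream capacity, safety constraints, and escalation paths—but it shows how a single enforced rule turns runaway dispatching into bounded behavior.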

This isn’t hypothetical. It’s the kind of risk that emerges when Agentic AI is unleashed without governance. And unlike traditional tools, agents can act autonomously and at scale, amplifying errors at a speed we may not be able to react to.

Managing Complexity with Composability: Governance, Framework & Platform

Given this reality—where digital maturity is uneven, legacy systems abound, and manufacturing operations are inherently dynamic—the arrival of Agentic AI presents both an enormous opportunity and a serious risk. Agents are coming at us fast, and their power lies in democratization and speed. But those same qualities, in an environment as sensitive as manufacturing, can amplify problems just as quickly as they solve them.

This paradigm shift cannot be left to chance. To harness Agentic AI safely and effectively, we need to approach it in an organized, managed, and focused way. That requires governance, framework, and platform all working together, grounded in composability.

  • Governance gives us the rules, guidance, and guardrails to ensure reliability, safety, quality, and compliance are never compromised. Governance is not just about control; it actively helps organizations overcome barriers like fragmented data, siloed systems, and cultural skepticism by providing structure, discipline, and confidence. Done right, it helps in the transformation of the manufacturing environment from both a technical readiness (data integrity, system integration) and an organizational readiness (culture, mindset, trust).

  • Framework provides the structure to make governance actionable. In A Composable Agentic Framework for Frontline Operations, I laid out how agents can be defined by type, given clear goals, and aligned through the artefact model. The framework ensures that agents are not just scattered tools but part of a purposeful, multi-agent system that reflects and supports real operations.

  • Platform is the enabling layer. An Agentic manufacturing operations platform purpose-built for frontline operations makes it possible to apply governance and framework seamlessly—across both authoring (building digital solutions) and execution (running the production line). Unlike generic vibe coding environments, such a platform is designed specifically for process engineers and operations teams. It embeds an understanding of how manufacturing works—constraints, variability, compliance, and safety—and provides tools to build solutions quickly while remaining aligned with operational best practices. Most importantly, it enables the creation and deployment of operational agents at scale that are integral parts of the production system itself.

And underlying all of this is composability. Composability ensures that agents and solutions don’t exist in isolation, but as modular, human-centric, bottom-up elements of a system that adapts as needs evolve. Digital transformation success depends on treating composability not as an IT concept, but as an operational paradigm. In fact, composability was conceived for multi-agent systems (MAS)—and Agentic AI now makes that vision practical.

Final Thoughts

I think we all agree that Agentic AI, despite all the hype, is not a fad—it’s here, it’s real, and it’s already reshaping how work gets done. But to harness it safely and effectively, we need to recognize the two distinct scenarios where agents play a role:

  1. In building and engineering: where agents help create digital content, processes, and solutions. Here, governance ensures what gets built is valid, safe, and aligned with operational goals.

  2. In live operations: where agents support and even run production activities, interacting with machines, data, and humans in real time. Here, governance ensures reliability, compliance, and resilience in execution.

Both scenarios are powerful—they are needed but both also carry risks if unmanaged. And in manufacturing, where operations are complex and dynamic, small missteps can cascade into detrimental consequences.

That is why governance, framework, and platform are critical. Governance provides the rules and guardrails; the framework gives agents structure, goals, and alignment through the artefact model; and the platform operationalizes it all, giving process engineers and frontline teams the environment to deploy agents as integral parts of the production system, and all of this at scale.

And beneath it all is composability. As I’ve argued before, composability was conceived for multi-agent systems—modular, autonomous, and collaborative components working toward shared goals. Agentic AI now makes that vision practical on the shop floor.

The challenge ahead is not whether manufacturers will adopt Agentic AI, but whether they will do so in a way that balances speed with safety, democratization with discipline, and autonomy with alignment. Done right, it promises resilience, reliability, and the kind of productivity gains that digital transformation has always aspired to deliver.


Friday, September 12, 2025

A Composable Agentic Framework for Frontline Operations

Over the last year, “agentic AI” has shifted from emerging concept to practical conversation. Everyone is now talking about agents, and with today’s tools, building one has become every person’s business. But that raises a much bigger question: what agents should we build, and how do we organize them into something meaningful?

This topic is not new, and I have reflected on it through a number of lenses, e.g. Why IIoT is Transformative - it's not the technology! and The Genius of the Toyota Production System Explained. This post, however, offers my first reflections on what a Composable Agentic Framework could look like for operations in general and manufacturing specifically. It’s an attempt to give industry an initial perspective, a concept, and maybe some guidance for applying agentic AI to frontline operations in ways that improve both productivity and adaptability. Notice I say "initial": there is clearly much more to this topic that needs to be explored, discussed, and debated. In addition, the technology is nascent and we can expect much more sophistication and depth as it evolves.

Why Agents? Why Now?

For decades my peers and I, as manufacturing thinkers, have dreamed of holonic systems and fractal factories—production environments that adapt in real time, scale seamlessly, and continuously self-optimize. But until recently, that vision stayed in the realm of theory. The technology just wasn’t ready.

Now it is. Today’s digital platforms, IIoT connectivity, cloud infrastructure, and AI capabilities make it possible to realize this vision in practice. Instead of rigid, monolithic systems, we can now compose operations out of, and with, autonomous, collaborative agents that interact dynamically.

And this matters because composability is built on the idea that continuous, incremental improvements add up to transformation. What’s new is that agentic AI can accelerate these improvements beyond what we can imagine. Large language models and agentic frameworks have already proven their ability to supercharge productivity in domains like software development, research, and customer interaction. The challenge—and opportunity—before us now is to understand and define how to bring that same acceleration into composability and frontline operations.

Agents make this possible. Each agent is discrete, goal-oriented, and autonomous, yet designed to collaborate with other agents. When composed together, they create systems that flex, adapt, and continuously optimize.

In earlier posts, I’ve argued that digital transformation is not about IT and OT learning to coexist, but about creating a new whole where the distinction ceases to matter. That’s the essence of composability. Agents are the next step: a way to make that vision practical, modular, and scalable.

An Agent Framework for Composable Digital Solutions

The starting point is simple: in new digital operations platforms like Tulip, apps already behave like agents.

  • They have a clear goal (guide an operator, track a unit, log a machine event).
  • They operate autonomously within their context.
  • They collaborate with other apps and systems through shared data, triggers, and transitions.
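
These three properties (goal, autonomy, collaboration) can be made concrete with a minimal sketch. The `Agent` and `Message` names below are my own illustrative assumptions in Python, not a Tulip API:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the three agent properties as a minimal base class.
# Names (Agent, Message, inbox) are assumptions, not any platform's API.

@dataclass
class Message:
    sender: str
    topic: str
    payload: dict

@dataclass
class Agent:
    name: str
    goal: str                               # a clear goal
    inbox: list = field(default_factory=list)

    def act(self) -> None:
        """Autonomous behavior within the agent's own bounded context."""
        raise NotImplementedError

    def send(self, other: "Agent", topic: str, payload: dict) -> None:
        """Collaboration: exchange data and intent with other agents."""
        other.inbox.append(Message(self.name, topic, payload))
```

Any concrete agent (a Product Agent, a Machine Agent) would subclass this and implement `act` for its own bounded scope.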

Add AI into the mix, and these apps become agentic apps—supercharged digital teammates. And when multiple apps are composed together, they form a multi-agent system that mirrors the complexity of real operations, in other words they become digital twins in the true sense of the concept.

But in order for this "mirroring" to become a digital twin, we need structure and guidance, and this is where the framework comes in. Achieving it requires more than simply building agents and automations. Without a structured way to think about them, the risk is a proliferation of scattered apps and agents with no cohesive purpose—at best delivering little value, and at worst creating more complexity, reduced productivity, and even unsafe outcomes. The goal of this suggested framework is to provide the rules and design principles that ensure agents align toward a shared purpose. This reflects the essence of holonic structures: their power comes not just from autonomy, but from working together toward a common goal. That shared purpose is what makes them optimal, and what turns a collection of agents into a true composable digital solution that delivers measurable benefits.

In my earlier blog post “To Data Model or Not to Data Model”, I described the Artefact Model as a key element in composability: a scalable, flexible, interpretable representation of the operational system. The Artefact Model gives us the common context in which agents can interact—products, orders, machines, deviations, and operators all represented digitally and consistently.

A Perspective on Agent Types

Before diving into the types themselves, it’s important to recognize that agents serve two distinct scenarios in manufacturing:

Authoring / Building – Here, agents augment the creation process. They help engineers, developers, and even citizen builders design solutions faster and smarter. Think of them as co-pilots that propose app templates, generate artefact structures, suggest best practices, and automate repetitive setup tasks. These agents accelerate innovation and democratize solution-building.

Operations – Once deployed, agents act within day-to-day execution. They monitor machines, guide operators, coordinate workflows, manage deviations, and connect enterprise systems. These operational agents are the ones “living” in production, continuously working toward defined goals while collaborating with other agents and humans.

The framework presented here is focused specifically on the operations scenario. That being said, there are commonalities, and some agent types apply to both scenarios, but the context differs: in building, agents amplify human creativity and speed; in operations, they amplify execution and adaptability.

Agentic AI in operations enables a powerful ecosystem where "teams of experts" (agents) work together, demonstrating "collective intelligence". Crucially, operational agents, empowered by AI, transform what were once inanimate objects like machines and materials into active, intelligent members of the dynamic work environment. We are giving them the ability to be agents—autonomous and collaborative participants within the dynamic manufacturing operation network.

With that, let’s take a look at a shared taxonomy for the different types of agents in a composable agent framework:

Physical Agents: These agents are defined by their direct representation of physical manufacturing objects within the digital twin. They continuously mirror the real-world status, attributes, and behaviors of their tangible counterparts, enabling real-time monitoring, analysis, and control. Here are some examples:

  • Product Agent: Represents a specific product or unit throughout its manufacturing journey. Its goal is to track the product's individual status, quality parameters, and genealogy, providing a comprehensive digital record for each item produced.
  • Machine Agent: Serves as the digital twin of a specific piece of equipment or machinery on the shop floor. Its purpose is to monitor machine health and performance metrics (e.g., OEE, availability, performance, quality), and to predict potential failures, enabling proactive maintenance and optimized utilization.
  • Tote Agent: Represents a tote or device that carries and conveys product or material on the shop floor. This agent's role is to track the movement of the material or products it carries through the operation, facilitating traceability and location with ease.
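
As a worked illustration of the Machine Agent's metrics, the standard OEE decomposition is OEE = availability × performance × quality. The sketch below uses hypothetical field names and is not a Tulip API:

```python
from dataclasses import dataclass

# Illustrative Machine Agent computing OEE from mirrored machine state.
# Field names are assumptions; the OEE formula is the standard definition:
# OEE = availability x performance x quality.

@dataclass
class MachineAgent:
    machine_id: str
    planned_minutes: float      # planned production time
    runtime_minutes: float      # actual run time
    ideal_cycle_s: float        # ideal cycle time per unit, in seconds
    total_count: int            # units produced
    good_count: int             # units passing quality

    def availability(self) -> float:
        return self.runtime_minutes / self.planned_minutes

    def performance(self) -> float:
        # ideal time needed for actual output, divided by actual run time
        return (self.ideal_cycle_s * self.total_count) / (self.runtime_minutes * 60)

    def quality(self) -> float:
        return self.good_count / self.total_count

    def oee(self) -> float:
        return self.availability() * self.performance() * self.quality()
```

For example, a machine planned for an 8-hour shift (480 min) that ran 420 min, producing 800 units (760 good) at a 30-second ideal cycle, yields an OEE of roughly 0.79.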

Operational Agents: These agents are defined by their focus on tangible operational entities and processes used in manufacturing management. They manage the flow of work, information, and events, ensuring that manufacturing processes adhere to plans and respond effectively to deviations. Here are some examples:

  • Order Agent: Represents a specific production or work order. Its goal is to oversee the end-to-end execution of that order, tracking progress against the schedule, managing material consumption, and ensuring all required steps are completed.
  • Deviation Agent: Activated when a process or quality deviation occurs. Its purpose is to identify, classify, and manage the deviation, potentially initiating corrective actions, alerts, or escalation workflows to relevant personnel or systems.
  • Schedule Agent: Responsible for dynamically managing and optimizing production schedules. This agent works to ensure resources are efficiently allocated and production targets are met, adapting to real-time changes in machine status, material availability, or order priorities.
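
The Deviation Agent's classify-and-escalate behavior can be sketched as follows. The severity rules, escalation roles, and halt policy here are illustrative assumptions, not a prescribed workflow:

```python
from dataclasses import dataclass

# Hedged sketch of a Deviation Agent: classify a deviation and decide routing.
# All rules and role names below are illustrative assumptions.

SEVERITY_RULES = {
    "out_of_spec": "major",
    "missed_check": "minor",
    "contamination": "critical",
}

ESCALATION = {
    "minor": "line_lead",
    "major": "quality_engineer",
    "critical": "quality_director",
}

@dataclass
class Deviation:
    kind: str
    description: str

def handle_deviation(dev: Deviation) -> dict:
    """Classify the deviation and return corrective-action routing."""
    severity = SEVERITY_RULES.get(dev.kind, "minor")
    return {
        "severity": severity,
        "escalate_to": ESCALATION[severity],
        "halt_line": severity == "critical",   # only critical halts the line
    }
```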

System Agents: These agents are defined by their role in facilitating integration and intelligent interaction with broader enterprise-level systems and data repositories. They ensure data consistency, enable seamless workflow orchestration across different platforms, and provide access to critical business context. Here are some examples:

  • ERP Agent: Manages the flow of information between Tulip and the Enterprise Resource Planning system. Its function includes receiving work orders, reporting production updates, and managing material consumption and inventory levels in the ERP.
  • UNS Agent: Represents the integration with a Unified Namespace. This agent enables seamless, real-time data exchange across the entire operational landscape, ensuring that all systems have access to consistent and up-to-date information.
  • Data Lake Agent: Responsible for managing the ingestion of operational data from Tulip into a central data lake and enabling access to this data for advanced analytics and further AI model training. It ensures that the rich data captured by Tulip's composable applications is leveraged for broader insights.
  • Device Agent: Corresponds to a specific connected device, such as a scale, barcode scanner, or sensor. This agent's role is to facilitate seamless data exchange between the physical device and the Tulip platform, ensuring accurate data collection and enabling device-triggered actions.
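
To illustrate the UNS Agent's role, here is a toy in-memory topic bus. A real Unified Namespace would typically run on a broker such as MQTT; the topic hierarchy shown is an assumption, and only the publish/subscribe-with-retained-state pattern matters:

```python
from collections import defaultdict
from typing import Callable

# Toy sketch of a UNS-style topic bus, for illustration only. Agents publish
# state to hierarchical topics; subscribers receive updates; the last value
# per topic is retained (similar in spirit to MQTT retained messages).

class UnsBus:
    def __init__(self) -> None:
        self.subscribers: dict = defaultdict(list)
        self.retained: dict = {}        # last payload per topic

    def subscribe(self, topic: str, handler: Callable[[str, dict], None]) -> None:
        self.subscribers[topic].append(handler)

    def publish(self, topic: str, payload: dict) -> None:
        self.retained[topic] = payload
        for handler in self.subscribers[topic]:
            handler(topic, payload)
```

In this picture, a Machine Agent might publish to a topic like `site/line1/machine1/state`, and any other agent subscribing to that topic stays consistent with it.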

Staff or Companion Agents: These are a general type of agent that augments the human's ability to find information, research topics, suggest improvements, and perform tasks. They are used in a variety of scenarios and serve as utilities in both the operational environment and the engineering or builder environments. Here are some examples:

  • Quality Research Agent: Quickly finds documentation, suggests troubleshooting steps to an operator, or summarizes a quality history for a supervisor.
  • App Builder Agent: Generates app templates, proposes table structures in the Artefact Model, or scaffolds connectors based on device specs—accelerating citizen developers and engineers.

What unites these agent types in this framework is three core properties: each has a goal, operates autonomously within a bounded scope, and is collaborative—able to exchange data, signals, and intent with other agents and humans.

Crucially, these agent types do more than align with the Artefact Model — they enhance it. By consistently representing artefacts (products, resources, orders, deviations) in real time, agents enrich the shared digital twin with actionable state, decisions, and provenance. That enriched Artefact Model becomes the lingua franca that lets agents interoperate reliably, enables composition, and prevents the classic failure mode: a landscape of scattered apps and ad-hoc bots with no unifying purpose.
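
One way to picture agents enriching the Artefact Model with state, decisions, and provenance is a shared artefact record that logs which agent changed what, and when. The field names here are illustrative assumptions:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative sketch: a shared artefact record that agents update, with each
# update carrying provenance. Field names are assumptions, not a real schema.

@dataclass
class Artefact:
    artefact_id: str
    kind: str                               # e.g. "product", "order", "machine"
    state: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

    def update(self, agent: str, changes: dict) -> None:
        """Apply an agent's changes and record who changed what, and when."""
        self.state.update(changes)
        self.history.append({
            "agent": agent,
            "changes": changes,
            "at": datetime.now(timezone.utc).isoformat(),
        })
```

Because every agent writes through the same record, the artefact's history doubles as the genealogy and audit trail the digital twin needs.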

In short: agents must be designed to work as an integral part of the Artefact Model, so that autonomy and collaboration add up to a cohesive, safe, and value-driving digital twin.

Beyond Incrementalism: The Future of Multi-Agent Collaboration Frameworks

Much of what we know about manufacturing improvements has historically been driven by incrementalism—step-by-step gains in efficiency, quality, or throughput. This mindset is not wrong; in fact, it is the foundation of continuous improvement and the heart of lean thinking. But incrementalism alone can only take us so far. To thrive in today’s volatile and complex operating environments, we need systems that don’t just get gradually better but can adapt dynamically to new conditions.

This is where multi-agent collaboration becomes transformative. Agents, by design, are autonomous but collaborative, and when they interact at scale, they exhibit something greater than the sum of their parts: collective intelligence.

The result is emergence and self-organization: a system-level intelligence and adaptability that was not explicitly programmed into any single agent. Emergent behavior is what allows multi-agent systems to flex and reconfigure in response to disruptions, market changes, or unexpected events. This is not just automation—it’s a new layer of operational intelligence applied directly to the frontline.

But to realize this potential, we must also reconsider the frameworks that structure our digital manufacturing systems. Such a composable, agentic world—where apps act as agents and operations are orchestrated by multi-agent systems—doesn’t fit neatly into the definitions of traditional manufacturing systems standards and hierarchies (see my earlier post, OK, Let’s Talk ISA-95).

That doesn’t mean throwing standards away, but it does mean rethinking or adapting them to this new reality. If emergence is the key to adaptability, then our models and standards need to evolve to describe systems that are dynamic, distributed, and composable rather than hierarchical and rigid.

In short: incrementalism is still essential, but it is no longer sufficient. Collective intelligence, powered by agents and guided by frameworks like the Artefact Model, is what will enable manufacturing to achieve adaptability at scale—and truly fulfill the promise of digital transformation.

Final Reflections

Building, authoring, and creating agents is within everyone’s reach. But without structure and guidance, we risk ending up with a fragmented landscape of apps and bots—scattered efforts that deliver little value, or worse, add complexity, reduce productivity, and even create unsafe outcomes.

My hope is that the initial framework presented here provides that needed guidance. It should help us define what kinds of agents to build, how to compose them into systems, and how to ensure they align with a unifying purpose. It grounds agentic design in Composability and its Artefact Model, ensuring that agents not only adhere to but actively enhance the shared digital twin. This alignment is what keeps autonomy and collaboration from drifting apart and turns them into something greater: collective intelligence with emergent adaptability.

This is why composability and agent frameworks matter. They give us the structure to channel autonomy toward common goals. They guide us on the path to increased productivity with adaptability. And they point to the need for new thinking in our standards and models—beyond the rigid boundaries of monolithic approaches, toward a more dynamic and composable reality.

In the end, the promise of agentic AI in operations is the newest step in the digital transformation journey: Continuous Transformation - reinvention at scale of manufacturing operations.

And this is just the beginning. There is so much more to explore, define, and refine. I invite you—industry peers, practitioners, and thinkers—to engage in this discussion and debate. Let’s shape together what a composable agentic framework should look like in practice. After all, it takes a village... 

Tuesday, September 2, 2025

IT/OT Convergence: Still Vague, Still Critical

For years, IT/OT convergence has been a recurring theme in digital transformation conversations. It’s almost become a cliché. Everyone agrees it’s important, but few can define it clearly, and every company seems to have its own “flavor.”

That vagueness is both a challenge and an opportunity. IT/OT convergence is not just about technology stacks, data pipelines, or network architectures. It is about organizations, people, and how digital capabilities become part of the fabric of operations. And in the context of continuous transformation, this conversation remains more relevant than ever.

Why IT/OT Convergence Matters in Continuous Transformation

Digital transformation is not a one-time project—it’s a continuous process of adapting, learning, and embedding new technologies into how we operate. In that context, IT/OT convergence is essential.

Why? Because transformation cannot happen in silos. The systems that plan and account (IT) and the systems that control and execute (OT) must work together seamlessly. If they remain separate—organizationally, technologically, or culturally—you end up with fragmentation that slows down transformation instead of enabling it.

Can We Define It? And Why Does That Help?

One of the reasons IT/OT convergence feels vague is because it is often reduced to a technical exercise—connecting networks, integrating databases, or sharing dashboards. But that misses the bigger picture. To make it actionable, we need a broader and more ambitious definition.

At its core, IT/OT convergence is about making IT and OT inseparable. Not aligned, not just integrated, but merged into one digital foundation for the business.

That means:

  • Integration of technology across the entire operation—from planning and engineering, to execution on the shop floor, and even to customer-facing processes. IT and OT must form a continuous digital thread that spans the lifecycle of design, production, quality, logistics, and service.

  • Merging organizational roles and responsibilities—so that IT and OT aren’t two camps negotiating interfaces, but one team co-owning outcomes. The boundaries blur until it becomes irrelevant whether a capability was once “IT” or “OT.”

  • Embedding digital practices into operations—so technology isn’t an external tool to be “applied” to operations, but a core element of how the organization works, improves, and creates value.

One of the most dangerous misconceptions in digital transformation is treating technology as an external layer—something added on top of operations. This is what has been going on for decades, born out of the necessity of dealing with highly complex monolithic systems. It is a model that creates friction, and that friction is detrimental to progress - it will suffocate any digital adoption initiative.

Defining convergence in this way is helpful because it reframes the conversation: it’s not about how to connect two separate worlds, but how to design an organization where there is only one world. That shift in mindset is what makes IT/OT convergence transformative.

For digital technology to be impactful, it must be embedded into the way work is done at all levels: from the operator on the line, to the planner in the back office, to the leadership team setting strategy. IT/OT convergence makes this embedding possible.

When data, insights, and digital tools flow seamlessly across operations, technology doesn’t feel like an “extra.” It becomes integral to how people work, decide, and improve.

The Composability Pillar of Agile Operations

Finally, IT/OT convergence is inseparable from the principle of composability. To be agile, organizations need technologies that can be composed, reconfigured, and adapted as needs change.

That means convergence cannot be separated—neither organizationally nor by use. If IT and OT are treated as distinct silos, agility suffers. But when convergence is embraced, composable technologies support operational excellence: flexible enough to adapt, strong enough to sustain, and aligned enough to deliver value across the enterprise.

How Do We Know When It’s Complete? And Does That Matter?

Here’s the truth: IT/OT convergence is never “complete.” Like continuous transformation itself, it’s an ongoing journey. Technologies evolve, organizational structures shift, and new business challenges arise.

The goal is not to check a box that says “converged.” The goal is to continually deepen the integration between IT and OT so that technology becomes invisible—it simply is the way you run operations.

So whether or not it’s ever “done” is less important than whether it’s continuously evolving to support value creation.

Moving Beyond the Buzzword

So, is IT/OT convergence vague? Absolutely. But it’s vague not because the idea lacks merit—it’s vague because it was born out of conflict.

IT and OT have so far been separate domains, each with its own responsibilities, budgets, and power structures. IT managed enterprise systems, data security, and corporate standards. OT managed the machines, processes, and operational continuity. Bringing the two together is not just a technical exercise—it’s a challenge to established authority.

That’s why IT/OT convergence often feels like a “hot potato.” Nobody wants to own it fully because it requires organizations to do things that are uncomfortable:

  • Merging organizations that were once distinct.

  • Relinquishing power as decision-making becomes more distributed.

  • Diminishing rigid responsibilities as democratized technologies empower more people to contribute to digital solutions.

At its heart, convergence means releasing control—accepting that digital technologies are no longer the sole domain of one function, but a shared capability that belongs to everyone.

And that’s hard. It’s hard for people who have built careers around defending their territory. It’s hard for organizations that have optimized themselves around silos. It’s hard because it requires a cultural transformation just as much as a technological one.

But here’s the truth: without this convergence, transformation halts. You cannot achieve continuous transformation if half the organization is innovating in isolation while the other half is protecting legacy boundaries. The result is friction, fragmentation, and failure to capture the value that digital technologies promise.

This is why IT/OT convergence—however uncomfortable, however vague—remains critical. It is the cultural and organizational foundation on which digital transformation rests.

In the end, convergence is not about IT and OT learning to work better together. It’s about creating a new whole where the distinction ceases to matter. That is the mindset shift. And until organizations embrace it, “transformation” will remain more slogan than reality.