Monday, December 29, 2014

A Tale of Two Factories: Chapter 1


How many applications do plant floor personnel need to learn to do their job?  The title of my blog is Dickensian enough, so I won't open with a cheesy "best of times, worst of times" parody.  This is the first in a series discussing the impact of technology, production processes, and the place where they intersect – the production workers on the factory floor.  The story begins with two plants – Deuxieme Botte Manufacturing (DBM) and Premiere Chaussure Industries (PCI).  We are introduced to two workers: Don is a maintenance technician at DBM, and Brad has a similar position at PCI.  Let's take a look at a typical morning for each of them.

Morning at DBM

Don began his day just like every other; he stopped by his desk to see what work orders he needed to complete during his shift.  He found three in his inbox – all preventive maintenance work.  He picked the one he knew would take the longest, popped his head in the maintenance supervisor's office and said "'Mornin' Jeff – I'm headed off to Assembly Machine 3 to take care of the PM work."
"What work order number is that?" asked Jeff.  "I need to make sure it's entered into the maintenance dispatch system."
Don knew the WO was already in the system, but read it off the paperwork for Jeff anyway.  "Yep, it's there!" Jeff said.  "Let me know when you're done and I'll update the status in the system."  "Will do" said Don as he headed out the door.  As he pushed his tool cart to the assembly lines, he began reading the details of the work order. "Hmmm, guess I'll need to stop by MRO to pick up the parts kit for this PM."
When he got to MRO, Don showed the attendant Debbie the work order and asked for the part kit. "I'm going to have to look this up by kit number to make sure we give you the right one – you know, correct storage bin number and all" she said. As he waited, Don thought about what an improvement it was getting PM parts pre-kitted over requesting each part individually like they used to.
"Hey Deb – you going to be long?" ("Maybe there's time for coffee", he thought to himself.)
"Well, the system seems a bit slow today.  This usually only takes a few minutes, but for some reason it's taking longer today!  Finally – ", she said with dramatic flair, "- here it is.  Could you fill out the requisition form using kit #80054124?"
"Oh great – I don't have a charge number on this work order. Deb, do you happen to know the department charge for Assembly?"
"I think I have it on a list here somewhere…here it is", she says.
"Thanks Deb.  I don't know why we can't just use the work order number instead of a department charge. Life would be so much easier."  He completed the paperwork and returned it to Deb.
"We can ask IT to make that change, but you know how long that will take" Deb replied.
"Yeah, I love my job" muttered Don.  He was being only partially sarcastic; he mostly enjoyed his work but it seemed to him that more and more of what he did was to feed a system that really only provided benefit to the bean counters somewhere in the front office.  Things that would make his life easier got "prioritized" and placed in a queue until IT could assign resources to make the necessary system changes.
Deb laughed and handed Don the part kit, and he was off to the Assembly area.  About 30 minutes into the job, he happened to notice Jeff approaching.
"Hey Don!  I've been trying to get ahold of you! We've got a machine down in Parts and I need you to get over there right away!  Why didn't you answer your cell phone? We gave them to you so we could reach you immediately!"
Don looked at the phone sitting on his tool cart.  The PM job required reaching into some tight spaces, so he had taken the phone off his belt.  Assembly is a noisy area, so he couldn't hear the phone ring. "I love my job", he thought to himself. "OK – give me two minutes and I'll be there."  He looked for the nearest Maintenance Dispatch display board to see which machine was down, gathered up his tools and was off.

Morning at PCI

Brad stopped by his locker and fit his Bluetooth-enabled personal protective equipment – which interacted with his smart device – over his eyes and ears. He used to have a desk, but since he hadn't really used it in over a year he had given it away.  "Good morning, Smarty" he said to his personal digital assistant.  He named the device "Smarty" because he didn't want to play favorites among the competing technologies, and it really was a smart device.  It interfaced directly with the enterprise systems and actually helped him do his work. "What's on my 'to-do' list for the day?" he asked.
The device responded "Good morning Brad.  The first job on your list is a preventive maintenance task on Assembly Machine #4. The part kit was delivered to the department and is waiting by the machine, and I have the procedure list ready when you need it.  Would you like to hear about the other tasks on your list?"
"Later Smarty. I'm off to work on the assembly machine." Brad grabbed his tools and headed directly to Assembly Machine #4 and began working.
After some time, Smarty interrupted Brad.  "Excuse me Brad, but your assistance is needed. An operator has reported a parts machine down.  The upstream buffer for this machine has space available, but the downstream buffer will be empty in 15 minutes. This will cause other processes needed to meet schedule to shut down, which elevates the priority over the PM task you are currently working on."
"OK Smarty – which machine is it?" asked Brad.
A plant map appeared in Brad's protective eyewear; the breakdown glowed red on the map, along with a green routing from Brad's current location to the machine requiring his attention. Brad first made sure his lockout/tagout (LOTO) was secure on the assembly machine and then headed for Parts, where he was greeted by Betty, the machine operator.  "Hi Brad.  Wow – that was fast!" she said. "I just reported the breakdown five minutes ago!"
"Yeah, Smarty here told me it was kinda urgent" said Brad, pointing to his smart device.

Two Worlds

There are many comparisons to draw between Don's world at DBM and Brad's at PCI. Both have similar systems in place, but in Brad's world the artifacts of the technology – things such as work orders and department charges – are invisible at the worker level. Even the systems themselves are not apparent to the technician.  He has one interface – his smart device – which ties all the enterprise systems (such as ERP, CMMS, MES/MOM, and PLM) together.  They collaborate to help him do his job.  In contrast, Don's world is so cluttered with these artifacts that he cannot imagine a world without them. He has log-ins for all of the systems, and knows a few transactions in each.  He even keeps instructional "cheat sheets" in his tool cart for reference when he accesses systems on the plant floor.  He may have to search for an available PC on the floor to interact with the enterprise systems, which is a "hit or miss" proposition; an operator may have the PC tied up doing quality checks, or a supervisor may be performing a downtime study during his gemba. Feeding the technology has been bolted onto his work processes.
Which factory is more competitive?  Which factory is seeing greater technology ROI? The world of the PCI factory seems like science fiction, but every technology that enables it currently exists and is readily available.  Many can identify with (and many more may even be envious of) the world of DBM, but cannot see the path to the PCI world.  What strategies are in play?  Who owns them? Who drives them?  How did management become convinced to fund this approach? What additional technologies are needed, or how can existing technologies be re-deployed? These questions will be explored in future postings in this series.

Monday, December 1, 2014

Downtime Follies

The journal "Academy of Management Executive" published a classic article entitled "On the folly of rewarding A, while hoping for B" by Steven Kerr (1995, Vol 9. No. 1).  The article explains why we end up with a less desirable state when we really want something better.  Kerr covers a wide array of sub-optimal situations: medicine, consulting, sports, government, and business.  He concludes that the reward system is at the heart of the issue; there are organizational forces which reward the less desirable behaviors and essentially starve the more desirable ones.  Jim Collins opens his book "Good to Great" with a similar thought: "Good is the enemy of great." We want things to be better, but we continually accept and reward the mediocre.  This is no less true on the plant floor than it is in other areas of life, and asset downtime provides a prime example. 

Asset Performance

This posting is not specifically about machine downtime, but about how people respond to downtime and the consequences of their decision-making processes.  Consider the following graphic:
[Figure: running and stopped states over time for two identical machines, A and B]
It shows two identical machines in running and stopped states over time (the horizontal axis).  Note that OAE is short for "overall asset effectiveness", a metric that is essentially the same as OEE, or overall equipment effectiveness.  In this example, total running time is the same, production is the same, and asset effectiveness is the same (i.e. quality and schedule are equivalent between machines).  Which machine performs better isn't obvious from these metrics alone, but common sense says Machine A is more desirable than Machine B.  It has higher reliability (greater mean time between failures, or MTBF) even though the individual downtime periods are longer.  Since it has fewer downtime events, it is easier to address the downtime causes.  Machine A also has lower operating costs – starting and stopping equipment is inefficient and costly.  Frequent starts and stops also create more opportunities for quality issues, and of course more wear on the equipment, which increases maintenance requirements. How is it we end up with assets performing more like Machine B, with more frequent downtime events but faster recovery (shorter mean time to restore, or MTTR)?
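To make the contrast concrete, here is a minimal sketch (my own illustration, with hypothetical numbers) of how MTBF, MTTR, and availability fall out of a downtime event log; note that the two machines below have identical availability but very different reliability profiles:

```python
def reliability_metrics(total_time_min, downtime_events_min):
    """Return (MTBF, MTTR, availability) from a list of downtime durations."""
    total_downtime = sum(downtime_events_min)
    uptime = total_time_min - total_downtime
    failures = len(downtime_events_min)
    mtbf = uptime / failures            # mean time between failures
    mttr = total_downtime / failures    # mean time to restore
    availability = uptime / total_time_min
    return mtbf, mttr, availability

shift = 480                 # one 8-hour shift, in minutes
machine_a = [30, 30]        # two long stops
machine_b = [6] * 10        # ten short stops

for name, events in (("A", machine_a), ("B", machine_b)):
    mtbf, mttr, avail = reliability_metrics(shift, events)
    print(f"Machine {name}: MTBF={mtbf:.0f} min, MTTR={mttr:.0f} min, availability={avail:.1%}")
```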

Unwritten Rules

Let's examine some of the unwritten rules which dominate the production environment:
  • Rule 1: Downtime is bad.
  • Rule 2: Long periods of downtime are worse than short periods of downtime.
  • Rule 3: People should not be idle in the production environment.
All three rules are highly logical; who would question them?  Why would a manufacturer invest capital in equipment that isn't producing, and why pay people to stand around?  But notice these are also exactly the rules that drive behaviors which cause more frequent but shorter downtimes.  Technicians are rewarded for reducing downtime duration, not for maintaining equipment reliability.  As long as production schedules are being met and asset effectiveness metrics are maintained, equipment reliability and process predictability are of little concern.  I'm familiar with one case where process MTBF was less than 2 ½ minutes, but because the machines could usually recover without operator intervention the failures were not even noticed (MTTR was around 5 seconds).  The MTTR frequency distribution was highly skewed toward the lower end, with median MTTR around 2 seconds.  In short, the machine was effectively unreliable, stopping frequently but recovering quickly, and the operators were essentially unaware of the downtime.  Since this type of performance became accepted practice, budgeting and planning were built around the sub-optimal operation and there was little incentive to change.

The Point

An organization which embarks on an operational excellence initiative must be willing to challenge the unwritten rules which govern plant floor behavior because they have unintended consequences.  There are no business drivers which would cause anyone to consciously choose Machine B over Machine A, but that is exactly what ends up happening as a result of these cultural imperatives.  The ability to measure downtime does not automatically translate into better asset effectiveness, but it should cause the organizational introspection and culture changes which do.  The folly of accepting B while expecting A comes in rewarding inefficient processes because they fit into a flawed set of beliefs.

Wednesday, November 26, 2014

Mind the Gap

"Mind the gap" is a phrase that originated in the Lon​​​don Underground (subway system).  It is intended to warn riders of a potential hazard that exists between the railcar and the platform.  People unaware of this hazard can stumble, trip, and be injured.  The phrase has also become the title of a popular BBC America blog that warns of the differences between the British and American cultures.
There is a gap in most manufacturing operations of which we should be aware as well.  Continuous improvement strategies such as Lean and Six Sigma have done much to increase the effectiveness of manufacturing operations.  However, there remains a gap that practitioners of these techniques frequently overlook: the power and capabilities of available technology are not reflected in the implemented processes.

The Gap

Many manufacturing processes have been implemented with an incomplete understanding of the capabilities of existing technology.  The term "process" here is used broadly: it refers not only to the value-adding steps of transforming incoming materials to outgoing products, but all the interactions of people to make this transformation happen.  In his book "Competitive Advantage", Michael Porter introduced the concept of the "value chain", which describes how an organization delivers value.  Porter's chain is similar in concept to other lifecycle models, tracing value from origin (inbound logistics) to termination (product service). Each link in the chain has its own internal "chains" (or "value streams" in Lean terminology), and each of these has its own set of processes.  Undergirding all of these processes are foundational services provided by organizational infrastructure, HR management, and technology.
Figure 2: The value chain ("Competitive Advantage", Michael Porter, 1985)
Typical kaizen and DMAIC approaches will involve people whose upstream or downstream processes intersect with the "as-is" or "to-be" state (the horizontal axis of Porter's value chain), but frequently do not include the people who support the enterprise systems (the vertical axis).  This eventually leads to a disconnect between process and technology – the "gap".

An Example

Every manufacturing plant has some form of maintenance, repairs and operations (MRO) inventory.  Responsibility for this inventory is usually given to the Plant Maintenance department, or sometimes to Engineering.  Initial stocking levels are determined using processes such as "reliability-centered maintenance" (RCM) analysis, but ongoing stock levels are determined based on experience and governed by simple rules such as "never run out of critical parts".  Maintenance and engineering people are not usually trained in inventory management techniques, so concepts such as "safety stock" and "service level" are foreign.  Rules around when to re-order and order quantities may be tribal knowledge, and those rules may not take into account changes in lead time or material consumption patterns over time.  This inevitably leads to excess inventory.  Other areas of the business have usually implemented consumption-based planning and have material requirements planning (MRP) technology readily available which could be used for MRO, but the people responsible for MRO do not know the full capabilities of the technology.  There's the gap, and the plant leaks money through it – almost unnoticed – in the form of unneeded inventory and carrying costs.  This gap can be closed with simple training and modifications to the existing processes (and perhaps bulk updates to the material master records in the ERP to take advantage of the MRP functionality).
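To give a sense of what those "foreign" concepts involve, here is a hedged, textbook-style sketch of a reorder-point calculation for a single spare part; the demand figures and service level are hypothetical, and an MRP system performs this kind of arithmetic automatically once the material master data is maintained:

```python
from math import sqrt
from statistics import NormalDist

avg_weekly_demand = 3.0    # spare parts consumed per week (assumed)
demand_std_dev = 1.5       # week-to-week variability (assumed)
lead_time_weeks = 4.0      # supplier lead time (assumed)
service_level = 0.95       # target probability of not stocking out

z = NormalDist().inv_cdf(service_level)             # ~1.645 for a 95% service level
safety_stock = z * demand_std_dev * sqrt(lead_time_weeks)
reorder_point = avg_weekly_demand * lead_time_weeks + safety_stock

print(f"Safety stock:  {safety_stock:.1f} units")
print(f"Reorder point: {reorder_point:.1f} units")
```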
This is just one of many examples of the process/technology gap; many more exist.  Some gaps, when closed, have the capability to totally transform the organization.  Examples of this may be found in strategic initiatives such as Product Lifecycle Management (PLM), Manufacturing Operations Management (MOM), and Business Process Management (BPM).  Others, such as the MRO example, simply require knowledge-sharing and minor process tweaks.

Closing the Gap

Unlike riders of the London subway system, who intuitively (or perhaps from an unfortunate experience) know to step over the gap, those immersed in the day-to-day activities that surround a process/technology gap can find it very difficult even to recognize that the gap exists.  They have adapted to the processes and found ways to make things work.  They may be aware that processes are sub-optimal, but cannot place any priority on improving things because they work well enough to keep production running.  This applies to internal analysts as well as managers: they are just too close to the situation to see the process/technology gap.  It has become part of the fabric of operating the business – the way things get done.

Awareness

The key to closing a gap and recovering business value is to recognize that a gap is real, even though those within the organization may not see it.  Most manufacturing executives have a vague sense that something is missing and that technology can help, but until a situation changes from a perceived minor annoyance to an actual pain-point they cannot assign priority to investigate improved methods.

Networking

There are many organizations which can help identify gaps and recommend strategies to prevent future gaps from forming.  Some are manufacturing industry peer networks (IPNs) where non-competing organizations interact to share best practices; according to an MIT/Sloan Management Review research article (Winter 2006), these IPNs help address issues categorized as myopia and inertia – the underlying causes of the process/technology gap.  Other forms of networking include professional organizations such as MESA International, IEEE, and ISA.

Consultants

An outside consultant can be an extremely valuable asset.  A good consultant brings value that exceeds the cost of his or her services by at least a factor of ten. He or she brings a perspective that cannot be found among those enmeshed in the daily workings of the business, including the unwritten rules that allow its processes to operate.
[Cartoon: "Too Busy to Improve"]

Tuesday, November 11, 2014

Control Engineering - The Next Level

While I have not investigated university-level engineering curricula recently, I have the impression that most engineers who work with industrial controls graduate with some level of competence in computer programming. I don't want to diminish the value of the skills these engineers possess, but I do want to make an observation: an engineer who writes software is not the same thing as a software engineer.  This observation comes not just from personal experience working with industrial control developers for many years, but from conversations with other professionals with similar experiences.
Some comparisons:
  • An engineer who writes software believes the current revision of the source code is whatever is currently running on the controller; anything on a PC or network storage is merely a snapshot and should not be relied upon to behave the same way as the version running on the controller.  A software engineer believes the current revision of the code resides in a version control repository with official release records; if what is running on the controller varies from the official version, it is considered an illegitimate release.  (A minimal verification sketch follows this list.)
  • An engineer who writes software is highly pragmatic; the goal is to make a process or machine work.  A software engineer accepts the goal of making the process/machine work as one of numerous requirements to ensure optimal business value.
  • An engineer who writes software sees each project as a unique problem set requiring a unique solution.  A software engineer looks for design patterns and implements reusable objects/modules to not only provide a solution for the current problem set, but to reduce the cost of future solutions as well.
  • As long as production and regulations are not impacted, an engineer who writes software may not feel constrained by existing standards.  A software engineer embraces standards because they improve code maintainability and reusability, ease enterprise integration, and reduce the life-cycle cost of the solution.
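The version-control discipline in the first bullet can be made tangible with a small verification step after every controller upload. The sketch below is a hypothetical illustration (the file names, manifest format, and export mechanism are assumptions, not any vendor's API): the program exported from the controller is hashed and compared against the officially released version recorded in a manifest kept under version control.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path):
    """Hash the exported controller program file."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def verify_release(uploaded_export, manifest_file, controller_id):
    """Compare an uploaded controller export against the recorded official release."""
    manifest = json.loads(Path(manifest_file).read_text())
    expected = manifest[controller_id]["sha256"]
    actual = sha256_of(uploaded_export)
    if actual == expected:
        print(f"{controller_id}: matches official release {manifest[controller_id]['version']}")
    else:
        print(f"{controller_id}: does NOT match the official release - investigate!")

# Hypothetical usage:
# verify_release("plc17_upload.l5x", "release_manifest.json", "PLC-17")
```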

Why it Matters

Before the proliferation of PLC-based controls, production equipment was driven by relay logic.  After the introduction of microprocessor-based controllers, ladder logic was implemented to provide a familiar "relay emulation" approach to control programming.  In the relay-driven world, getting the machine to function correctly was the main requirement – this did not change with the introduction of the PLC.  But PLCs have vastly more capabilities than relay-driven control systems, and ladder logic was eventually folded into the IEC 61131-3 standard, which also provides higher-level programming languages.  Then came graphical user interfaces and a variety of SCADA tools offering even greater capabilities, but with added layers of complexity as well.  With complexity comes cost and the need for management.

Controlling Costs

Accounting practices vary from organization to organization, but source code (whether for industrial control or any other function) can be considered a capital asset.  Studies have shown that the initial cost of development is 20% or less of the total life cycle cost.  This has implications for everyone developing software: find ways to optimize your organization's software development practices to reduce the long-term costs associated with maintenance.

Risk Management

In addition to maintenance costs, business risks must also be taken into consideration.  Relay-driven equipment is not threatened by internet viruses, but nearly every PLC or PAC being sold today has an Ethernet connection, and the number of controllers being connected to the enterprise network grows daily.  There are also downtime risks associated with controller failure; if the "current software release" is only available on the controller on the plant floor (as opposed to a version control repository), it will take much longer to get production going again after a controller failure. 
There are also risks associated with system upgrades.  Many modern control platforms have a server-based footprint in addition to the controllers on the plant floor.  Control engineers not only need to be concerned with firmware revision management on the plant-floor controllers, but must also be concerned with operating system patches in the server room.

Enterprise Integration

Controllers are becoming a bigger part of the manufacturing business, because they contain valuable production information, including production history, machine health, asset effectiveness, parameter change history, and more.

In Summary…


All of these factors point to the need for controls engineers to become proficient software engineers rather than engineers who write software.  It would be easy to place the accountability for this transition on management, but this really is more a leadership challenge than one of management.  Control system professionals cannot simply accept the status quo; they must actively seek to improve, and they should not wait for management to direct them.  If anything, they should be teaching management what changes are necessary to meet the business demands.  Standards (such as ISA-88, ISA-95, and ISA-99) exist that help establish the necessary disciplines; it's time for software engineers to step up in industrial controls!

Tuesday, October 28, 2014

Big Data in Manufacturing: Is the Emperor wearing clothes?

Today I attended LNS Research's "Global State of MOM" webinar; quite a bit of good information that I'm still digesting (kudos to Matt Littlefield & company).  There is a bit I find hard to swallow though – Big Data in manufacturing.  Perhaps I'm just so far out of the loop on this that I'm just not comprehending the obvious, but I can't see a real business case for Big Data in the manufacturing environment, nor can I see plants investing in the infrastructure required to support Big Data solutions.
Big Data is (of course) making a big splash in the business press.  In June of 2014, Forbes magazine quoted several sources stating that Big Data analytics, services, and infrastructure will grow at a 30% rate over the next five years – what software, hardware, or integration vendor wouldn't want part of that pie?  But beyond the buzz, I question if there is real business value for Big Data at the plant floor level.
Manufacturing has always produced volumes of data; SPC, batch records, lot traceability, maintenance records, machine down time, material flow, root cause analysis, design of experiments – the list is huge.  But "a great deal of data" isn't the same thing as "Big Data"; you don't need Hadoop and MapReduce with petabytes of storage on multiple servers and ultra-high speed networking to deal with manufacturing data analytics.
Sorting through some of the reasons given for interest in Big Data during the LNS webcast:
  • Better forecasts – This has some potential, but the data isn't generated on the plant floor.  This information comes from the marketplace, and the need for better forecasting isn't unique to manufacturing.  Better visibility into customer demand in near real-time should result in better capabilities to collaborate with suppliers, warehouses, and logistics.  But is "Big Data" really the answer for improving forecasting at the plant level?  Maybe I just need to see a good example of this in practice, but until then I remain skeptical.
  • Better understand multiple metrics – I suspect this is more related to Enterprise Manufacturing Intelligence (EMI) tools than Big Data, and that there is confusion in the user community distinguishing between analytics and Big Data.  I could be wrong, but I don't think software vendors are doing much to clear up this confusion.
  • Service and support customers faster – I would examine the existing business processes first before implementing a Big Data solution here.  I don't believe the lack of actionable information is what's causing service/support issues.
  • Real time manufacturing analytics – Again, I think there's confusion between Big Data and analytics.  MES/MOM, Data warehouses, and historians are sufficient for this; does a plant really need a Big Data infrastructure to provide analytical insights?
  • Correlate manufacturing and business performance – Honestly, I don't know why this is different from "Better understand multiple metrics" and "Real time manufacturing analytics".  Aren't these things done to ensure correlation of manufacturing and business performance?
At this point, I remain a solid Big Data curmudgeon, hoping someone more enlightened will share their insights.  The folks at A.T. Kearney have stated "While this massive wave of [Big] data promises to transform both top and bottom lines, few organizations have been able to operationalize and monetize this promise for their enterprise."  I believe efforts to integrate Big Data into manufacturing will prove this true.

Monday, October 20, 2014

Business Value and the HMI

This blog post may be sailing into stormy waters, but today I'm putting on my bean-counter hat and challenging conventional wisdom with regards to human-machine interfaces, or HMI for short.  This post is somewhat in response to the LinkedIn article "Look at all them pretty pictures" by Gerhard Greeff of MESA International. (As a side note, if you are in the manufacturing automation industry and are not following Gerhard on LinkedIn, you are the poorer for it!)  But my hope is that we can start a conversation about the true business value of graphical user interfaces in today's manufacturing environment.

Lifecycle Cost

How much are you paying for your HMI?  As an example, consider a new machine build for a discrete-unit tray loader; the company had many older versions of this machine, but this one would be built with a modern PLC and touch-screen graphical user interface (GUI).  Implementation of the HMI took over 300 man-hours, and training operators, electricians, and mechanics to use the new interface took even more time.  Then came maintenance costs associated with making the complex GUI "more user-friendly".  Using this example, lifecycle cost = initial cost + integration cost + maintenance cost.  (NOTE: Writing the actual control software for the PLC took significantly less time than creating the HMI!)
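As a back-of-the-envelope illustration of that equation, here is a sketch using the 300 implementation hours from the example above; the hourly rate, training effort, rework effort, and service life are purely assumptions:

```python
HOURLY_RATE = 95.0            # assumed blended engineering/trades rate, $/hour

implementation_hours = 300    # from the tray-loader example
training_hours = 120          # assumed: operators, electricians, mechanics
annual_rework_hours = 40      # assumed: making the GUI "more user-friendly"
service_life_years = 10       # assumed machine life

initial_cost = implementation_hours * HOURLY_RATE
integration_cost = training_hours * HOURLY_RATE
maintenance_cost = annual_rework_hours * HOURLY_RATE * service_life_years
lifecycle_cost = initial_cost + integration_cost + maintenance_cost

print(f"HMI lifecycle cost: ${lifecycle_cost:,.0f}")   # about $77,900 with these assumptions
```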

Business Benefit

What value are you getting for your HMI investment?  One way to approach this is to look at the incremental value provided by the GUI over non-GUI equivalents.  This was fairly easy to do in the unit tray loader example, because there were several implementations of the non-GUI machines with years of run-time.  The examination proved disappointing from a business perspective; there was no measurable benefit from the GUI-based UI.  In fact, the opposite was true; operators, mechanics, and electricians found the user interface confusing and non-intuitive.  There were even requests to replace the GUI with the older push-button panels found on the other machines.  The expected benefits of the graphical HMI included faster diagnostics when downtime occurred, better insight into machine performance, and easier ways to change machine settings.  These proved to be no better than the non-GUI approaches.  Business benefit: less than zero!
I realize there is a danger of over-generalizing based on a single example, but I do wonder if this truly is an over-generalization.  Manufacturing organizations may have hundreds or thousands of assets on their plant floors, and if no one questions the value of the de facto approach then the drive to implement graphical human-machine interfaces will provide no return on investment and actually decrease the competitiveness of the organization!

A Smarter Approach


The problem isn't inherently one of GUI versus non-GUI; it is the custom nature of the prevailing approach to HMI implementation.  One of the enabling forces of the industrial revolution was standardized, interchangeable parts.  Prior to this concept, every part needed to be custom-made for its application, and replacing a part meant getting a skilled craftsman involved.  This offers insight into how the custom user interface problem might be addressed.  What if, instead of spending hundreds of man-hours developing a new UI for each machine, there were a standard UI that could be customized in less than a day?  Or even require no customization whatsoever?  It would provide the same basic operations, independent of machine implementation, so training personnel would be very easy – once they've learned to run one machine, they have the basic skills to run any machine based on the standard UI.  It would not have all the intricate features of a custom UI, but those features are seldom used anyway and could be provided via alternative approaches.  And those alternative approaches may ultimately prove more valuable, because they could involve the use of mobile technology, which would lessen the requirements for fixed HMI.  This is exactly the approach ultimately taken by the company in my tray loader example.  While visualization standards are important (and again I'll reference Gerhard's wonderful article), true business value will only be realized when an organization-wide standard UI is implemented.
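To show what "no customization" could look like in practice, here is a minimal sketch of the idea; the descriptor format and field names are my own hypothetical illustration, not the approach the company actually implemented. One rendering routine serves every machine, and the only per-machine artifact is a small descriptor:

```python
import json

# Hypothetical per-machine descriptor - the only thing that changes between machines.
descriptor = json.loads("""
{
  "machine": "Tray Loader 7",
  "commands": ["Start", "Stop", "Reset", "Jog"],
  "status_tags": ["State", "Cycle Count", "Fault Code"]
}
""")

def render_standard_ui(desc):
    """Render the same basic operations for any machine described by a descriptor."""
    print(f"=== {desc['machine']} ===")
    print("Commands: " + " | ".join(desc["commands"]))
    for tag in desc["status_tags"]:
        print(f"  {tag}: <live value>")

render_standard_ui(descriptor)
```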

Monday, September 29, 2014

Antifragility and Plant Floor Systems

Few words strike fear in manufacturing operations executives as do "enterprise system software upgrade".  Once uttered, there is an almost instantaneous demand to IT for assurances that all existing processes will still function after the upgrade has been completed.  In many cases, the existing platform has been customized, tweaked, interfaced, and personalized beyond anyone's ability to fully know.  Clearly, the current state of enterprise-level systems is "fragile".
In his thought-provoking and frequently irreverent book "Antifragile: Things That Gain From Disorder" (Random House, 2012), Nassim Nicholas Taleb introduces the concept of antifragility.  Antifragile systems thrive when exposed to volatility and change.  The very nature of continuous improvement is constant change, which raises the question: "How can our plant floor and enterprise systems keep up?"  With practices such as kaizen and DMAIC becoming ubiquitous in manufacturing, there needs to be some thought given to making systems such as MES/MOM, ERP, SCM, and PLM not just robust (indifferent to volatility), but antifragile.
This is particularly challenging for information technology professionals.  In a traditional waterfall life cycle, requirements gathering takes place early in the project.  In iterative, agile approaches the requirements are refreshed incrementally.  Neither approach addresses requirements that change after the software has been delivered; a new project cycle must be initiated to deal with such "problems".  Unfortunately, change is the very nature of manufacturing and engineering systems.  This leads to IT project backlogs, self-developed applications (such as complex Excel spreadsheets or Access databases), demands for commercial off-the-shelf point solutions, and other forms of duct tape and chewing gum patches.  Then come additional requests to IT to propagate, integrate, and maintain these patches.  Finally, managers spend precious time prioritizing requests, allocating resources, and determining value – all because the deployed systems are fragile.
Making manufacturing systems antifragile, in my view, requires two elements: a modular enterprise architecture that takes axiomatic design principles into account, and effective business process management.  Axiomatic design has found a home in design-for-six-sigma processes, but needs to be extended into the IT world as well.  It is based on two fundamental axioms: the independence of functional requirements must be maintained, and the information content of the design must be minimized.  Such an architecture would share common master data and provide domain-knowledge modules with well-defined interfaces.  It would provide a common user interface layer independent of any of the specific domain modules, and BPM would bind the platform together to deliver dynamic work processes which could change as quickly as needed based on a kaizen or value stream mapping event.
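A highly reduced sketch of those two ideas – shared master data plus domain modules behind well-defined interfaces – might look like the following; the class and field names are hypothetical illustrations, not a real product API:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass(frozen=True)
class AssetRecord:
    """Common master data shared by every domain module."""
    asset_id: str
    description: str
    plant: str

class MaintenanceModule(ABC):
    """A well-defined interface; the UI layer and BPM engine depend only on this."""
    @abstractmethod
    def open_work_order(self, asset: AssetRecord, problem: str) -> str:
        ...

class CmmsAdapter(MaintenanceModule):
    def open_work_order(self, asset: AssetRecord, problem: str) -> str:
        # A real adapter would call the CMMS; callers never need to know which one.
        return f"WO-{asset.asset_id}-0001"

asset = AssetRecord("AM-04", "Assembly Machine 4", "Plant 2")
print(CmmsAdapter().open_work_order(asset, "Jam at infeed"))
```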
There is current thinking in line with this approach.  In their 2006 book "Enterprise Architecture As Strategy", authors Ross, Weill, and Robertson of MIT/Sloan's Center for Information Systems Research recommend a maturity growth model which leads to the kind of modular architecture previously discussed.
CISR Enterprise System Maturity Growth
Michael McClellan of Collaboration Synergies Inc. has proposed a similar approach in his white paper "Manufacturing Enterprise 3.0: The New Information Management Paradigm Built on Process".  In both cases, the approach is strategic and incremental; it does not require a "rip-and-replace" mindset.  Some suppliers are moving in this direction: in the PLM space, we've recently seen Dassault purchase BPM supplier Apriso and PTC buy ThingWorx.  SAP is offering BPM solutions, and there are other cross-vendor BPM integration solutions (such as WinShuttle) available.
We need to move away from the fragility of our current information technology and create antifragile systems which embrace change to support a culture of continuous improvement.  I would love to hear other thoughts on how this can be achieved.

Monday, September 22, 2014

BPM vs. Big Data

I recently went to a local "meet-up" on Big Data.  This event was very well attended, with a great deal of "buzz" on the subject.  I listened to several presenters discuss Hadoop, architecture, network requirements, and other subjects around large data sets.  My interest is from a plant-floor perspective: is this technology something anyone would implement in manufacturing?  What kinds of applications would be appropriate?  There is a tremendous amount of hype surrounding Big Data, and I noticed there were a lot of young, bright IT people participating in the event, so apparently there is a great deal of expectation around Big Data becoming the next "big thing".  But nothing really caught my imagination regarding the applicability of Big Data to the manufacturing problem domain.  I left wondering if there is more flash than substance in Big Data.
In contrast, later that week I attended a vendor-sponsored event where clients discussed success stories around BPM-based solutions.  The crowd for this meeting was smaller, and consisted more of seasoned IT and management professionals.  The cases discussed were immediately relevant to manufacturing and business value was obvious.  Yet this strategic initiative (it's really much more than "technology") doesn't seem to have all the gilding and hoopla that I observed with the Big Data crowd.  As I pondered this, I started to think about which would have a greater impact on the future, and to me the answer is obvious – BPM, hands down.  Here's why.
Organizations go through a maturation process in systems architecture.  MESA International models this as beginning at an application-centric thinking level – everything fits a business silo, and there are many point solutions which IT is left to integrate.  The next level of maturity is data-centric thinking; standardized data sets (master data) shared between applications with appropriate centralized governance.  The third level is interface-centric thinking, where interfaces are defined which allow applications to interact in real time.  At the apex of this maturity model is process-centric thinking, where multiple applications cooperate to implement end-to-end business processes.

Clearly, BPM is at the top of the strategic food chain; Big Data is somewhere around level 2.  Big Data is an answer in search of a problem, but BPM is all about getting things done in the least-waste way.  One common question that arises in the plant environment is "What are you going to do with the data?"  That's a question Big Data cannot answer, but BPM can.  Big Data does nothing for the mechanic on the plant floor who needs to learn the ERP, the CMMS, the Maintenance Dispatch system, the MES, the PLM, and other systems to do his job.  BPM can knit all these systems together to form a "blended application" specific to that particular mechanic – and without getting IT involved.  Big Data may be all the rage now, but if you're investing for the long term I would recommend BPM.

Friday, September 5, 2014

MES/MOM As Strategy

Recently I have been seeing quite a bit of discussion regarding ROI for MES/MOM. MESA International released a strategic guidebook on the topic in May. ISA debated the topic “MES Does Not Deliver A Return on Investment” (YouTube: https://www.youtube.com/watch?v=p_O9Wlw69gU) at its MES Conference in Cork, Ireland in March. There have been some recent blogs on the topic posted to some MES/MOM-related groups on LinkedIn as well. I don’t want to diminish the importance of ROI for MES/MOM, but focusing solely on ROI is a mistake. The problem is that measuring the impact of MES/MOM is quite difficult, because the value comes from improved processes and increased velocity of resolving issues rather than from the technology itself. Yes, we need to answer the ROI questions, but we also need to move the discussion onto a higher plane. We need to start talking about MES/MOM in strategic terms.
There are tons of books and articles on strategy, but it isn’t really that difficult a concept to wrap your arms around. Strategy is simply the context in which decisions are made. Why does an army go around an obstacle instead of taking the most direct route? Why does a billiard player choose one shot over another? Why does a chess player choose to sacrifice a rook to move a pawn? Why does a business choose cost leadership instead of differentiation? In each case, the situational dynamics allow a number of choices to be made, but the best choice is always made in context of the desired goal. In the absence of strategy, the default choice will nearly always be the path of least resistance.
How does this relate to MES/MOM? What you will find in a majority of manufacturers is a plethora of point solutions. These manifest themselves as spreadsheets, stand-alone applications (Microsoft Access, for example), and “siloed” systems (SPC, CMMS, Maintenance Dispatch, etc.) – often developed internally or purchased without consideration of a bigger picture. Frequently (I want to write “always”, but there are exceptions) these point solutions are incompatible with each other, making aggregating information difficult or impossible. But solving plant-floor problems almost always requires combining information from multiple sources, forcing someone – typically at the engineering level – to become a data integrator and to make connections between these systems. Point solutions represent the path of least resistance (and if you can avoid getting IT involved, there is even less resistance), and are indicative of decision-making which occurs outside a strategic framework. The hidden cost of an ineffective plant-floor IT strategy is the reduced speed of problem-solving.
An example will help illustrate this. Improving equipment effectiveness (OEE) requires an in-depth knowledge of downtime causes, quality defects, and equipment speed. It also requires an understanding of responsiveness to downtime causes, reliability improvement efforts, machine maintenance history, and replacement part availability. All this information is distributed among the ERP, the CMMS, the QMS, maintenance log books, kaizen newspapers, and engineering notes. Gathering the information from each of these systems and creating a coherent understanding is time consuming, which means solutions are delayed and someone is getting paid to put the picture together instead of implementing the solution. These costs may not be quantifiable but nevertheless impact the bottom line.
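For reference, the classic OEE decomposition itself is simple arithmetic; the difficulty described above is that each input typically lives in a different system. Here is a minimal sketch with hypothetical numbers (downtime from the maintenance dispatch system or CMMS, counts from the MES, defects from the QMS):

```python
def oee(planned_time_min, downtime_min, ideal_cycle_time_min, total_count, good_count):
    """Classic OEE: availability x performance x quality."""
    run_time = planned_time_min - downtime_min
    availability = run_time / planned_time_min
    performance = (ideal_cycle_time_min * total_count) / run_time
    quality = good_count / total_count
    return availability * performance * quality

# One 8-hour shift: 45 min of downtime, 0.8 min ideal cycle, 500 units made, 470 good.
print(f"OEE = {oee(480, 45, 0.8, 500, 470):.1%}")   # about 78%
```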

In addition to ROI discussions with management, there needs to be a very strong emphasis on strategy regarding MES/MOM implementations. This is where maturity growth models (not to be confused with capability maturity models) are helpful. There are two models that I am aware of: one from MESA International (as taught in their Global Education Program), and one from the MIT/Sloan Center for Information Systems Research (CISR) as presented in the book “Enterprise Architecture as Strategy” (Harvard Business School Press, 2006). This is also where standards such as ISA-95 start to become important. MES/MOM implementation needs to be the core of a manufacturing IT strategy, a strategy which needs to be embraced by the organization’s leadership. Solution providers (bless them – they’re doing God’s own work!) cannot drive strategy – that must be done internally. Ultimately, it is strategy which will define the success or failure of an MES/MOM implementation.