You would have thought that by now, with a few years of Industry 4.0 under our belts, our German friends would be reveling in its success. However, recent reports show that overall factory productivity has continued to decline in German companies, even though investments have been made in new automation technology. This is being called the "productivity paradox." To a growing number of people and companies, this does not come as a surprise, because investment in automation alone is still just an extension of Industry 3.0. There has been a failure to understand and execute what Industry 4.0 really is: fundamental changes to factory operation that must happen before any of the clever automation and artificial intelligence (AI) tools can begin to work effectively.
In the electronics manufacturing world, we experience the worst-case scenario of the effects that change has on production operations. We have accepted, deep down, that a higher product mix leads to reduced productivity. It is easy to write this off as a cost of doing business where flexibility must be provided without an increased stock holding of finished goods.
However, this is not the complete story. For example, SMT machine vendors have long been making their equipment suitable for flexible production with hardware- and software-based solutions where feeders can be placed anywhere on the machines. Further, SMT machine vendors have been developing techniques to provide rapid changeovers with common feeder setups for groups of similar products, and even changeovers between disparate products with the simple swap out of removable feeder trolleys.
Even without these technology aids, the manual changeover time on a fully loaded SMT placement machine varies across companies from around five minutes to six hours, depending on the approach used. Take a look at the pit stop in a Formula One race to learn how to achieve the five-minute SMT turnaround. Huge losses to achieve flexibility are not a done deal. There are both technology and management options out there.
The productivity paradox continues to thrive. At an event in Scandinavia recently, I showed a slide of how high-volume productivity expectations of 80% or more are now replaced with 20–40% in today’s higher mix environment. I was approached afterwards by someone saying that I had the numbers wrong, as they dreamed of being able to achieve 20%. The real number in many companies is far less than that today, which appears to support the German report. The mix of products in production continues to increase. Additionally, with more volatile demand, customers of manufacturing want the ability to change delivery quantities and times immediately whilst also not accepting the cost of holding additional buffer stock. Factories are paying the price of fluctuating customer demand directly in their automated production lines.
To find the answer to this apparent conundrum, we start by looking at what is reported in terms of metrics within factories. Targets are the critical drivers for manufacturing performance. Around the factory, we see reports and charts explaining targets and achievement related to things such as on-time delivery, production rate, materials scrap, quality issues, etc. Pretty much everything in the factory is measured to some extent based on those statistics. Management wants to see that each process is under control and improvements are being steadily made. From a productivity perspective, what is posted in factories appears rather optimistic compared to what one would expect given the productivity paradox. Any schoolboy mathematician will quickly deduce that the metrics within the factory are based on a different dataset than the German report, which is where the problem lies.
The measurement of internal performance can be justifiably made in many different ways. Statistics can be made to show whatever specific perspective is needed. This style of reporting started when many enjoyed high-volume production. Dedicated production lines were making products as fast as possible. The emphasis was on getting more and more throughput from each square meter of line space. Performance was simply based on how many placements per hour could be achieved. Extreme effort went into the optimization of machine programs. However, measuring machine performance "accurately" meant that machine downtime outside of the machine's responsibility was ignored. If the line could potentially make 2,000 products per day, this was the rate against which performance was measured. It was extremely unlikely that the customer needed exactly 2,000 products per day.
Even in those days, demand fluctuated. When the finished goods warehouse started to fill to bursting point, the line was taken down, unscheduled, and the opportunity was perhaps used to perform maintenance. These times were excluded from the productivity calculation because they were an external, uncontrolled variable from the point of view of the machine engineers. This was the start of bad habits that developed and broadened over the course of time. More and more exclusions were made to reflect specific narrow scopes of responsibility as product mix increased. Productivity and capacity calculations became far more complex as techniques to manage higher mix came into play.
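The exclusion effect described above can be sketched numerically. The following is a minimal Python sketch using entirely hypothetical numbers and function names, showing how a machine-scope metric that excludes "out of responsibility" downtime can look healthy while productivity against all scheduled time is poor:

```python
# Toy illustration (hypothetical numbers): how excluding "out of scope"
# downtime inflates a machine-level performance metric compared with the
# productivity the business actually experiences.

def machine_scope_performance(units_built, rated_rate, hours_scheduled, excluded_hours):
    """Performance measured only against the time the machine team accepts
    responsibility for (unscheduled stops, maintenance, etc. excluded)."""
    accountable_hours = hours_scheduled - excluded_hours
    return units_built / (rated_rate * accountable_hours)

def overall_productivity(units_built, rated_rate, hours_scheduled):
    """Productivity against all scheduled time, with no exclusions."""
    return units_built / (rated_rate * hours_scheduled)

# A line rated at 2,000 units/day (100 units/hour over 20 hours), where
# demand fluctuation and stoppages mean only 1,200 units were built.
rated_rate = 100        # units per hour (assumed)
hours_scheduled = 20    # scheduled hours per day (assumed)
units_built = 1200
excluded_hours = 7      # downtime written off as "not the machine's fault"

print(f"machine-scope: {machine_scope_performance(units_built, rated_rate, hours_scheduled, excluded_hours):.0%}")
print(f"overall:       {overall_productivity(units_built, rated_rate, hours_scheduled):.0%}")
# The machine report looks healthy (about 92%) while the factory actually
# ran at 60%; the gap is the invisible loss.
```

The numbers are invented, but the mechanism is the one the article describes: each exclusion narrows the denominator, so the reported figure drifts away from what the business experiences.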
For example, the common setup of feeders on SMT placement machines was seen as a way to avoid the physical changing of locations of materials on the machines between different products. If two products running consecutively required the same materials, then why not keep the same materials in place on the machine and eliminate the time to change them? Unfortunately, this created a new restriction for machine program optimization. The path travelled by the machine head to pick the most commonly used materials to the points of placements could no longer be optimized through the choice of where the materials should be set up on the machine. As a result, the program execution time was longer and less efficient.
Many overlooked this because the focus of machine program optimization had changed once the effects of higher mix came in: it became all about changeover time. However, in many scenarios, as time went on, the losses in the programs exceeded those avoided in the changeover process. This lost productivity due to reduced program efficiency was almost never included in the productivity reports. Throughput performance was simply measured against the machine program time. Invisible losses like these started to become an increasingly common part of the regular high-mix production paradigm.
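The trade-off between common setups and program efficiency can be illustrated with a toy model. The Python sketch below is a deliberately simplified one-dimensional picture, with made-up slot positions and placement counts, not a real placement optimizer; it compares total head travel when feeder slots are chosen per product against a frozen common setup:

```python
# Toy 1-D model (illustrative only): total head travel when feeder slots can
# be optimized per product versus frozen in a "common setup" shared across a
# product family. All numbers are invented.

def total_travel(slot_of_part, usage):
    """Sum of head travel: one round trip from the feeder slot position to
    the board (taken to be at position 0) per placement."""
    return sum(2 * slot_of_part[part] * count for part, count in usage.items())

# Placement counts for this product: part -> number of placements.
usage = {"R0402": 50, "C0402": 30, "IC1": 5, "CONN1": 2}

# Per-product optimization: the highest-usage parts get the closest slots.
ranked = sorted(usage, key=usage.get, reverse=True)
optimized = {part: slot for slot, part in enumerate(ranked, start=1)}

# Common setup: slot positions were fixed for the whole product family, so
# this product's high runners happen to sit in the distant slots.
common = {"R0402": 4, "C0402": 3, "IC1": 2, "CONN1": 1}

print("optimized travel:", total_travel(optimized, usage))     # 266
print("common-setup travel:", total_travel(common, usage))     # 604
```

Real machines optimize in two dimensions with multiple nozzles and gang picking, so the arithmetic is far more involved, but the direction of the effect is the same: freezing feeder positions for the convenience of changeover can quietly lengthen every program that runs on that setup.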
Production planning is another black-hole example of lost productivity. Fixed assignment rules, defined by engineers for allocating products to line configurations, have to be followed because of the sheer time and effort needed to prepare product data, which significantly restricts the optimization process. Generic legacy scheduling tools are useless in this environment. In reality, production is mainly scheduled using Excel as a just-in-time (JIT) planning tool; no one has time to think about whether more optimized production allocation plans could be made.
The associated losses of opportunity continue to increase due to these kinds of problems, many of which are hidden and not reported due to the narrow focus of individual operations and lack of overall visibility. Productivity reports made from each perspective look good on the shop floor and everyone is making an excellent effort, but if you step back and look at the bigger picture, the overall result is going in the wrong direction, which is exactly what the German reports are highlighting. The realization of this is a good thing because even if the numbers themselves are very poor, without this information, there is no opportunity to investigate and improve. In the meantime, it is fine to continue to measure the internal performance of each element within manufacturing with the current metrics and key performance indicators (KPIs); one should not replace the other.
However, what the German reports don’t offer is a solution to this issue. From a high-level perspective, the situation is complex and bewildering, with numerous variables and barriers, many of which are buried in the technical detail of operations. Hidden issues, as well as the consequences of actions taken, are very difficult to discover and understand, never mind quantify. To start to gain a sense of it all, there are two main things to address.
The first is to understand the real need of the business—both current and future. For example, in Germany, the most active sector of the industry is automotive. The pride of German automotive assembly lines has a final production assembly line running at a fixed takt rate like a heartbeat delivering cars reliably and on time every time. The line is also fully flexible and capable of making any of the millions of combinations of options and features that any customer could require. It all sounds good and appears to be a supreme achievement of automation, which includes human activity.
However, if you focus away from the final line operation, it is possible to see the damage that the assembly-line operation is causing. Looking downstream, we see that the factory still has to make sure that the total output of the family of cars that each line produces exactly matches customer demand, so as not to create a buildup of finished goods stock. There is no scope for the line to make any model changeover. Sales and marketing will create campaigns and incentives in the market to drive a constant factory demand, but even so, making to customer order has to be mixed with some standard builds to keep the line running continuously, which are often sold at a discount.
One would imagine that if you ordered a new car in this situation, even with a bespoke set of features, it could be made quite rapidly, since the factory itself clearly requires just a couple of days of lead time. Instead, the reality is that the waiting time for a production slot can be many months. People tend to look elsewhere for their cars when faced with this ordering lead time. The reason for the long lead time is the planning that goes into the final assembly line.
To ensure that the line never stops, there is a buffer of orders upfront. This buffer is needed to provide the JIT forecast of demand for subassembly options and configurations to be supplied to the factory. The cost of flexibility is simply being passed upstream. Suppliers to automotive final assembly lines receive a somewhat variable production demand, often volatile, with which they have to comply. Delivery must occur within a specific window: no sooner, and certainly no later. Optimization of the subassembly factory operations represents the worst-case scenario in the industry. Safety-critical assemblies require in-depth quality, process, and engineering management. With a high mix and often small work-order quantities, changes are continuously required in the factory flow, with each configuration change needing to be qualified against the required standards. There is little compensation for this, as pricing in automotive is very sensitive indeed.
A typical automotive subassembly provider needs to be aware of these constraints of doing business and create a production model that is optimized and developed in a way that exposes and addresses all of the hidden losses as part of the model. The same process applies to all types and sectors of electronics and assembly manufacturing in general. However, creating the optimum operation is not as simple as it used to be. Industry 4.0 was created to address these new operational paradigms: a software-based automation layer above the increasing number of automated processes that the older Industry 3.0 represents. The German reports are looking for Industry 4.0 results, but are based on Industry 3.0 activities. It is time to create some real tools for Industry 4.0, which is the second thing that needs to be addressed.
Industry 4.0 is the optimization of every aspect of the manufacturing process through the use of live data. The solution for each factory taking Industry 4.0 on board is going to be a little different depending on the business need, whether this is lines of machines talking to each other, managing Lean materials, adaptive planning, assignment of products to line configurations, or digital twin-based assembly process engineering. However, it is not practical to develop individual bespoke Industry 4.0 software that would drive these factory processes because the cost quickly becomes prohibitive and non-sustainable.
Standardization needs to happen at a level that promotes the use of standard digital platforms delivering value based on Industrial Internet of Things (IIoT) technology specifically for manufacturing. This technology is a distinct paradigm shift from legacy data collection. By including data flow between every operational process, a live, digital, holistic view of the shop floor is available at all times, with a scope that covers every manufacturing and dependent event at a deep and well-defined level of detail. Unacknowledged losses are no longer out of sight and can be included in any activity related to performance improvement, optimization, and operational decision-making, all in line with the goals of Industry 4.0. The effects of decisions made and changes executed can be seen, measured, and continuously refined, and in this case, directly contribute to the overall productivity of the factory.
At IPC APEX EXPO 2018, we saw the world’s first demonstration of the Connected Factory Exchange (CFX). Though machine communication is nothing new, the fact that key metrics and data from any type of machine from any vendor could be viewed on demand by visitors on their mobile phones without any installation or configuration of software was an eye-opener. The barriers of having different communication methods—as well as different levels of data content from machines—had been eliminated.
One year later at IPC APEX EXPO 2019, we look forward to seeing the published CFX standard in action in terms of the scope and depth of communication now supported, gathering data for use in dashboards, AI decision-making, and factory optimization. Whatever the software tools of choice are for manufacturing, the ability to have visibility of the status and performance of every event that takes place on the factory floor provides the opportunity to see and understand exactly where time and opportunities are being lost.
Software tools at the factory level can then utilize this information to optimize the entire factory in real time. Machine vendors also have the opportunity to get significantly more data about the environment in which their machines are working through seeing materials and planning information with which to further automatically optimize their machine operations. As a standard based on true industry consensus, CFX has been designed to provide information about all areas of opportunity, control, and management in the factory.
CFX is the definition of how data is exchanged and the exact language and meaning of that data. Adoption of CFX is now being built into software tools provided by machine vendors at the machine or line level, as well as by solution providers across the whole factory. The paradigm of manufacturing execution systems (MES) changes as a result. Legacy MES systems that simply gather data, save it into a series of databases, and then provide reports are not going to be up to the challenge of processing IIoT data in real time; thus, they will be limited when it comes to live optimization and decision-making support.
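As a rough illustration of what such an exchange looks like, the sketch below builds and parses a CFX-style JSON message in Python. The field and message names follow the general pattern of IPC-CFX envelopes but should be treated as illustrative assumptions here; the published standard and the free IPC CFX SDK define the authoritative schema and transport (AMQP):

```python
import json
import uuid
from datetime import datetime, timezone

# Sketch of a CFX-style JSON message envelope. Field names (MessageName,
# TimeStamp, UniqueID, Source, MessageBody) are illustrative of the CFX
# pattern, not copied from the normative schema.

def make_units_processed(source, lane, units):
    """Build a hypothetical 'units processed' event as a JSON string."""
    return json.dumps({
        "MessageName": "CFX.Production.UnitsProcessed",  # illustrative topic
        "TimeStamp": datetime.now(timezone.utc).isoformat(),
        "UniqueID": str(uuid.uuid4()),
        "Source": source,                 # endpoint that raised the event
        "MessageBody": {
            "Lane": lane,
            "ProcessedUnits": units,      # unit/panel identifiers
        },
    })

# Any subscriber -- a dashboard, an MES, another machine -- decodes the same
# way, regardless of which vendor's equipment sent it.
msg = make_units_processed("SMT-LINE-1.PLACER-2", lane=1,
                           units=["PANEL-001", "PANEL-002"])
decoded = json.loads(msg)
print(decoded["MessageName"], decoded["Source"])
```

The point of the standard is exactly this symmetry: because the language and meaning of the data are fixed, the consumer needs no vendor-specific parsing or configuration to interpret the event.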
A new breed of digital MES systems specifically designed for the IIoT and CFX environment deliver the most value, spanning the whole gamut of factory operations and offering a single standard digital platform. For example, bespoke extensions to the platform to support specialized reporting and monitoring—as well as the inclusion of product-specific processes such as functional test—can easily be added by local IT developers by creating the required CFX interface and utilizing the free IPC CFX software development kit (SDK).
The technology shown by the many participating vendors at IPC APEX EXPO 2019 represents the key turning point to reverse the productivity paradox and enable new automation management techniques and digital best practices that address the long-standing hidden, ignored, or unavoidable causes of lost productivity. Full visibility and control of even the most complex factory operations are restored to engineers and managers, providing the intelligence with which to identify and eliminate causes of losses. This provides an opportunity for increased flexibility whilst also increasing productivity, quality, and on-time delivery, and reducing material-related costs.
Though the demonstration of CFX—together with the IPC Hermes Standard (which replaces SMEMA)—at IPC APEX EXPO 2019 is limited in scope due to the nature of an unconnected manufacturing line working live in a trade show environment, the fundamentals will be on display from machine vendors. The latest CFX-enabled MES software tools and experts will be on hand to demonstrate and explain how the use of CFX technology can bring an end to this productivity paradox. I look forward to meeting you there.