Most discrete manufacturing plants struggle to maintain good on-time delivery performance. Consistent on-time performance in the high 90s (measured against the originally committed date) is extremely rare.
In fact, many organisations have low reliability "built in" to their commitments: they commit to wide ranges of delivery lead-time, or use monthly or weekly buckets to measure delivery performance. Few have the ability to commit to a date and meet it. (The best way to validate this hypothesis is to check how skewed dispatches or order completions are within a month or a quarter in long lead-time environments.)
If order arrival is erratic throughout the month, we expect due dates to be staggered across the month. However, if most order completions are skewed towards the month or quarter end, it is a sign of poor on-time performance. Such month-end or quarter-end skews in order completions and dispatches are an industry-wide epidemic.
The real challenge of maintaining high on-time performance in discrete job shops is meeting different objectives, which at times conflict with each other. The need to maintain high on-time performance can get in the way of ensuring maximum capacity utilisation at the critical work centers.
Evolution of MRP (Material Requirements Planning)
Academicians, consultants and plant managers have been dealing with this problem for decades. Attempts were made with MRP, then MRP II (Manufacturing Resource Planning), and later with Advanced Production Schedulers or Optimisers (APS or APO).
Each solution tool was designed with a limited understanding of the underlying problem. The solutions were also "biased" by the computing technologies available at that point of time.
When MRP was invented, it seemed like a panacea for all ills in manufacturing. With the available computing power, one could do a batch process and convert the independent end product demand to demand of dependent components after netting off the available inventory.
The "pre-fixed" lead-time of each stage could be used to calculate due dates and derive the corresponding production or procurement schedule. At the touch of a button, one could know which "dependent demand" components needed to be manufactured or procured, in what quantities, and by when. One needed only to meet the intermediate due dates of component processing to meet the final due date. This seemed like a breakthrough for managers doing the same work manually, where the effort was not only time consuming but also prone to human error.
While using a fixed lead-time to schedule procurement orders was reasonable, it is an erroneous assumption for a manufacturing system. The capacity consumed by already loaded orders affects the lead-time of new orders. Hence one cannot schedule with a fixed lead-time, as doing so implies that capacity is being completely ignored.
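The batch explosion logic described above can be sketched in a few lines of Python. This is a minimal single-level illustration, not any particular MRP product's logic, and all the item names, quantities, stock figures and lead times below are hypothetical:

```python
from datetime import date, timedelta

# Hypothetical single-level bill of materials: parent -> (component, qty per unit)
BOM = {"CHAIR": [("LEG", 4), ("SEAT", 1)]}
LEAD_TIME_DAYS = {"CHAIR": 5, "LEG": 10, "SEAT": 7}  # fixed, as MRP assumes
ON_HAND = {"LEG": 100, "SEAT": 20}                   # available inventory to net off

def explode(item, qty, due):
    """Convert end-item demand into dated component orders (MRP logic):
    net off on-hand stock, then offset by the fixed lead time."""
    orders = []
    parent_start = due - timedelta(days=LEAD_TIME_DAYS[item])
    for comp, per_unit in BOM.get(item, []):
        net = max(0, qty * per_unit - ON_HAND.get(comp, 0))
        if net:
            # the component must be ready when the parent's processing starts
            release = parent_start - timedelta(days=LEAD_TIME_DAYS[comp])
            orders.append((comp, net, release))
    return orders

print(explode("CHAIR", 50, date(2024, 3, 31)))
# 50 chairs need 200 legs (100 on hand -> order 100) and 50 seats (20 on hand -> order 30)
```

Note that the lead-time offsets are fixed constants. Nothing in this calculation looks at how loaded the plant already is, which is precisely the gap discussed above.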
The author is Director, Vector Consulting Group.
The gap in MRP (ignoring capacity) was "plugged" by adding a capacity requirements planning module, leading to the invention of MRP II. After the production schedules are completed, a subsequent step checks the capacity loading.
With this step, one could find the overloads and underloads on different work centers, which is not possible at the time of initial rough-cut capacity planning. However, if capacities were overloaded, one was supposed to adjust the demand, which in turn could create overloads at a new set of work centers. This was a near-impossible task, as the iterative loops can go on forever. The "closed loop" of capacity requirements planning existed only on paper; it never really "closed" in real life.
Capacity requirements planning merely indicated the load; the iterative adjustment of demand to remove the overloads remained a laborious process because of MRP's batch-processing technology.
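What the capacity requirements planning step computes can be sketched as a simple load summation; the work centers, hours and capacities below are assumed for illustration:

```python
# Hypothetical scheduled orders: (work center, hours required in the period)
schedule = [("LATHE", 60), ("MILL", 30), ("LATHE", 70), ("PAINT", 40)]
capacity_hours = {"LATHE": 100, "MILL": 80, "PAINT": 80}  # per planning period

# Sum the load each work center would carry under the MRP schedule
load = {}
for wc, hours in schedule:
    load[wc] = load.get(wc, 0) + hours

# CRP only *reports* overloads; resolving them was left to the planner
overloads = {wc: h - capacity_hours[wc]
             for wc, h in load.items() if h > capacity_hours[wc]}
print(overloads)  # the lathe is loaded to 130 h against 100 h available
```

The calculation stops at reporting the overload. Shifting demand to remove it, and then re-checking every other work center, is the endless iterative loop described above.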
Accepting overloads at specific work centers, discovered only during execution, became the normal "compromise". However, perpetually loading already overloaded work centers can push a manufacturing system out of control. Experienced managers worked their way out of this conflict using the concept of a planning horizon.
Within a planning horizon, orders are not rescheduled despite the overloads at work centers. The correction happens at the end of the planning period, when pending orders, along with new orders, are considered for execution in the next planning period.
In many plants, the planning horizon was set at a monthly level. At the end of every monthly horizon, the production planner would take the backlog of orders along with new orders and reschedule the entire production plan. However, this constant rescheduling hurt capacity utilisation, particularly at the beginning of each planning period.
This was because the "spill-overs" of the current planning horizon could be known only close to the end of the period, so planning for the new horizon could be done only after it had already started. Since it always takes time for raw material to be aligned to the new schedule, there is usually a drop in output in the first part of a new planning horizon.
The concept of continuous "rolling" plans, invented to solve this problem, never worked, because many vendors and work-centre managers did not act on the rolling plans. They waited for the fixed plans of the month so that they could avoid manufacturing unwanted inventory.
The first part of a planning period is always sluggish; effort ramps up close to the end of the period, producing the end-of-period skew. Because the skew was at the end, the real spillovers of a planning horizon could be known only at the end, which in turn caused a sluggish first half. A vicious loop!
If someone could find a way to avoid the overloads, the problem would be resolved. MRP II earned the stigma of assuming "infinite capacity", and the "quick-fix" of capacity requirements planning did not help in any way.
APO: The New Saviour?
With newer data-processing technologies, people found a "technology" solution to the MRP II problem. A good optimisation algorithm, which can check against multiple constraints simultaneously, can provide answers faster than the sequential, batch-process approach of MRP II technologies could.
The key selling point was the best of both worlds: optimal usage of capacity and other limiting conditions (material or tools) along with high on-time delivery. Fast online processing also made "what-if" analysis and rescheduling much easier than under MRP II. On paper, one had found the way to avoid overloads right in the planning phase, when order due dates were being set. It seemed as if everyone had finally got the elusive silver bullet!
However, most APO implementations failed to give the desired results: overloads still happened in execution. The feature of frequent rescheduling did not actually help much. The plants that tried it had to stop immediately, because frequent rescheduling amplified small uncertainties into chaos on the shop floor. In some cases, the schedules churned out by APO tools did not make intuitive sense to shop-floor managers, so they were not followed.
Faulty Assumptions of APO Systems
APO was built in a lab, without considering the real world. The practical world has two problems that make it difficult to define capacity accurately at any point in time. First, variability is a way of life: there is no perfect plant without breakdowns, rejections, absenteeism, and even changing demand requests. Second, the product mix changes: the capacity available depends on the mix loaded on the plant at a point in time.
The combined effect of these two factors makes it difficult to define capacity precisely. Product-mix changes impact capacity, and so does variability in worker skill, machine condition and many other factors. It is nearly impossible to consider all the factors that would accurately define capacity at a point in time.
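A small illustration with assumed cycle times shows how sharply the mix alone changes effective capacity, even before any variability is considered:

```python
# Illustrative (assumed) cycle times in minutes per unit on the same machine
cycle_min = {"A": 2.0, "B": 6.0}
SHIFT_MIN = 480.0  # one 8-hour shift

def units_per_shift(mix):
    """Effective capacity in units for a given product mix
    (mix = fractions of each product, summing to 1)."""
    avg_cycle = sum(frac * cycle_min[p] for p, frac in mix.items())
    return SHIFT_MIN / avg_cycle

print(units_per_shift({"A": 1.0}))            # 240 units when running only A
print(units_per_shift({"A": 0.5, "B": 0.5}))  # 120 units for a 50/50 mix
```

The same machine "loses" half its capacity in units simply because the mix shifted, which is why a single fixed capacity number fed to an optimiser is already wrong before execution starts.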
Because of these conditions, it is difficult for a computer schedule to match the intuition of a plant manager, who will always have more information (not considered by the computer) to arrive at a "better" decision. For example, a dyeing department would want a specific colour sequence to maximise its output, while the subsequent spinning department would want a conflicting sequence based on its need to produce a desired sequence of yarn "counts".
In the real world, a computer schedule that has globally considered both may not be acceptable to either work-centre manager. On a specific day, the dyeing manager may want to avoid making the difficult shade scheduled by the computer, because the most experienced person, the one who can mix that "difficult" colour without rejections, is absent.
At the same time, the spinning manager might want to avoid a particular schedule because it is unproductive to produce that specific yarn right after breakdown maintenance. Such numerous considerations cannot be incorporated into the capacity definition, partly because they are not "rigid" enough to be followed every time. Treating all of them as rigid would de-rate the overall capacity of the plant.
The other problem originated from the very advantage claimed for APO algorithms: their ability to optimise under multiple constraints. In an environment of dependencies (the way one schedules a particular work centre has an indirect impact on subsequent work centres), it is mathematically impossible to load multiple constraints to full capacity simultaneously.
At the same time, when there is variability in the system (shortages, rejections, breakdowns and so on), adequate buffers must be set aside at every identified constraint to maintain stability. Without adequate buffers, one is forced to reschedule very frequently on even minor variation. And when multiple work centers are rescheduled on every variation, de-synchronisation sets in at the feeding departments, making the plant chaotic.
Waiting time can then amplify many times over across work centers. Alternatively, if one wants a stable schedule from an APO in an environment of multiple constraints and seemingly conflicting objectives, the level of buffers required in multiple places would make the plant stable but reduce its output significantly.
APO investments ultimately consumed a lot of effort and resources without any meaningful outcome: on-time performance remained at the same level.
The Theory of Constraints Approach
The way to maintain reliability is to have stability in the scheduled due dates: variability should not force changes in the schedule. This requires setting aside protective capacity while scheduling, which means, practically, that there should be only one constraint in the plant.
Having many constraints forces one to keep buffers in many places, lowering the overall output of the plant. With a capacity buffer on the single constraint resource, one gets a schedule that stays stable without buffers on many resources. This approach ensures maximum output from the plant as a whole while keeping due dates stable.
This means that the other limiting conditions (or constraints) have to be removed. The idea may seem impractical because of the investment potentially required. However, in most plants, the visible "multiple constraints" problem is more a symptom than a real problem: an environment of very high WIP can create temporary bottlenecks at many work centers.
High WIP can also create artificial material shortages through diversion of common material used across orders. When there is a lot of WIP, and every work centre is driven by utilisation/efficiency targets, each work centre "cherry picks" components across orders, and the criteria for cherry picking differ from machine to machine. As a result, production lead-time goes up and order reliability becomes extremely poor. The unreliability creates urgencies and fire-fighting around late orders; the urgencies create additional set-ups, which create multiple bottlenecks and lower plant output.
The first step towards removing the symptom of multiple constraints and the associated chaos is to reduce the WIP, and then forcefully hold it at a constant low level. With low WIP, the opportunity to "cherry pick" orders is limited, as there are few orders on the shop floor. However, very low WIP can lead to starvation and low output. The way to check whether WIP is excessive is to compare the touch time of an order with the total production lead-time: if touch time is less than 10 percent of lead-time, and there is day-to-day fire-fighting with frequent requests for expediting, the WIP is definitely high.
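The touch-time check reduces to a simple rule of thumb; the 10 percent threshold is the heuristic stated above, and the example figures are assumed:

```python
def wip_looks_excessive(touch_time_hours, lead_time_hours, threshold=0.10):
    """Heuristic from the article: if actual processing (touch) time is
    under ~10% of total production lead-time, combined with daily
    fire-fighting, the plant is carrying excess WIP."""
    return touch_time_hours / lead_time_hours < threshold

# e.g. 8 hours of actual processing inside a 30-day (720 h) lead time
print(wip_looks_excessive(8, 720))    # True: the ratio is about 1.1%
print(wip_looks_excessive(300, 720))  # False: touch time is over 40% of lead time
```

In practice this check is a trigger for the WIP-halving step described next, not a precise sizing formula.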
In such an environment, halving the WIP does not lead to starvation. Reduced WIP, along with a priority system focused on order completion, prevents wastage of capacity, and the output of the plant goes up. At the same time, reduced WIP reveals the real constraint. As part of the solution, a constant, reduced WIP is maintained before the constraint resource, and all other resources subordinate to it so that the constraint is never starved.
Material release to the plant is then based on the WIP level being maintained. If the output of the constraint resource falls (due to uncertainty or product-mix changes), further material release is stopped to hold the WIP. Similarly, if the output of the constraint resource rises (no Murphy, or a favourable product mix), material release is increased so that the constraint does not starve. In other words, the WIP is held constant.
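The release mechanism described above is essentially a constant-WIP loop: a completion at the constraint authorises the next release. Below is a minimal sketch, with an assumed WIP cap and a simple first-in-first-out backlog standing in for the plant's priority system:

```python
from collections import deque

class ConstantWipLine:
    """Minimal sketch of the constant-WIP pull described in the article:
    material is released only while WIP is below the cap, and each
    completion at the constraint frees a slot for the next release."""
    def __init__(self, wip_cap):
        self.wip_cap = wip_cap
        self.wip = 0
        self.backlog = deque()  # orders awaiting release, in priority order

    def add_order(self, order):
        """Register a new order; release it immediately if WIP allows."""
        self.backlog.append(order)
        return self._release()

    def complete_order(self):
        """Constraint finished one order: WIP drops, pulling in the next."""
        self.wip -= 1
        return self._release()

    def _release(self):
        released = []
        while self.backlog and self.wip < self.wip_cap:
            released.append(self.backlog.popleft())
            self.wip += 1
        return released

line = ConstantWipLine(wip_cap=2)
print(line.add_order("O1"))      # released at once: WIP was below the cap
print(line.add_order("O2"))      # released: WIP now at the cap
print(line.add_order("O3"))      # held back: no release while WIP is full
print(line.complete_order())     # a completion pulls O3 onto the floor
```

Note that the cap itself, not a schedule, decides when material enters the plant, which is why a precise capacity definition is not needed in planning.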
This "pull system" of constant WIP ensures maximum utilisation of the constraint resource without the need to define capacity precisely in the planning phase. At the same time, one can leave behind a capacity buffer in planning (while quoting due dates) without any fear of losing it in execution. This also drives up the reliability of orders.
In a plant where reliability (order due-date performance against initially committed dates) is extremely high, there is no need to follow the concept of a planning horizon or a bucket system of planning. One can follow a system of daily perpetual planning of new orders, without rescheduling in planning buckets (or horizons) based on observation of past-period performance. Daily perpetual planning also ensures high utilisation throughout the month.
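Quoting due dates from the constraint's booked load plus a protective-capacity buffer, rather than from a fixed lead time, can be sketched as below; the daily hours, buffer percentage and order sizes are all assumed for illustration:

```python
CONSTRAINT_HOURS_PER_DAY = 16.0  # assumed: two shifts on the constraint
PROTECTIVE_CAPACITY = 0.20       # plan against only 80% of the constraint

def quote_due_date_days(booked_hours, order_hours):
    """Days from today by which the new order clears the constraint,
    planning at a de-rated (buffered) rate so variability in execution
    does not force the quoted date to move."""
    planned_rate = CONSTRAINT_HOURS_PER_DAY * (1 - PROTECTIVE_CAPACITY)
    return (booked_hours + order_hours) / planned_rate

# 100 h already booked on the constraint; the new order needs 28 h
print(round(quote_due_date_days(100, 28), 1))  # 10.0 days at the buffered rate
```

Because every quote already absorbs the 20 percent buffer, ordinary Murphy consumes the buffer rather than the commitment, which is what keeps daily perpetual planning stable.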
In some manufacturing plants, drastic changes in product mix can create cases of interactive constraints. The only way to solve this problem is to throttle orders, in planning as well as in releases, so that the constraint stays in one single place in both planning and execution. In the long run, it pays to elevate such temporary bottlenecks to keep the plant stable.
The execution-based pull system of the Theory of Constraints takes away the need to be "perfect" in planning. The silver bullet in manufacturing systems lies in "good enough" planning (a schedule with capacity buffers) coupled with perfect execution by way of controlling WIP every day.