
Theory of Constraints Handbook Part 57

In Toyota Production System: Beyond Large-Scale Production, Ohno said, "All we are doing is looking at the timeline, from the moment the customer gives us an order to the point when we collect the cash. And we are reducing the timeline by reducing the non-value-adding wastes" (Ohno, 1988, ix).

The other important part of the solution at Ford and Toyota to achieve and sustain continuous improvement was the standardization of work (while ensuring the new standard would always be challenged). Ohno is famously quoted as saying (Shimokawa et al., 2009, 9), "Where there is no standard, there can be no kaizen." Without standard work, we cannot be sure what impact our changes will have on our process and company performance.

It is clear that Henry Ford and Taiichi Ohno both approached the problem of achieving continuous and (when required or possible) step-change improvement in the same way. They started with the belief that anything can be improved, communicated a clear vision of where CI would be most valuable to the organization, and created an environment that encouraged continuous experiments to find better, simpler, faster ways of doing things with less waste. They then made sure there were continuous audits to ensure alignment (no inherent conflicts) between organizational policies. This is in full alignment with the direction of the solution proposed by TOC today.

Importance (and Risks) of Measurements and Incentives

Measurements play three important roles in CI and auditing:

1. To help managers determine the status of the system (good/bad).

2. To help managers determine the likely cause of the system status.

3. To drive the right behavior (doing what should be done) and discourage or prevent the wrong behavior (doing what should not be done) for all stakeholders.

TOC's Buffer Management (BM) satisfies all three conditions. It provides a reliable mechanism to indicate the status of the system (the percentage of red and black within TOC's time or stock buffers indicates to what degree the system is in control or not). The level and causes of these buffer penetrations can be used to track the level and causes of downtime or unavailability on capacity constrained resources (CCRs), and the level and causes of delays on the critical chain (the longest chain of dependent events), to provide an indication of the likely causes of the system status.
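To make the mechanism concrete, the following is a minimal sketch (in Python) of how a buffer status and the percent red-and-black indicator might be computed. The three-equal-zone convention (green/yellow/red, with "black" meaning the buffer is fully penetrated) and all figures are illustrative assumptions, not definitions from this chapter.

```python
# Minimal sketch of a Buffer Management status signal. Assumes the common
# three-equal-zone convention (green/yellow/red), with "black" meaning the
# buffer is fully penetrated (the order is late or the stock is out).

def buffer_status(buffer_size: float, consumed: float) -> str:
    """Classify buffer penetration for one order or stock item."""
    penetration = consumed / buffer_size  # fraction of the buffer used up
    if penetration >= 1.0:
        return "black"   # buffer exhausted: commitment already missed
    if penetration >= 2 / 3:
        return "red"     # deep penetration: expedite now
    if penetration >= 1 / 3:
        return "yellow"  # plan recovery actions
    return "green"       # no action needed

def percent_red_and_black(statuses: list[str]) -> float:
    """System-level health indicator: share of orders in red or black."""
    return 100 * sum(s in ("red", "black") for s in statuses) / len(statuses)

# Hypothetical example: 8 open orders, each with a 10-day buffer.
statuses = [buffer_status(10, days) for days in (2, 4, 9, 7, 1, 11, 3, 5)]
print(percent_red_and_black(statuses))  # 37.5 -> 3 of 8 orders in red/black
```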

With respect to the third role of measurements, Goldratt realized early on the important part that measurements played in the behavior of people, which drives their contribution toward organizational improvement, inertia, or decay. Goldratt's insight (Goldratt, 1990b, 145) was captured in his now famous quote "Show me how you measure me and I'll show you how I behave!" In BM, a "black" or "red" status serves as a visible signal that everyone needs to prioritize and, where possible, expedite such orders (to drive the desired behavior).

One aspect not frequently reported on is Goldratt's insight that it appears to be more important to remove "bad" measurements that drive "bad" behaviors (such as local efficiency measurements that result in local optima and poor synchronization), than it is to replace these with "good" measurements. He has also frequently warned against incentive schemes intended to motivate and drive improvements.

Why? Surely, it makes sense that when we stop using one measurement we should start using another or else we face the risk of people falling back in line with the old measurements. Surely, it makes sense that if you want people to continuously improve, you should link performance against these measurements with appropriate incentives (IF "good behavior" THEN the "Carrot" and IF "bad behavior" THEN the "Stick" consequences).

Like many of the "counterintuitive" insights of TOC, the cause-effect relationship between incentives, motivation, focus, and collaboration, and how this affects the level of performance of people, is widely misunderstood within most organizations. In fact, there is a major mismatch between what the social sciences have known about the effect of incentives on performance and problem solving and how most of the incentive schemes used by organizations work today (Pink, 2007).

Scientific research over the past 40 years has shown that the "common wisdom" that incentives drive higher performance is, for a large set of boundary conditions, simply not true. Incentives, in many cases, will contribute to a vicious cycle of decaying performance (or at least stagnation) rather than a virtuous cycle of continuously improving performance. The first scientific research into the relationship between incentives and performance was by Sam Glucksberg, who used the "Candle Problem" designed by Karl Duncker (1903–1940) in 1926 as a way to measure how cognitive problem solving is influenced by incentives. People are challenged to figure out how to attach a candle to a wall in a way that prevents wax from dripping onto the table (Fig. 15-13).

FIGURE 15-13 Karl Duncker's candle problem to measure cognitive problem-solving skills.

Duncker found (Pink, 2007) that most people struggled due to what he called "functional fixedness," a mental block against using an object in the new way required to solve a problem. Most people eventually figure it out (attach the box used for holding the thumbtacks to the wall with the thumbtacks to provide a base for the candle), but it takes them a while. Years later, Sam Glucksberg decided to see how a monetary incentive would affect people's performance on the candle problem. He told one group that if they were among the fastest 25 percent, they would get $5, and if they were the fastest in the entire group, they would receive $20. Naturally, the people offered the incentives completed it faster, right? Wrong! In fact, they took an average of 3 minutes longer than those who were simply asked to perform the task as fast as possible so that their results could be compared against a test standard.

Glucksberg then repeated the same experiment, but made the solution more obvious by placing the thumbtacks next to the box rather than inside it. In this case, incentives fulfilled their purpose. What are the lessons from these two simple experiments?

Financial incentives tend to focus the mind, and as such they tend to be productive only on "left-brain" tasks, that is, relatively simple problems with a clear set of rules and a single solution. In contrast, when financial incentives are offered to people to solve more "right-brain" tasks (problems that are more conceptual or complex in nature and require greater use of cognitive power), the incentives actually make the problem harder to solve: they narrow the focus at exactly the moment the solution tends to lie on the periphery, when the solver needs to think more holistically and laterally (out of the box).

These results were confirmed by an extensive study led by Dr. Bernd Irlenbusch at the London School of Economics, whose team studied 51 "pay-for-performance" plans inside companies and found that financial incentives can have a negative impact on overall financial performance (e.g., financial incentives for salespeople involved in complex sales will lower, rather than increase, their success rate).

So, science has known about these flawed links between problem solving and financial incentives for decades, and yet they endure. At the same time, more and more of the work we do is shifting to right-brain thinking as we delegate the routine, rule-based work to computers and outsourcing agents. But what is the solution?

Pink (2007) suggests that we move to incentives based on intrinsic motivators such as autonomy (e.g., opportunities to be independent, such as Google's 20 percent "do what you want" time rule), mastery (e.g., opportunities to improve and excel, such as Toyota's kaizen events), and purpose (e.g., opportunities to be driven by what really matters to them and others in their organization). An example frequently used to prove the power of intrinsic motivators at the organizational level is how Encarta, with its teams of thousands of highly paid contributors and the backing of Microsoft, was beaten by Wikipedia, which depended on volunteers driven by a common purpose, the autonomy to contribute when and how they wished within certain guidelines, and the opportunity for mastery.

As one might expect, there are other problems with measurements and incentives. For example, when there are (many) conflicting measurements, something that frequently happens in environments that implement a balanced scorecard without aligning each measurement to a business strategy (strategy map), people will tend to focus on those measurements they believe are most important in the eyes of management, neglecting the others (which might be more important) and making performance unpredictable. For example, if a production manager is responsible for achieving both high due-date performance and monthly cost recoveries, and believes the latter is the prime measurement, then the manager is likely to compromise on due-date performance toward the end of the month to meet the targeted tons per hour.

Ensuring the New Direction Addresses All Major UDEs

Overcoming the Problem of Low Expectations for Change

Previously, we identified one of the consequences of the vicious cycle in CI as stakeholders (especially top executives) having low expectations for the impact of change initiatives. To address this problem and ensure that all stakeholders have the same (high) expectations for the outcomes of selecting and implementing any changes to better exploit or elevate their system constraint, Goldratt (2008b) recommends the adoption of the six success criteria listed in Table 15-4, together with the logic of why each is needed and a recommendation, based on extensive field-testing, on how these can be used. Such extensive field-testing (Barnard, 2009) has also shown that these criteria help prevent mistakes of omission and commission in the selection and implementation of changes, and that these criteria should be shared with managers and employees at all levels, especially during the analysis and "buy-in" phases of change initiatives and during ongoing audits of these initiatives.

Overcoming "Not Seeing" the Inherent Improvement Potential

The famous quotation, "Necessity is the mother of invention," can be traced to Plato's Republic, book II, 369C, written in 360 BC. We all know that crises allow us to challenge and overcome prevailing assumptions and to identify and unlock potential we never knew existed. However, what if you do not have a real crisis now? In such situations, the literature on managing change is quite consistent: good leaders should create a "crisis" by creating a large gap between the current level of performance and the goal. An example is a new CEO who comes into an organization that is already doing well at 10 percent profit to sales and then (to inspire the team to higher performance) sets the goal of doubling profit to sales (to 20 percent) within three years.

In the case where there is no crisis yet, but where we can observe a stable or growing gap between the actual performance of an organization and its goal, we should see this as both a warning and an opportunity: a breakthrough is needed. We should start by asking what could cause such a gap. There are at least two hypotheses.

Hypothesis #1: The system's starting conditions (its capacity, capability, etc.) are simply insufficient to meet the demand and the only solution is to "elevate" the system constraint(s) (constraining starting condition) by investing in more resources or better resources. This hypothesis of the underlying cause for a gap is quite a common claim. "If you want my department to do more . . . I need more resources, better systems, etc."

Hypothesis #2: The system's starting conditions (its capacity, capability, etc.) are sufficient to achieve significantly higher levels of Throughput within significantly shorter lead times than currently, but capacity, time, and costs are wasted due to the current mode of operations. The solution in this case will be to "better exploit" (not waste) the potential of the system constraint (i.e., always try better exploitation before elevating the system constraint).

TABLE 15-4 Success Criteria Recommended by Dr. Eli Goldratt (2008c)

How can we validate whether Hypothesis #1 (no significant inherent potential) or Hypothesis #2 (significant inherent potential) is most valid for a specific organization?

Let's start with the general facts (governing principles) about any system and see what we can deduce from these.

Fact 1: The system constraint (bottleneck) governs the Throughput (flow rate) of goal units for the whole system.

Implication: The system (on average) can never produce more goal units than the constraint is capable of. However, if constraint capacity is wasted through starvation, blockage, breakdowns, or rework, then the system will achieve a lower Throughput than what it (based on its constraint) is capable of. The level of constraint capacity wasted on starvation, blockage, breakdowns, rework, etc., can be used as a reliable way to estimate whether inherent potential exists (i.e., the opportunity to do more without investing in more resources). The capacity lost is normally between 25 and 50 percent of the available capacity.
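As a worked illustration of this implication, the sketch below estimates the inherent potential from the constraint's lost hours. All names and figures (a CCR with 160 available hours per month and hypothetical waste by cause) are assumptions for illustration only.

```python
# A minimal sketch, with hypothetical figures, of how wasted constraint
# capacity bounds system Throughput (Fact 1).

constraint_capacity_hrs = 160.0   # available CCR hours per month (assumed)
units_per_constraint_hr = 5.0     # output rate while the CCR is working

# Assumed waste on the constraint, by cause (hours per month)
waste = {"starvation": 20, "blockage": 12, "breakdowns": 10, "rework": 8}

wasted_hrs = sum(waste.values())                        # 50 hrs
productive_hrs = constraint_capacity_hrs - wasted_hrs   # 110 hrs

potential_throughput = constraint_capacity_hrs * units_per_constraint_hr  # 800
actual_throughput = productive_hrs * units_per_constraint_hr              # 550

# 50/160 = 31.25% of constraint capacity is wasted, so roughly the same
# fraction of Throughput is recoverable without investing in more resources.
print(100 * wasted_hrs / constraint_capacity_hrs)
```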

Fact 2: The critical chain (the longest path of dependent events considering both process and resource dependency) governs the lead time (flow time) of all goal units through the system.

Implication: The parts going through the system can never go faster (on average) than the time to cover the critical chain. However, this flow time will be longer than the sum of processing and movement times on the critical chain when goal units traveling through the system have to wait for a resource or a decision. The level of time wasted on the critical chain due to resource or information unavailability (delays) can be used as a reliable way to estimate whether inherent potential exists (i.e., the opportunity to do the same or more within less time without investing in more resources). The time lost is normally 25 to 50 percent of the critical chain time.
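The same kind of estimate applies to the critical chain. The sketch below, again with purely hypothetical figures, separates touch time from waiting time to quantify how much of the lead time is inherent potential.

```python
# A minimal sketch (hypothetical figures) of Fact 2: flow time along the
# critical chain is touch time plus the time goal units spend waiting for
# a resource or a decision.

critical_chain_steps = [
    # (processing + movement hours, waiting hours), per step
    (8, 4), (6, 2), (12, 10), (4, 6),
]

touch_time = sum(p for p, _ in critical_chain_steps)  # 30 hrs
wait_time = sum(w for _, w in critical_chain_steps)   # 22 hrs
flow_time = touch_time + wait_time                    # 52 hrs

# 22/52 = ~42% of the lead time is waiting: inherent potential to deliver
# the same units in far less time without adding resources.
print(100 * wait_time / flow_time)
```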

Fact 3: Every system's performance (Throughput of goal units, lead time, costs, and investments) varies over time. Sometimes there is a significant variation between the best, the average, and the worst.

Implication: The "best ever" performance shows what is possible with the current starting conditions. Normally the "best ever" is achieved under ideal or crisis circumstances. The "ideal" circumstances should be turned into standard best-practice. It is in crisis situations that we become very open to "do whatever it takes," including changing the current rules (normal mode of operation) and ignoring efficiency measurements. For example, if there is a scarcity in the market, we naturally move to a "wait for the pull" rather than "push as much as you have." Why not use pull all the time?"Necessity is the mother of invention," but frequently these "inventions" that got us out of the crisis don't "stick" since we go back to the "way we've always done it before."

Therefore, when we observe a significant gap and variation between the actual performance of a system and its goal, we simply need to identify:

1. How much constraint capacity (which governs overall system Throughput) and critical chain time (the longest path of dependent events, which governs total lead time) is being wasted (poor constraint or critical chain exploitation).

2. How much unnecessary cost or investment is being incurred, to validate (or invalidate) the level of inherent potential (e.g., profitability) that can be unlocked without any significant investment in more or better resources.

We can represent this opportunity with the model shown in Fig. 15-14.

We can apply the same logic to validate or invalidate whether it is possible to achieve the same Throughput with fewer resources (truly variable costs, Operating Expenses, or Investments). We can determine this through observations, by studying "best-of-breed" organizations, or simply by identifying all the ways in which truly variable costs, Operating Expenses, and Investments are incurred unnecessarily (events such as overtime cost, emergency shipments, or investing in more capacity than needed because of starvation or blockage caused elsewhere in the system). Once these categories of avoidable or unnecessary truly variable costs, Operating Expenses, and Investments have been identified, we can validate whether they exist within the organization we are analyzing and, if so, to what extent, as a reliable way to quantify the "inherent" improvement potential. Tests can then validate how much of this potential can be unlocked without significant investments.

FIGURE 15-14 Quantifying inherent potential by looking for performance gaps/variation.

Figure 15-15 shows a summary of the hypotheses, the magnitude of inherent potential, and the validation that, in most organizations, it is possible to do more with less in less time: "more" by achieving higher Throughput through not wasting any constraint capacity; "with less" by achieving lower truly variable costs, Operating Expenses, or Investments through eliminating the causes of avoidable costs and investments; and "in less time" by achieving shorter lead times through eliminating the causes of delays on the critical chain.

Overcoming the Difficulty to Quantify the Impact of Change Initiatives

One of the key requirements of adopting a systems approach to continuous improvement and auditing is the ability to judge the impact of decisions on the system as a whole, especially the impact of financial decisions. For most managers, trying to evaluate the impact of their local decisions or proposed investments on the "system as a whole" is a daunting, lengthy, and frequently frustrating experience (especially when they need to decide quickly). Throughput Accounting (TA) was invented by Goldratt (1990a) to meet this challenge as an alternative to cost accounting. TA (according to the IMA Statement 4HH on TOC) differs from traditional cost accounting, first, in its recognition of the impact of constraints on the financial performance of an organization (i.e., if a decision impacts the constraint, the system's Throughput will be impacted, and vice versa); and second, in that it separates totally variable cost from Operating Expenses (OE) (all costs that are not totally variable with increased/decreased production) to support faster and better decisions. This definition removes the need to allocate all costs to products and services, an allocation that frequently results in suboptimal decisions when managers erroneously assume that once OEs are allocated they become variable.

FIGURE 15-15 Identifying the inherent potential to "do more with less in less time."

TA improves profit performance (even for not-for-profit organizations) with better and faster management decisions (Corbett, 1998), by using measurements that more closely reflect the effect of decisions on three critical monetary variables: Throughput, Investment/Inventory, and Operating Expenses (defined below). Goldratt's alternative begins with the idea that each organization has a goal and that better decisions increase the number of goal units the organization can generate now and in the future. The goal for a profit-maximizing firm is easily stated: to increase profit now and in the future. TA applies to not-for-profit organizations too, but they have to develop a goal that makes sense in their individual cases. Organizations that wish to increase the attainment of the goal should therefore require managers to test proposed decisions against three questions. Will the proposed change:

1. Increase or reduce Throughput (Sales − TVC)? If yes, by how much?

2. Reduce or increase Investment (Inventory)? If yes, by how much?

3. Reduce or increase OEs? If yes, by how much?

The answers to these questions determine the effect of proposed changes on systemwide measurements:

1. Throughput (T) = Sales Revenue − Totally Variable Cost = SR − TVC

2. Net Profit (NP) = Throughput − Operating Expense = T − OE

3. Return on Investment (ROI) = Net Profit / Investment = NP / I

4. TA Productivity = Throughput / Operating Expense = T / OE

5. Investment Turns (IT) = Throughput / Investment = T / I

In summary, TA is an important development in modern accounting that allows managers within both private and public sector organizations to understand the contribution of constraint resources and the frequently nonlinear impact of local actions or decisions on the overall profitability and viability of an organization.
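A minimal sketch of these five measurements in code, using hypothetical figures, shows how the three questions above translate into a before/after comparison for a proposed change.

```python
# A minimal sketch of the five TA measurements defined above.
# All figures are hypothetical, for illustration only.

def ta_measures(sales_revenue, tvc, operating_expense, investment):
    t = sales_revenue - tvc      # Throughput: T = SR - TVC
    np_ = t - operating_expense  # Net Profit: NP = T - OE
    return {
        "T": t,
        "NP": np_,
        "ROI": np_ / investment,                # NP / I
        "Productivity": t / operating_expense,  # T / OE
        "InvestmentTurns": t / investment,      # T / I
    }

# Answering the three questions for a hypothetical change that raises
# Throughput by 60 and OE by 10, with Investment unchanged:
before = ta_measures(1000.0, 400.0, 450.0, 800.0)
after = ta_measures(1100.0, 440.0, 460.0, 800.0)
print(after["NP"] - before["NP"])  # +50: the change helps the whole system
```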

TABLE 15-5 Using TA to Show the Leverage of a 1-Percent Change in Price, Volume Sold, and Wages

Knowing the impact of changes on these variables plays a vital part both in knowing where to focus scarce resources (especially management time) and in predicting the impact of changes on the organization's profitability/viability. As an example, Table 15-5 provides a baseline case that shows the leverage achieved by a 1-percent increase in average selling price, a 1-percent increase in volume sold, and a 1-percent reduction in wages on the Net Profit of the organization (10 percent, 5 percent, and 2 percent, respectively).
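The leverage in Table 15-5 can be reproduced with simple arithmetic. The baseline below (sales revenue of 100, TVC of 50, wages of 20, other OE of 20, hence NP of 10) is a hypothetical case chosen so that the three 1-percent changes yield the 10/5/2-percent leverage cited above.

```python
# A minimal sketch of the Table 15-5 leverage logic with a hypothetical
# baseline: SR = 100, TVC = 50, wages = 20, other OE = 20, so NP = 10.

SR, TVC, WAGES, OTHER_OE = 100.0, 50.0, 20.0, 20.0

def net_profit(sr, tvc, wages, other_oe):
    return (sr - tvc) - (wages + other_oe)  # NP = T - OE

base_np = net_profit(SR, TVC, WAGES, OTHER_OE)  # 10.0

# 1% higher price: revenue rises; volume, and therefore TVC, is unchanged.
np_price = net_profit(SR * 1.01, TVC, WAGES, OTHER_OE)
# 1% higher volume: revenue and TVC both scale with the extra units.
np_volume = net_profit(SR * 1.01, TVC * 1.01, WAGES, OTHER_OE)
# 1% lower wages.
np_wages = net_profit(SR, TVC, WAGES * 0.99, OTHER_OE)

for label, np_ in (("price", np_price), ("volume", np_volume), ("wages", np_wages)):
    print(label, round(100 * (np_ - base_np) / base_np, 1))  # 10.0, 5.0, 2.0
```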

Preventing and Correcting Errors of Omission and Commission

It was stated previously that two of the most common types of management mistakes are errors of omission and errors of commission. Another way to look at these errors is to relate them to whether action was taken based on a tested or an untested hypothesis.

1. Errors of Commission: Doing what should not be done or acting on an untested hypothesis.

a. Do work that is not important (what Goldratt calls choopchiks) or urgent.

b. Do the wrong thing (it will not solve the problem and may even make things worse).

c. Do too many things at the same time.

2. Errors of Omission: Not doing what should be done or not acting on a tested hypothesis.

a. Don't act because "we still have time" or "we don't have the time (to act now)."

b. Don't act because "we are different (it will not work here)."

c. Don't act because "they will never agree (and without their agreement it's a waste of time to even start)."

Using the classification of "acting on an untested hypothesis" and "not acting on a tested hypothesis" is also useful because it helps identify the simple solution for preventing both mistakes: to prevent errors of omission and commission, we should always check (our hypothesis) before we act or don't act, unless acting is the only way of checking.

But how do you check your hypothesis? It starts with recognizing that every decision we make, and every conclusion we reach and communicate or act on, that contains the words "because" or "will result in" embeds a hypothesis, such as "We should not make the offer because customers will never agree" or "Customers are buying less because our competitors have reduced their prices."

We have two ways to check a hypothesis. One is through logic, using the effect-cause-effect method to identify and then validate predicted effects. The more predicted effects are validated, the more valid your hypothesis; even one validated predicted effect (if it could not have happened by "fluke") can be enough to validate it. At the same time, even one predicted effect that cannot be validated can invalidate your hypothesis (e.g., the observation of just one black swan invalidates the hypothesis, based on thousands of observations, that all swans are white).

The second method, frequently referred to as the scientific method of "trial and error," is to test the hypothesis by acting. Leonardo da Vinci said, "I am always impressed with the urgency of doing. Knowing is not enough; we must apply. Being willing is not enough; we must do." It is only once we apply that we can really test the necessity and especially the sufficiency of our hypothesis (new solution). That is why our "injection" to reduce errors of omission and commission includes ". . . unless acting is the only way of checking." This realization is critical to overcoming disagreements or fear. For example, it might be that the only way of really checking how customers will react to a new offer or product is by actually presenting the offer. Of course, such "tests" should be designed and reviewed with the rigor of an experiment, rather than wasting time arguing in the boardroom over whether customers will like it or whether to first do more market research (what Goldratt calls "just a sophisticated way of procrastinating").

Overcoming the Fear of Uncertainty

In studying success stories such as Toyota, Walmart, and GE, it is noticeable that each had a leader or leadership team willing to take responsibility for deciding which philosophy or methodology to use, rather than leaving it to levels 3, 4, or 5 to decide. Not only did they decide on the philosophy and the vision, they also made sure the connection between the two was clear to every level and every function in the organization, and then empowered everyone to contribute (within the boundaries of the philosophy) to achieving the vision or goal. These leaders also showed that they continuously challenge their own patterns of thinking and want those around them to do the same: not randomly, but through a systematic process using the scientific method.

Why is this important? It helps overcome the fears and prevent the mistakes related to the decisions and conflicts of when to change, what to change, and how to cause, sustain, and continuously improve on the change.

What can organizations do if they find themselves almost paralyzed by the fear of changing (due to a culture that seems to punish failure and fails to recognize the courage to invent and test new ways of doing things) unless competitors make it necessary to do so?

This deficiency can be eliminated by taking the following steps (Barnard, 2001; Ackoff, 2006):

1. Record every important decision, including the ones not to do something because of reasons such as "we still have time" (when you've run out), "the cost or risk of doing is too high" (ignoring the cost or risk of not doing), etc.

2. The Decision Record should include (a) the event that triggered the need for change, (b) the expected effects of the decision and by when they are expected, (c) the assumptions on which the expectations are based, (d) the inputs to the decision (information, knowledge, and understanding), and (e) why the specific decision was made (the logic) and by whom. (A sketch of such a record as a simple data structure follows this list.)

3. Monitor the decisions to detect any deviation of fact from expectations and assumptions. When a deviation is found, determine its cause and take corrective action (to reduce the time to detect and correct mistakes).

4. The choice of a corrective action is itself a decision and should be treated in the same way as the original decision; a Decision Record should be prepared for it. In this way, one can learn how to correct mistakes; that is, learn how to learn more rapidly and effectively. Learning how to learn is probably the most important thing an organization or individual can do.
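As an illustration of steps 2 through 4 above, a Decision Record can be captured as a simple data structure. This is only a sketch: the field names and example values are illustrative assumptions, not part of the Barnard/Ackoff recommendation.

```python
# A minimal sketch of a Decision Record covering the five elements listed
# in step 2, plus a log for the deviations monitored in step 3.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DecisionRecord:
    trigger: str                       # (a) event that triggered the need for change
    expected_effects: dict[str, date]  # (b) expected effects and by when
    assumptions: list[str]             # (c) assumptions behind the expectations
    inputs: list[str]                  # (d) information, knowledge, understanding used
    rationale: str                     # (e) the logic of the decision . . .
    decided_by: str                    #     . . . and by whom
    deviations: list[str] = field(default_factory=list)  # monitoring log (step 3)

# Hypothetical example entry:
record = DecisionRecord(
    trigger="Key customer demanded a 2-week lead time",
    expected_effects={"On-time delivery above 95 percent": date(2025, 6, 30)},
    assumptions=["The CCR loses at least 25 percent of capacity to starvation"],
    inputs=["Buffer Management history", "CCR downtime log"],
    rationale="Exploit before elevate: remove CCR starvation first",
    decided_by="Operations VP",
)
```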

The decision by an organization not to adopt the systems thinking/holistic approach of TOC should itself be treated in this way. Making explicit the assumptions on which such a decision is based, and monitoring them, can lead to a reversal of the decision in time.