Theory of Constraints Handbook Part 133


About the Author.

Lisa A. Ferguson, PhD, is the founder and CEO of Illuminutopia℠, an organization focused on "Illuminating the way to utopia for individuals, organizations and society℠." Its websites are located at www.illuminutopia.com and www.illuminutopia.org. Lisa is coauthor, with Dr. Antoine van Gelder, of an S&T tree for hospitals. She is currently writing books and papers for publication as well. Until June 2008, she spent a year working directly with Dr. Eli Goldratt (the founder of TOC) as his technical assistant and writer, learning how to write the way he does. Since 2005, Lisa has been teaching part-time for Goldratt Schools (GS), training consultants in different countries, including India, the United States, and Japan, to be TOC Experts or Supply Chain Logistics implementers. She has a PhD in Operations Management from Arizona State University and an MBA. She taught operations management full-time in a university business school for 10 years; the last 5 years were spent teaching only MBA and doctoral students with a practical focus. Lisa has been involved with the TOC International Certification Organization (TOCICO) since its inception and is currently a member of its Board of Directors. She is TOCICO-certified in Supply Chain Logistics, Project Management, and the Thinking Processes. She resides in Sedona, Arizona and enjoys spending time with horses, hiking, and playing tennis.

CHAPTER 35.

Complex Environments

Daniel P. Walsh

Introduction.

At times, the challenge of making correct decisions in a value-added chain is daunting at best; at other times, it is simply overwhelming. This appears to be the case in every organization, regardless of size or the complexity1 of the products produced or services provided.

Reliance on suppliers and vendors, both internal and external to our span of control, further fuels the levels of uncertainty, complexity, and frustration. On any given day, we are ourselves a consumer, a producer, and a supplier of these very goods and services. Add to the mix our inability to reliably forecast future demand for our goods or services, and it is no wonder we find ourselves mostly in survival mode. Having observed these phenomena in many different companies within an industry sector, and indeed across multiple industry sectors, I find that survival mode appears to be common practice. So much so that it is accepted and viewed as a fact of life that cannot be easily changed, in spite of significant investments in improvement initiatives (Brown et al., 1994); the environment has only grown more complex since that article was written.

If we view our organization as a system, then by definition all of the activities are connected. At first glance, they may appear to be independent of each other, but in reality any action taken by one of the activities will impact the others. It then follows that any real and lasting change for the better must be based on a systems approach; all changes must improve not just a local activity, but the entire organization. There are two characteristics of all systems (Goldratt and Cox, 1984): dependency of variables or activities, and fluctuation (more commonly referred to as variability). Even if this tenet of improvement is recognized and accepted, it will immediately create a conflict with existing metrics. This conflict highlights the requirement for an overarching common set of metrics that evaluates the individual contributions of all activities while establishing connectivity to the performance of the organization as a whole. Once these new metrics are in place and effective management tools for assessing the impact of internal and external variability are being used, the variability can be evaluated quickly and corrective action taken to protect the organization's performance.

Copyright 2010 by Daniel P. Walsh.

FIGURE 35-1 Evaporating Cloud of managers' dilemma of judging the system performance. (© E. M. Goldratt, used by permission, all rights reserved. Source: E. M. Goldratt, 1999, Viewer Notebook, 137.)

The purpose of this chapter is to provide a better understanding of why addressing the effects of local variability is crucial to developing more effective strategies for managing entire supply chains. Again, these new metrics and this approach must provide connectivity from the local activities to the global Throughput of the organization. In addition, it is important to use the correct planning, scheduling, and controlling algorithms; in other words, make sure the right tool is being used. Lastly, it is important to make sure these tools and algorithms are employed holistically.

Brief Background

First, we must better understand the chronic dilemma virtually every manager faces on a daily basis. To illustrate the dilemma and the resultant conflict, we will use a simple Evaporating Cloud (EC) developed by Dr. Eliyahu Goldratt (1994; see Fig. 35-1). In order to [A] manage well, we must [B] control costs; in order to [B] control costs, we (managers) must [D] evaluate and make decisions based on local impact. The other side of the dilemma is that in order to [A] manage well, we must [C] protect the company's Throughput and fulfill our commitments to the market; in order to [C] protect the company's Throughput, we must [D'] not make decisions based on local impact. The needs of the company, [B] controlling costs and [C] protecting Throughput, are necessary conditions and must be achieved in order to [A] manage well. The conflict is very clearly defined as being between whether we [D] make decisions based on local impact or we [D'] do not evaluate according to local impact.2 Now why do we feel compelled to evaluate according to local impact? We feel compelled because of the ingrained assumption that the local impact of a decision is equal to the impact it will have on the company as a whole. In fact, this assumption is consistent with common business folklore and is fortified by what is accepted and taught in virtually every learning institution throughout the world.
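
For readers who prefer to see the structure explicitly, here is a minimal sketch of the generic EC as a data structure: an objective, two needs, two conflicting wants, and the assumptions that connect them. The class and field names are my own invention for illustration, not part of the TOC literature; surfacing and invalidating an assumption is what "evaporates" the cloud.

```python
from dataclasses import dataclass, field

@dataclass
class EvaporatingCloud:
    """Generic Evaporating Cloud: [A] objective, [B]/[C] needs,
    [D]/[D'] conflicting wants, plus the assumptions under each arrow."""
    objective: str                # [A]
    need_b: str                   # [B] supports A
    need_c: str                   # [C] supports A
    want_d: str                   # [D] supports B
    want_d_prime: str             # [D'] supports C, conflicts with D
    assumptions: dict = field(default_factory=dict)  # arrow -> assumption

managers_dilemma = EvaporatingCloud(
    objective="[A] Manage well",
    need_b="[B] Control costs",
    need_c="[C] Protect the company's Throughput",
    want_d="[D] Evaluate and decide based on local impact",
    want_d_prime="[D'] Do not decide based on local impact",
    assumptions={
        "D-D'": "Local impact of a decision equals its impact "
                "on the company as a whole.",
    },
)
print(managers_dilemma.assumptions["D-D'"])  # the assumption to challenge
```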

The other side of the dilemma is that in order to protect Throughput, many times we must not evaluate and make decisions based on the local impact, but rather do whatever it takes to meet our commitments to the market. This, of course, is the familiar phenomenon commonly referred to as "firefighting," the bane of all managers. It also manifests itself in managers focusing on local metrics during the first part of a reporting period and then, later in the reporting period, shifting the focus to meeting orders whose due dates are starting to slip. When this occurs, the focus is no longer on local impact, but rather on delivering our products to clients.

It is clear this dilemma must be addressed or managers at all levels will remain frustrated and the true potential of a company will never be achieved.

Guiding Strategies

If this dilemma is the starting point, then we have two broad guiding strategies available:

1. The first approach focuses on improving the individual parts of the organization within our span of control as "fires" crop up. This has been the predominant approach and remains popular among many process improvement practitioners and managers. It is based on inductive reasoning and the belief that improving the individual parts of the organization will result in improving the performance of the organization. Starting with the early pioneering efforts (see, for example, Alford, 1934, sect. 4), most of the literature and developments on organizational improvement (Churchman, 1968) have focused on this piecemeal or fragmented approach. Indeed, the majority of the widely used tools and methodologies (see, for example, Zandin and Maynard, 2001; Barnes, 1980) can trace their origins to these scientific management tenets (Taylor, 1911).

2. The second approach views the organization in its totality, focusing on a systems approach (Churchman, 1968) to improvement. It is based on deductive reasoning, long a cornerstone of breakthrough advances in the sciences and now starting to show considerable promise in some of the more advanced evolving business methodologies (Rummler and Brache, 1995).

Today there are many powerful tools and methodologies available, such as TOC, Lean, Six Sigma, Business Process Reengineering, etc., to help implement these improvement strategies. Still, the results have been mixed. In some cases, improvements have been documented; in other cases, the organizations showed little improvement or even none at all. Even when initial improvements were achieved, the sad reality was that many of them were not sustainable. Almost invariably, the improvements took longer and were more difficult than expected.

So, where does that leave us? Rather than attempting to enhance or improve existing tools and methodologies, we can focus instead on how to holistically develop and employ a significantly more effective solution set. This focus will require building on and leveraging the currently available body of knowledge. Perhaps it would be helpful if we first gained clarity and understood why many improvements fail to meet managers' expectations.

Following the strategy of improving the individual parts of the organization leads to managing the individual parts in isolation. If everyone is managing their areas of responsibility this way, then all of the fine tools and methodologies are focused on improving the individual parts separately. Local effects reflect the impact of problems that exist within the system of operation; measuring these effects on isolated "local activity performance" does not necessarily lead us to understanding the systemic problems that may be driving negative performance. We can all agree, then, that the improvements can be summarized as follows:

$$I = \sum_{k=1}^{n} i_k$$

where $I$ is the sum of the individual improvements $i_k$, for $k$ from 1 to $n$ improvements.

Now, it is important to accept the painful reality that the sum of the individual improvements has very little to do with improving the performance of the organization (Goldratt and Cox, 1984, Chapter 4; Johnson and Kaplan, 1987; Goldratt, 1988); if it leads to any improvement of the organization at all, that is a purely random event. The sum of individual improvements is simply the summation of disconnected events. This can also be described as sub-optimization. It appears that this erroneous assumption (that action taken locally will necessarily improve the performance of the organization) is one of the main contributing causes of failure to achieve real and sustainable enterprise improvements. It follows that this assumption must be challenged and, de facto, abandoned, replaced by an approach that focuses instead on improving the performance of the enterprise.

In order to develop an alternative approach, we must focus on improving the performance of individual areas, such as a department or an area of activity, while providing connectivity to improving the enterprise. In other words, we must improve an individual area only if we can establish a cause-and-effect relationship showing that the local improvement translates into global improvement. This will require a fundamental shift in our thinking. Before we pursue this line of reasoning, we must first agree that in any enterprise there are two indisputable and absolute truths:

1. Every function and task within the enterprise is connected, and therefore its outcome will affect other parts of the enterprise. Regardless of the complexity, we must understand the cause-and-effect relationships the functions and tasks have on the individual parts and, more importantly, on the performance of the entire enterprise.

2. Every part of the enterprise is subject to uncertainty, which is simply another way of describing the inevitable variability experienced in actual execution. Regardless of how meticulous our planning and scheduling, when the plan is actually executed, uncertainty and variability will inevitably affect our efforts.

These two tenets (dependent events and statistical fluctuations) provide the foundation for developing any breakthrough holistic approach (Goldratt and Cox, 1984, Chapters 15 and 17), leapfrogging our ability to significantly increase the Throughput (discussed later) of a single enterprise or of the larger value-added processes of an entire supply chain. Another important element of this new holistic approach is providing relevant performance and operational metrics to monitor the stability of the enterprise on a day-to-day basis. These metrics must provide connectivity in the short term, highlighting when and where specific action must be taken, while providing longer-term visibility for effective risk management.

An operational metric must have a cause-and-effect relationship providing connectivity between an action taken and the positive or negative impact it will have on the organization's Throughput. Therefore, in most cases, if these operational metrics are providing the priorities for managing, they are focused on increasing Throughput. As this is done, recurring costs shall not increase, and any variable cost increase will be significantly less than the corresponding increase in sales. An example of an operational metric is the speedometer in an automobile while driving on a trip. The output of the speedometer is the effect of the input of how hard we press the accelerator pedal. Therefore, if we have calculated the average speed that must be maintained in order to complete the journey on time, the information received from the speedometer allows us to take the correct action in real time, not in hindsight. In this example, a performance metric would be measuring on the road map the distance covered during the journey. Measuring the variance of the distance covered vis-à-vis what was expected to be covered is important, but of very little use in making real-time operational decisions.
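
To make the distinction concrete, here is a minimal sketch (my own illustration, not from the chapter) of the speedometer as an operational metric: it compares the speed needed to finish the journey on time against the current reading and signals a correction in real time, rather than reporting a variance after the fact.

```python
def required_speed(distance_left_km: float, hours_left: float) -> float:
    """Average speed needed from this moment on to arrive on time."""
    return distance_left_km / hours_left

def operational_signal(speedometer_kmh: float, distance_left_km: float,
                       hours_left: float) -> str:
    """Real-time guidance: act now, rather than review variance afterward."""
    needed = required_speed(distance_left_km, hours_left)
    if speedometer_kmh < needed:
        return f"Speed up: need {needed:.0f} km/h, doing {speedometer_kmh:.0f}"
    return f"On pace: need {needed:.0f} km/h, doing {speedometer_kmh:.0f}"

# 300 km to go, 4 hours left, currently doing 65 km/h
print(operational_signal(65, 300, 4))  # -> "Speed up: need 75 km/h, doing 65"
```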

Throughput Accounting

The Theory of Constraints (TOC)3 defines Throughput (T) as Sales $ (S) minus Truly Variable Costs $ (TVC), that is, $T = S - TVC$. It should be pointed out that all recurring costs, including fixed labor costs, are captured as Operational Expense (OE) (Corbett, 1998).

If decisions are being made using operational metrics, and those metrics are focused on increasing Throughput, then it is possible to have the organization's financial metrics aligned as well (Corbett, 1998). TOC builds on this concept and recognizes that an organization is a system; therefore, regardless of how well it is managed, its ability to increase Throughput will be limited by the system's constraint. Furthermore, if we have identified what and where the constraint is, and we are subordinating everyone's efforts toward maximizing its effectiveness, then we have unlocked the secret to maximizing the organization's Throughput (Goldratt and Cox, 1984).
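
To make the subordination idea concrete, here is a short sketch of the well-known TOC product-mix logic: rank products by the Throughput they generate per minute of constraint time, rather than by Throughput per unit alone. The products and numbers below are invented for illustration.

```python
# Hypothetical products: (name, sales price, truly variable cost,
# minutes required on the constraint resource per unit).
products = [
    ("P", 90.0, 45.0, 15.0),
    ("Q", 100.0, 40.0, 30.0),
]

for name, sales, tvc, constraint_min in products:
    t_per_unit = sales - tvc                      # Throughput: T = S - TVC
    t_per_constraint_min = t_per_unit / constraint_min
    print(f"{name}: T/unit = {t_per_unit:.0f}, "
          f"T per constraint minute = {t_per_constraint_min:.2f}")

# P yields 3.00 per constraint minute versus Q's 2.00, so P is loaded
# first even though Q has the higher Throughput per unit (60 vs. 45).
```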

As we can see in Fig. 35-2, we now have a model for resolving the conflict depicted in Fig. 35-1, which most organizations face on a daily basis. The conflict, of course, is whether to take action in order to control costs or to take action to protect Throughput. It is important to note that this conflict is in large part caused by using performance metrics to evaluate the individual parts of the organization rather than using operational metrics to evaluate contribution to Throughput. The old model is analogous to driving your automobile by looking in the rear-view mirror and using the history of what is behind you (performance metrics) to guide future decisions (Fig. 35-1). The new model, on the other hand, focuses on looking out the front windshield (operational metrics; see Fig. 35-2).

We are now using the same measurements to make operational and financial decisions. Once this new model is adopted, it is very easy to turn the operational metrics into performance metrics. Since the new model measures the rate of Throughput being generated at the constraint, it simply requires adding up the individual contributions at periodic intervals:

$$T = \sum_{k=1}^{n} t_k$$

where $T$ is the periodic Throughput and $t_k$ are the individual contributions to $T$, for $k$ from 1 to $n$.

FIGURE 35-2 Evaporating Cloud solution of managers' dilemma of judging the system performance. (© E. M. Goldratt, used by permission, all rights reserved. Source: Modified from E. M. Goldratt, 1999.)

The following discussion is intended to provide a roadmap for developing such an approach. It is important to share that this approach has been successfully employed across organization types and different industry sectors. I believe it has universal applicability in private and public organizations.

A Holistic View

In order to develop a holistic approach to better achieve the goals of the enterprise, it stands to reason that we must first model our value-added chain as one system. Before we discuss the modeling, we must address the characteristics of a system.

There are two characteristics that can be used to describe any system. The first is connectivity: everything within the boundaries of the system is connected, which means all of the elements are subject to cause and effect. None of the elements operates in isolation; at first glance it may seem so, but one must continue looking until the connectivity is established. Figure 35-3 provides a top-level system depiction of a typical company that is part of a much larger supply chain. As the systems architecture for the company is developed, a much more detailed view will be modeled; the interdependencies will be identified, and the flow of information and work that culminates in a value-added product or service emerges. This is a very important part of the planning process and a precursor to developing the approach for dealing with variability, which is the execution part of our systems architecture.

The second characteristic is variability: in execution, the individual elements are influenced by variability (see Fig. 35-4). Due to the connectivity of the elements, variability is transferred throughout the system and thus affects the outcome of the system itself. Since variability can never be eliminated, an important part of the systems architecture design must include the capability to better manage and mitigate it.
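
The combined effect of the two characteristics (dependency and fluctuation) can be demonstrated with a small simulation in the spirit of the dice game from Goldratt and Cox (1984); the station count, capacity distribution, and code structure below are my own illustration. Each station can process a random one to six units per round, yet the line ships noticeably less than the balanced average of 3.5 units per round, because a station can never process more than its upstream neighbor has delivered.

```python
import random

def simulate_line(stations: int = 5, rounds: int = 1000) -> float:
    """Dependent stations, each with capacity uniform on 1..6 per round.
    A station can pass along at most what sits in its input buffer."""
    random.seed(42)
    buffers = [0] * stations        # WIP waiting in front of each station
    shipped = 0
    for _ in range(rounds):
        buffers[0] += random.randint(1, 6)   # raw material release
        for i in range(stations):
            capacity = random.randint(1, 6)  # this round's fluctuation
            moved = min(capacity, buffers[i])
            buffers[i] -= moved
            if i + 1 < stations:
                buffers[i + 1] += moved      # hand off downstream
            else:
                shipped += moved             # last station ships
    return shipped / rounds

print(f"Average shipped per round: {simulate_line():.2f} "
      f"(a balanced line 'should' average 3.50)")
```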

FIGURE 35-3 Links in an organization chain.

FIGURE 35-4 Statistical fluctuations and dependent resources.

The majority of companies find it very difficult to achieve their planned objectives, so much so that it leads to a belief of inevitability, of not being able to control the constant stream of uncertainties confronting them on a daily basis. Focusing on individual performance levels instead of the performance of the enterprise seems to be the only choice. The uncertainty that causes the firefighting is actually the manifestation of variability. The negative impact to the company results from not being able to mitigate the inevitable variability, leaving managers to respond in a firefighting mode.

Categories of Variability

There are two categories of variability: common cause and special cause.4 They have different origins, but both can adversely affect the performance of the company. Many different management philosophies have evolved attempting to minimize their impact. For example, in Lean, the kanban is used to signal variation; when variation appears, the kanban starts choking the release of work to control the amount of work in process (WIP). This recognizes the fact that if work is authorized and released prior to resolving the cause of the variability, the queue increases, which increases cycle time. Similarly, in Six Sigma, process control charts such as X-bar and R charts examine specific process variability, highlighting areas for improving an individual process while providing feedback in execution.
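
As a concrete illustration of the Six Sigma side of this discussion, the sketch below computes X-bar and R chart control limits from subgroup data, using the standard SPC constants for subgroups of five (A2 = 0.577, D3 = 0, D4 = 2.114); the measurements are invented.

```python
# X-bar and R control chart limits for subgroups of size 5.
# Standard SPC constants for n = 5: A2 = 0.577, D3 = 0, D4 = 2.114.
A2, D3, D4 = 0.577, 0.0, 2.114

# Invented measurements: each row is one subgroup of five observations.
subgroups = [
    [10.2, 9.8, 10.1, 10.0, 9.9],
    [10.4, 10.1, 9.7, 10.0, 10.2],
    [9.6, 10.0, 10.3, 9.9, 10.1],
]

xbars = [sum(g) / len(g) for g in subgroups]      # subgroup means
ranges = [max(g) - min(g) for g in subgroups]     # subgroup ranges
xbarbar = sum(xbars) / len(xbars)                 # grand mean
rbar = sum(ranges) / len(ranges)                  # average range

print(f"X-bar chart: UCL={xbarbar + A2 * rbar:.3f}, "
      f"CL={xbarbar:.3f}, LCL={xbarbar - A2 * rbar:.3f}")
print(f"R chart:     UCL={D4 * rbar:.3f}, CL={rbar:.3f}, LCL={D3 * rbar:.3f}")
```

A point falling outside these limits signals special cause variation worth investigating; points within them reflect common cause variation inherent to the process.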

Many companies may see improvements with these approaches to managing variability. However, there is growing consensus today that something additional is needed to get them to the next level: an additional classification of variability that provides greater understanding and insight in choosing the correct planning, scheduling, and execution applications, and thus better focus. First, it is important to examine the confusion and negative impact being caused and to view it in a historical context.

Tools Selection

In every organization, there is a requirement to plan, schedule, and execute a series of actions in order to provide a product or service. Specific TOC applications and tools have been developed to manage different parts of the organization, such as departments, work centers, etc. The three predominant application solutions for planning and control systems5 are:

Project Management System: Used to manage the projects in the company.

Production Planning and Control System: The origin of this application is in manufacturing; however, it has evolved into many other parts of the company, such as the customer service and administrative areas.

Material Management and Inventory Control System (supply chain system): Primarily focused on material procurement, transportation, warehousing, and inventory control.

Which application solution the organization uses is determined by the product or service provided to the market. It is interesting to note that many companies remain locked in old paradigms, using production planning and control systems such as MRP and MRPII, measures, critical-path project management, and distribution systems such as DRP and DRPII; they have not taken advantage of the evolving thinking and technologies now available and are thus blocked from adopting them. Building on the emerging thinking and new tools available, companies are already leveraging the conclusion reached by Schragenheim and Walsh (2004) that a deeper understanding of when to use each of the logistical tools (the application solutions for Project Management, Production Planning and Control, and Material Management and Inventory Control) will lead to powerful hybrid solution sets. An example will be shown later in the chapter. In fact, companies using holistic planning, scheduling, and execution techniques, such as an Integrated Enterprise Scheduling engine that focuses on holistically managing the value-added chain's Throughput rather than the Throughput of the individual parts, are obtaining remarkable and sustainable results. The rationale and explanation for such an approach were highlighted in an article by Schragenheim and Walsh (2004). Indeed, for the first time, it appears there are software solutions being developed that recognize the requirement, and the immense potential, of better managing the negative impact on the enterprise caused by the inevitable variability in execution.