Dealing with Unintended Emergence: A Cha-Cha-Cha Investigative Approach – Part I

Part II of this series may be found here.

Acclaimed systems thinker Donella Meadows defined a system as “an interconnected set of elements that is coherently organized in a way that achieves something.” David Peter Stroh elaborated on this definition, noting that systems achieve a purpose, “which is why they are stable and so difficult to change. However, this purpose is often not the one we want the system to achieve.” All too often, the unfortunate reality is just what Stroh touched on: despite our best efforts, systems do not do exactly what we intend them to do, and once they are built, they are hard to change.

These unintended behaviors often generate unwanted or detrimental effects that ultimately reduce the system’s performance, effectiveness, or longevity. Engineers jokingly refer to these behaviors as “features” when they impose only minor inconveniences rather than catastrophic or mission-ending effects. A more serious recent example is the set of well-known issues surrounding the Boeing 737 MAX program. The most prominent issue involved fitting larger engines while reusing much of the previous 737 architectural configuration, which shifted the center of gravity and made the aircraft unstable in various modes of flight. While this was understood to be a potential flight-stability issue, the attempted fix, and the resultant complexity of the MCAS software and its interfaces with both crew and vehicle, produced additional emergence: it confused the crew and fed back into the system in ways that increased, rather than reduced, the probability of inducing stalls. Even acknowledging that corporate governance and regulatory oversight issues contributed to these problems, discovering them so late made them far harder to address. Indeed, we continue to see the attempted fixes play out in real time, at enormous financial cost to Boeing.

We should also acknowledge that emergent behaviors can have beneficial effects for end users, though these are typically much rarer and more serendipitous. Text messaging began as a feature that cellular service providers developed to push alerts to customers about possible outages and service interruptions. The designers didn’t realize that in building a feature for sending messages to customers, they had also built a technology that enabled customers to text each other. Ironically, this emergent behavior created a revenue stream that the large cellular providers were not equipped to monetize at the outset. While an interesting discussion in its own right, we will set aside the concept of emergent applications and usages of existing systems for a future blog post.

This blog will focus on two questions:

  • Where does emergent behavior originate?
  • What can we do as systems engineers and systems thinkers to mitigate the unintended emergence to the degree possible?

While there are numerous ways one could approach this topic, I will focus on applying a method used in the scientific community and tailoring it for our engineering needs: the “Cha-Cha-Cha Theory of Scientific Discovery.” (For more information, see Daniel E. Koshland Jr.’s article of the same name in the August 10, 2007 issue of Science.)

Koshland proposes a way to approach problems through three viewpoints: Charge, Challenge, and Chance. Each requires a certain mindset, and each can help provide unique insights under appropriate conditions. The following table describes the original definitions and mindsets developed for the scientific community and then tailors them to the engineering application of understanding emergent behavior.

| Viewpoint | Scientific Application | Engineering Application for Understanding Emergent Behavior |
| --- | --- | --- |
| Charge | These discoveries solve problems that are quite obvious – the cure for heart disease, understanding the movement of stars in the sky – but for which the way to solve the problem is not so clear. Albert Szent-Györgyi said that the scientist must “see what everyone else has seen and think what no one else has thought before.” | These are deviations from planned behavior that are obvious to the engineer through observation or testing, but whose underlying cause(s) are not so clear. The engineer must use systems thinking, reasoning, and models of the system as a whole to develop theories of how the perceived behavior originates. This demands a systems mindset and an understanding of the system as a whole, including all of its key interdependencies and impacts. |
| Challenge | These discoveries respond to an accumulation of facts or concepts that are unexplained by, or incongruous with, the scientific theories of the time. The discoverer perceives that a new concept or theory is required to pull all the phenomena into one coherent whole. | These are deviations from planned behavior under particular circumstances. They don’t always happen, but occur occasionally or infrequently, and the end user may dismiss them as a nuisance, “feature,” or inconvenience. The engineer must abstract patterns from the observed behavior, looking for what is common between observations and then tracing those clues back through system models to their originating elements (see the sketch following this table). |
| Chance | These discoveries are the ones often called serendipitous: chance events that a prepared scientist recognizes as important and then explains to other scientists. | These are deviations from planned usage, or unintended behaviors that turn out to have useful, positive applications in the real-world environment. They can yield serendipitous applications of the system that provide new or improved benefits to the end user. The engineer must be inquisitive and solicit feedback from users, or obtain usage data through other means, keeping an open mind to the possibility that end users may find novel applications of a system, or that unintended system-level behaviors benefit users or other external stakeholders. This knowledge can then be fed back into the design process to better understand user behavior for future upgrades and new products and services. |
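To make the Challenge mindset more concrete, the sketch below shows (in Python, with entirely hypothetical field names and data) one way an engineer might look for what is common across intermittent anomaly reports before tracing that common factor back through the system model.

```python
from collections import Counter

# Hypothetical anomaly reports: each records the operating conditions
# in effect when the unexpected behavior was observed.
observations = [
    {"mode": "cruise", "temp": "high", "load": "peak"},
    {"mode": "cruise", "temp": "high", "load": "nominal"},
    {"mode": "climb",  "temp": "high", "load": "peak"},
    {"mode": "cruise", "temp": "high", "load": "peak"},
]

# Count how often each (condition, value) pair appears across the reports.
counts = Counter()
for obs in observations:
    counts.update(obs.items())

# Conditions present in every report are candidate common factors to
# trace back through the system models to their originating elements.
total = len(observations)
common = [cond for cond, seen in counts.items() if seen == total]
print("Candidate common factors:", common)  # -> [('temp', 'high')]
```

Real observations would come from test logs or field reports and would be far less tidy, but the pattern-abstraction step is the same.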

Emergence can ultimately be traced not just to components or elements (called nodes in graph theory) but also, and perhaps more importantly, to their interconnections with other nodes (called edges in graph theory): the items that flow across these interfaces and the complex feedback loops that inevitably occur. The focal point is the interrelationships between elements. These interrelationships may be internal to the system; external, between the system and its operational environment and users; or external in the broader context of a system-of-systems intended to collectively achieve a higher set of objectives.
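As a simple illustration of this node-and-edge view, the following sketch models a system as a directed graph and enumerates its feedback loops (directed cycles). It assumes the open-source networkx library, and the element names are hypothetical.

```python
import networkx as nx

# Model system elements as nodes and their interfaces as directed edges.
# Element names are hypothetical, for illustration only.
system = nx.DiGraph()
system.add_edges_from([
    ("sensor", "controller"),
    ("controller", "actuator"),
    ("actuator", "plant"),
    ("plant", "sensor"),       # sensing the plant closes a feedback loop
    ("operator", "controller"),
    ("plant", "operator"),     # the operator observes the plant
])

# Feedback loops are directed cycles: each is a place where behavior
# can emerge from the interplay of elements rather than from any one node.
for loop in nx.simple_cycles(system):
    print(" -> ".join(loop + [loop[0]]))
```

In practice such a graph would be derived from the system’s architecture model rather than written by hand, and edge attributes could capture what actually flows across each interface.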

In the next blog, we will look briefly at each of these viewpoints, using the Cha-Cha-Cha investigative method first to assess where the key focus points of emergence reside, and then to discuss approaches for dealing with emergence in each case. Several potential investigative approaches will be examined from each reference viewpoint.
