Complexity, artificial intelligence, emergent behavior, systems of systems . . . these are a few of the factors that are making problems and solutions both more interesting and more vexing for today’s systems engineer. In the problem space, systems that we have thought of as merely complicated are now being understood to be complex. Solutions increasingly must connect across disciplinary and governance boundaries in order to fully address problems.
For example, the Russian invasion of Ukraine has created military, political, and social problems whose solutions are inextricably intertwined. There is a debate over establishing a “no-fly” zone over Ukrainian airspace, both to allow the safe evacuation of civilian non-combatants from the combat zone and to let Ukrainian forces fight the ground war without being threatened from above. The social problem of trapped civilians and wounded personnel needing evacuation is bound up with the issues surrounding military intervention and the risk of provoking a larger-scale Russian response that might even broaden into nuclear war.
Other examples are less dramatic, but just as interconnected. The impact of Covid-19 on the economy, through supply chain disruptions and the inflationary pressures they have driven, is no less important than the war in Eastern Europe and every bit as interrelated in terms of the problems posed and the solutions required.
One impact of such complex problems on systems engineering is that they point up a wider opportunity for systems engineers to apply systems thinking and system design principles and methods. The system solutions needed on this scale go far beyond the size and complexity of developing a weapons system or crafting a vaccine. There are opportunities for systems engineers to engage their skills wherever there are systems, in societal problems as well as the traditional technical challenges.
But these opportunities come with a new set of responsibilities. The social, economic, political, and health dimensions of these problems will require systems engineers to come to the table with an understanding of disciplines beyond the usual scope of their training and experience. Failing to recognize these dimensions, or to understand the systemic ramifications of candidate solutions that address only narrow aspects of the overall problem, is as potentially dangerous as building a vehicle without any knowledge of the operating conditions it will face.
An example of this arose in a discussion of artificial intelligence (AI) at INCOSE’s International Symposium in 2021. The discussion took place in a group of systems engineers and software developers with varying degrees of experience in the design and application of AI. The conversation had not gone very far when a question was raised as to the ability of AI to deal with “moral choices.” The example was an autonomous vehicle operating under an AI algorithm. The questioner posited a situation in which the vehicle sensed one or more persons in its path. The vehicle could not stop in time and would have to choose either to run over the person in the roadway or to swerve, causing a crash that could kill the vehicle’s occupants. Some debate ensued without reaching any solid conclusion.
The relevant aspect of the debate for our discussion was that it proceeded without any hint that anyone present was aware that the problem posed was a variant of a very famous, and thoroughly explored, dilemma from the field of moral philosophy. The problem, known in the literature as the “Trolley Problem,” is a thought experiment first popularized by the English philosopher Philippa Foot in 1967. It has been widely discussed in serious philosophical literature over the last half-century. The philosophical debate has ranged from the distinction between “allowing” and “doing” something that causes harm to one or more human beings, to the question of weighing the harm done to one person (the one in the road) against harming several (in our case, the multiple occupants of the autonomous vehicle).
Public policy debates have invoked the Trolley Problem in subjects ranging from the allocation of Covid-19 treatments to drone warfare casualties and the division of resources for fighting AIDS. The problem has even been the basis of discussions and choices in popular media such as the television shows The Good Place, Unbreakable Kimmy Schmidt, and Orange Is the New Black. The point here is not to suggest a specific strategy for solving the problem, but to point up the group’s lack of awareness of even a very famous philosophical discussion, or of the philosophical traditions (utilitarianism, consequentialism, deontology) that might shed light on the AI design issues.
The group holding this discussion seemed content to explore the issues de novo, without recourse to external scholarship (of which they may well have been unaware). They could easily have been the group of engineers tasked with creating the AI algorithm that might actually confront this moral dilemma. The most hopeful sign was that someone raised the question at all.
I would not suggest that engineers need expertise in moral philosophy or economics or political science. But they should have at least enough exposure to other disciplines to know that they exist, when they might be of help, and where that help is to be found.
The lack of exposure to the social sciences and the humanities is a result of the educational design behind the path into the engineering profession. At the college level, engineering students can allocate only about 25-30% of their course load to non-technical subjects, and much of that allocation is consumed by required classes like the universal “Freshman English.” This leaves little room to experience the social sciences (anthropology, sociology, psychology, economics, and political science) or the humanities (history, languages, literature, the arts, and philosophy). Without even a survey course in many of these disciplines, engineers are ill-prepared to recognize or confront problems that touch these fields.
In a world full of problems that are increasingly complex and interconnected, systems engineers will otherwise find themselves unable to offer their disciplinary knowledge and skills to best advantage. It is time to consider serious revisions to the engineering educational program. The model adopted by the traditional “professions” (medicine, law, ministry), which require a full undergraduate degree (with specific course requirements in the physical sciences and mathematics) before entry into the graduate-level “vocational” track, might serve as a useful exemplar.
It is risky for customers and society to be subjected to engineering performed by engineers blind to the “nontechnical” ramifications of their work, and it is unfair to those engineers to expect them to reinvent the work of other disciplines from whole cloth with little or no guidance or experience. The time has come for a serious discussion of what to do about this important and growing problem.