Abstract

The derivation of a theory of systems engineering has long been complicated by the fact that there is little consensus within the systems engineering community regarding precisely what systems engineering is, what systems engineers do, and what might constitute reasonable systems engineering practices. To date, attempts at theories fail to accommodate even a sizable fraction of the current systems engineering community, and they fail to present a test of validity of systems theories, analytical methods, procedures, or practices. This article presents a more theoretical and more abstract approach to the derivation of a theory of systems engineering that has the potential to accommodate a broad segment of the systems engineering community and present a validity test. It is based on a simple preference statement: “I want the best system I can get.” From this statement, it is argued that a very rich theory can be obtained. However, most engineering disciplines are framed around a core set of widely accepted physical laws; to the authors’ knowledge, this is the first attempt to frame an engineering discipline around a preference.

Introduction

Systems engineering has been recognized as an emerging subdiscipline of engineering for at least 50 years. Still, the systems engineering community has not reached agreement either on what constitutes systems engineering or on a disciplinary basis for systems engineering. Kasser and Hitchins [1] suggest that “a discipline generally matures when an overriding axiom is presented and accepted by the majority of practitioners.” They then present seven principles for system-engineered solution systems. While these principles fall short of comprising an axiom, the first is rather provocative: “There shall be a clear, singular objective or goal.”

A goal of the International Council on Systems Engineering (INCOSE) has long been to create a theory or theories of systems engineering. A step toward this goal is exemplified in a recently issued white paper entitled “Systems Engineering Principles” [2]. This white paper presents seven criteria for a theory of systems engineering: (1) transcends lifecycle, (2) transcends system types, (3) transcends context, (4) informs a world view on systems engineering, (5) not a how-to statement, (6) supported by literature and/or widely accepted in profession, and (7) economy of principle. Strangely, however, this list does not include the concept that an underlying theory should provide a validity or consistency test, which would seem to be a key requirement of any such theory.

In the 1990s, Hazelrigg [3–6] proposed to the engineering design and systems engineering communities that design and systems engineering are decision-making processes and that decisions are optimizations requiring objective functions or preferences. This concept reinforces the notion of Kasser and Hitchins that systems engineering needs a clear, singular objective and also hints at the notion that theories of systems engineering are more likely to be preference-based than based on physical principles or “laws of nature,” as have been other engineering disciplines.

In keeping fully with the criteria of INCOSE and the notion that systems emerge as the consequence of the decisions that define them, we propose a preference-based theory with a more intuitive objective, “I want the best system I can get.”1 We contend that this preference should be widely accepted given that the definition of “best” is left entirely to the project manager or systems engineer to specify. Our principal goal for this theory is to create distinctions among methods, approaches, procedures, or practices that support this preference versus those that fail to support it, thus leading to a fundamental theory of systems engineering.2 The preference statement itself provides a basis for mathematical proof of certain methods that lead to choices supporting the preference, while it also identifies methods, approaches, procedures, or practices that fail to support this preference, for example: (1) by virtue of faulty mathematics, (2) by their use outside the boundaries of their validity, (3) by their ability to create path dependencies where given inputs can yield multiple results with radical differences in system performance, or (4) by leading to unwanted and/or unnecessary reductions in system performance.

Prior Work

As noted in the review by Sage [7], prior research seeking to develop a theory of systems engineering has been ongoing for over 50 years. This research has taken two distinctly different paths, a descriptive path (how systems engineering is performed) and a normative path (how systems engineering should be performed). Consider the example of arithmetic. A descriptive study of arithmetic might examine how children learn to add numbers. The knowledge gained from such a study could prove useful in determining how to teach addition to both accelerate the learning process and minimize mistakes. A normative study, on the other hand, would examine the theory of addition, leading to an understanding of what to teach, that is, what rules or procedures lead to correct results. The theory of systems engineering presented here is normative. Its focus is on how systems engineering should be done. Hence, we shall concentrate our review of prior work on this research path.

The earliest concepts of value under uncertainty and risk were formulated by Daniel Bernoulli in the early 1700s [8], and the fundamental concepts of decision theory were laid down by Dodgson in the book, Alice’s Adventures in Wonderland [9], written for children in 1865.3 In Alice’s encounter with the Cheshire Cat, the Cat provides Alice with the fundamental axioms of decision-making [6]. In the 1940s, these axioms were extended by John von Neumann and Oskar Morgenstern [10] to the case where outcomes are uncertain. Their mathematic is referred to as utility theory, and they also, in the same reference, lay out the foundations of game theory. Interestingly, the derivation provided by von Neumann and Morgenstern offers a proof of the existence of “utility” as a valid measure of preference under uncertainty and, within the context of a well-argued set of axioms, shows that it is the only mathematically consistent measure. Accepting this proof demands that a rigorous theory of systems engineering be consistent with utility theory.

The applicability of decision theory to systems engineering has been recognized for at least 50 years. An early application by Miles at the Jet Propulsion Lab considered the design of the science portion of a Mars mission. Miles also contributed to the application of decision theory for other missions [11–13]. In addition, Sage [7,14] and Tribus [15] clearly recognized the role of decision theory in engineering design. Both Sage and Tribus outline utility theory as a tool for engineering design under uncertainty. In the early 1970s, Hazelrigg, under support by the NASA Planetary Office, applied utility theory to the study of rationality in NASA’s planetary missions, resulting in a master’s thesis by Brigadier [16]. For the past 50 years, Howard [17–19] has been a major contributor to the foundations and teaching of decision theory. He has fostered the application of decision theory in many areas of business and engineering. By the late 1990s, ABET (the Accreditation Board for Engineering and Technology) formally recognized engineering design as a decision-making process.4 Also, in 1996, Hazelrigg [20] recognized how Arrow’s Impossibility Theorem establishes that certain approaches used in engineering design and systems engineering cannot lead to optimal choices. Yet, despite the extensive research and application of decision theory to engineering design, there has been no attempt to use decision theory as a basis for the derivation of a theory of systems engineering. That is the unique challenge addressed here.

The proposed approach is not the first attempt to mathematize engineering design and systems engineering, however. About 30 years ago, Suh [21] published his work on axiomatic design. In so doing, he recognized the value of a mathematically rigorous axiomatic approach to the establishment of a science of design. But his approach was not based on the mathematics of decision theory, and he did not posit his theory as being based on a preference. Axiomatic design is based on two design axioms: the information axiom (minimize the information content of the design) and the independence axiom (maintain independence of functional requirements). Mathematically, these axioms describe a constrained optimization framework.5 The preference (or objective) of Suh’s framework is that less information is better. But this dictates to the decision maker a preference that he or she most likely does not hold, nor is it intuitively obvious that it is a preference that he or she should hold. As noted by Dodgson [9], no person or method should dictate preferences to a decision maker.6

A second problem with Suh’s axiomatic design lies with the independence axiom. This axiom is actually a constraint. Constraints never improve optimal results. They are either inactive or active. Constraints that are inactive are satisfied automatically by the optimal solution and have no impact on the result. In effect, they are not constraints at all. Constraints that are active only degrade the performance of the system—they never improve it. As a result, it is generally desirable to impose as few constraints as possible on the design of a system.

Despite these shortcomings of Suh’s axiomatic design, much work has been done to derive operative theorems from Suh’s “axioms.” Although not meaningful in the context of a rigorous framework, this work nonetheless shows the potential for a preference-driven theory of systems engineering.

Many systems engineering decision-making approaches have been posited and published over the past 50 years. Among these are such widely used approaches as the analytic hierarchy process (AHP) [22,23], axiomatic design [21,24], Taguchi methods [25], robust design [26], Pahl and Beitz [27], the Pugh method [28,29], Physical Programming [30], Quality Function Deployment (QFD) [31], and Six Sigma [32]. It would seem reasonable to subject methods such as these to a validity test, particularly as the extant evidence for their efficacy is largely anecdotal. High on this list is the notion of systems thinking [33,34]. Many researchers, authors, and practitioners of systems engineering advocate systems thinking (thinking in terms of the big picture) as a rigorous approach to system design. It would seem reasonable to challenge this view.

Goals for a Theory of Systems Engineering

A clear goal for a theory of systems engineering is to give credibility to systems engineering as an engineering discipline. In this context, a discipline may be defined as a field of study or branch of knowledge, a set of rules, or a code of behavior. We view an engineering discipline as an agreed-upon and demonstrably valid set of fundamental rules, laws, processes, procedures, or methods that define a field of study. The rules and laws of the discipline distinguish between processes, procedures, or methods that are deemed valid and those that are deemed not valid. For example, we would not consider a person who does not accept the first, second, and third laws of thermodynamics to be a thermodynamicist. Without such distinctions, there is no way to distinguish an expert from a complete neophyte or even a rebel against the discipline. Systems engineering has lacked such distinctions, and we believe that this is a key reason that it has yet to mature as a discipline.

To show that a theory, method, practice, or procedure is valid, it must be proven to be valid in all relevant cases, or the boundaries of its validity must be clearly distinguished. For example, to show that the laws of addition are valid, it must be shown that they are valid for all possible combinations of cardinal numbers. To show that a theory, method, practice, or procedure is not valid, it is necessary only to show that it is not valid in one relevant case7 or to show that it violates an accepted underlying premise.8 To show that a theory, method, practice, or procedure is subject to path dependencies, it must be shown only that path dependencies may occur in at least one case, unless those cases can be specifically excluded by some rules or mechanisms.

Recognizing that engineered systems emerge as a consequence of the set of decisions that determine the system, a theory of systems engineering should provide a mechanism to distinguish between methods, approaches, procedures, or practices that support decision-making that improves overall system performance and methods that degrade system performance.9 In addition, a theory of systems engineering should have clearly defined boundaries, namely, a well-defined set of conditions, within which the theory is provably always valid.

The Concept of “Best”

We begin our derivation of a theory of systems engineering with the concept of “best” as expressed in the preference statement, I want the best system I can get. So long as the relevant decision maker is allowed to define “best” in any way he or she wishes, it would seem that this would be a universally acceptable preference. Conceivable alternatives to this preference might be, “I want the worst system I can get,” or simply, “I don’t want the best system I can get.” The former preference statement merely redefines best to be “worst” and so is the same as our proposed preference. The latter alternative simply eliminates one system alternative, the best one, and fails to distinguish between all others. To enforce this preference, we first must define best to be sure to exclude it, and then, we must be indifferent to all other outcomes. This leads us to a final alternative, I don’t care how good the system is. But this statement would render systems engineering valueless.

As general and meaningless as this preference statement may seem, the word “best” itself imposes a number of mathematical conditions that serve to underpin the mathematics of systems engineering. First, “best” requires the existence of a decision maker who has a preference. A preference is a statement made by a decision maker that rank orders outcomes by desirability in the mind of the decision maker.10 It is entirely subjective. Mathematically, the existence of a preference is stated as follows. Given any two outcomes x and y, one and only one of the following conditions must apply:
x ≻ y,  y ≻ x,  or  x ∼ y
(1)
Namely, x must be preferred to y, y must be preferred to x, or the decision maker must be indifferent between x and y, and this preference must be clear and distinct, which implies that it must be deterministic. Without such a preference, the concept of “best” does not exist.
The second condition necessary for the existence of a “best” outcome is, given a set of outcomes x_i, the outcome x_o is best if and only if
x_o ≽ x_i for all i ≠ o
(2)
Clearly, this condition can be violated if, for outcomes x, y, and z, x ≻ y ≻ z ≻ x as, in this case, for each outcome, there is a better outcome. Any preference that obeys the transitivity condition, if x ≽ y and y ≽ z, then x ≽ z, or the weaker negative transitivity condition, if x ⊁ y and y ⊁ z, then x ⊁ z, does not preclude the existence of a “best” outcome.
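
The following minimal Python sketch (ours, not from the article) makes the point concrete: it records a strict pairwise preference as a set of (better, worse) pairs and reports which alternatives are undominated. A transitive ordering leaves one such candidate; the cycle x ≻ y ≻ z ≻ x leaves none, so no “best” outcome exists.

```python
# Minimal sketch: which alternatives could be "best" under a strict pairwise preference?

def best_candidates(alternatives, prefers):
    """Return alternatives that no other alternative is strictly preferred to."""
    dominated = {worse for (_, worse) in prefers}
    return [a for a in alternatives if a not in dominated]

# A transitive preference x > y > z admits a best element ...
print(best_candidates(["x", "y", "z"], {("x", "y"), ("y", "z"), ("x", "z")}))  # ['x']

# ... whereas the cycle x > y > z > x leaves every alternative dominated.
print(best_candidates(["x", "y", "z"], {("x", "y"), ("y", "z"), ("z", "x")}))  # []
```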

A third condition is that the set of allowable alternatives be closed. For example, if a requirement is that the system must weigh less than w pounds, and if the system performance improves with weight, then (mathematically) there will be no “best” design because there is no weight that is closest to but less than w pounds. Alternatively, a “best” system may exist if the requirement is stated as, “the system may not weigh more than w pounds,” as this requirement yields a closed alternative set. Under these conditions, it should be immediately clear that a simple preference statement such as, “I like money and more is better,” enables the existence of a “best” outcome.

Where there is a single decision maker or a dictator who determines the preference that defines “best,” we can argue that the transitivity condition must be met, particularly in the case of well-considered decisions such as system design decisions. A person who has an intransitive preference is subject to a “money pump” [36], a circular series of trades that drains the person of his or her wealth with no resulting benefit. But, beyond this, to be irrational, a person must know that he or she is being irrational.11 For example, the person must be presented with two alternatives, A and B, each with a clear and distinct outcome such that the person clearly prefers one, say A, over the other, yet consciously chooses B. It is difficult to conceive how this could happen. For example, suppose the person is presented with two stacks of money from which he may choose one. Stack A contains $1,000, stack B contains $100, the person is clearly aware of these amounts, and the person clearly wants more money rather than less, yet consciously chooses stack B. Perhaps the person simply doesn’t, for whatever reason, want to take the choice that most satisfies his preference for money. For example, perhaps he wants to leave the money on the table for another person. But this want overrides the preference for more money, and the person winds up choosing according to his real preference at that moment. There is no irrationality in this decision although, to an observer, the decision may appear to be irrational.12 Furthermore, given the seriousness of systems engineering decisions, it would make sense that well-considered and rational decisions are to be preferred.
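
As a rough numerical illustration of the money pump (our own, with invented trade amounts), suppose a decision maker holds C, has the cyclic preference A ≻ B ≻ C ≻ A, and will pay a small premium each time to trade up to the item he prefers. Cycling the trades returns him to where he started, minus the premiums:

```python
# Illustrative only: a cyclic preference A > B > C > A exposes its holder to a money pump.

holding, wealth = "C", 100.0
cycle_trades = [("C", "B"), ("B", "A"), ("A", "C")]   # each trade moves to the "preferred" item
premium = 1.0                                          # amount paid per trade

for _ in range(3):                                     # three full cycles of trades
    for worse, better in cycle_trades:
        if holding == worse:
            holding, wealth = better, wealth - premium

print(holding, wealth)   # back to holding 'C', but poorer: C 91.0
```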

It is important to note here that we do not suggest that all systems engineering theories, approaches, etc. either do or must adhere to the conditions for the existence of a best design, nor do we limit ourselves to such cases. Rather, we have examined the necessary and sufficient conditions for the existence of a best design so that we can definitively identify theories, methods, approaches, procedures, or practices that fail to meet these conditions, examine their impact on system performance, and propose corrective or ameliorative actions. We have already done this in the cases of continuous improvement and requirements flowdown processes, and the breadth of this theory will be demonstrated by the range of its applicability to such cases.

Relevant Mathematics

Unlike engineering subdisciplines that have their bases in laws of nature, the theory of systems engineering that we propose derives from mathematics, independent of the physical world. Accordingly, the validation of such a theory is obtained through rigorous derivation and the presentation of mathematical proofs rather than experimental or anecdotal evidence. Axioms comprise the basis for such proofs and, although there is no “correct” set of underlying axioms such that we could conclude that all other sets are incorrect or inappropriate, the object of the axioms that underlie much of mathematical theory is that they be self-evident and well argued. For example, such is the case for the axioms of arithmetic [37]. A mathematical theory comprises the set of conclusions (theorems) that one can draw from a given set of axioms. While one has considerable freedom to choose the axioms that enable the derivation of a particular mathematical theory, it is strongly preferred that the axioms chosen comprise a parsimonious set and mandatory that they comprise a self-consistent set. That is, no axiom in the set may contradict any other axiom in the set. Furthermore, when one mathematical theory builds on, combines with, or adds to another, any additional axioms that enable the composite theory must also be entirely consistent with all other axioms spanning the entire range of the resulting theory. It is not a valid practice to combine mathematical theories whose validity depends on axiomatic bases that are in conflict with each other. Where such conflicts occur in current systems engineering practice, we shall strive to point them out and offer evidence of errors that result from their use.

It is also necessary to recognize that it is not sufficient merely to provide the derivation of a solution procedure to a problem to assure that the procedure is valid. To assure validity, one must also provide an existence proof of the solution itself. That is, to assure that an answer to a problem is correct, we first must be certain that a solution actually exists [38]. Note that, in the absence of a solution, any purported solution is ipso facto incorrect.

Another matter of which we must be aware is that certain procedures may enable path-dependent results, that is, results that depend on the particular sequence of computations or the path taken to obtain the results [35]. A rigorous theory of systems engineering must either avoid such possibilities or devise a plan to efficiently handle this difficulty.

In some instances, it may prove difficult or even impossible to avoid analytical or computational procedures that enable manifestations of the above behaviors. In cases such as these, it will be our goal either to derive conditions under which a procedure can be trusted to produce the desired result or to provide alternative procedures that bound the unwanted behavior.

The specific branches of mathematics that we draw upon most heavily in this work include probability theory as defined by the Kolmogorov axioms, optimization theory, decision theory, von Neumann-Morgenstern utility theory, and social choice theory.

A Fundamental Principle of Systems Engineering

We begin with the assumption that a system preference exists at least in the mind of some entity responsible for the overall system. For example, one might assert the preference, “I like money, and more is better.” This statement rank orders all monetary outcomes for this particular decision entity, and the resulting ranking is entirely independent of any perception of which outcomes may or may not be possible. Preferences are independent of which outcomes are achievable. Furthermore, preference statements are particularly powerful as they divulge the goal of the decision-making entity.

Our implementation of the system preference shall start by replacing the preference statement with a more precise mathematical expression. We begin by defining some terms. Let X be a set of statements that fully describe a system through its entire life cycle with the objective of facilitating a prediction of system performance, especially as measured in terms of a preference. This set may include statements regarding sizes, shapes, colors, dimensions, procedures, practices, and even beliefs, that is, probabilistic statements. We then denote by X_i the ith instantiation (namely, the ith specific system description) of X. A complete system description X enables estimation of the outcome of a choice to instantiate X. We shall denote the outcome by the symbol Ω(X) or, for the ith instantiation of X, by Ω_i(X_i) or simply Ω_i. Ω_i may be numeric or nonnumeric or both, and it may comprise a set of statements that describe an outcome.
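
A small sketch in Python may help fix the notation; the fields and the outcome model below are entirely invented for illustration. An instantiation X_i is one complete system description, and Ω(X_i) is the predicted outcome of choosing to build it:

```python
# Sketch of the notation with made-up fields: X_i is a complete system description,
# Omega(X_i) is the predicted outcome of choosing to instantiate it.

from dataclasses import dataclass

@dataclass(frozen=True)
class Instantiation:          # X_i: one specific, complete system description
    span_m: float             # e.g., a dimension
    material: str             # e.g., a discrete design choice
    test_plan: str            # descriptions may include procedures and practices

def outcome(x: Instantiation) -> dict:
    """Omega(X): a (here fabricated) prediction of what results from choosing x."""
    mass = x.span_m * (2.7 if x.material == "aluminum" else 7.8)
    return {"mass_kg": mass, "lifecycle_cost": 1000 + 40 * mass}

x1 = Instantiation(span_m=3.0, material="aluminum", test_plan="full qualification")
print(outcome(x1))
```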

For the moment, let us assume that Ω(X) is predicted with precision and certainty. Without regard to any particular instantiation, we may then define a scalar measure on the set of conceivable outcomes, v(Ω), such that if, for all i and j, j ≠ i,
Ω_i ≻ Ω_j, then v_i > v_j
(3)
or if
Ω_i ∼ Ω_j, then v_i = v_j
(4)
We note, however, that with this determination of v, v exists only if preferences are rational, that is, only if they satisfy the transitivity condition. Namely, if Ω_i ≻ Ω_j ≻ Ω_k, then it must be that Ω_i ≻ Ω_k [39]. If this condition were not enforced, we could encounter a preference that requires v_i > v_j > v_k > v_i, which is clearly impossible. Conversely, a given utility function defines a transitive relationship, as it imposes an ordering over the elements it measures. Because using a given utility function reflects our theme of enacting a given preference, this is what we do. Given this definition, v becomes a measure on the real number line, R^1, that represents a preference such that instantiation X_i ≽ X_j iff v_i ≥ v_j.13 That is, the choice of X_i is preferred or at least indifferent to the choice of X_j iff v_i ≥ v_j. It follows that the most preferred or “best” instantiation of X, namely, X_o, satisfies the condition v_o ≥ v_j for all j ≠ o.
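
As a toy illustration of Eqs. (3) and (4) and the “best” condition (our own numbers), any strictly increasing function of the preferred quantity serves as an ordinal value measure v for deterministic outcomes, and the best instantiation is simply the one with the largest value:

```python
# Sketch of Eqs. (3)-(4) for the preference "I like money, and more is better":
# a strictly increasing v ranks instantiations, and X_o is the argmax.

outcomes = {"X1": 120.0, "X2": 250.0, "X3": 180.0}   # Omega_i, in dollars (illustrative)

def v(omega_dollars: float) -> float:
    # Any strictly increasing function of the preferred quantity works as an
    # ordinal value measure; only the ranking it induces matters.
    return omega_dollars

best = max(outcomes, key=lambda name: v(outcomes[name]))
print(best)   # 'X2', since v_2 >= v_j for all j != 2
```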

We note again that the transitivity condition on preferences demands that preferences be deterministic. This is because transitivity cannot be guaranteed otherwise. The rationale for acceptance of this condition in the case of a single decision maker is that preferences are in the head of and belong to the decision maker and are thus known precisely to the decision maker. Accordingly, preferences must be clear and distinct. This means that the decision maker knows his or her preferences without question and therefore does not require mathematical aids for their determination. It follows that these aids are at best superfluous or, more likely, misleading. Optimization theory rests heavily on this condition. So, failure to accept this condition invalidates much of optimization theory.14 The conditions imposed here are necessary for the existence of a best system design, and they accordingly invite investigation into cases where a transitive preference would appear not to exist.

If, as is the usual case, there is uncertainty on the determination of Ω(X), v must satisfy an additional condition that accommodates the decision maker’s risk preference15 and that is determined by presenting the decision maker with a choice between two alternatives, where one alternative is a lottery with two possible outcomes and the other alternative is deterministic. This is referred to as a von Neumann–Morgenstern lottery [10]. The decision maker is faced with a choice between X_a and X_b, where there are two possible outcomes of X_a, Ω_a1 with probability p and Ω_a2 with probability (1 − p), while Ω_b is deterministic, and where Ω_a1 ≻ Ω_b ≻ Ω_a2. Then, since v_a1 > v_b > v_a2, there exists a p, 0 ≤ p ≤ 1, such that
v_b = p v_a1 + (1 − p) v_a2
(5)
where p is determined by the decision maker as that probability of achieving the more preferred outcome of alternative X_a that renders him indifferent between alternative X_a and alternative X_b. Nominally, when we use the symbol v to represent a preference value, we do so in the case when outcomes are deterministic. In the case that outcomes are nondeterministic, and when we invoke the above condition on the determination of v, we distinguish this as a nondeterministic case by use of the symbol u, and we refer to this quantity as utility.16 Thus, u encodes both the basic preference, for example, “I like money, and more is better,” and the decision maker’s risk preference. A key difference between v, which in general does not satisfy the above lottery condition, and u is that v may be an ordinal measure,17 whereas u is always a cardinal measure. This means that we cannot in general perform arithmetical operations on v, whereas we can on u.
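
A minimal sketch of the lottery condition of Eq. (5), with an invented indifference probability: if the utilities of the lottery’s better and worse outcomes are anchored at 1 and 0, the elicited probability p is itself the utility of the certain outcome Ω_b:

```python
# Sketch of Eq. (5): anchor u at 1 and 0 for the lottery's best and worst outcomes,
# then the elicited indifference probability gives the utility of the certain outcome.

u_a1, u_a2 = 1.0, 0.0      # utilities of the lottery's better and worse outcomes
p = 0.7                    # decision maker: "at p = 0.7 I am indifferent" (assumed)

u_b = p * u_a1 + (1 - p) * u_a2
print(u_b)                 # 0.7 -- the utility assigned to the deterministic outcome Omega_b
```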
Making use of numerical operations on u and drawing on the utility axioms of Luce and Raiffa [40], we can derive a decision rule for the choice of X when there is uncertainty in the prediction of the outcome of X, Ω(X).18 The resulting decision rule for X_o is
X_o ≽ X_j for all j ≠ o iff E{u_o} ≥ E{u_j} for all j ≠ o
(6)
where E{u} is the expected utility of the uncertain outcome Ω. Clearly, this fundamental rule of systems engineering, which we shall refer to as the decision rule, holds provided that u(Ω) exists and that we hold beliefs (probabilities) on the set of possible outcomes. The necessary conditions for the existence of u(Ω) are the same as the existence conditions for v(Ω), namely, that preferences must exist and they must be transitive. We have already argued that these conditions are met as long as there is only a single decision maker, that is, that the project has only one manager who is also the design engineer.
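
The decision rule of Eq. (6) is easy to mechanize once utilities and beliefs are in hand. In the following sketch the alternatives, probabilities, and utilities are invented; the rule simply selects the instantiation with the largest expected utility:

```python
# Sketch of Eq. (6): with beliefs (probabilities) over possible outcomes of each
# instantiation, choose the X_i with the largest expected utility.

alternatives = {
    # each entry: list of (probability, utility of that outcome)
    "X_a": [(0.6, 0.9), (0.4, 0.2)],   # risky design
    "X_b": [(1.0, 0.6)],               # conservative, near-certain design
}

def expected_utility(lottery):
    return sum(p * u for p, u in lottery)

x_o = max(alternatives, key=lambda k: expected_utility(alternatives[k]))
print({k: expected_utility(v) for k, v in alternatives.items()}, "->", x_o)
# {'X_a': 0.62, 'X_b': 0.6} -> X_a
```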

Conditions That Impede Application of the Decision Rule

We have already alluded to certain conditions that would impede or preclude application of the decision rule. These include, for example, the nonexistence of a transitive system preference, nonexistence of beliefs on system outcomes, reliance on choices resulting from group interactions, design by multiple persons each of whom is acting on his or her personal preferences rather than the overall system preference, and inconsistent belief systems (probability estimations) across system decision makers. Each of these can induce choices that lower overall system performance. Acknowledging these impediments to optimal system design decision-making, the decision rule leads to the following theorems.

Theorem 1

System choices made against preferences other than the overall system preference cannot result in performance better than that achievable if all choices are made against the system preference and, in general, they will result in lower performance.

Proof

Let f(Ω) be an overall system preference, and g(Ω) be an alternative preference. Let X_fo be the optimal solution for the preference f(Ω), and X_go be the optimal solution for the preference g(Ω). Since X_fo maximizes f(Ω), X_go cannot provide a greater maximum, and it will provide an equal maximum only in the case that X_go equals a value of X_fo that maximizes f(Ω). For any other value of X_go, the system performance will be less than f(Ω_fo).
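
A toy numerical illustration of Theorem 1 (values invented): optimizing a surrogate preference g, say “minimize weight,” selects a design whose value under the true system preference f is no greater, and here strictly lower, than that of the design obtained by optimizing f directly:

```python
# Numeric illustration of Theorem 1: optimizing a surrogate preference g cannot
# beat, and generally underperforms, optimizing the system preference f directly.

designs = {          # design: (f = system-level value, g = negative weight)
    "D1": (10.0, -5.0),
    "D2": ( 7.0, -2.0),
    "D3": ( 9.0, -4.0),
}

x_fo = max(designs, key=lambda d: designs[d][0])   # optimal under f
x_go = max(designs, key=lambda d: designs[d][1])   # optimal under g

print(x_fo, designs[x_fo][0])   # D1 10.0
print(x_go, designs[x_go][0])   # D2 7.0  <= f(Omega_fo), as the theorem requires
```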

Theorem 2

Every component of an optimal system must itself be optimal as measured by the same preference under which the system is optimized.

Proof

Let X_f = [X_fa, X_fb], where X_fb is the specification of component b and where, for the optimal solution, X_fo = [X_fao, X_fbo]. If the component described by X_fb ≠ X_fbo, then f(Ω) ≤ f(Ω_o) and, hence, the system performance cannot exceed the performance obtained by the design X_o; in order that f(Ω) = f(Ω_o), X_fb must render X_f optimal with respect to f(Ω).

While this theorem may appear to be evident and trivial, it is not. Subtle implications of this result are described in the Appendix. Furthermore, in light of Theorem 2, lexicographic ordering can be viewed as a form of organizational design. The ranking on this ordering is imposed by the transitivity condition as required by Theorem 2.

As a result of these theorems, we see that failure of all system decision makers to use a common system preference as the basis for their system choices cannot result in system performance better than that achievable using only the common system preference and, in general, will result in a loss of performance as measured by the common system preference. This conclusion applies at all levels of systems engineering in a project. It is also the case that, to achieve system optimality, system choices must be based not only on a common system preference, but also on a common set of beliefs (namely, probabilities on all system uncertainties). Achieving these conditions will pose formidable problems in the formation of such things as incentives that promote cooperative decision-making.

The Impact of Constraints on Optimal System Choices

Consider the constrained optimization problem: maximize J = f(X) subject to constraints g(X) ≤ b, where g(X) comprises a vector of constraints imposed on the optimization of the scalar objective f(X).

Theorem 3

Constraints imposed on an objective function never lead to an optimal solution of greater performance than the solution to the unconstrained objective function and, if active, always result in an optimal solution of lower performance.

Proof

Let X_o denote the value of X that maximizes f(X). Then, if X_o is also the maximizing solution to the preference J = f(X) while satisfying the conditions g(X) ≤ b, the constraints are satisfied by the unconstrained optimal solution, and the performance of the constrained problem is equal to the performance of the unconstrained problem. In this case, we say that the constraints are inactive. On the other hand, if X_c ≠ X_o is the maximizing solution to J = f(X) while enforcing the constraints g(X) ≤ b, we say that the constraints are active and, since X_c does not equal X_o, we know that the constrained performance must be lower than the unconstrained performance.

Alternatively, we note that constraints do not add alternatives to the set of available choices. They only serve to remove alternatives from this set and, if they remove the unconstrained optimal choice(s), performance must be reduced. Some constraints, such as laws of nature, may be unavoidable and must be applied. But others, requirements for example [43], may inhibit alternatives when designing a system and may be avoided by better choice of the system preference. For instance, a requirement that the design of a telephone must connect by wire to the telecommunication system precludes the possibility of designing a cell phone.

Thus, we know that constraints never lead to improved performance as measured by the objective f(X) and, if the constraints are active, they always result in lowered performance as measured by this objective. As a result, we see that we must be aware of procedures that impose arbitrary constraints on system design and find ways to prevent such constraints from penalizing system performance or eliminate them altogether.
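
A toy illustration of Theorem 3 (objective and bounds invented): with a single-peaked objective, a constraint that does not exclude the unconstrained optimum is inactive and changes nothing, while one that does exclude it can only lower the achievable performance:

```python
# Sketch of Theorem 3 with a toy objective f(x) = 10 - (x - 3)^2, peaked at x = 3:
# an inactive constraint leaves the optimum untouched; an active one reduces it.

def f(x):
    return 10.0 - (x - 3.0) ** 2

xs = [i / 100 for i in range(0, 1001)]                    # coarse search over [0, 10]

unconstrained = max(xs, key=f)                             # ~3.0
inactive      = max((x for x in xs if x <= 5.0), key=f)    # constraint x <= 5 is inactive
active        = max((x for x in xs if x <= 2.0), key=f)    # constraint x <= 2 is active

print(f(unconstrained), f(inactive), f(active))            # 10.0 10.0 9.0
```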

It is important that we understand the difference between the system objective or preference and constraints. Constraints are not and never can be thought of as an objective. Objective functions or preferences rank order system design choice outcomes, thus enabling design optimization, whereas constraints define choices that are not allowed. Constraints do not provide the information necessary to rank outcomes, and they do not enable design optimization. A good definition of a constraint is: something that we do not get to decide about. Laws of nature (F = ma, for example) are an example of something we don’t get to decide about, and they legitimately constrain system design choices. Unfortunately, common systems engineering procedures often arbitrarily assign constraints (for example, requirements [43]) to facilitate the engineering process. As we see from the aforementioned proof, arbitrary constraints never enable the achievement of higher levels of performance than when such constraints are not imposed. In this regard, it should be noted that the need for requirements can often be avoided by proper selection of the system preference.

Review of Underlying Assumptions and Rationale

Before proceeding, it is worth reviewing the assumptions upon which this theory is based:

  1. Engineered systems emerge as the consequence of a set of choices made largely by the system design workforce, together with the environment, structure, and rules within which that workforce performs its design tasks. Beginning in the 1960s and continuing since, engineering design and systems engineering have become increasingly viewed as decision-making processes. Decision theory, game theory, and social choice theory [44] have emerged as the important mathematics of design and systems engineering.

  2. Preferences exist. Rational system design is not possible in the absence of a preference relating to system performance. In the absence of a system preference, there can be no distinction in performance between alternative system designs and, hence, there would be no need for systems engineering.

  3. Uncertainty is a significant factor in the systems engineering process. Engineers have a tendency to consider uncertainty in systems engineering decision-making as being applicable only to their ability to predict system behavior as a function of system design. But this is only one source of uncertainty and often far from the largest. There is also uncertainty on manufacturability, system reliability, demand for the system, and uncertainty in the environment within which the system will function such as future fuel prices and regulations. It is the nonengineering uncertainties that often dominate system design considerations.

  4. Mathematical integrity is a necessity. Mathematics is the “science” of consistency. It is crucial that consistency of thought be maintained throughout as a basis for a rigorous derivation of a normative theory of system engineering.

These premises are not only reasonable but also necessary for the formulation of a normative theory of system engineering.

Fundamental Principles of Systems Engineering

The system preference, “I want the best system I can get,” and the theorems proven earlier lead to the following fundamental principles of systems engineering:

  1. The concepts of better and best exist only in the context of system preferences. In the absence of a system preference, there is no performance distinction between system alternatives and no rational basis for design choices.

  2. System decisions based on preferences other than the common system preference can result in degraded performance as measured by the common system preference. This principle results from Theorems 1 and 2 and implies that system decisions made at all levels and by all system decision makers should adhere to a specified common system preference.

  3. No theory or method should constrain a system preference in any way. The relevant project management or oversight authority should be free to define the system preference.

  4. A theory of systems engineering should apply to all system choices throughout the life cycle of a system. Every element of a theory of systems engineering must apply to all phases of the life cycle of a system unless clear and mathematically provable distinctions are made as to when this is not the case.

  5. The imposition of arbitrary constraints19 should be minimized. Theorem 3 shows that constraints never result in improved system performance. Hence, methods or procedures of systems engineering should strive to minimize the imposition of arbitrary constraints.

  6. Kolmogorov probability20 is a valid mathematic of beliefs. The Dutch Book Argument offers strong assurance that all alternatives to Kolmogorov probability pose inconsistencies that invalidate them.

  7. Methods, processes, or procedures that lead to path dependencies should be avoided where possible and, where they cannot be avoided, procedures should be sought to handle them efficiently and to prevent them from reducing system performance. Methods or procedures that result in path dependencies can lead to system designs that differ according to the path taken to reach them (for example, the order of decision-making), and these designs can have significantly different levels of performance with no indication of nonoptimality [35].

  8. System decision makers exercise their own preference and beliefs in making system decisions; their decisions are not necessarily aligned with the common system preference. This reality generally results in a degradation of system performance. Therefore, a goal of systems engineering management should be to provide incentives that align systems engineers’ preferences with the common system preference.

  9. Existence proofs are a necessary component of the validation of a systems engineering analysis, method, process, or procedure. While the need for existence proofs is well understood in the mathematics community, it has not been equally recognized in the systems engineering community [38].21

Conclusions

The key contribution of this article is to show that fundamental principles of systems engineering can be obtained from the preference statement, “I want the best system I can get.” It is argued that this should be a universally acceptable system preference, particularly given that the definition of best is left entirely to the project manager or systems engineer. Nine fundamental principles of systems engineering resulting from this preference statement are presented. They provide a mathematically sound basis for a comprehensive and overarching theory of systems engineering. It is not so much that these principles are new or unexpected that is the key contribution, but rather that they are shown to derive from a fundamentally sound argument based on a preference. Nor is it our intent to imply that a theory of systems engineering exists only within the context of all conditions that enable the existence of a best system design. Rather, the principles are intended to underlie and guide the further development of systems theory and, wherever such conditions are not met, to caution that actions should be taken to limit penalties to system performance if complete resolution is impossible.

It is clear from the nine fundamental principles that the resulting theory will enable the derivation of validity tests that distinguish, at a minimum, between the following classes of extant systems engineering methods, processes, and procedures:

  1. Methods, processes, or procedures that are mathematically consistent with the fundamental principles of systems engineering and are therefore valid, and

  2. Methods, processes, or procedures that are mathematically inconsistent with the fundamental principles of systems engineering and are therefore not valid.

The work that now remains to be done is to critique extant systems engineering principles, theories, processes, and procedures: to validate those that are mathematically sound, to provide insights into why others lead to less-than-optimal results, and, perhaps, to offer suggestions that will correct or enhance them. Some such processes and procedures are noted above. Work also remains to be done to find methods that extend our ability to optimize systems engineering decision-making, taking into account the diversity of factors that work to inhibit optimal decision-making, particularly in large-scale projects. Such topics include:

  1. Reduction of the negative impact of the requirements flowdown process,

  2. Setting flowdown requirements optimally in the presence of uncertainty,

  3. The derivation of incentives to assure that project engineers align their decision-making with the common system preference,

  4. Finding approaches to setting a system preference in a case where there are multiple project managers with conflicting goals,

  5. Creating a theory to enable the optimal assignment and management of tasks (e.g., subcontracts), and

  6. Deriving methods for effective use of survey data.

To the extent that this work is successful, we expect that it will establish a basis for systems engineering as a mathematically rigorous engineering discipline.

Footnotes

1

The concept of a “best system” is conditioned by “I can get” to make it clear that we acknowledge that there may be constraints imposed on a project that prevent the realization of an unconstrained or absolute best system.

2

We refer here to systems engineering in the broadest context of any engineering activities related to the design, development, manufacture, operation, maintenance, and disposal of an engineered product, device, or system of any scale.

3

Dodgson studied and later taught mathematics at Oxford and was recognized for his work on decision theory.

4

The ABET definition of design is: “Engineering design is the process of devising a system, component, or process to meet desired needs. It is a decision-making process (often iterative), in which the basic sciences, mathematics, and engineering sciences are applied to convert resources optimally to meet a stated objective. Among the fundamental elements of the design process are the establishment of objectives and criteria, synthesis, analysis, construction, testing, and evaluation.”

5

As indeed all decision frameworks must be, as rational decisions are always optimizations.

6

In Alice’s interaction with the Cheshire Cat, Dodgson uses the Cat to emphasize to Alice that it is her preferences that matter, and no one else’s.

7

Note that, while the laws of division exclude division by zero, we know in advance that we must exclude the use of division for this case. Thus, division is valid for all divisors excluding zero.

8

An example is that people must be free to state their own preferences.

9

For example, the processes of requirements flowdown and continuous improvement can be shown to have the potential to degrade system performance [35].

10

An outcome is the result of a decision, particularly as viewed in terms of the preference of a decision maker.

11

In part because of possible exposure to a money pump, a person with intransitive preferences is often referred to as “irrational.”

12

It is also the case that differences in perceptions or beliefs between the decision maker and observer may lead the observer to the conclusion that the decision maker is irrational. However, the decision maker is irrational only if he consciously makes a decision contrary to his beliefs.

13

Note that v is unique only to the extent of an ordinal ranking.

14

As used here, optimization theory is taken to be a logical framework that enables selection of alternatives that provide the most preferred outcomes achievable.

15

Also, as there is uncertainty in the outcomes Ω(X), this would yield a probability distribution on v, which could lead to cases where there is no clearly defined best outcome. Thus, we must find an outcome preference measure that is deterministic even in the case of uncertainty on Ω(X). Utility theory does this.

16

Note that u may encode not only the basic preference and a risk preference but also include a time preference as well.

17

Ordinal preference measures such as v merely order outcomes in terms of desirability, and they do not measure the strength of the preference relative to other outcomes. Ordinal preference measures suffice to enable optimization in the case of deterministic outcomes. Cardinal measures provide both preference order and strength of preference. Cardinal measures of preference are required to rank alternatives whose outcomes are nondeterministic.

18

There are alternative axioms from which we can obtain the same result, for example, Bernardo and Smith [41] or Savage [42]. We refer to the Luce and Raiffa axioms here as they are quite easy to understand and because we can easily see that they lead to a unique result.

19

Arbitrary constraints are taken to be constraints that are imposed as a choice of a system designer as opposed to being imposed upon the system designer.

20

Kolmogorov probability is a framework for the logical or self-consistent consideration of beliefs based on the Kolmogorov axioms.

21

To assure correctness, before presenting a solution to a problem it is necessary that one prove that a solution exists. In cases where a solution does not exist, no solution can be correct. For example, solve for the largest positive integer. No such integer exists. Ergo, no solution that might be presented can be correct.

Acknowledgment

This work has been supported by the National Science Foundation under award CMMI-1923164.

Conflict of Interest

There are no conflicts of interest.

Data Availability Statement

No data, models, or code were generated or used for this paper.

Nomenclature

=

is equal to, is the same as, x = y reads x equals or is the same as y

≠

is not equal to, is not the same as, x ≠ y reads x is not equal to y

∀

for all, ∀ i ≠ j reads for all i not equal to j

<

is less than, x < y reads x is less than y

>

is greater than, x > y reads x is greater than y

≤

is less than or equal to, x ≤ y reads x is less than or equal to y

≥

is greater than or equal to, x ≥ y reads x is greater than or equal to y

≻

is preferred to, A ≻ B reads A is preferred to B

⊁

is not preferred to, A ⊁ B reads A is not preferred to B

∼

is indifferent to, A ∼ B reads A is indifferent to B

≽

is preferred or indifferent to, A ≽ B reads A is preferred or indifferent to B

iff

if and only if

Appendix: The Impact of Theorem 2

Theorem 2 may appear to be obvious, but it carries a subtle but important message. Its strength can be appreciated by applying it to Arrow’s Theorem; by doing so, Arrow’s negative assertion disappears. To illustrate, according to Arrow’s Theorem, the majority vote rankings of pairs need not result in a transitive outcome. For instance, suppose that, of 15 voters, six have the ranking A ≻ B ≻ C, five have B ≻ C ≻ A, and four have C ≻ A ≻ B. The outcome is a cycle where A ≻ B by 10:5, B ≻ C by 11:4, and C ≻ A by 9:6. Each pair’s ranking is computed precisely, but the approach violates Theorem 2. This is because the tallying method is not compatible with the intent of having transitive conclusions.

To satisfy Theorem 2, the tallying method must be consistent with the global objective. This means the method must incorporate transitivity information. Doing so is simple: with the transitive ranking A ≻ B ≻ C, not only is A ≻ C, but A is listed two positions above C; express this as (A ≻ C, 2). More generally, let “IIIA,” the “intensity of IIA” (Arrow’s Independence of Irrelevant Alternatives), be where the ranking of a pair within a transitive ranking specifies how far the top-ranked alternative is listed above the other one; e.g., (A ≻ D, 4) means that A is listed four alternatives higher than D. In the above example, six voters have (A ≻ C, 2), five have (C ≻ A, 1), and four have (C ≻ A, 1). Computing a pair’s ranking by summing intensity levels leads to A ≻ C by 12:9, which comes from 6 × 2 : (5 × 1) + (4 × 1). By using this intensity approach, which reflects the requirements of Theorem 2, the paired rankings now define the transitive A ∼ B ≻ C with A ∼ B by 10:10, B ≻ C by 11:8, and A ≻ C by 12:9.
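
The tallies above are easy to reproduce. The short sketch below (ours, not from the article) computes both the unweighted majority tallies and the gap-weighted “intensity” tallies for the 15-voter profile; the first yields the cycle, the second the transitive outcome:

```python
# Sketch: pairwise tallies for the 15-voter example, unweighted (majority, IIA)
# versus weighted by ranking gap (the "intensity" of IIA, IIIA).

from itertools import combinations

profiles = [(("A", "B", "C"), 6), (("B", "C", "A"), 5), (("C", "A", "B"), 4)]

def tallies(weighted):
    scores = {}
    for ranking, n in profiles:
        for x, y in combinations(ranking, 2):          # x is ranked above y
            gap = ranking.index(y) - ranking.index(x)  # 1 or 2 positions apart
            w = gap if weighted else 1
            pair = tuple(sorted((x, y)))
            scores.setdefault(pair, {x: 0, y: 0})
            scores[pair][x] += n * w
    return scores

print(tallies(weighted=False))  # majority: A>B 10:5, B>C 11:4, C>A 9:6 (a cycle)
print(tallies(weighted=True))   # intensity: A:B 10:10, B>C 11:8, A>C 12:9 (transitive)
```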

The difference between IIA and IIIA is that IIA does not include intensity information. This missing term drops information about the transitivity objective; this is what violates Theorem 2. The general result proved by Saari [45, Theorem 3.4.3] and [46] is that by exchanging Arrow’s Independence of Irrelevant Alternatives (IIA) with IIIA, Arrow’s negative assertion is replaced with a positive conclusion. The result follows.

This theorem captures the intent of Theorem 2. More generally, should a decision approach fail Theorem 2, then even if the contributions made by the subsystems reach a high level of excellence (as judged in isolation by each subsystem), they may be incompatible with the general objective.

References

1. Kasser, J., and Hitchins, D. K., 2011, “Unifying Systems Engineering: Seven Principles for Systems Engineered Solution Systems,” Presented at the 20th International Symposium of the INCOSE, Denver, CO, June 20–23.
3. Hazelrigg, G. A., 1996, Systems Engineering: An Approach to Information-Based Design, Prentice Hall, Upper Saddle River, NJ.
4. Hazelrigg, G. A., 1998, “A Framework for Decision-Based Engineering Design,” ASME J. Mech. Des., 120(4), pp. 653–658.
5. Hazelrigg, G. A., 1999, “An Axiomatic Framework for Engineering Design,” ASME J. Mech. Des., 121(3), pp. 342–347.
6. Hazelrigg, G. A., 2009, “The Cheshire Cat on Engineering Design,” Qual. Reliab. Eng. Int., 25, pp. 759–769.
7. Sage, A., 1992, Systems Engineering, John Wiley & Sons, Inc., New York.
8. Bernoulli, D., 1968, “Exposition of a New Theory on the Measurement of Risk,” Utility Theory: A Book of Readings, A. Page, ed., John Wiley & Sons, Inc., Hoboken, NJ, pp. 199–214.
9. Dodgson, C. L., 1865, Alice’s Adventures in Wonderland, Macmillan, UK.
10. von Neumann, J., and Morgenstern, O., 1953, The Theory of Games and Economic Behavior, 3rd ed., Princeton University Press, Princeton, NJ.
11. Miles, R. F., 1974, “A Contemporary View of Systems Engineering,” Tech. Memorandum 33-667, pp. 379–423, 623–656.
12. Dyer, J. S., and Miles, R. F., 1976, “An Actual Application of Collective Choice Theory to the Selection of Trajectories for the Mariner Jupiter/Saturn 1977 Project,” Oper. Res., 24(2), pp. 220–244.
13. Miles, R. F., 2007, “The Emergence of Decision Analysis,” Advances in Decision Analysis: From Foundations to Applications, W. Edwards, R. F. Miles, and D. von Winterfeldt, eds., Cambridge University Press, Cambridge, UK, pp. 13–31.
14. Sage, A., 1977, Methodology for Large-Scale Systems, McGraw-Hill, New York.
15. Tribus, M., 1970, Rational Descriptions, Decisions and Designs, Elsevier, Amsterdam, The Netherlands.
16. Brigadier, W. L., and Hazelrigg, G. A., 1976, “A Decision Model for Planetary Missions,” Presented at the AIAA/AAS Astrodynamics Specialist Conference, San Diego, CA, Aug. 18–20, AIAA Paper No. 76-805.
17. Howard, R. A., and Abbas, A. E., 2016, Foundations of Decision Analysis, Pearson, Boston, MA.
18. Howard, R. A., 2004, “Speaking of Decisions: Precise Decision Language,” Decision Anal., 1(2), pp. 71–78.
19. Howard, R. A., 1992, “In Praise of the Old Time Religion,” Utility Theories: Measurements and Applications, pp. 27–55.
20. Hazelrigg, G. A., 1996, “The Implications of Arrow’s Impossibility Theorem on Approaches to Optimal Engineering Design,” ASME J. Mech. Des., 118(2), pp. 161–164.
21. Suh, N. P., 1990, The Principles of Design, Oxford University Press, New York.
22. Saaty, T. L., 2006, Fundamentals of Decision Making: The Analytic Hierarchy Process, RWS Publications, Pittsburgh, PA.
23. Saaty, T. L., 2008, “Relative Measurement and Its Generalization in Decision Making: Why Pairwise Comparisons Are Central in Mathematics for the Measurement of Intangible Factors, the Analytic Hierarchy/Network Process,” Rev. R. Acad. Cien. Serie A. Mat., 102(2), pp. 251–318.
24. Suh, N. P., 2001, Axiomatic Design: Advances and Applications, Oxford University Press, New York.
25. Taguchi, G., 1986, Introduction to Quality Engineering, Asian Productivity Organization/UNIPUB, White Plains, NY.
26. Phadke, M. S., 1989, Quality Engineering Using Robust Design, Prentice Hall, New York.
27. Pahl, G., and Beitz, W., 1996, Engineering Design: A Systematic Approach, 2nd ed., Springer-Verlag, New York.
28. Pugh, S., 1991, Total Design: Integrated Methods for Successful Product Engineering, Addison-Wesley, Boston, MA.
29. Pugh, S., Clausing, D., and Andrade, R., 1996, Creating Innovative Products Using Total Design, Addison-Wesley Longman, Boston, MA.
30. Messac, A., Gupta, S. M., and Akbulut, B., 1996, “Linear Physical Programming: A New Approach to Multiple Objective Optimization,” Transactions on Operational Research, 8, pp. 39–59.
31. Hauser, J. R., and Clausing, D., 1988, “The House of Quality,” Harvard Business Rev., pp. 63–73.
32. Tennant, G., 2001, Six Sigma: SPC and TQM in Manufacturing and Services, Gower Publishing, Ltd., Burlington, VT.
33. INCOSE UK, 2010, “How Systems Thinking Contributes to Systems Engineering,” Issue 1.0, https://incoseuk.org/Documents/zGuides/Z7_Systems_Thinking_WEB.pdf
34. INCOSE, 2018, “Guide to the Systems Engineering Body of Knowledge (SEBoK),” Version 1.9.1, https://www.sebokwiki.org/w/images/SEBoK%20v.%201.9.1.pdf
35. Hazelrigg, G. A., 2007, “Continuous Improvement Processes: Why They Do Not Work and How to Fix Them, Guest Editorial,” ASME J. Mech. Des., 129(2), pp. 138–139.
36. Cubitt, R., and Sugden, R., 2001, “On Money Pumps,” Games Econ. Behav., 37(1), pp. 121–160.
37. Rudin, W., 1976, Principles of Mathematical Analysis, 3rd ed., McGraw-Hill, Singapore.
38. Hazelrigg, G. A., and Klutke, G.-A., 2018, “Models, Uncertainty, and the Sandia V&V Challenge Problem,” Proceedings of the ASME 2018 Verification and Validation Symposium, Paper No. VVS2018-9308.
39. Hazelrigg, G. A., 2012, Fundamentals of Decision Making for Engineering Design and Systems Engineering, G. A. Hazelrigg, Vienna, VA.
40. Luce, R. D., and Raiffa, H., 1957, Games and Decisions, John Wiley & Sons, New York.
41. Bernardo, J. M., and Smith, A. F. M., 2000, Bayesian Theory, Wiley, Chichester, UK.
42. Savage, L. J., 1972, The Foundations of Statistics, Dover Publications, Inc., New York.
43. Hazelrigg, G. A., and Stolfi, P., 2021, “The Cost on System Performance of Requirements on Differentiable Variables,” ASME J. Mech. Des., 143(5), p. 054501.
44. Arrow, K. J., 1963, Social Choice and Individual Values, 2nd ed., John Wiley & Sons, Hoboken, NJ.
45. Saari, D. G., 1995, Basic Geometry of Voting, Springer, New York.
46. Saari, D. G., 2021, “Seeking Consistency With Paired Comparisons: A Systems Approach,” Theory and Decision (accepted for publication).