Design of Complex Engineered Systems and the Effectiveness of Organizational Networks

Cost and schedule overruns have become increasingly common in projects that set out to design and deliver complex engineered systems. Noting the well-established relationship between products and the organizations that design them, this study evaluates the effectiveness of different organizational networks at designing complex engineered systems using agent-based modeling. Specifically, it compares matrix and military staff organizational networks to random and multiscale networks, modeling design as an activity that requires organizations to create design artifacts and share information. It examines the nature of design, the role of product architecture, the nature of complexity and how it affects projects, and the characteristics that improve organizational robustness to congestion. Results indicate matrix organizations are particularly susceptible to congestion failure, while military staff and multiscale networks are more robust to congestion failure, with military staff networks having performance comparable to multiscale networks over a range of scenarios. Results further indicate simple changes to organizational behavior improve performance and robustness to congestion, with decentralization being especially beneficial. Finally, results confirm the utility of agent-based modeling for understanding the dynamics of complex systems.


INTRODUCTION
Cost and schedule overruns have become increasingly common in large defense programs that attempt to build systems with improved performance and lifecycle characteristics, often using novel, untested, and complex product architectures. (Murray, et al., 2011) Given the well-documented relationship between product architecture and the structure of the product development organization, it is logical to examine organizational structure for causes and factors explaining the inability of design organizations to manage the complexity associated with the design of large engineered systems. This study will therefore examine the effectiveness of different organizational networks at designing complex engineered systems, modeling design as an activity that requires the creation of design products and the sharing of information, and comparing the performance of real-world organizational networks to ideal ones in order to identify ways real-world networks could be modified to improve performance.

Research Motivation
A 2011 report prepared for the Defense Advanced Research Projects Agency (DARPA) concluded cost and schedule overruns in defense programs result from "systematic mismanagement of the inherent complexity associated with the design of these systems." (Murray, et al., 2011) Sinha and de Weck (2013) reported 13 aerospace projects reviewed by the Government Accountability Office between 2008 and 2013 experienced cost growth of 55% or more. (Sinha & de Weck, 2013) More recently, major shipbuilding programs have experienced similar cost and schedule overruns. A 2015 GAO report noted the Ford-class aircraft carrier was more than $2 billion over budget and was unlikely to achieve promised performance with regard to aircraft launch and recovery rates due to unreliability of systems. (Government Accountability Office, 2015) Such problems are not unique to the defense sector. General Motors posted a $4.3 billion loss in the fourth quarter of 2009 as the cost of its new Chevy Volt approached $40,000 per car, doubling initial estimates. (Simpson & Martins, 2012)

The Nature of Design
Herbert Simon (1996) described design as the process of devising "courses of action aimed at changing existing situations into preferred ones," observing engineers and other designers are concerned with how things ought to function in order to accomplish goals, and arguing synthetic or artificial objects, i.e., artifacts, are "the central objective of engineering activity and skill." (Simon, 1996) A key step in the design of engineered systems is establishing product architecture, the scheme that translates functions and objectives into physical components. Product architecture drives decision-making and affects product performance. Defining product architecture involves three inter-related activities: identification of functional requirements and arrangement of functional elements; mapping functional requirements to physical systems or components; and defining physical interfaces between systems or components. (Ulrich K., 1995)

Organizational Structure and Product Architecture
Researchers have long recognized the interplay between products and the organizations that design them. Conway (1968) argued organizations produce designs that reflect their communication structures, thus design efforts should be organized according to the need for communication. (Conway, 1968) Henderson and Clark (1990) examined the nature of innovation and concluded changes to product architecture challenge traditional firms by destroying existing knowledge embedded in the firms' organizational and communication structures. During periods of innovation, firms require the ability to develop knowledge and synthesize designs, but once a dominant design is established, firms stop investing in learning about alternative configurations and instead invest in refinements. They argue the effect of architectural innovation depends on how organizations learn and suggest the "fashion for cross-functional teams and open organizational environments" may be a response to perceptions on the challenges of architectural innovation. (Henderson & Clark, 1990) Organizational structure defines how people work together to accomplish objectives and create value. It includes formal hierarchy, the decomposition of the organization into functional elements, such as directorates, departments, divisions, work centers, and individuals; reporting relationships and lines of authority; and informal teaming relationships that cross both vertical and horizontal hierarchical lines.
Given the well-established relationship between product architecture and organizational structure, one might expect firms would align the two in order to create products that better meet objectives, but in practice, firms consider a variety of business and management imperatives when setting organizational structure.

Robust Organizations
Dodds, Watts and Sabel (2003) examined the dynamics of information exchange in organizational networks and introduced an organizational network model that incrementally adds links to a hierarchical backbone according to a stochastic rule.
They identified a class of networks, which they call "multiscale networks," that exhibit "ultra-robustness," meaning they simultaneously reduce the likelihood an individual node will fail because of congestion and the likelihood the overall network will fail if congestion failures do occur at individual nodes. Multiscale networks exhibit these properties with the addition of relatively few links, which suggests "ultra-robust organizational networks can be generated in an efficient and scalable manner." (Dodds, Watts, & Sabel, 2003) Economists have long studied organizational structure, emphasizing efficiency over robustness and focusing on multilevel hierarchies, which offer advantages for exercising control, accumulating knowledge, and making decisions. These advantages assume tasks can be easily decomposed into smaller subtasks that can be accomplished independently, but modern organizations face multidimensional problems characterized by complexity and ambiguity, where problem solving becomes a collective activity characterized by collaboration among individuals, teams and organizations. Under these conditions, the chief concern is not efficiency, achieved by minimizing costly links, but robustness, achieved by preventing individual nodes from being overwhelmed and protecting the network from catastrophic failure when congestion does occur. (Dodds, Watts, & Sabel, 2003)

Understanding Complexity and Attempts to Measure It
Sinha and de Weck argue "today's large-scale engineered systems are becoming increasingly complex" due to demands for increased performance and improved lifecycle characteristics, but complexity is hard to quantify. Mitchell (2009) identifies several characteristics of complex systems, including complex collective behaviors, such as self-organization and adaptation through learning or evolution, but notes no single science or theory of complexity yet exists, despite the many books and articles written on the subject. (Mitchell, 2009) Page (2009) provides a useful framework for understanding complexity, defining complex adaptive systems in terms of four necessary characteristics of the agents or elements in the system: diversity, connectedness, interdependence, and adaptation, arguing adaptation is the key characteristic separating complex systems from merely complicated ones. (Page, 2009) In fact, much of the confusion about the meaning of complexity stems from this question about what separates complex from complicated systems.
In common usage, when someone says a thing is "complex," they most often mean hard, challenging or complicated, but for complex systems, the term is also used to describe a variety of rich and unexpected behaviors, including self-organization, emergence, robustness, susceptibility to large events, and non-linear dynamics. In Micromotives and Macrobehavior, Schelling (2006) describes how individual choices affect the overall behavior of complex systems in non-obvious ways, observing: "it is not easy to tell from the aggregate phenomenon just what the motives are behind individual decisions or how strong they are." (Schelling, 2006) This kind of micro-macro disconnect is central to the idea of emergence in complex systems, but a similar disconnect can occur in "merely complicated" systems when connections and dependencies are poorly understood.
In complex engineered systems, such as automobiles, aircraft, and ships, the number of connections and dependencies can quickly challenge the limits of human cognition. Even though individual elements of the system may perform in predictable ways, interactions among elements can lead to unexpected macro behaviors. Such behaviors may be predictable in theory, but not in any meaningful or practical way, thus merely complicated, large engineered systems often exhibit quasi-emergent behaviors comparable to complex adaptive systems.
Several authors have proposed methods or measures to quantify complexity, but there is no single, widely accepted metric, nor even universal agreement that complexity can be measured. Mitchell surveys different approaches and identifies several categories, including counting methods; entropy-based methods, notably Shannon entropy; algorithmic information content; logical and thermodynamic depth; statistical methods; fractal dimension; and degree of hierarchy. She concludes different measures individually capture something about the notion of complexity but have practical limitations that make them not useful for characterizing real systems. (Mitchell, 2009)
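To make one of these measures concrete, the following sketch (Python, standard library only; an illustrative example rather than anything taken from Mitchell's text) computes the Shannon entropy of the empirical symbol distribution of a short message source.

```python
import math
from collections import Counter

def shannon_entropy(sequence):
    """Shannon entropy (bits per symbol) of the empirical distribution of a sequence."""
    counts = Counter(sequence)
    total = len(sequence)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A repetitive source carries less information per symbol than a varied one,
# which is the intuition behind entropy-based complexity measures.
print(shannon_entropy("AAAAAAAA"))   # 0.0 bits
print(shannon_entropy("ABABABAB"))   # 1.0 bit
print(shannon_entropy("ABCDABCD"))   # 2.0 bits
```

As the example suggests, entropy rewards randomness rather than the organized, adaptive behavior intuitively associated with complexity, which is one reason such single-number measures have limited practical use for characterizing real systems.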

Summary
To meet demands for improved performance, designers of large engineered systems create new products with increasingly complex architectures that strain the capabilities of the design organization. Unprepared to manage the design of complex engineered systems, organizations built for efficiency may find themselves overwhelmed, leading to the kinds of cost and schedule overruns documented by DARPA and the GAO. Since multiscale organizational networks have been shown to be robust to failure, it is appropriate to compare them to other organizational networks commonly used by design organizations in order to better understand the performance of design organizations and identify ways to improve their ability to manage the development of complex engineered systems. This study will therefore compare the performance of matrix organizations and military staffs, two real-world organizational networks, to random and multiscale networks, two idealized organizational networks, using agent-based modeling (ABM).
The remainder of this dissertation is arranged in four additional chapters: review of literature, methodology, findings, and conclusions. Chapter 2 presents a review of literature, which further develops the concepts and ideas introduced earlier in this introduction. Chapter 3 presents methodology and describes the phased, building block approach used to develop and implement agent-based models to examine the effectiveness of real-world and ideal organizational networks. Chapter 4 presents findings resulting from the implementation and analysis of models of organizational networks. Chapter 5 presents conclusions and recommendations.

REVIEW OF LITERATURE
The following literature review addresses a variety of topics related to the design and development of complex engineered systems, and the proposed use of Agent-Based Modeling (ABM) to evaluate the effectiveness of different organizational networks at designing complex engineered systems. It begins by examining the nature of design, the elements of product development, and the role of product architecture, and then turns to organizational structure, organizational networks, and the relationship between organizational structure and product architecture. It then describes robust networks, a special class of organizational network that is simultaneously robust to congestion and connectivity failures, before exploring definitions of complexity and complex systems, as well as efforts to understand and cope with complexity, including qualitative and quantitative measures of complexity. The literature review concludes with a discussion of opportunities for improving project performance, a brief review of design structure matrices and their application to modeling products and organizations, and a description of agent-based modeling.
The Nature of Design
Herbert Simon declared: "everyone designs who devises courses of action aimed at changing existing situations into preferred ones." (Simon, 1996) Engineering schools have traditionally taught students how to design and make artifacts with desired characteristics, but Simon argued the mental activity that designs material artifacts is the same fundamental activity that devises plans or policies, concluding design is the foundation of professional training, separating professions from the sciences. Simon was acutely concerned by the damage to professional competence that occurred in the years following World War II, when engineering, business and other professional schools moved toward natural science and away from the "sciences of the artificial." (Simon, 1996) Simon recognized the problem lay in the notion of "artificial science" and the derogatory connotations around the term "artificial." He identified four essential features of artificial things: that they are synthesized by humans; that they may imitate natural appearance; that they are characterized in terms of functions, goals, and adaptation; and that they are often described in terms of design imperatives.
Engineers and other designers are concerned with how things ought to function in order to accomplish goals, and synthetic or artificial objects, i.e., artifacts, are "the central objective of engineering activity and skill." (Simon, 1996) The design of artifacts involves three related considerations: the purpose or goal to be achieved, the nature of the artifact itself, and the environment in which the artifact functions. An artifact can thus be considered the interface between its own internal structure and function and its surroundings, what Simon called the "inner" and "outer" environments. Simon claimed: "description of an [artifact] in terms of its organization and functioning-its interface between inner and outer environments-is a major objective of invention and design activity." (Simon, 1996) Goals link the inner and outer systems, with the inner system representing one of several functionally equivalent sets of capabilities that can accomplish the goals and the outer environment setting the conditions required for goal achievement. Of course, this is a bit of a simplification, which Simon recognizes, acknowledging that artifacts must obey natural laws and noting we will often have to be satisfied with designs that only partially meet their objectives.
Design problems are often framed as making a choice from among fixed alternatives, where the best, or optimum, solution is selected. Simon notes, however, that actual design decisions frequently involve finding satisfactory, rather than optimal solutions, introducing the term "satisficing" to describe such decision methods.
Satisficing methods search for solutions in a way that yields acceptable results with only modest search. Real-world problem solving and design methods must search for appropriate solutions, thus design involves the allocation of resources to ensure designers focus efforts on the most promising lines of inquiry. With satisficing goals, solutions are rarely unique, and the design effort seeks sufficient, rather than necessary, answers. (Simon, 1996) Simon describes a typical approach to search, in which possible paths are explored, with results stored in a "tree" structure that reflects the value assigned to each branch. The values guide further exploration, and the search process gathers information on problem structure that can be used to discover a solution. The search process therefore serves two complementary purposes: finding a solution and understanding problem structure. Simon identifies decomposition as a powerful tool for solving complex problems. This technique, which is foundational to systems engineering, breaks complex systems into distinct parts, often along functional lines, allowing each part to be designed somewhat independently. Simon notes, however, that "there is no reason to expect that the decomposition of the complete design into functional components will be unique," identifying organizational theory as a field keenly concerned with the "issue of alternative decompositions of a collection of interrelated tasks." (Simon, 1996) Simon also addresses the topic of problem representation, noting the importance of representations that make solutions more obvious, and the need for a better taxonomy for describing and classifying different classes of problem representations. He concludes by presenting the elements of a program in design that incorporates the preceding topics, noting a number of well-established design processes that refute any notion that design can be reduced to cookbook approaches, the same notion that once threatened to force design from the curricula of engineering and other professional schools. (Simon, 1996)

Product Design and Development
A product is anything sold to a customer, and product development is the set of activities that bring the product to market. By its nature, product development is cross-functional, requiring contributions from numerous functions in a firm, including marketing, design, engineering, and manufacturing. (Ulrich & Eppinger, 1995) Figure 1 presents a generic product development process showing the major activities required to transform a concept into a finished product. Of course, every organization follows a different process, but having a well-defined process offers benefits in terms of quality, coordination, planning, management and process improvement. The generic product development process has five phases:
1. Concept development, which identifies alternative concepts (descriptions of form, function and features) to meet market and customer requirements, evaluates those alternatives, and selects one for further development;
2. System-level design, which defines the product architecture and divides the product into sub-systems and components;
3. Detail design, which provides a complete specification in the form of control documentation (e.g., drawings of parts and production tooling, specifications, and fabrication plans) for all unique parts to be manufactured or purchased;
4. Testing and refinement, which evaluates prototypes to verify compliance with customer requirements; and
5. Production, which makes the intended product. (Ulrich & Eppinger, 1995)

For the present study, we are primarily interested in the system and detail design phases and the interaction and communication that must occur in the design organization to create the required detail design products, termed control documentation or artifacts. (Ulrich K., 1995)

The Role of Product Architecture
Eppinger and Browning (2012) define product or system architecture as "the arrangement of components interacting to perform specified functions," noting that architecture is represented by individual components, their relationships to one another and the environment, and principles guiding design. (Eppinger & Browning, 2012) When designing products or engineered systems, one commonly decomposes the product or system into smaller elements, such as subsystems, modules, and components, that must be integrated to work together and achieve performance objectives. The discipline of systems engineering focuses on planning and controlling component interactions to deliver system-level performance. The Systems Engineering "V," shown in Figure 2, illustrates the process of designing and developing engineered systems. Ulrich (1995) provides a comprehensive survey of product architectures and articulates how architecture affects areas critical to product development. He draws on concepts from a range of fields, including design theory and operations management, and provides a useful framework for understanding the design trade-offs affected by product architecture. Ulrich defines product architecture as "the scheme by which the function of a product is allocated to physical components," and argues for its importance to decision making, noting that product architecture drives performance and that manufacturing firms have flexibility when choosing product architecture. Defining product architecture involves three inter-related activities: identification of functional requirements and arrangement of functional elements; mapping functional requirements to physical systems or components; and defining physical interfaces between systems or components. Modular architectures have a one-to-one mapping of functional requirements to systems or components and decoupled interfaces, while integral architectures have a complex (e.g., one-to-many) mapping of functional requirements to systems or components or coupled interfaces.
A coupled interface exists when a change to one system or component requires a change to the related (i.e., coupled) system or component. (Ulrich K. , 1995) Modular architectures can be further divided into slot, bus or sectional types.
In a slot architecture, components have different interfaces such that components cannot be interchanged with one another. For example, a car radio has a different interface than the car's speedometer. Bus architectures provide a common bus to which other components connect or attach using the same kind of interface. Examples include expansion slots in personal computers and shelving systems. Finally, in sectional architectures, components use the same kind of interface, but there is no single element to which all others connect. Examples include piping systems and sectional sofas. Of course, these descriptions all represent ideal types; real products may use multiple types of architectures simultaneously or blur lines of distinction.
Ulrich notes manufacturing firms have significant flexibility when choosing product architecture and argues architecture may result more from incremental evolution than deliberate choice. He also notes many authors have argued the superiority of modular architectures, but suggests no architecture should be considered ideal. (Ulrich K., 1995)

Figure 2 - The Systems Engineering V (Department of Transportation, 2007)

Organizational Structure
Successful product development requires an effective development process and effective development staff. Ulrich and Eppinger (1995) define "product development organizations" as "the scheme by which individual designers and developers are linked together into groups," noting that links can be formal or informal, and can include reporting relationships, financial arrangements, and physical layout. (Ulrich & Eppinger, 1995) Individuals in the product development organization can be classified by either function or project. Functions are areas of responsibility that generally require specialized training or skills, such as marketing, design, engineering, operations management, and manufacturing. Regardless of function, individuals use their expertise on different projects. (Ulrich & Eppinger, 1995) Organizational structure identifies the people in an organization, their relationships to one another and the organization's environment, and the principles governing its purpose and development. The effective development of products and engineered systems depends on the efficient and effective flow of information between people and across organizational divisions. Leaders may want to enable "more and better communication, the free flow of ideas, and the open sharing of issues and concerns, with hopes of building consensus and preempting problems," but the free flow of information can go too far, creating information overload that actually impedes effective communication. Leaders therefore seek to manage the flow of information to facilitate effective execution of complex projects through purposeful organizational structures. Rational organization design enables effective communication by improving team structure and providing insight on the application of integrative or coordination mechanisms. (Eppinger & Browning, 2012) Organizational structure defines how people work together to accomplish objectives and create value. Organizational structure includes formal hierarchy, the decomposition of the organization into functional elements, such as directorates, departments, divisions, work centers, and individuals; reporting relationships and lines of authority; and informal teaming relationships that cross both vertical and horizontal hierarchical lines. Given the well-established relationship between product architecture and organizational structure, one might expect firms would align the two in order to create products that better meet objectives, but in practice, firms consider a variety of business and managerial imperatives when setting organizational structure.
The next section examines elements of organizational structure, including descriptions of structures found in real-world organizations.

Types of Organizational Networks
The Role and Nature of Hierarchies. Herbert Simon examined the nature of hierarchies, systems composed of inter-related subsystems that are themselves hierarchical until reaching some elemental structure, and argued hierarchy is one of the "central structural schemes that the architect of complexity uses." (Simon, 1996) Hierarchic systems explicitly include those not based on subordination; examples include formal organizations, such as firms, businesses, and government entities; societies, divided into units like families, villages, tribes, or nations; biological and physical systems, including products and complex engineered systems; and symbolic systems.
Hierarchies decompose the whole into modular parts or subsystems, where one can distinguish interactions within a subsystem from interactions between or among subsystems. In the context of the present study, this feature is seen in both the decomposition of products and engineered systems described by product architecture, as well as the decomposition of organizations into elements such as directorates or divisions.
A key property of hierarchies is near decomposability, which refers to the idea that intra-component linkages and interactions are generally stronger than inter-component interactions. This feature separates high-frequency dynamics related to internal structure from low-frequency interactions among components. In a nearly decomposable system, inter-component interactions are weak, but not negligible. (Simon, 1996) In fact, it is these weak interactions, which are often poorly understood, that give rise to complexity, a topic explored in greater depth in a subsequent section.
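As an illustrative sketch (the matrix values below are hypothetical and not drawn from Simon), the following Python fragment builds a small block-structured interaction matrix of the kind used in design structure matrices and compares average intra-component and inter-component coupling, the contrast that makes a system nearly decomposable.

```python
import numpy as np

# Hypothetical interaction strengths for a system of three subsystems,
# each containing three elements: strong couplings inside the diagonal
# blocks (within a subsystem), weak but nonzero couplings between them.
rng = np.random.default_rng(seed=1)
n_blocks, block_size = 3, 3
n = n_blocks * block_size

dsm = rng.uniform(0.01, 0.05, size=(n, n))           # weak inter-component interactions
for b in range(n_blocks):
    i = b * block_size
    dsm[i:i + block_size, i:i + block_size] = rng.uniform(0.6, 1.0, size=(block_size, block_size))
np.fill_diagonal(dsm, 0.0)                            # ignore self-interaction

block_sums = sum(dsm[b * block_size:(b + 1) * block_size,
                     b * block_size:(b + 1) * block_size].sum() for b in range(n_blocks))
intra = block_sums / (n_blocks * block_size * (block_size - 1))         # off-diagonal entries within blocks
inter = (dsm.sum() - block_sums) / (n * n - n_blocks * block_size**2)   # entries between blocks
print(f"mean intra-component coupling: {intra:.2f}")   # strong
print(f"mean inter-component coupling: {inter:.2f}")   # weak, but not negligible
```

The strong within-block couplings allow each subsystem to be designed somewhat independently, while the weak but nonzero couplings between blocks are precisely the interactions that are easy to overlook.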
In Chapter 9 of Six Degrees: The Science of a Connected Age, Duncan Watts describes how today's models and theories of organizational structure trace to Adam Smith's The Wealth of Nations, which describes the division of labor principle Smith inferred from his observations of workers. Smith noted workers performed better when collective tasks were broken into specialized subtasks, a benefit termed returns on specialization. The division of labor harnesses returns on specialization, but does not explain why production must be accomplished by firms or why hierarchical organizations emerged as the dominant type associated with mass production. Nevertheless, many firms did organize that way, and the consensus of economic theory has long been that hierarchies represent the optimal organizational form. (Watts, Six Degrees, 2003) Traditional economic theory argues that firms grow through the process of vertical integration, the periodic absorbing or jettisoning of hierarchies, but Sabel and Poire (1984) challenge that theory, noting that it came about only after vertical integration had become the dominant organizational design. They argue, instead, flexible specialization, which exploits economies of scope using general purpose machinery and skilled workers, is beginning to replace vertical integration, and further argue such economies of scope are optimal when uncertainty and rapid change favor adaptability over scale. (Poire & Sabel, 1984)

Random and Small World Networks. Much has been written about random and small world networks. This section briefly reviews key features and concepts that inform, or are otherwise relevant to, the study of organizational networks. The so-called "small world" phenomenon formalizes the anecdotal notion that "you are only ever six 'degrees of separation' away from anybody else on the planet." (Watts, Small Worlds, 1999) Watts and Strogatz (1998) coined the term "small-world networks" to describe networks that occupy the "middle ground" between completely regular and completely random, exhibiting the short characteristic path lengths associated with random networks and the high degrees of clustering associated with ordered networks.
They explored simple models that can be tuned through this "middle ground" and demonstrated that real-world networks exhibit small-world properties. (Watts & Strogatz, 1998) The study of small-world networks, and of networks in general, illustrates basic concepts from graph theory. A graph, G(N, m), is a set of N vertices or nodes and m edges or links. The study of small-world networks was limited to undirected and unweighted networks, meaning links had no direction or relative weight, and to sparse graphs, meaning the number of links satisfies m ≪ N(N − 1)/2, where the right-hand quantity is the maximum possible number of links in a network of N nodes.
Distance between nodes can be characterized by a characteristic path length, L(G), such as the median of the means of the shortest path lengths from each node to all other nodes.
Clustering is the extent to which vertices adjacent to any vertex are connected to one another.
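The following sketch (Python, using the networkx package, which is assumed to be available; it is not part of Watts' original analysis) computes an average path length and clustering coefficient for a regular ring lattice, a Watts-Strogatz small-world rewiring of that lattice, and a random graph with the same number of nodes and links.

```python
import networkx as nx

N, k, p_rewire = 1000, 10, 0.1    # nodes, neighbors per node, rewiring probability

regular     = nx.watts_strogatz_graph(N, k, 0.0, seed=42)   # ordered ring lattice
small_world = nx.watts_strogatz_graph(N, k, p_rewire, seed=42)
random_like = nx.gnm_random_graph(N, regular.number_of_edges(), seed=42)

for name, g in [("regular", regular), ("small world", small_world), ("random", random_like)]:
    # Path-length routines require a connected graph; fall back to the
    # largest connected component if necessary.
    if not nx.is_connected(g):
        g = g.subgraph(max(nx.connected_components(g), key=len))
    L = nx.average_shortest_path_length(g)   # mean path length (Watts uses a median of means)
    C = nx.average_clustering(g)             # clustering coefficient
    print(f"{name:12s} L = {L:6.2f}   C = {C:.3f}")

# Typical result: the small-world graph keeps clustering close to the regular
# lattice while its path length drops to nearly that of the random graph.
```

The comparison makes the "middle ground" concrete: only a small fraction of rewired links is needed to collapse path lengths while leaving local clustering largely intact.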
A common theme in the study of graphs is the comparison of network properties to those of random graphs. A random graph of order N consists of N vertices with an edge set of m randomly chosen edges, where m usually depends on N. A related model, G(N, p), is a graph of N vertices in which each of the N(N − 1)/2 possible edges exists independently with probability p (0 < p < 1). Random graph theory defines conditions under which a random graph contains some property Q, for example, that it is connected, in the limit where N → ∞. A common feature of random graphs is that most monotone properties appear suddenly at some threshold value or function of N. (Watts, Small Worlds, 1999)

Matrix and Project-Based Organizations. The defining characteristic of a matrix organization is the existence of a dual chain of command, with responsibilities assigned to functional departments, such as engineering, production and marketing, and to product or project departments. Functional departments provide specialized, internal resources, while project or product departments focus on outputs. Davis and Lawrence (1978) argue a matrix organization is more than just a matrix structure: "it must be reinforced by matrix systems, such as dual control and evaluation systems, by leaders who operate comfortably with lateral decision making, and by a culture that can negotiate open conflict and a balance of power." (Davis & Lawrence, 1978) Ford and Randolph (1992) note terms like matrix, matrix organization, and project organization are often used interchangeably to refer to cross-functional organizations that bring together people from different functional areas "to undertake a task on either a temporary basis (as in a project team) or on a relatively more permanent basis (as in a matrix organization)." The common characteristic is a hybrid organizational form in which a traditional functional hierarchy is "overlayed" by a lateral project-based authority, as shown in Figure 3. Ford and Randolph note most authors place matrix organizations towards the center of a continuum, between purely functional organizations on the one hand, and purely project organizations on the other. (Ford & Randolph, 1992) Figure 4 illustrates typical functional, product, and matrix organizations, showing how matrix organizations are a hybrid of the other two.
In general, matrix organizations can be classified as heavyweight or lightweight. In a heavyweight project organization, individual project managers report directly to the general manager and are responsible and accountable for the success of assigned projects. Functional managers also report to the general manager and are responsible for technical excellence. Project managers control budgets and allocate resources and therefore have significant authority. In a lightweight project organization, the project manager plays more of a coordination and administrative role, but has little authority. (Ulrich & Eppinger, 1995)

Figure 3 - Typical Matrix Organization (Ford & Randolph, 1992)

Figure 4 - Typical Functional, Project and Matrix Organizations (Ulrich & Eppinger, 1995)

Kerzner (2003) argues matrix organizations "attempt to create synergism through shared responsibility between project and functional management," but notes that achieving such synergy is often quite difficult in practice. Since no two working environments are the same, no two matrix organizations will be the same. (Kerzner, 2003) Advantages of matrix organizations include improved control over resources, independent policies and procedures for individual projects, quick adaptation to change, ability to develop a strong technical base, shared responsibility and authority, and improved ability to solve complex problems. Disadvantages include multidimensional work and information flow, dual reporting, changing priorities, potential for conflict, and role ambiguity. (Kerzner, 2003) Situations favoring matrix organizations include having a mix of products, plants and markets; short business cycles; complex and rapidly changing environments; and high technology products where scarce talent must be spread across multiple projects. (Wintermantel, 2003)

Miterev, Mancini and Turner (2017) identify options available for the design of project-based organizations and explore key factors affecting those options compared to traditional organizations. They define a project-based organization as one that decides to use project management business practices to manage work.
They distinguish a program as being a collection of related projects, but note both projects and programs are temporary organizations. They argue an unpredictable and rapidly changing business environment drives firms to adopt "temporary organizational forms, such as projects and programs," noting the "management of innovation in the car industry now requires a project-led or project-supported organization." (Miterev, Mancini, & Turner, 2017) Reflecting on holistic models of organization design, such as the McKinsey 7-S framework and Galbraith's star model, Miterev, Mancini and Turner argue organizational designers must consider a range of factors, including "internal coherence and external fit." Noting the tendency towards disaggregation in large firms, they further argue decentralization can improve performance when searching for solutions to non-decomposable problems. They propose the design of project-based organizations should consider five related elements, including orientation, the strategic decision to be project-based; project organization, which defines the relationship between projects, programs and functions; and business processes.

Conway (1968) observed:

Organizations which design systems (in the broad sense used here) are constrained to produce designs which are copies of the communication structures of these organizations. …This fact has important implications for the management of system design. Primarily, we have found a criterion for the structuring of design organizations: a design effort should be organized according to the need for communication. (Conway, 1968)

Similar to Simon, Conway defined design as an intellectual activity that creates systems from varied parts. He viewed design in broad terms, including a range of activities, from the design of weapon systems to the creation of public policy. The output of design is the "structured body of information" needed to achieve the stated objective. (Conway, 1968) Conway lays out the general stages of design, which include establishing boundaries, selecting a preliminary concept, organizing the design activity, delegating tasks based on concept, coordinating tasks, and consolidating subsystem or component designs into a final, single design. He then examines the relationship between the structure of the design organization and the architecture of the system it designs. He argues that for any node (i.e., component, sub-system) in the system, one can identify a node or group of nodes in the design organization responsible for its design.
Similarly, any link in the system design defines an interface between two nodes, necessitating communication and coordination between the responsible organizational entities. Conway concludes a structure-preserving relationship exists between system architecture and organizational structure. He asserts many alternative designs can satisfy requirements, and argues "the choice of design organization influences the processes of selection of a systems design" from those alternatives.
Since the organization is not completely flexible in terms of communication structure, it will "stamp out an image of itself in every design it produces." This phenomenon is more prominent in larger, less flexible organizations. (Conway, 1968) Conway explores the management of design and questions why design efforts fail, or "disintegrate," as he calls it. He identifies two principal problems, the tendency to "overpopulate" the design effort and "fragmentation of the design organization communication structure." Overpopulation occurs when the perceived complexity of the design exceeds limits of comprehension, leading to subdivision and delegation of tasks. Pressure to maintain schedule incentivizes managers to bring additional resources to bear, leading to further subdivision and delegation. One fallacy contributing to overpopulation is the perceived linearity of resources, the idea that 100 designers working for one week are of equal value to two designers working for a year since both have approximately equal cost in terms of man-hours, and therefore dollars expended.
Conway notes these resource allocations result in radically different organizational structures, which necessarily leads to different designs because of the structure-preserving relationship between organizational structure and system design.
Delegation and overpopulation lead to fragmentation of the communication structure.
The number of possible communication paths in a design organization is approximately equal to the square of the number of people in the organization divided by two (for n people, n(n − 1)/2 ≈ n²/2 possible pairs). For design organizations of even modest size, communication must be restricted to allow time for "work." Hierarchical organizations limit communication to defined links along lines of organization and command, but the need to communicate depends on system concept. As a result, Conway argues design organizations should be "lean and flexible," and further argues in favor of management philosophies that do not equate manpower with productivity. (Conway, 1968)

Architectural Innovation and the Failure of Established Firms. Henderson and Clark (1990) examine the nature of innovation and conclude that changes to product architecture, including some perceived as minor technological improvements, challenge traditional firms by destroying existing knowledge embedded in the firms' organizational and communication structures. They focus on product development and take as their unit of analysis products sold to end users that are designed, engineered and manufactured by a single development organization. They acknowledge the distinction between the product as a whole (the system) and its physically distinct components and argue that successful development requires knowledge of component design concepts and knowledge of product architecture, which defines how individual components are integrated into a coherent system. (Henderson & Clark, 1990) Examining types of innovation, Henderson and Clark distinguish among radical, incremental, and architectural changes. Radical changes are readily recognized because they are "radical," while incremental changes tend to reinforce or enhance existing core competencies.
Architectural changes, on the other hand, are subtle and therefore hard to recognize.
Technical evolution is usually characterized by periods of experimentation followed by the acceptance or emergence of dominant designs that establish basic design decisions not reconsidered in each subsequent design. "Once a dominant design is established, the initial set of components is refined and elaborated, and the progress takes the shape of improvements in the components within the framework of a stable architecture." (Henderson & Clark, 1990) During periods of innovation, firms require the ability to develop knowledge and synthesize designs, but once a dominant design is established, firms stop investing in learning about alternative configurations and instead invest in refinements.
Architectural knowledge becomes embedded in the firms' organizational structure.
Henderson and Clark use the idea of channels, filters and strategies to describe how architectural knowledge becomes embedded. Channels refer to formal and informal reporting and teaming structures and reflect knowledge about architecture since the organization tends to be arranged and connected in the same way as the product and its components. Organizations establish filters to determine what information is important, and tend to eliminate or ignore information irrelevant to the dominant design. Designers develop strategies to solve problems based on experience.
Organizations use channels, filters and strategies to cope with complexity, and their operation becomes implicit within the organization. (Henderson & Clark, 1990) Architectural changes present two problems: the need to recognize them, and the need to apply new knowledge effectively. Such changes put a premium on exploration and integration of new knowledge, and established firms often struggle to adapt. Henderson and Clark examine the challenge of architectural innovation through a study of the development of photolithographic equipment, collecting data during a two-year field study that included interviews with product development teams and reviews of internal records. They conclude that architectural innovations challenge firms because they render useless existing knowledge contained in the organization's structure and are hard to recognize because the established organizational structure filters out critical indicators, delaying recognition. In addition, they argue the effect of architectural innovation depends on how organizations learn, and suggest the "fashion for cross-functional teams and open organizational environments" may be a response to perceptions on the challenges of architectural innovation. (Henderson & Clark, 1990)

Testing the mirroring hypothesis, McCormack, Rusnak and Baldwin find strong support for it, noting loosely-coupled software design organizations produced products with higher degrees of modularity than those developed by tightly-coupled design organizations. They note surprisingly large differences in modularity for products of similar size and function, finding direct dependencies give rise to many more indirect dependencies in tightly-coupled organizations. They further find product architecture is influenced by both functional requirements and contextual factors, a result with important managerial implications given that the search for new designs is constrained by the nature of the organization in which the search occurs. They identify two potential causal mechanisms. On one hand, designs may "evolve to reflect their development environments," with differences in communication between tightly- and loosely-coupled organizations leading to differences in modularity. On the other, differences may result from purposeful choices. For example, loosely-coupled organizations may require highly modular designs to succeed. In practice, both mechanisms likely play a role. (McCormack, Rusnak, & Baldwin, 2008) Managers must understand how decisions on organizational structure affect design choices in non-explicit ways related to the interplay between problem-solving methods and the scope of the design space that must be searched to find an acceptable solution. In addition, managers must recognize the cognitive problem stemming from the critical dependence of system architecture on indirect dependencies that are often difficult to see in simple "black box" representations. (McCormack, Rusnak, & Baldwin, 2008)

Interplay Between Product Architecture and Organizational Structure. Ulrich analyzes the relationship between product architecture and the management of product development. He argues that modular architectures require greater emphasis on system level design to ensure interfaces and associated standards, performance requirements, and acceptance criteria are well defined. Detail design for individual systems or components can proceed independently, with design activities assigned to specialized design teams that have structured but infrequent interaction.
In contrast, integral architectures require greater emphasis on detail design. System level design establishes system-level performance requirements and divides the overall system into a few subsystems. Detailed component design relies on a core team of designers who interact constantly to manage interactions. (Ulrich K., 1995) Modular designs allow a more traditional, bureaucratic organization built around specialized groups with deep experience, but require teams with strong system engineering and planning skills. For well-understood technologies, modular design may dramatically reduce the difficulty of managing product development, and these benefits may outweigh any system performance penalties associated with a modular architecture. However, modular designs can create organizational barriers to innovation. In contrast, integral designs may offer improved performance, but require teams with strong coordination and integration skills. For this reason, integral designs often prove more difficult to manage. (Ulrich, 1995)

Sinha, James, and de Weck (2012) examine how innovations, which change product architecture, affect the product development organization, demonstrating a feedback effect. They assert improvements to product performance or functional features often increase the product's complexity. Recalling Conway's Law, they note that changes to product architecture necessitate changes to organizational structure and work processes, but also note organizational changes often lag technical changes.
Aligning organizational structure with product architecture should improve a product's technical performance and should also provide benefits to business objectives, such as reduced cycle times. To evaluate the impact of innovation on organizational structure, Sinha, James, and de Weck compared two jet engine designs using design structure and multidomain matrix techniques and found the new design required a significant increase in both intra- and inter-team interactions. They observe new connections between functional groups not previously connected improved communication and problem discovery, and note the largest changes occurred in groups outside the traditional "core" disciplines, in groups playing supporting roles. The latter result suggests such support functions provide increasing benefits to overall system performance. (Sinha, James, & de Weck, 2012)

Robust Organizations
Poire and Sabel challenged the notion, implicit in theories of firms, that the accomplishment of complex tasks is somehow centralized and controlled from above, considering this a "convenient fiction." Instead, they argue when firms embark on new projects, the people involved know little about how to accomplish them, so design, innovation and production must occur simultaneously, and in a decentralized manner.
When the environment becomes more ambiguous and uncertain, learning and design must occur in parallel. (Poire & Sabel, 1984) When confronted by ambiguity, organizations compensate by exchanging information, thus the problem of coping with ambiguity becomes a problem of distributed communication, which involves the transmission of information in connected systems. However, organizations are intrinsically hierarchical, and individual members of the organization are limited in the amount of work they can accomplish. Networks are costly in terms of time and energy, so a robust information processing network must balance production (i.e., work) and information redistribution. (Watts, 2003)

Dodds, Watts and Sabel examined the dynamics of information exchange in organizational networks and introduced an organizational network model that incrementally adds links to a hierarchical backbone according to a stochastic rule.
They identified a class of networks, which they call "multiscale networks," that exhibits "ultrarobustness," meaning they simultaneously reduce the likelihood that an individual node will fail because of congestion and the likelihood that the overall network will fail if congestion failures do occur at individual nodes. In addition, they found multiscale networks achieve "ultrarobustness" with the addition of relatively few links, which suggests "ultrarobust organizational networks can be generated in an efficient and scalable manner." (Dodds, Watts, & Sabel, 2003)

Economists have long studied organizational structure, emphasizing efficiency over robustness and focusing on multilevel hierarchies. Hierarchies offer advantages for exercising control, accumulating knowledge, performing decentralized or distributed processing, and making decisions, but these advantages assume the organization's tasks can be easily decomposed into smaller subtasks that can be accomplished independently. Modern organizations "face problems that are not only large and multifaceted but also ambiguous: objectives are specified approximately and typically change on the same time scale as production itself, often in light of knowledge gained through the very process of implementing a solution." Problem solving becomes a collective activity characterized by simultaneous design and collaboration among individuals, teams, and organizations. Under these conditions, the chief concern is not efficiency, achieved by minimizing costly organizational links, but robustness, achieved by preventing individual nodes from being overwhelmed and protecting the overall network from catastrophic collapse when individual failures do occur. (Dodds, Watts, & Sabel, 2003)

Dodds, Watts, and Sabel propose a model (DWS model) of organizational networks with four components: a construction algorithm, a description of the task environment, an algorithm for passing messages, and specific measures for congestion and connectivity robustness. They begin with a hierarchical organizational structure defined by branching ratio, B, and number of levels, L, which yields a network with N = (B^L − 1)/(B − 1) nodes. The construction algorithm then adds m links according to a stochastic rule that governs the probability, P(i, j), that a link will be added between nodes i and j:

P(i, j) ∝ exp(−Dij/λ) exp(−xij/ζ)

The algorithm chooses additional links without replacement. The hierarchical backbone represents the organization's formal structure, while additional links represent teaming arrangements that transmit information. The stochastic rule uses two key parameters: the depth, Dij, of the lowest common ancestor, aij, of nodes i and j, and the organizational distance between nodes i and j, given by xij = di + dj − 1, which is valid for di + dj ≥ 2, where di and dj are the depths of nodes i and j below their lowest common ancestor. The rule also uses two tuning parameters, λ and ζ, which represent characteristic lengths for Dij and xij respectively. Figure 6 identifies and illustrates elements of the stochastic rule.
Limiting values of λ and ζ yield four distinct classes of networks:
• Random networks, R, (λ, ζ) → (∞, ∞), in which links are added uniformly at random, without regard to lowest common ancestor rank or organizational distance;
• Local Team, LT, (λ, ζ) → (∞, 0), in which links are added only between node pairs who share the same immediate supervisor;
• Random Interdivisional, RID, (λ, ζ) → (0, ∞), in which links are added between nodes whose lowest common ancestor is the node at the top of the hierarchy, i.e., between nodes in different major divisions of the hierarchical organization; and
• Core-Periphery, CP, (λ, ζ) → (0, 0), in which links are added only between subordinates of the top node, resulting in a fully connected central core with pure branching hierarchies below.
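The sketch below (Python; an illustrative reading of the construction algorithm as reconstructed above, not the authors' code) builds the hierarchical backbone and draws m team links according to the stochastic rule, so that extreme values of λ and ζ recover the limiting classes just listed while moderate values approximate a multiscale network.

```python
import itertools
import math
import random

def build_hierarchy(B, L):
    """Pure branching hierarchy: parent[node] for N = (B**L - 1)//(B - 1) nodes."""
    N = (B**L - 1) // (B - 1)
    parent, frontier, next_id = {0: None}, [0], 1
    for node in frontier:
        for _ in range(B):
            if next_id >= N:
                return parent
            parent[next_id] = node
            frontier.append(next_id)
            next_id += 1
    return parent

def ancestry(node, parent):
    """Path from a node up to the root, including the node itself."""
    path = [node]
    while parent[path[-1]] is not None:
        path.append(parent[path[-1]])
    return path

def dws_network(B=5, L=4, m=200, lam=0.5, zeta=0.5, seed=0):
    """Hierarchical backbone plus m stochastic team links (illustrative DWS-style rule)."""
    random.seed(seed)
    parent = build_hierarchy(B, L)
    depth = {n: len(ancestry(n, parent)) - 1 for n in parent}
    backbone = {(min(i, parent[i]), max(i, parent[i])) for i in parent if parent[i] is not None}

    candidates, weights = [], []
    for i, j in itertools.combinations(parent, 2):
        if (i, j) in backbone:
            continue
        anc_i, anc_j = ancestry(i, parent), set(ancestry(j, parent))
        lca = next(a for a in anc_i if a in anc_j)            # lowest common ancestor
        D_ij = depth[lca]                                      # depth of the LCA (root = 0)
        d_i, d_j = depth[i] - depth[lca], depth[j] - depth[lca]
        if d_i + d_j < 2:
            continue                                           # rule defined for d_i + d_j >= 2
        x_ij = d_i + d_j - 1                                   # organizational distance (assumed form)
        weights.append(math.exp(-D_ij / lam) * math.exp(-x_ij / zeta))
        candidates.append((i, j))

    # Draw m distinct team links without replacement, weighted by the stochastic rule.
    team_links = set()
    while candidates and len(team_links) < m:
        (idx,) = random.choices(range(len(candidates)), weights=weights)
        team_links.add(candidates.pop(idx))
        weights.pop(idx)
    return backbone, team_links

# Moderate lambda and zeta approximate a multiscale network; very large or
# near-zero values recover the four limiting classes listed above.
backbone, team = dws_network(lam=0.5, zeta=0.5)
print(len(backbone), "backbone links,", len(team), "team links")
```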
Multiscale networks, MS, correspond to moderate values of λ and ζ (i.e., λ = ζ = 0.5) and combine features of the four other network classes. Multiscale network connectivity is not dominated by a single factor or scale. Instead, they show connectivity at multiple scales at the same time, but do not show uniform density at all scales, which distinguishes them from small-world networks. These features improve information exchange compared to hierarchical networks, which tend to put the burden of information sharing on nodes at higher ranks.

Figure 6 - Schematic Illustration of the Construction Algorithm (Dodds, Watts, & Sabel, 2003)

Figure 7 - Classes of Organizational Networks (Dodds, Watts, & Sabel, 2003)

The DWS model represents the task environment based on the rate and distribution of messages exchanged in the process of completing a global task. Stable environments have low rates of information exchange, µ, defined as the average number of messages initiated by a node per time step. The task environment also allows different degrees of task decomposability. Tasks with a high degree of decomposability only require message passing within the same group, that is, nodes with the same immediate supervisor, while tasks that cannot be decomposed require communication with distant nodes. For a given source node, s, transmitting messages at rate µ, the task environment model selects a target node, t, at random by weighting all nodes at organizational distance x using the factor exp(−x/ξ). When ξ = 0, tasks display a high degree of decomposability and all messages are passed locally. When ξ → ∞, tasks are not decomposable and the target is chosen at random. Messages are passed from source to target through intermediaries, with each node in the chain passing the message to an immediate neighbor that has the lowest common ancestor with the target. This method assumes each node has complete information on its own location and the locations of its neighbors, a condition called "pseudoglobal knowledge." (Dodds, Watts, & Sabel, 2003)

The DWS model uses two measures of network robustness, congestion centrality and connectivity robustness. Congestion centrality of an individual node is the probability that any message will be processed by that node. The rate of messages processed by node i is therefore ri = µNρi, where ρi is node i's congestion centrality and µN is the total number of messages initiated per time step. A node will remain free of failure only if its capacity, Ri, exceeds ri. Dodds, Watts, and Sabel argue a robust organizational structure reduces congestion centrality, thus they associate congestion robustness with minimizing the maximum congestion centrality in the network. In their congestion results, the upper contour plot demonstrates multiscale networks correspond to moderate values of λ and ζ, while the lower plot demonstrates that multiscale networks reduce maximum congestion centrality with fewer team links, m, than other networks. Core-periphery networks exhibit lower values of maximum congestion centrality, but also exhibit greater variability and sensitivity to initial conditions. Multiscale networks do not exhibit this volatility, making them a more reliable solution for improving congestion robustness. Figure 9 illustrates the scaling of congestion centrality with network size and demonstrates that congestion centrality continues to decrease as network size increases for multiscale networks, while for other networks, congestion centrality decreases only to a plateau or limiting value. Figure 10 presents connectivity robustness results and shows that random and random interdivisional networks have the best connectivity robustness.
However, multiscale networks have comparable connectivity robustness, and significantly better congestion robustness, making them the overall most robust choice. (Dodds, Watts, & Sabel, 2003) Multiscale networks "display a remarkable combination of properties," including low likelihood of congestion failures over a range of environmental conditions, resilience to disconnection if node failures occur, and ultrarobustness, meaning simultaneous congestion and connectivity robustness not exhibited by any other network class. In addition, multiscale networks achieve these benefits when only a small number of additional team links are added to the hierarchical backbone.
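To make the message-routing rule described above concrete, the following Python sketch routes a single message one hop under the pseudo-global knowledge assumption, handing the message to the neighbor whose lowest common ancestor with the target sits deepest in the hierarchy. The data structures and function names are illustrative assumptions for this report, not code from the DWS paper or from the NetLogo models described later.

    # Illustrative sketch (assumed data structures): route one message a single hop
    # by handing it to the neighbor whose lowest common ancestor (LCA) with the
    # target sits deepest in the hierarchy.

    def ancestors(node, parent):
        """Return the chain of ancestors of a node (including itself) up to the root."""
        chain = [node]
        while node in parent:          # the root has no entry in `parent`
            node = parent[node]
            chain.append(node)
        return chain

    def lca_depth(a, b, parent, depth):
        """Depth (rank) of the lowest common ancestor of nodes a and b."""
        ancestors_a = set(ancestors(a, parent))
        for n in ancestors(b, parent):
            if n in ancestors_a:
                return depth[n]
        return 0                        # the root is always a common ancestor

    def next_hop(current, target, neighbors, parent, depth):
        """Choose the neighbor whose LCA with the target is deepest in the hierarchy."""
        return max(neighbors[current], key=lambda n: lca_depth(n, target, parent, depth))

    # Tiny example: a two-level hierarchy with root 0, managers 1 and 2, workers 3-6.
    parent = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2, 6: 2}
    depth = {0: 0, 1: 1, 2: 1, 3: 2, 4: 2, 5: 2, 6: 2}
    neighbors = {3: [1], 1: [0, 3, 4], 0: [1, 2], 2: [0, 5, 6]}

    print(next_hop(3, 6, neighbors, parent, depth))  # 3 -> 1 (then 1 -> 0 -> 2 -> 6)

In a pure hierarchy this rule simply routes messages up to the common ancestor and back down; team links added by the construction algorithm create shortcuts that relieve higher-ranked nodes of some of that routing burden.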

Figure 10 Connectivity Robustness of Networks (Dodds, Watts, & Sabel, 2003)

The 2011 DARPA report attributed cost and schedule overruns to "systematic mismanagement of the inherent complexity associated with the design of these systems." The report notes complexity is hard to quantify, but argues complexity is related to the number of design parameters and the interactions among them, which are often poorly understood. (Murray, et al., 2011) Building on the 2011 DARPA report's conclusions, two of its contributors, Kaushik Sinha and Oliver de Weck (2013), argue "today's large-scale engineered systems are becoming increasingly complex" due to demands for increased performance and improved lifecycle properties. They report 13 aerospace projects reviewed by the Government Accountability Office (GAO) showed cost growth of 55% or more, and attribute such cost overruns to "our current inability to characterize, quantify and manage complexity." They assert complexity results from the number and variety of elements in a system and their connectivity, and further assert complexity is a "measurable system characteristic."

The 2011 DARPA report is correct when it says the term complexity is "difficult to quantify and often abused." (Murray, et al., 2011) Melanie Mitchell, External Professor at the Santa Fe Institute, notes in her book Complexity: A Guided Tour that no single science or theory of complexity yet exists, despite the many books and articles written on the subject. She identifies common properties of complex systems, including complex collective behaviors, such as self-organization and emergence; signalling and information processing; and adaptation through learning or evolution. Her definition of a complex system incorporates these characteristics: a complex system is one "in which large networks of components with no central control and simple rules of operation give rise to complex collective behavior, sophisticated information processing, and adaptation via learning or evolution." She further describes self-organizing systems as those where organized behaviors arise without an internal or external controller or leader, and emergent behaviors as those that arise from simple rules in unpredictable ways. (Mitchell, 2009) As the title suggests, her book provides a guided tour of the subjects and ideas central to complexity, including dynamics and chaos, information and computation, evolution, genetics, cellular automata, and networks.
Scott Page provides a useful framework for understanding complexity, defining complex systems in terms of four necessary characteristics: diversity, connectedness, interdependence, and adaptation. Diversity refers to the number and variety of different agents or elements in the system. These agents are connected and interdependent, that is, the actions and behaviors of individual agents affect and are affected by those of other agents. Finally, complex systems change over time due to adaptation and selection. Page argues adaptation is the key characteristic separating complex systems from complicated ones. As an example, he says a watch is complicated because it has diverse, connected and interdependent parts, but it is not complex because those parts do not adapt. The watch operates in a fixed and predictable manner and does not exhibit the kinds of behaviors associated with complex systems. (Page, 2009) Mitchell likewise notes that some definitions of complexity omit adaptation, with the term complex adaptive system being used to distinguish systems in which adaptation plays an important role. (Mitchell, 2009)

In fact, much of the confusion about the meaning of complexity stems from two related questions: whether to include adaptation in definitions of complexity and how to differentiate complex systems from those that are merely complicated. In common use, when someone says a thing is "complex," they most often mean it is hard, challenging, or complicated, but as we have already seen, the term complex is also used to describe a variety of unexpected or "complex" system behaviors. Peter Senge (1990) addresses this disparity in The Fifth Discipline, where he distinguishes detail complexity, the usual kind characterized by many variables, from dynamic complexity, in which cause and effect are subtle, and the effects of interventions over time are not obvious. (Senge, 1990) Using Senge's categories, detail complexity would equate to complicated systems, while dynamic complexity would equate to complex systems or complex adaptive systems, those specifically characterized by adaptation. The definition offered by Sinha and de Weck focused on the number of elements and their dependencies and therefore represents a form of detail complexity. Later in their article, they adopt the term "structural complexity" to emphasize they are principally interested in non-adaptive characteristics. This paper will use structural complexity to mean the detail complexity associated with the architecture of a system, characterized by the number and variety of elements in the system and their connections and interdependencies, and complex adaptive system to mean systems additionally characterized by adaptation and selection when the difference is important.
While this distinction seems clear, confusion can also occur when trying to separate the characteristics of complex systems from their behaviors. Mitchell identified self-organization and emergence as behaviors that distinguish complex adaptive systems. (Mitchell, 2009) To this list, Page adds several additional items, including robustness, susceptibility to large events, and non-linear dynamics. (Page, 2009) He argues complex adaptive systems are robust, meaning they can withstand disturbances. Returning to his watch analogy, he notes that a watch, while complicated, will cease to function if elements are removed. In contrast, a complex adaptive system will continue to function because it is adaptive. Paradoxically, complex adaptive systems often produce the kinds of "large events" to which they are robust. Nassim Taleb famously called such events "Black Swans." He defines a Black Swan as an event with three characteristics: "rarity, extreme impact, and retrospective predictability." The third characteristic refers to the human tendency to identify, post facto, explanations that would have made the event predictable. (Taleb, 2010) Complex adaptive systems also exhibit non-linear dynamics such as phase transitions, the sudden change from one condition to another sometimes called a tipping point.

Among the behaviors of complex adaptive systems, emergence is perhaps the most important. Emergence, or emergent behavior, refers to the situation where macro behavior differs from, and cannot be easily predicted by, the micro behaviors of agents in the system. One common type of emergence is self-organization. Thomas Schelling explored how aggregate patterns arise from individuals pursuing their own interests, each "impinged on by only a local fragment of the overall pattern." (Schelling, 2006) Chapter 4 famously demonstrates how a slight and non-malicious preference towards having neighbors of the same race ultimately leads to segregated neighborhoods. Despite a relatively high degree of tolerance at the micro level, the overall result is segregation. Schelling observes: "it is not easy to tell from the aggregate phenomenon just what the motives are behind the individual decisions or how strong they are." (Schelling, 2006) This kind of micro-macro disconnect is central to the idea of emergence in complex adaptive systems.
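A minimal, illustrative sketch of a Schelling-style model helps make the micro-macro disconnect concrete. The grid size, empty fraction, tolerance threshold, and relocation rule below are assumptions chosen for brevity; they are not Schelling's original parameters or any model used in this study.

    # Illustrative Schelling-style segregation sketch: agents of two types occupy a
    # grid and relocate to a random empty cell whenever fewer than 30% of their
    # neighbors share their type.
    import random

    SIZE, EMPTY_FRACTION, TOLERANCE = 20, 0.1, 0.3

    def neighbors(grid, r, c):
        cells = []
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if (dr, dc) != (0, 0):
                    cells.append(grid[(r + dr) % SIZE][(c + dc) % SIZE])
        return [x for x in cells if x is not None]

    def unhappy(grid, r, c):
        me = grid[r][c]
        nbrs = neighbors(grid, r, c)
        return bool(nbrs) and sum(n == me for n in nbrs) / len(nbrs) < TOLERANCE

    # Populate the grid with two agent types and some empty cells.
    cells = [None if random.random() < EMPTY_FRACTION else random.choice("AB")
             for _ in range(SIZE * SIZE)]
    grid = [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

    for step in range(50):
        movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
                  if grid[r][c] is not None and unhappy(grid, r, c)]
        empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
        random.shuffle(movers)
        for r, c in movers:
            if not empties:
                break
            er, ec = empties.pop(random.randrange(len(empties)))
            grid[er][ec], grid[r][c] = grid[r][c], None
            empties.append((r, c))

    # Average fraction of like-type neighbors: values well above the 30% threshold
    # illustrate macro-level segregation emerging from mild individual preferences.
    fractions = [sum(n == grid[r][c] for n in neighbors(grid, r, c)) / len(neighbors(grid, r, c))
                 for r in range(SIZE) for c in range(SIZE)
                 if grid[r][c] is not None and neighbors(grid, r, c)]
    print(round(sum(fractions) / len(fractions), 2))

Even with a tolerance of only 30 percent, runs of this sketch typically end with agents surrounded mostly by their own type, the kind of aggregate outcome that cannot be read directly from the individual rule.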

Complexity in Projects
A similar disconnect can occur in "merely complicated" systems when connections and dependencies are poorly understood. Sinha and de Weck note "a perpetually occurring theme" affecting the design of large engineered systems is the idea that designers create more complex product architectures when they "stretch the limits of efficiency and attempt to design more robust systems." In large engineered systems, such as automobiles, aircraft and ships, the number of connections and dependencies can quickly challenge the limits of human cognition. Even though individual elements of the system may perform in predictable ways, interactions among elements can lead to unexpected macro behaviors. Such behavior may be predictable in theory, but not in any meaningful or practical way. As a result, merely complicated, large engineered systems often exhibit quasi-emergent behavior comparable to complex adaptive systems.

Patanakul et al. (2016) analyzed 39 public projects undertaken in the United States, United Kingdom and Australia and identified six key characteristics affecting project performance. Among these, they identified project complexity as a root cause of poor performance, noting a positive correlation between project size and complexity. They argue project complexity results from both technical challenges and from an array of "ambiguous and uncertain external and internal forces." They identify improper governance structures and poor project management approaches as key factors leading to poor project performance. (Patanakul, Kwak, Zeikael, & Liu, 2016)

Floricel, Michela and Piperca investigated how complexity affects project performance and provide a theoretical basis for understanding the relationship between complexity and project performance. They propose a framework that characterizes project complexity using structural-dynamic and intrinsic-representational dichotomies, as illustrated in Table 1. The structural-dynamic dichotomy corresponds to previous definitions, with structural complexity referring to emergent behaviors that result from poorly understood interactions among system entities and dynamic complexity referring to temporal behaviors that produce sudden changes that can be radical and unpredictable. The intrinsic-representational dichotomy refers to differing perspectives around whether complexity is an intrinsic characteristic of reality or results from our inability to recognize and represent it. The intrinsic-representational distinction implies "planners see complexity aspects as intrinsic in the 'world out there' or as resulting from imperfections in their own representations." Applying these distinctions results in the indicators of project complexity shown in the four quadrants of Table 1. They also identify four categories of strategies to cope with complexity: use of existing knowledge, creation of new knowledge, separated organization, and integrated organization. The first two categories represent a choice between using existing knowledge as captured in databases, models and rules and creating new knowledge through experimentation, simulation and prototyping.
The second two categories represent a choice between "decomposing relevant objects and tasks into stand-alone blocks and allocating the execution to distinct organizations or teams" and increasing "the density and strength of communication ties throughout a project organization by stimulating collaborative work." (Floricel, Michela, & Piperca, 2016) Floricel, Michela and Piperca analyzed data from 81 projects from across a range of sectors and found project complexity negatively affects completion performance as expected. Specifically, they found technical complexity negatively affects project performance, but also found mixed results for other performance aspects, including innovation and value creation. They argue for a "more careful consideration of complexity effects," noting "perceptions of high complexity may generate more intense representation efforts, followed by implementation of special strategies." (Floricel, Michela, & Piperca, 2016)

Measuring Complexity

Several authors have proposed methods or measures to quantify complexity, but there is no single, widely accepted metric, nor even universal agreement that complexity can be measured. Melanie Mitchell surveys different approaches, taking as her point of departure a 2001 paper in which physicist Seth Lloyd proposed three features affecting the complexity of an object or process: the difficulty describing it, the difficulty creating it, and its degree of organization. Lloyd identified forty-odd measures of complexity from dynamical systems theory, thermodynamics, information theory, and computation. (Lloyd, 2001)

Counting Methods. Size is the simplest, and perhaps most commonly used, measure of complexity. For engineered systems, counting methods, which describe characteristics like the number of components in the system, provide insight, but size is generally not a good measure of complexity. For example, the human genome has 250 times more DNA base pairs than the yeast genome, but single-celled amoeba have 250 times more base pairs than humans. Clearly, counting the number of DNA base pairs would tell you little about why humans are more complex than amoeba.

Shannon Entropy. A second commonly proposed measure of complexity is Shannon Entropy, defined as the average information content in a series of messages between source and receiver. Shannon (1948) proposed entropy as a measure to quantify how much information is produced, or at what rate, by an information source.
For a discrete, noiseless channel, the Shannon Entropy, H, is given by

$H = -K \sum_{i=1}^{n} p_i \log_2 p_i$

where K is a constant to account for units of measure and the base 2 logarithm is used to quantify information in binary digits, or bits. Shannon concluded measures of this form "play a central role in information theory as measures of information, choice and uncertainty." (Shannon, 1948) The form of H recalls formulations from statistical mechanics, and is identical to the form proposed by Boltzmann. Shannon entropy has several interesting properties. It tends to zero when the probability of a particular outcome approaches unity, and reaches its maximum when all n possible outcomes are equally likely, each with probability 1/n. Figure 11 presents a plot of Shannon Entropy versus probability, p, for the case of two probabilities, p and (1-p), and demonstrates that Shannon Entropy takes on a maximum value when either condition is equally likely. (Shannon, 1948)

Figure 11 Shannon Entropy (bits) in the case of two possibilities with probabilities p and (1-p) (Shannon, 1948)

Shannon Entropy seems appealing as a measure of complexity because the behavior illustrated in Figure 11 appears consistent with the intuitive sense that maximum complexity should occur somewhere in the transition between order and disorder. However, Mitchell notes it also has drawbacks that challenge its use as a measure of complexity. First, it is not always possible to describe a system as a series of messages. For example, it is not clear how one might use Shannon Entropy to measure the complexity of the human brain. Second, maximum entropy corresponds to a random system, where all conditions are equally likely. Mitchell concludes Shannon Entropy fails to capture the intuitive concept of complexity because the most complex systems are neither the most ordered nor fully random, falling instead somewhere between. (Mitchell, 2009)

Wilhelm and Hollunder (2007) propose a similar information theoretic approach for classifying networks based on the metric medium articulation (MA), a measure of network complexity. Medium articulation combines two information-theoretic quantities, the redundancy, R(A, B), and the mutual information, I(A, B), of the network's flows, both computed from Tij, the normalized flow from node i to node j. (Wilhelm & Hollunder, 2007) They demonstrate that networks with a medium number of links, L ~ n^1.5, where n is the number of nodes in the network, exhibit the highest medium articulation. They consider a network complex if its MA is larger than the MA of a randomized network and also differentiate democracy networks, in which information cycles, from dictatorship networks, in which information flows from sources to sinks. Wilhelm and Hollunder establish a clear criterion for classifying a network as complex, but the distinction appears arbitrary. Interestingly, they investigate food webs and neural networks, two classic examples of complex adaptive systems that exhibit emergent behavior, yet classify them as non-complex, illustrating the challenge of differentiating complex structure from complex behavior. (Wilhelm & Hollunder, 2007)
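As a simple illustration of the behavior plotted in Figure 11, the short Python sketch below computes Shannon entropy for the two-outcome case, taking K = 1 so the result is expressed in bits.

    # Shannon entropy H = -sum(p_i * log2(p_i)) for the two-outcome case; K = 1 (bits).
    from math import log2

    def shannon_entropy(probabilities):
        return -sum(p * log2(p) for p in probabilities if p > 0)

    for p in (0.01, 0.1, 0.5, 0.9, 0.99):
        print(f"p = {p:4}: H = {shannon_entropy([p, 1 - p]):.3f} bits")
    # H approaches 0 as either outcome becomes certain and peaks at 1 bit when p = 0.5,
    # which is the behavior plotted in Figure 11.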
Algorithmic Information Content, Logical and Thermodynamic Depth. As an alternative to simple entropy, Kolmogorov, Chaitin and Solomonoff independently proposed algorithmic information content, the size of the shortest computer program that could generate a complete description of the system, as a measure of complexity. For example, a repeating string of characters, such as "ACACACAC…" could be generated more simply than a random string, such as "ATCTGCAAC…" The first string is said to be compressible, but the second is not and therefore contains more information content. Similar to simple entropy, algorithmic information content allots higher content to random systems than those one would intuitively consider complex.
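The compressibility idea can be demonstrated with an ordinary data compressor. The sketch below uses zlib only as a rough stand-in for the (uncomputable) shortest generating program; the strings are illustrative, not drawn from any system analyzed in this study.

    # Rough illustration of compressibility: a repetitive string shrinks far more
    # than a random string of the same length.
    import random
    import zlib

    repetitive = "AC" * 500                                    # "ACACAC..." pattern
    random_str = "".join(random.choice("ACGT") for _ in range(1000))

    for label, s in [("repetitive", repetitive), ("random", random_str)]:
        compressed = len(zlib.compress(s.encode()))
        print(f"{label:10s}: {len(s)} chars -> {compressed} bytes compressed")
    # The repetitive string compresses to a small fraction of its length, while the
    # random string barely compresses, i.e., it carries higher algorithmic
    # information content despite being intuitively no more "complex."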
Physicist Murray Gell-Mann proposed a similar measure, "effective complexity," that characterizes a system in terms of regularities and randomness. For example, the first string above has simple regularity, but the second, random string has none. To calculate effective complexity, one must find the best description of the regularities; effective complexity is then the information content of the regularities.
A related pair of complexity measures, logical depth and thermodynamic depth, relate complexity of a system to the difficulty of creating it. Such methods equate complexity with either the amount of information processed, or the thermodynamic or information resources required to create it. (Mitchell, 2009) Measures of this sort hold intuitive and theoretical appeal, but they tend to be arbitrary in the sense that they depend on subjective descriptions of a system. In addition, they are more a process to characterize a system than a measure in the truly quantitative sense.
Statistical Complexity. James Crutchfield and Karl Young defined statistical complexity as "the minimum amount of information about the past behavior of a system that is needed to optimally predict the statistical behavior of the system in the future." Like Shannon Entropy, statistical complexity quantifies system behavior in terms of discrete messages. To predict future behavior, a model of the system is created such that the behavior of the model is statistically indistinguishable from the system's behavior. Statistical complexity matches intuitive expectations in that it is low for ordered and random systems and high for those in between. However, like other measures already discussed, it is difficult to apply if the system cannot be easily represented as a message source. Still, investigators have successfully measured statistical complexity of complicated crystals and other phenomena. (Mitchell, 2009)

Fractal Dimension. Unlike previous measures that rely on concepts from information or computation theory, fractal dimension relies on concepts from dynamical systems theory. French mathematician Benoit Mandelbrot coined the term fractal to describe real-world objects, such as coastlines, trees, and snowflakes, with self-similar structures. In general, a fractal is a geometric shape that has the same structure at every scale of observation. For example, coastlines have similar, rugged structure at all scales of observation. Mathematicians have proposed numerous fractal models. For example, the Koch curve is created by application of the following rule: starting with a straight line, at each step, replace the middle third of the line with two sides of a triangle. Figure 13 illustrates the result. Fractals challenge traditional notions of spatial dimension. For example, if you repeatedly bisect a line, you get 2^n smaller copies after n steps. In general, if each level of magnification is made up of N copies of the previous level, each reduced in size by a factor x, the dimension is log N / log x. For the bisected line, the dimension is 1, but for the Koch curve, which consists of four copies each one-third the size of the previous level, the dimension is log 4 / log 3 ≈ 1.26. To summarize, "fractal dimension quantifies the number of copies of a self-similar object at each level of magnification of that object. Equivalently, fractal dimension quantifies how the total size (or area, or volume) of an object will change as the magnification level changes." Fractal dimension finds appeal as a measure of complexity because it captures the idea that complex systems have interesting details at all levels of observation, and it provides a way to quantify how interesting that detail is. However, level of detail is only one interesting aspect of complex systems, so fractal dimension is only a partial measure of complexity. (Mitchell, 2009)

Degree of Hierarchy. Herbert Simon argued hierarchy is one of the "central structural schemes" of complex systems, noting "the frequency with which complexity takes the form of hierarchy-the complex system being composed of subsystems that in turn have their own subsystems, and so on." (Simon, 1996) Simon identified a number of social, biological, physical and symbolic systems with hierarchic structures.
For example, biological systems are often described using cells as the fundamental building block, with cells organized into tissues, tissues into organs, organs into systems, and so forth. The cell is likewise composed of structured subsystems, such as the nucleus, cell membrane, and mitochondria.
Simon examines the dynamics of hierarchical systems and identifies a key property of hierarchic systems: near decomposability, defining nearly decomposable systems as ones "in which the interactions among the subsystems are weak but not negligible." (Simon, 1996) Hierarchic representations provide information about the relationships among the major elements of a system, as well as information about the relationships among the parts that make up each element. Information about relationships between parts in different elements is lost, but this loss of information is not significant because elements interact in an aggregate manner. Hierarchic representations also enable our ability to recognize, describe, and comprehend complex systems. (Simon, 1996) Mitchell notes several authors have explored the use of hierarchy to measure complexity. For example, Daniel McShea proposed to measure the complexity of biological organisms using a hierarchic measure based on nestedness, the idea that one entity contains as its parts entities at the next lower level. He showed organisms become more hierarchic as they evolve, but noted the challenge of objectively determining what constitutes a part or level. (Mitchell, 2009)

Analysis and Critique. Mitchell notes the large number of complexity measures that have been proposed and concludes "each of these measures captures something about our notion of complexity but all have theoretical and practical limitations, and have so far rarely been useful for characterizing any real-world systems." Like the idea of complexity itself, the variety of measures suggests complexity has many different dimensions not readily captured by a single metric. (Mitchell, 2009) Feldman and Crutchfield (1998) reached a similar conclusion in their review of several measures of statistical complexity. They note many functions satisfy the intuitive criterion for measures of complexity, namely that they vanish at the extremes of order and disorder, and conclude this property is not sufficient. They then suggest two criteria for measures of complexity. First, the measure must have a clear interpretation, that is, it must specify what precisely is being measured. Second, it must consider motivation and define how it will be used and what questions it will answer. Many individual measures of complexity meet these criteria, but no single measure fully captures the nature or behavior of complex systems. (Feldman & Crutchfield, 1998) Vincent Vesterby (2007) offers a stronger critique of efforts to measure complexity, arguing no current method is up to the task because the nature and magnitude of complexity render quantitative and qualitative methods inadequate. He further argues methods that simplify will fail because they "ignore what complexity is." Since current measures are inadequate, he recommends an approach that focuses effort on understanding complex systems rather than measuring them, suggesting that knowledge about how a system operates simplifies the task of measurement, making it more practical and specific to attributes like prediction and management. (Vesterby, 2007)

Sinha and de Weck propose a quantitative measure of structural complexity, noting it has the same functional form as measures used in the quantum mechanical analysis of molecular systems, where the system's Hamiltonian (total energy) is the matrix of interest. Using this analogy, they propose topological complexity is captured by the graph or matrix energy of the adjacency matrix, A, representing the system architecture.
The adjacency matrix of a network, here the network defined by the system architecture, is the n x n matrix, A, where Aij = 1 when nodes i and j are connected and 0 otherwise. The associated matrix energy of the network is defined by the sum of the singular values of the adjacency matrix, obtained from singular value decomposition:

$E(A) = \sum_{i=1}^{n} \sigma_i$

where σi represents the i-th singular value. Sinha and de Weck note the matrix energy represents the "intricateness" of the structural dependencies among system components. They also note topological complexity increases as architecture moves from centralized to distributed structures. Distributed architectures cannot be reduced easily, but may offer improved performance and robustness. The full form of the proposed measure of structural complexity is given by

$C = \sum_{i=1}^{n} \alpha_i + \left( \sum_{i=1}^{n} \sum_{j=1}^{n} \beta_{ij} A_{ij} \right) \gamma E(A)$

where αi estimates the complexity of individual components, βij estimates the complexity of each component-to-component interface, and γ ~ 1/n is a scaling factor for graph energy. Applying the measure, Sinha and de Weck found supporting systems, such as lubrication and engine control, were principal contributors to topological complexity, with corresponding impacts to system integration efforts.
They conclude simple components can have a greater effect than complex components due to their impact on overall system architecture.
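To show how the measure is computed, the following Python sketch evaluates the matrix energy and structural complexity for a small, made-up architecture. The adjacency matrix and the component and interface complexity estimates are illustrative assumptions, not data from Sinha and de Weck.

    # Sketch of the structural complexity measure: assumed component complexities
    # (alpha), interface complexities (beta), and a small illustrative adjacency matrix.
    import numpy as np

    A = np.array([[0, 1, 1, 0],        # adjacency matrix of a 4-component architecture
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]])
    alpha = np.array([1.0, 2.0, 1.5, 0.5])      # complexity of each component (assumed)
    beta = 0.3 * np.ones_like(A, dtype=float)   # complexity of each interface (assumed)

    n = A.shape[0]
    graph_energy = np.linalg.svd(A, compute_uv=False).sum()  # E(A) = sum of singular values
    gamma = 1.0 / n                                           # scaling factor for graph energy

    C = alpha.sum() + (beta * A).sum() * gamma * graph_energy
    print(f"graph energy E(A) = {graph_energy:.3f}, structural complexity C = {C:.3f}")

In practice the αi and βij terms would come from expert assessment or historical data, which is precisely the limitation noted below.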
Sinha and de Weck note the need for empirical validation of their proposed measure of structural complexity and also note the lack of direct measures of complexity. As a result, validation must rely on indirect measures or observables, such as development cost. They hypothesize that development cost should increase super-linearly with structural complexity and test their hypothesis using literature data for simple and complex systems. They demonstrate that development cost follows a power-law relationship, Y = aX^b, but caution their findings are based on limited data.
They also conducted simple experiments in which human subjects were asked to build ball and stick models of molecules and found assembly time, a surrogate for cost, followed a power law relationship.
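A power-law relationship of this form is typically estimated by a linear fit in log-log space. The sketch below illustrates the technique; the data points are synthetic placeholders invented for the example, not the values used by Sinha and de Weck or collected in this study.

    # Illustrative log-log fit of Y = a * X^b (development cost vs. structural
    # complexity); the data are synthetic placeholders.
    import numpy as np

    X = np.array([5, 12, 30, 80, 150, 400])     # structural complexity (synthetic)
    Y = np.array([2, 7, 25, 90, 210, 800])      # development cost (synthetic)

    b, log_a = np.polyfit(np.log(X), np.log(Y), 1)  # fit log Y = b * log X + log a
    print(f"a = {np.exp(log_a):.3f}, b = {b:.3f}")  # b > 1 indicates super-linear growth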
Sinha and de Weck also explored the factors affecting the distribution of structural complexity and found modular architectures do not necessarily reduce structural complexity, contrary to conventional wisdom. In fact, structural complexity can increase even as modularity increases. They conclude "knowledge of overall system architecture is absolutely critical to be able to quantify and track the complexity during the system development activity." Taking development of the Boeing 787 Dreamliner as a case study, they note Boeing outsourced much of the development work and lost control of the development process. As a result, Boeing failed to understand total structural complexity as the system evolved. In order to successfully manage the development of large engineered systems, design teams must track evolving architectures to ensure subsystem complexities remain within sustainable limits.

The measure of structural complexity proposed by Sinha and de Weck is useful because it provides a logical framework for understanding structural complexity and the role of individual components, interfaces, and system architecture. In addition, they demonstrate, based on preliminary data, that development cost should follow a power law relationship with structural complexity, and that increasing modularity may not decrease structural complexity. However, the lack of objective ways to estimate component and interface complexity, and the reliance on expert assessments for practical applications, may limit the measure's utility.

Improving Project Performance
Zhu and Mostafavi (2017) propose a framework for understanding complexity and managing emergence in projects. Drawing on contingency theory, they argue "the efficiency of a project is contingent on congruence between the project system's capability to cope with complexity (i.e., project characteristics) and the level of complexity." They characterize complexity using the framework proposed by Senge, that is, in terms of detail and dynamic complexity, and identify three capacities that improve the project system's ability to cope with complexity: absorptive capacity, which relates to the ability to mitigate the effects of disruptions in advance; adaptive capacity, which relates to the ability to react to disruptions; and restorative capacity, which relates to the ability to recover from disruptions. (Zhu & Mostafavi, 2017)

Reinersten (2009) claims "the dominant paradigm for managing product development is fundamentally wrong" and recommends a new paradigm that aims to achieve flow in the product development process similar to that achieved in lean manufacturing. He identifies twelve problems with the current "product development orthodoxy:"

1. Use of the wrong economic objectives, that is, a focus on proxy measures, like cycle times, rather than life-cycle profits;
2. Failure to recognize the importance of or measure queues, which lead to high volumes of in-process design "inventory;"
3. Inappropriate focus on efficiency, which leads to processes loaded to unreasonable utilization factors;
4. Failure to understand the role and value of variability, a practice that impedes innovation;
5. Overemphasis on conformance to plans at the expense of understanding new information;
6. Processes that institutionalize large batch sizes, such as phase-gate processes;
7. Failure to use cadence and synchronization;
8. Managing to timelines instead of managing queues, and failing to appreciate the implications and effects of variability;
9. Absence of limitations on work in process (WIP), as seen in lean manufacturing;
10. Inflexibility of resources, people and processes, which hinders responsiveness to variability;
11. Failure to appreciate the cost of delay; and
12. Centralized control built on centralized information systems.
Drawing on concepts and ideas from a number of sources, including lean manufacturing, economics, queueing theory, statistics, control engineering, and military doctrine, he identifies 175 principles to address these problems. Reinersten identifies queues as the most important factor causing poor product development performance, noting that queue size grows non-linearly with capacity utilization (the curve in Figure 14 is much steeper at 90% utilization than at 50%; for an M/M/1 queue, for example, the expected number of items in the system is ρ/(1-ρ), which rises from 1 at 50% utilization to 9 at 90%). Turning to the economics of queues, Reinersten observes one can trade queue size against capacity using the theoretical optimum capacity for an M/M/1/∞ queue, which depends on the ratio of the cost of capacity, CC, to the cost of delay, CD. He recommends several principles for managing queues, chief among them two imperatives: first, to monitor and control queue size rather than capacity utilization, because neither demand nor capacity can usually be estimated accurately in product development; and second, to take prompt action to resolve high queue states because they are so damaging.

Design structure matrices (DSM) provide another tool for managing the complexity of design efforts. Figure 15 shows a simple DSM and illustrates how DSM represent interactions between generic system elements. DSM find application in the design of complex engineered systems, and can be used to model system architecture, organizational structure, and process arrangement. (Browning, 2001) DSM can be classified into four types within three main categories. The first category includes static architecture models, usually used to represent products or artifacts whose components interact with one another, or organizations whose members interact with each other. To reduce potential confusion, this report will use product or system architecture to refer to the physical arrangement of components in a product or complex engineered system, and organizational structure to refer to the arrangement of personnel within an organization, such as the engineering and design team responsible for the design and development of a product or engineered system.
The second category includes temporal flow models that represent processes where system elements change or interact over time. The third category includes multidomain matrices (MDM) that combine multiple DSM, such as product architecture and organizational structure, in a single matrix. (Eppinger & Browning, 2012) The following paragraphs examine each type of DSM in greater detail.
Product Architecture DSM Models. DSM aid both the design of system architecture, the "down" side of the Systems Engineering V, and the integration of components and subsystems, the "up" side of the V. (Eppinger & Browning, 2012) Figure 16 shows a product architecture DSM for a climate control system. The process for creating a product or system architecture DSM involves decomposing the system into subsystems or components; laying out the elements on a square DSM, grouping subsystems or modules when appropriate; and identifying and marking interactions among elements.
When modeling system architecture with DSM, the user should consider several factors. First, the limits of the system may be poorly understood, so system boundaries should include the relevant components and interactions to be modeled.
Second, the user must clearly identify the types of relationships and interactions relevant to the system, such as physical adjacency or spatial arrangement, material or energy flows, and information exchange. (Browning, 2001)

Building the DSM provides insight, but the real benefit comes from the analysis of system architecture using clustering techniques that reorder or group system elements according to some objective, often related to the number and strength of interactions. In that regard, clustering is a type of assignment problem that seeks to optimize the allocation of N elements to M clusters using objective functions that trade off competing goals of minimizing the number or strength of interactions outside clusters against cluster size. For example, an objective function of the form

$\mathrm{Obj} = \alpha \sum_{i} C_i + \beta I_0$

could be used, where α and β are constants, Ci is the size of cluster i, and I0 is the number of interactions outside a cluster. In addition, clustering techniques generally try to choose modules that are as independent as possible, although complex engineered systems often exhibit both modular and integrative subsystems. (Eppinger & Browning, 2012) Figure 17 shows a clustered DSM for the climate control system shown in Figure 16.

Figure 17 Clustered DSM for a Climate Control System (Browning, 2001)

Eppinger and Browning argue product architecture DSM provide effective representations of components and their relationships, illustrating decomposition and interactions. Clustering analysis identifies alternative groupings of components into modules, improving understanding and facilitating innovation. DSM are particularly helpful for large systems where system complexity "makes it impossible for any single individual to have a complete, detailed, and accurate mental model of the entire system." (Eppinger & Browning, 2012)

Organizational Structure DSM Models. Organizational structure DSM capture organizational elements, such as individuals, groups or departments, as rows and columns, and interactions and communication pathways in off-diagonal cells. The process for creating an organizational structure DSM involves decomposing the organization into elemental units, such as departments, divisions or individuals; laying out the DSM with organizational elements along the rows and columns, grouped as higher-level elements if appropriate; and identifying and marking actual or desired interactions between elements in the off-diagonal cells. The considerations for organizational structure DSM are similar to those for product architecture DSM. (Eppinger & Browning, 2012) Similar to product architecture DSM, the analysis of organizational structure DSM relies on clustering techniques that typically focus on grouping people with the greatest need to communicate, since the need to communicate often suggests the application of integrative mechanisms, like co-location, meetings, or distribution lists.
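The following Python sketch evaluates an objective of the form just described for two candidate clusterings of a small DSM. The DSM, the weights, and the clusterings are illustrative assumptions, not data from Eppinger and Browning or from this study.

    # Evaluate a simple clustering objective: alpha * (sum of cluster sizes) +
    # beta * (interactions left outside every cluster).
    import numpy as np

    DSM = np.array([[0, 1, 1, 0, 0],    # symmetric DSM: 1 marks an interaction
                    [1, 0, 1, 0, 0],
                    [1, 1, 0, 0, 1],
                    [0, 0, 0, 0, 1],
                    [0, 0, 1, 1, 0]])

    def objective(dsm, clusters, alpha=1.0, beta=10.0):
        """alpha * sum of cluster sizes + beta * interactions outside any cluster."""
        size_penalty = alpha * sum(len(c) for c in clusters)
        outside = 0
        for i in range(len(dsm)):
            for j in range(i + 1, len(dsm)):
                if dsm[i][j] and not any(i in c and j in c for c in clusters):
                    outside += 1
        return size_penalty + beta * outside

    print(objective(DSM, [{0, 1, 2}, {3, 4}]))   # elements 0-2 and 3-4 grouped: 5 + 10*1 = 15
    print(objective(DSM, [{0, 1}, {2, 3, 4}]))   # alternative grouping: 5 + 10*2 = 25

A clustering algorithm would search over many such assignments, accepting moves that lower the objective, which is how the reordered DSM of Figure 17 is typically produced.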
Analysis of an organizational DSM may explore several scenarios and trade off advantages and disadvantages of different potential structures, including both political and practical considerations related to group size, location, or composition.
Organizational structure DSM provide intuitive visualization and facilitate discussions around the flow of information, while clustering analysis generates alternative perspectives to improve understanding, facilitate innovation, and inform use of integrative mechanisms. (Eppinger & Browning, 2012) Figure 18 shows an original and clustered DSM for an automobile engine product development team.

Figure 18 Organizational Structure DSM for an Automobile Engine Product Development Team:
Original and Clustered (Browning, 2001)

Process Architecture DSM and Multi-Domain Matrices. DSM can also be used to model temporal processes. Such DSM represent the activities in a process and their interactions, and are known by many names: process architecture DSM, process DSM, process flow DSM, activity-based DSM, and task-based DSM. The process for building a process flow DSM involves decomposing the process into activities; laying out the DSM with activities on the rows and columns, grouped into subprocesses or states, if appropriate; and identifying interactions between activities. A unique feature of process flow DSM is the use of markings and designators to represent one of four fundamental types of relationships: sequential activities; parallel activities; coupled activities, meaning those that must converge to a mutually satisfactory result; and conditional activities that depend on upstream activities.

Wilensky and Rand (2015) describe agent-based modeling (ABM) as a computational approach in which phenomena are modeled "in terms of agents and their interactions," and argue ABM represents a transformational technology that enables better understanding of familiar topics while facilitating exploration of previously unexplored topics. Taking predator-prey interactions as an example, they note such interactions can be modeled using a system of coupled differential equations. Though relatively straightforward to solve, the equation-based approach provides no insight into individual behavior, and embeds a simplifying assumption that agents are sufficiently homogeneous to permit use of average quantities. Agent-based representations accommodate heterogeneity and may be simpler to understand: "agent-based representations are easier to understand than mathematical representations of the same phenomenon," because agent-based models are built from individual objects (i.e., agents) and simple behavior rules. (Wilensky & Rand, 2015)

Wilensky and Rand also explore the challenge of understanding complex systems and emergence, noting the need for both integrative and differential understanding. Integrative understanding relates to discerning aggregate patterns when individual behaviors are known, while differential understanding relates to discerning individual behaviors when the aggregate pattern is known. Agent-based modeling addresses both challenges because it provides a way to explore how the actions and interactions of individual agents affect aggregate system behavior. (Wilensky & Rand, 2015)

Description of Agent-Based Models. Agent-based models are based on the idea that many phenomena can be represented by agents, their environment, and rules governing agent-to-agent and agent-to-environment interactions. Wilensky and Rand offer criteria for identifying the systems where agent-based models provide the greatest benefit:

• Systems with a moderate number (tens to millions) of interacting agents;
• Systems comprised of heterogeneous agents;
• Systems characterized by complex, history- or property-dependent, and local agent-to-agent interactions;
• Systems involving rich environments, such as social networks or geographical systems;
• Systems that exhibit time-dependent, i.e., step-wise, behavior; and
• Systems where agents adapt over time, such that future behavior depends on past behavior and agents change behavior based on experience.
Among these criteria, time dependence is considered a necessary condition, while adaptation is considered a sufficient condition. Virtually all agent-based models evaluate system behavior in discrete time steps, making time-dependence a necessary condition. Furthermore, few other approaches accommodate adaptive agents, making adaptation a sufficient condition for using agent-based models. (Wilensky & Rand, 2015) Despite their power, however, agent-based models have important limitations.
First, agent-based models can be computationally expensive, requiring extensive computational power to simulate many individual agents. Second, the modeler must use judgment when deciding which variables to model, so must have some knowledge about how the system operates. Finally, most agent-based models require some knowledge of individual agent behaviors. (Wilensky & Rand, 2015)

Creating Agent-Based Models. Wilensky and Rand explore issues related to designing, building, and examining agent-based models. They identify two major categories of models: phenomena-based and exploratory. Phenomena-based models start with a phenomenon that exhibits a known, or reference, pattern, and then create a model, a set of agents and the rules governing their behavior, that generates the reference pattern. Exploratory models start with agents and their behaviors and then explore the patterns that emerge. A related feature of modeling methodology is the degree to which the model seeks to answer a specific question. At one extreme, one might formulate a specific research question, such as "How do organizations effectively manage the design of complex engineered systems?" At the other extreme, one might start with only a desire to model organizational structures. Another dimension of agent-based modeling is the relationship between the conceptual model and the code written to implement it. In some cases, a top-down approach is appropriate. In top-down models, the conceptual model is fully specified (agents, environment, and rules governing behavior and interactions) before writing any code to implement it. In other cases, a bottom-up approach is better. In bottom-up models, the conceptual model and code evolve together.
Wilensky and Rand identify an essential design principle for agent-based models: "start simple and build toward the question you want to answer." An agent-based model should start with the simplest set of agents and rules possible, and should avoid adding anything that detracts from answering the question motivating the model.
To help modelers implement this principle, Wilensky and Rand identify seven critical design choices. The first choice involves identifying the question to be answered.
As noted before, models and questions often co-evolve, but it is important to confirm the phenomenon and system are suited to agent-based modeling using the guidelines given before. The second choice involves identifying the agents to be used in the model. Since every entity can be subdivided into several smaller entities, it is important to match the granularity or scale of agents to the temporal scale of interest. In addition, the need for any proto-agents should be identified. Proto-agents do not have their own rules or behaviors. Instead, they take on characteristics from a global agent type. A later choice involves selecting the measures required to answer the question of interest. It is often wise to limit the number of measures used to prevent data overload. (Wilensky & Rand, 2015)

Analyzing Agent-Based Models. Agent-based models present unique analysis and interpretation challenges compared to equation-based models because agent-based models allow users to control many agent characteristics, which often results in large numbers of inputs and outputs. While this flexibility is one feature giving agent-based models their power, it also creates concerns. For example, using more inputs means there are more parameters to validate against real-world data, while more outputs can lead to data overload and make it difficult for users to discern clear patterns of behavior, since modelers must often examine many different relationships between inputs and outputs to identify key relationships. Wilensky and Rand identify four classes of data commonly associated with agent-based models: statistical, graphical, network, and spatial. Statistical results include standard measures like mean, variance, and median. An important consideration for analyzing agent-based models is the need for multiple runs and statistical analysis of results because agents commonly exhibit stochastic behavior. Graphical results present outputs in the form of plots and graphs, rendering them more understandable. Network measures, like clustering coefficient and path length, are useful for network-based models. Finally, spatial measures help identify patterns in one-, two-, or higher-dimensional space. (Wilensky & Rand, 2015)

Verification and Validation. George Box famously said, "all models are wrong, but some are useful." Verification and validation evaluate the accuracy of models to ensure they adequately represent real-world behavior and provide outputs useful to the model's user. Verification confirms the implemented model corresponds to the conceptual model, to ensure that you built the model you meant, while validation confirms that the implemented model explains and corresponds to real-world phenomena, that you built the "right" model. Figure 19 illustrates these relationships. Verification and validation increase confidence in the "correctness and exploratory power of both the conceptual and implemented models." (Wilensky & Rand, 2015)

Figure 19 Relationship between model verification and validation (Rand, 2016)

Rand and Rust propose guidelines for rigorous verification and validation of agent-based models, arguing both activities should be performed to the extent necessary to convince the target audience of the model's accuracy.

METHODOLOGY Overall Approach and Research Questions
This study will investigate the effectiveness of different organizational structures (organizational networks) at designing complex engineered systems. Specifically, it will evaluate and compare the ability of different organizational networks to deliver design products and share information in the presence of complexity using agent-based modeling (ABM). A phased, building block approach will be followed. Phase 1 will examine information exchange models and implement the model of information exchange proposed by Dodds, Watts and Sabel to confirm the model can be successfully implemented using ABM. Phase 2 will examine artifact models and extend the information exchange model to include the processing of work products, termed artifacts.
Phase 3 will examine smart team models, which include alternate network construction algorithms and alternative methods for processing work products. Phase 4 will apply information exchange and artifact models to a real-world organization. The following research questions will be answered:

• How do random, multiscale, military staff and matrix organizational networks perform in the information exchange and artifact task environments, and how does increasing the degree of complexity affect performance?
• How do military staff and matrix organizational networks (real organizations) perform compared to one another and to random and multiscale networks (ideal organizations)? How does increasing degree of complexity affect performance and which structure is preferred for organizations that design complex engineered systems?
• How can organizational networks be modified to improve performance?

Organizational Structures and Networks Examined
Organizational structure defines how people work together to accomplish objectives and create value and includes the hierarchical structure that defines an organization's functional decomposition, lines of authority and responsibility, and formal reporting relationships, as well as the teaming structures that cross horizontal and vertical lines and exist to facilitate communication, problem solving, and task accomplishment.
Given the well documented relationship between product architecture and the structure of the product development organization, it is logical and appropriate to examine organizational structure for causes and factors explaining why design organizations sometimes fail to effectively manage the design of complex engineered systems.
Dodds, Watts, and Sabel identified a class of networks, multiscale networks, that simultaneously reduce the likelihood an individual node will fail because of congestion and the likelihood the overall network will fail if congestion failures do occur at individual nodes. (Dodds, Watts, & Sabel, 2003) Because of their robustness to failure, multiscale networks represent an ideal type and provide a basis for comparing and evaluating real-world organizational networks. Random networks represent another ideal type and likewise provide a basis for evaluating and comparing real-world organizational networks. This study will compare the effectiveness of matrix and military staff organizational networks to multiscale and random networks in order to understand the factors affecting the ability of design organizations to manage the design of complex engineered systems, and to identify ways performance can be improved.

Agent-Based Modeling and NetLogo
Agent-based models represent phenomena using agents, their environment, and rules governing agent-to-agent and agent-to-environment interactions. Organizational networks satisfy the criteria for selecting ABM proposed by Wilensky and Rand, thus ABM is an appropriate tool for evaluating the effectiveness of organizational networks.

The phase one model implements the DWS construction algorithm, which begins with a hierarchical backbone and then adds m additional team links according to a stochastic rule in which the probability that a new link forms between two nodes, i and j, P(i,j), depends on the organizational distance between the two nodes, xij, and the rank of the two nodes' lowest common ancestor, Dij. The model employs two tunable parameters, λ and ζ, which correspond to ancestor rank and organizational distance respectively. The resulting stochastic rule is

$P(i,j) \propto e^{-D_{ij}/\lambda} \, e^{-x_{ij}/\zeta}$

In addition to the construction algorithm, the DWS model includes a description of the task environment, a method of information exchange, and a measure of performance. (Dodds, Watts, & Sabel, 2003)

The DWS model describes the task environment in terms of the rate and distribution of messages to be exchanged between individual nodes in the organizational network. The information exchange rate, µ, is the average number of messages originated by each node at each time step, and µN is the total number of messages originated across the network at each time step. Message routing considers task decomposability. Tasks that are nearly decomposable require communication only within the same team, meaning nodes with the same immediate superior, whereas tasks that are not decomposable require communication across the network. For a given source node, s, a target node, t, is selected based on the distance between the two nodes, xst, using the following stochastic rule:

$P(s,t) \propto e^{-x_{st}/\xi}$

When ξ = 0, local dependencies prevail; as ξ → ∞, global dependencies prevail. (Dodds, Watts, & Sabel, 2003) Messages pass from source to target through a chain of intermediate nodes.
During each time step, nodes pass messages they initiate or receive to an immediate neighbor with the lowest common ancestor with the target node. This method reflects an assumption termed "pseudo-global knowledge," which assumes individual nodes understand their own locations and the locations of their immediate neighbors, and have general information about nodes beyond their immediate neighborhood. (Dodds, Watts, & Sabel, 2003)

The DWS model adopts congestion centrality as a measure of network performance. Assuming each node can process up to Ri messages per time step, an organizational network will, on average, remain free of congestion when $R_i > r_i = \mu N \rho_i$, where ρi, the congestion centrality of node i, is the probability that any given message will be processed by node i. Maximum congestion centrality across the organizational network, ρmax, is a measure of robustness to congestion failure. (Dodds, Watts, & Sabel, 2003)

Phase one extends the DWS model to matrix and military staff organizational networks by altering the network construction algorithm. These networks begin with the same underlying hierarchical network but employ different methods to add team links.
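For reference, the following Python sketch implements the stochastic link-addition rule given above, weighting candidate node pairs by the depth of their lowest common ancestor and their organizational distance. The data structures and helper names are illustrative assumptions, not the study's NetLogo code.

    # Sketch of the stochastic team-link rule: link probability decays with LCA depth
    # (parameter lam) and with organizational distance (parameter zeta).
    import math
    import random

    def link_weight(lca_depth, org_distance, lam, zeta):
        return math.exp(-lca_depth / lam) * math.exp(-org_distance / zeta)

    def add_team_link(pairs, lam, zeta):
        """pairs: list of (i, j, lca_depth, org_distance) candidate node pairs."""
        weights = [link_weight(d, x, lam, zeta) for (_, _, d, x) in pairs]
        i, j, _, _ = random.choices(pairs, weights=weights, k=1)[0]
        return (i, j)

    # Small lambda favors pairs whose common ancestor is near the top (interdivisional
    # links); small zeta favors nearby pairs (local team links); large values of both
    # recover uniformly random link addition.
    candidates = [(3, 4, 1, 2), (3, 6, 0, 4), (5, 6, 1, 2)]
    print(add_team_link(candidates, lam=0.5, zeta=100.0))   # tends to pick (3, 6)

Matrix and military staff networks can be seen as replacing this probabilistic rule with deterministic, doctrine-driven methods of placing the same m team links.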
The task environment, method of information exchange, and use of maximum congestion centrality to measure network performance remain unchanged. Phase one further extends the DWS model to account for the effect of complexity on how information is exchanged. Complexity affects the decomposability of tasks performed by the design organization. When the system being designed is more complex, tasks tend to be less decomposable because the system has more interactions, which are often poorly understood. As a result, tasks require greater cross-functional collaboration. Conversely, when the system being designed is less complex, tasks tend to be decomposable and require little cross-functional collaboration.
The model implements the effect of complexity by adding a complexity input that allows the user to rate complexity on a scale of 1 to 10. When the model creates new messages, it compares a random number to the complexity rating. If the random number is less than the complexity rating, the situation is considered complex and the target node is selected at random from other nodes across the hierarchy. If the random number is greater than or equal to the complexity rating, the situation is considered routine and the target node is selected at random from other nodes in the same major branch of the hierarchy (i.e., the same functional organization). Although the complexity rating employs a numerical scale, it is meant to provide a qualitative, not quantitative, representation of complexity. Recalling the task environment of the DWS model, a high complexity rating corresponds to global dependencies, ξ → ∞, while a low complexity rating corresponds to local dependencies, ξ → 0.

In the artifact model, during a given time step, nodes process a number of artifacts and requests for information (RFIs) up to their capacity. If a given node has only RFIs or artifacts available, it processes them, but if both are available, it decides which to process by comparing a random number to an artifact preference rating in the range [0, 1]. When the artifact preference rating is higher, it is more likely the node will select an artifact than an RFI.
An artifact preference rating of 0.5 represents a "coin flip," with the node choosing RFIs half the time, and artifacts the other half.
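The two stochastic decisions just described can be summarized in a short Python sketch: one function selects a target node based on the complexity rating, and one selects the next item of work based on the artifact preference. The function names, the assumption that the random draw for complexity is made on the same 1-to-10 scale, and the queue representation are illustrative, not taken from the study's NetLogo implementation.

    # Sketch of the target-selection and work-selection decisions described above.
    import random

    def pick_target(complexity_rating, own_branch, all_nodes):
        """complexity_rating is on the 1-10 scale used by the model (assumed draw scale)."""
        if random.uniform(0, 10) < complexity_rating:
            return random.choice(all_nodes)      # complex: cross-functional target
        return random.choice(own_branch)         # routine: target within the same branch

    def pick_work(artifact_queue, rfi_queue, artifact_preference):
        """artifact_preference is in [0, 1]; 0.5 is a coin flip between the two queues."""
        if artifact_queue and rfi_queue:
            queue = artifact_queue if random.random() < artifact_preference else rfi_queue
        else:
            queue = artifact_queue or rfi_queue
        return queue.pop(0) if queue else None

    print(pick_target(8, own_branch=[1, 2, 3], all_nodes=list(range(20))))
    print(pick_work(["artifact-1"], ["rfi-1", "rfi-2"], artifact_preference=0.7))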
The artifact model adopts artifact completion rate (number of artifacts completed divided by the total number of artifacts created) as a measure of organizational network performance. If the organizational network is able to keep pace with the demand for artifact processing and information sharing, artifact completion rate will tend to unity, with a small deviation resulting from the number of artifacts being processed during any particular time step. However, if the organizational network fails to keep pace with demands for artifact processing and information sharing, the artifact completion rate will drop and the organization will fall further and further behind.
Congestion centrality remains an important indicator of network performance, but separate centralities must be considered. For any node i, the artifact congestion centrality, ρA,i, is the probability that any given artifact will be processed by node i, and the RFI congestion centrality, ρRFI,i, is the probability that any given RFI will be processed by node i; the "A" subscript refers to artifacts, while the "RFI" subscript refers to RFIs. In addition, the smart team model grows teams stochastically, where pnew is the probability that a new member will be added to the team, S is the current size of the team, and c is a scaling factor for team size. As team size increases in relation to the scaling factor, it is increasingly less likely that new members will be added and instead a new link will be added between existing team members. The difference between total artifacts and completed artifacts can be used as a test for congestion. The smart team model defines the variable delta slope, d, as

d = [n Σ(ti yi) − (Σti)(Σyi)] / [n Σti² − (Σti)²],

where t refers to time step, y is the difference between total artifacts and completed artifacts at a given time step, and the sums run over the n most recent time steps. Readers will recognize delta slope is the least-squares slope of a line fit to a plot of y versus t. When a network is free of congestion, d will tend to zero. The smart team model calculates delta slope over the last ten time ticks.
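A minimal NetLogo sketch of the delta-slope calculation follows; the backlog-history list and reporter name are assumptions for illustration.

;; Minimal sketch (assumed names): least-squares slope of the artifact backlog
;; over the last ten ticks. backlog-history is a list of (total - completed)
;; values, one per tick, with the most recent value last.
to-report delta-slope [backlog-history]
  let first-index max (list 0 (length backlog-history - 10))
  let window sublist backlog-history first-index (length backlog-history)
  let n length window
  if n < 2 [ report 0 ]
  let ts n-values n [ i -> i ]
  let sum-t sum ts
  let sum-y sum window
  let sum-ty sum (map [ [t y] -> t * y ] ts window)
  let sum-t2 sum (map [ t -> t * t ] ts)
  report (n * sum-ty - sum-t * sum-y) / (n * sum-t2 - sum-t * sum-t)
end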

PW-4098 Case Study
The fourth and final phase applies the information exchange and artifact models to the PW4098 design organization documented by Rowles. (Rowles, 1999)

Modeling the PW4098 Design Organization
Two models extend the information exchange and smart team artifact models to the PW4098 organization. The design structure matrix of Figure 22 illustrates cross-functional relationships among teams.
Models assume interactions occur between individual team members; thus the models modify the organizational structure shown in Figure 21 by adding five team members to each organization. Models allow testing of all links or only strong links.
Team links added (m): 4, 6, 9, 14, 22, 35, 55, 86, 136, 216, 341. Replicates: 10. Basis: detect a ρmax difference of 0.1 with a confidence of 0.95 and target power of 0.9 using standard deviation estimates for ρmax obtained from preliminary investigations. (Dodds, Watts, & Sabel, 2003)
Evaluation of congested node data yielded an unexpected result in that multiscale networks tend to push the congested node lower in the hierarchy and do so with fewer team links than random networks. This suggests decentralization of congestion is a significant factor for improving network performance.
It further suggests that even within multiscale networks, networks with particular configurations that decentralize congestion will out-perform other networks. Experiments were run using BehaviorSpace, the NetLogo feature that allows the user to vary input parameters and run experiments in a batch-wise manner, randomizing the order in which runs are performed. Table 6 summarizes the experimental design and Figure 24 summarizes the results, plotting maximum congestion centrality for each network type against the number of team links added, as log m/N. Congested node results indicate multiscale and military staff organizational networks push the congested node down the hierarchy, with multiscale networks achieving this effect with fewer team links added. Table 8 shows the number of congested nodes at each level of the hierarchy over the range of team links added.
Multiscale networks decentralize congestion more quickly, and the faster and more extensive decentralization in multiscale networks as m tends to N helps to explain why the maximum congestion centrality of military staff organizational networks diverges from that of multiscale networks in this range. Table 9 summarizes the experimental design. In summary, the validation experiment confirms that different organizational networks behave differently at low and high complexity. The model is considered valid for comparing the behavior of organizational networks. Basis: detect an artifact completion rate difference of 0.1 with a confidence of 0.95 and target power of 0.9 using a standard deviation estimate obtained from preliminary results. This result is also comparable to that seen in information exchange networks.
Military staff organizational networks do not exhibit the same sharp decrease in effective congestion centrality as multiscale networks, and this divergence corresponds to the divergence in artifact completion rates described above. Still, military staff organizational networks out-perform random and matrix organizational networks. Figure 30 demonstrates a complicated relationship among RFI, artifact, and effective congestion centralities. RFI congestion centrality decreases as the number of team links added increases, while artifact congestion centrality increases as the number of team links added increases. As team links increase, networks become more effective at exchanging information. RFIs are answered more quickly, which allows artifacts to be processed more quickly, leading to an increase in artifact congestion centrality. The point at which effective congestion centrality begins to decrease rapidly corresponds to the crossover point at which RFI congestion centrality equals artifact congestion centrality, which suggests RFI congestion is the key factor leading to congestion failure in organizational networks at high complexity.

Evaluation of congested node results confirmed multiscale and military staff organizational networks achieved decentralization of RFI congestion comparable to that seen in information exchange networks.
Table 12 compares the depths of artifact and RFI congested nodes for multiscale and military staff artifact networks at high complexity and shows that both achieve decentralization with respect to RFIs, with multiscale networks achieving decentralization more quickly, and to a greater extent, than military staff networks. Note that the artifact congested node is always at level 1 because all artifacts must be approved by the manager at level one; thus the manager is the natural congestion point for artifacts. When few team links have been added, effective congestion centrality is relatively high and artifact closure rate is low, while RFI closure rate is relatively high.
As team links are added, effective congestion centrality decreases and RFI closure rates increase, with artifact closure rates increasing in turn. As the number of team links added increases beyond −1 (as log m/N), effective congestion centrality decreases more rapidly, RFI closure rate begins to stabilize, and artifact closure rate begins to rise more sharply. Interestingly, as artifact closure rate rises, so does the RFI arrival rate. Figure 33 shows that RFI arrival rates increase linearly with artifact closure rates.³ As congestion decreases, RFIs and artifacts are processed more quickly, but faster processing of artifacts means that more artifacts are in the system, thus there is greater likelihood RFIs will be created. This result demonstrates a positive feedback with regard to RFIs in that reduced congestion and improved processing of RFIs leads to greater demand for RFI processing.

Figure 32 - RFI and Artifact Closure Rates with Effective Congestion Centrality for Military Organizational Networks at High Complexity
3 Note that RFI arrival rate is expressed in terms of task arrival rate. For example, an RFI arrival rate of 2 corresponds to 20 RFIs per time interval when the task arrival rate is 10.
Delta slope could possibly be improved by using a longer range of time to smooth out variations, but examination of validation experiment results indicates artifact completion rate is a reliable indicator of congestion. All networks with artifact completion rates less than 0.85 were congested, while all networks with artifact completion rates greater than 0.90 were free of congestion. This suggests a simple scheme for characterizing congestion: if the artifact completion rate is greater than 0.90, the network is free of congestion; if it is between 0.85 and 0.90, the network is on the verge of congestion; and if it is less than 0.85, the network is congested.
Figure 40 summarizes the artifact performance of the PW4098 design organization for RFI, balanced, and artifact processing preference; centralized and decentralized approvals; and low, moderate, and high complexity. Figure 40 presents artifact completion rates with overall groups based on artifact preference, with further subdivisions for approvals and complexity. The PW4098 design organization performs well at low and moderate complexity, but experiences congestion failure at high complexity. Furthermore, the PW4098 design organization achieves artifact completion rates well below those of all other organizational networks at high complexity, which suggests the PW4098 design organization is unprepared to manage the design of a complex engineered system and is therefore susceptible to the kinds of cost and schedule overruns that often plague programs that attempt to deliver them. This result is concerning because the PW4098 design organization reflects mainstream thinking about the design of engineered systems. First, it is a matrix organization composed of cross-functional teams. In fact, Rowles reported Pratt & Whitney had abandoned functional organizations, replacing them with discipline centers to maintain technical expertise. Second, Rowles characterized the PW4098 design organization as a heavyweight project matrix organization, which is wholly consistent with mainstream project management practice. (Rowles, 1999) Rowles provides a key insight into the susceptibility of the PW4098 design organization to congestion failure, noting approximately one-third of integrated product team interactions occurred outside of the team's hierarchical group, and that approximately one-fourth of integrated product team interactions did not correspond to design relationships. (Rowles, 1999) The PW4098 design organization's structure implicitly assumes knowledge of the design relationships. The composition and arrangement of integrated product teams reflect these assumed relationships, but complex engineered systems are considered complex because system interactions, and therefore design relationships, are poorly understood. It is not surprising, then, that a relatively large number of interactions would occur outside a hierarchical arrangement based on known or predicted design relationships. The ability of an organizational network to withstand complexity depends on its ability to cope with these unanticipated relationships and the interactions that result. As complexity increases, these unexpected interactions become more frequent, putting strain on the organizational network.
In the information exchange environment at high complexity, the maximum congestion centrality of the PW4098 design organization diverged from that of matrix networks, the next worst-performing organizational network, by about 20%. A comparable divergence in artifact completion rates was seen at high complexity. It is worth recalling that the complexity scale used in the information exchange and artifact models is qualitative despite its use of a numerical scale. From a qualitative and predictive perspective, the information and artifact models represent real-world behavior in a useful manner.

CONCLUSION
This chapter discusses findings, and presents conclusions and recommendations, including opportunities for further research.

Task Environment Matters
All organizational networks performed reasonably well when the task environment was limited to information exchange. Although matrix networks performed poorly compared to other organizational networks, all demonstrated satisfactory performance and remained free of congestion, even at high complexity. All organizational networks also remained free of congestion when the task environment was modified to include artifact processing, at low and moderate complexity. However, at high complexity, all organizational networks experienced congestion failure. This finding demonstrates how a simple change to the task environment alters network dynamics in important and unexpected ways. These kinds of subtle changes to network dynamics are a hallmark of complex systems.
Simon argued the creation of artifacts was the central activity of design organizations. (Simon, 1996) Information exchange is essential to the function of a design organization, but it is through the creation of artifacts that design organizations achieve their purpose. Organizations may be effective at information exchange, but that matters little if they are not effective at delivering artifacts. In other words, effective information exchange is a necessary, but not sufficient, condition for success in design. Table 12 shows military staff and multiscale networks decentralize information exchange to an extent comparable to that achieved in the information exchange environment.

Findings provide insight into why organizational networks experience congestion failure at high complexity. For the artifact task environment, Table 12 shows that military staff and multiscale networks decentralize RFI congestion. However, it also shows that the congested node for artifact processing is always at level one; that is, artifact approvers (managers) are the congested nodes for artifacts and the limiting factor for artifact performance. Decentralizing information exchange improves the processing of information requests, but it does not change the fact that all artifacts have to go to a manager for approval. At low and moderate complexity, the centralized approval of artifacts does not cause congestion failure, but at high complexity, the combination of centralized artifact approval and increased demand for information exchange leads to congestion. These findings support recommendations for decentralized authority in design organizations.
Findings also confirm the damaging effects of high queue states. Figure 29 demonstrates that congested nodes in all organizational networks have high effective congestion centrality until the number of team links added approaches the number of nodes, that is, until m tends to N. Recalling that ρeff = (rA + rRFI)/λ, where rA and rRFI are the rates at which a node processes artifacts and information requests and λ is the task arrival rate, one sees ρeff indicates capacity utilization because the right-hand side is the ratio of work done to arrival rate. At high complexity, congested nodes are operating at capacity utilization factors above 0.9. As shown in Figure 14, this corresponds to high queue states. In other words, at high complexity, congested nodes are operating at high capacity utilization factors and high queue states.

The Pernicious Nature of Complexity
Despite variations, all organizational networks exhibit satisfactory performance at low to moderate complexity but suffer congestion failure at high complexity. This sort of tipping point behavior, shown in Figure 31, is another hallmark of complex systems and illustrates the pernicious nature of complexity. In the artifact task environment, increasing complexity of the system being designed has two compounding effects, both related to the concept of decomposability. At high complexity, it is less likely the task can be neatly decomposed and assigned to a single organization, so it is also less likely the individual responsible for the artifact, the originator, has sufficient information to complete the artifact alone. As a result, the originator puts the artifact on hold while soliciting assistance from others. Because the task is not decomposable, it is more likely information is needed from another worker outside the originator's department or immediate neighborhood, which means it will take longer for the information request to reach its target and be answered. The combined effect is high queue states, extended service times, and ultimately increased congestion.
When complexity is low to moderate, decisions on organizational structure are less important, from a congestion perspective, because a range of possible organizational structures will remain free of congestion and therefore have satisfactory performance. Of course, it is still possible to have poor organizational design and corresponding poor performance, but that poor performance would not be the result of an inherent susceptibility to congestion. However, at high complexity, an organizational structure that otherwise works perfectly well at low to moderate complexity can easily experience congestion failure, leading to the kinds of cost and schedule overruns that are increasingly common in projects that set out to design and deliver complex engineered systems.
Organizations that work reasonably well at low to moderate complexity may find themselves unprepared for high complexity. This situation is similar to the one described by Henderson and Clark, where organizations may find themselves unprepared for the effects of architectural innovation. Interestingly, they suggest the trend towards cross-functional organizations may reflect an understanding of the challenges of architectural innovation. In fact, cross-functional organizations, especially matrix organizations, may find themselves unprepared for innovation when that innovation increases complexity.
Not surprisingly, organizational networks exhibit properties of complex adaptive systems, with two examples having already been noted, namely the noteworthy change in network dynamics resulting from a simple change to the task environment and the tipping point behavior exhibited by artifact closure rate in response to increasing complexity. In addition, Figure 33 demonstrated a positive feedback affecting RFIs. As team links are added, effective congestion centrality is reduced, which results in improved RFI and artifact processing, but as artifact processing improves, there are more artifacts in the system and greater opportunity for RFIs to be generated. RFI arrival rate increases in proportion to artifact closure rate.
Emergence, the idea that complex systems exhibit collective behavior not easily discerned from the behavior of individual system elements, is generally considered the defining characteristic of complex systems. Results demonstrate organizational networks exhibit emergent behavior. Agents in the organizational networks follow simple behavioral rules. In a given time period, workers examine their RFI and artifact queues and flip a coin to decide whether to process an RFI or artifact when both are present. The interesting dynamics, tipping point, and positive feedback effect already described could not be predicted from this simple behavioral rule.
In comparison, the so-called "complex" engineered system the organizational network is designing would be considered, strictly speaking, a merely complicated system because the elements in the engineered system are not adaptive. It was previously argued that complex engineered systems exhibit quasi-emergent behavior because the number and nature of system interactions are often poorly understood or exceed the limits of human cognition. From a practical perspective, this is an accurate characterization, and when engineers are being careful with their terminology, they will clarify that they mean structural complexity when referring to complex engineered systems. Of course, the design organization is inextricably linked to the engineered system being designed.
Introduction of an adaptive agent, namely the human designers, necessarily makes the design organization, represented by an organizational network, a complex adaptive system.

Susceptibility of Matrix Organizations to Congestion Failure
The defining characteristics of a matrix organization are, in the first instance, the dual assignment of individual workers to both functional and project chains of command, and in the second instance, the assignment of project managers to their own branch in the overall organizational hierarchy. Conway's Law argues design organizations should be organized around the need for communication, and matrix organizations implicitly assume knowledge of communication requirements. In the specific case of a design organization, the matrix structure assumes the need for communication correlates to product architecture, since architecture describes the relationships among components in the system being designed. For example, Browning describes the trend toward integrated product development, which brings together representatives from relevant functions using integrated product teams that own a product throughout its lifecycle. He describes design-for-integration principles, which include assigning integrated product teams to system elements based on knowledge of system architecture. (Browning, 1996) Ford and Randolph argue matrix organizations should improve information processing capability due to increased cross-functional collaboration. (Ford & Randolph, 1992) However, results from this study indicate otherwise. First, results from phase one demonstrate that matrix organizational networks are less effective at information exchange than other organizational networks. Second, results from phase two demonstrate that when the task environment is extended to artifact processing, ineffective information exchange leads to artifact backlogs with correspondingly poor artifact completion rates at high complexity. Finally, results from phase three demonstrate that smart team remedies do not improve the performance of matrix organizational networks to the same extent as other networks. The PW4098 case study provides critical insights. Rowles noted one-third of integrated product team interactions occurred outside the team's hierarchical group, and that one-fourth of interactions did not correspond to design relationships. (Rowles, 1999) Recall also that Sinha and de Weck compared two jet engine designs and found the newer and more complex design required a significant increase in both intra- and inter-team interactions, including new connections between groups not previously connected. This divergence is a recurring theme and will be further explored shortly. Table 8 demonstrated decentralization of the congested node. Results from previous phases predicted the value of decentralization. For example, Table 4 showed multiscale and random organizational networks decentralized the congested node in the information exchange environment, while Table 5 showed decentralized congested nodes had lower maximum congestion centralities, much lower in some cases. Similarly, Table 12 showed how multiscale and military staff organizational networks decentralized RFI congestion in the artifact environment. Table 12 also showed neither organizational network decentralized artifact congestion.
Adding decentralized artifact approvals to the Smart Teams model improved organizational network performance.
Results confirm Reinertsen's assertions regarding the value of decentralized control. Organizations interested in improving performance should consider his principles for implementing decentralized control and maintaining organizational alignment.

Value of Agent-Based Modeling
Results confirm the value of agent-based modeling (ABM) for evaluating and understanding complex systems. For example, the validation of information exchange networks using MATLAB demonstrated models of organizational networks could be implemented using either ABM or more traditional programming tools, such as MATLAB. However, the NetLogo interface aids visualization and improves understanding relative to the purely numerical results obtained from MATLAB. To be fair, MATLAB can also be programmed to provide visual depictions, but NetLogo provides them as an inherent feature of its modeling environment. Figure 41 demonstrates the value of visualization because it is the visual comparison of multiscale and military staff organizational networks that suggests structural similarities between the two networks contribute to the multiscale-like behavior seen in military staff networks. In addition, visual observation of model execution, especially using the "go once" feature, which allows step-wise execution, aids understanding of organizational network behavior. In particular, observation of artifact backlogs at congested nodes provides understanding of why organizational networks experience congestion failure at high complexity.
Visualization and the ability to observe temporal behavior also provided insights into network dynamics. Observations showed networks generally did not exhibit equilibrium behavior. For example, Figure 36 plotted the difference between open and completed artifacts and showed artifact closure rate did not converge to an equilibrium value. Instead, it continued to vary over time. In this regard, it would be more appropriate to say organizational networks are under control than at equilibrium. This observation confirms one of the key features of ABM. Equation-based models tend to predict average or equilibrium behaviors at the expense of dynamics, while agent-based models illustrate dynamic behaviors. Both types of models are useful, but in this case, use of ABM provided useful insight into the dynamic behavior of organizational networks.
All software tools have advantages and disadvantages. NetLogo provides powerful visualization tools and a syntax that facilitates creation of agent-based models, but it performs some basic computer functions quite poorly. In particular, activities that require loops or recursive searching are not easily implemented in NetLogo or tend to slow model performance significantly. This was particularly evident when first attempting to implement the DWS stochastic rule. This action essentially requires searching through the network for a pair of nodes that has a sufficiently high probability of forming a team link. Since the number of possible links is on the order of N², hundreds or thousands of node pairs must potentially be tested for each team link added, even for relatively small networks like those tested here. In addition, for each pair tested, the lowest common node between them, Dij, must be identified through recursive search.
The initial approach repeated this recursive search for every pair tested.
Implementation of the information exchange model in MATLAB yielded the critical insight that Dij is a property defined by the hierarchical structure of any given network, thus the Dij values between every node pair could be calculated in advance and stored in a file as an N x N matrix. MATLAB was able to handle this task with ease and use of the Dij matrix as an input to NetLogo improved model performance significantly.
The benefit was two-fold because Dij values are needed to route information requests, thus having the values stored in a matrix prevented the need to calculate them at each step of routing every single information request. The integration of MATLAB and NetLogo proved quite useful, and NetLogo users may find value in a NetLogo-MATLAB application programming interface (API) similar to other APIs (called extensions in NetLogo) already provided.
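A minimal NetLogo sketch of how Dij values could be derived from the hierarchy is shown below; the reporter names, the workers breed, and the assumption that the root worker's supervisor is nobody are illustrative, not the study's actual code.

;; Minimal sketch (assumed names): walk each worker's supervisor chain to the
;; root, then take the deepest ancestor the two chains share as Dij.
to-report ancestor-chain [w]
  let chain (list w)
  let current w
  while [[supervisor] of current != nobody] [    ;; assumes the root's supervisor is nobody
    set current [supervisor] of current
    set chain lput current chain
  ]
  report chain
end

to-report lca-depth [a b]
  let chain-b ancestor-chain b
  let shared filter [ w -> member? w chain-b ] (ancestor-chain a)
  report max (map [ w -> [depth] of w ] shared)
end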
The PW4098 case study demonstrated the utility of ABM for analyzing and predicting the performance of real organizations. When tested with the information exchange and artifact models, the PW4098 organization exhibited performance comparable to other organizations. Models represent the design process using relatively simple task environments and methods of information exchange, consistent with the "keep it simple" design principle articulated by Wilensky and Rand, but deliver meaningful results consistent with real-world behavior. (Wilensky & Rand, 2015)

Variations in Multiscale Networks
The ability of multiscale networks to decentralize congestion has already been mentioned but warrants additional discussion. Dodds, Watts and Sabel demonstrated the robustness of multiscale networks to congestion, using maximum congestion centrality, ρmax, as a key indicator of robustness. It is especially noteworthy, then, that even within the multiscale class, different network configurations can have quite different values of ρmax for the same number of team links. Focusing on a single line from the information exchange network results, m = 9 and log m/N = -1.6 for multiscale networks, the data demonstrate the importance of decentralization. In six of ten runs, the congested node was at level zero, the top of the hierarchy, but in four of ten, the congested node was one level lower. The overall average of ρmax combines these results, but it is clear that the decentralized nodes put downward pressure on the overall average.
Decentralized nodes have significant impact on overall maximum congestion centrality for a given value of m, and this result demonstrates that even within multiscale networks, there are subclasses of networks with better performance. Military staff organizational networks exhibited the same phenomenon. This matter warrants further investigation.

Conclusions
Referring to the research questions set out in Chapter 3, findings support the following conclusions:
• In the information exchange task environment, all organizational networks perform well and remain free of congestion at low, moderate, and high complexity.
• In the artifact task environment, all organizational networks perform well at low to moderate complexity, but all are susceptible to congestion failure at high complexity.
• At low to moderate complexity, military staff and matrix organizational networks perform well, or well enough, and remain free of congestion, but military staff organizational networks are more robust to congestion at high complexity.
• Matrix organizational networks tended to exhibit poor performance compared to all other networks, often being out-performed by even random networks, especially at high complexity.
• Since military staff networks have performance comparable to multiscale networks across a range of situations, they are the preferred organizational form for organizations that design complex engineered systems.

Summary and Recommendations
This study set out to understand why some organizations fail to effectively manage the design of complex engineered systems. It used agent-based modeling to evaluate and compare the effectiveness of random, multiscale, matrix and military staff organizational networks, modeling design as an activity that requires organizations to balance competing demands to complete artifacts and share information. Complexity (strictly speaking, structural complexity) results from the number and diversity of elements in the system being designed, and their interactions, which are often poorly understood. Increasing complexity challenges the design organization's ability to keep artifacts and information-sharing in balance by increasing the frequency and extent of cross-functional collaboration required. The study found all organizational networks perform well, or at least well enough, at low to moderate complexity, but also found that all are susceptible to congestion failure at high complexity. As congestion builds, the organization falls further and further behind, leading to the cost and schedule overruns that seem to plague projects that set out to design complex engineered systems like ships and aircraft.
Conventional wisdom argues projects should be organized around matrix organizations because they improve communication and cross-functional collaboration relative to traditional, functional hierarchies. However, results indicate matrix organizations are particularly susceptible to congestion failure. Compared to multiscale, military staff and even random organizations, matrix organizations are not effective at exchanging information because they overlay a project management hierarchy on top of an existing functional hierarchy. The resulting structure fails to create the conditions for effective cross-functional communication when increasing complexity requires collaboration outside established channels. As a consequence, matrix organizations experience congestion failure when challenged by high complexity.
Military staff organizational networks demonstrated performance properties comparable to multiscale networks over a range of conditions. They are not multiscale networks but have structural similarities to them. They therefore represent a practical approach to creating an organization with multiscale properties. Unlike matrix organizations, military staff organizations embed team leaders in the functional hierarchy, which makes them more effective at cross-functional communication.
Conway argued design organizations should be structured around the need to communicate (Conway, 1968), but the essence of complexity is the inability to fully appreciate the interactions in the system being designed, which likewise makes it impossible to predict in advance which elements of the organization need to communicate. Sinha and de Weck examined how changes to product architecture affect design organizations, demonstrating a feedback effect: performance and feature improvements often increase a product's complexity, necessitating organizational changes, but those organizational changes often lag behind design changes. Piore and Sabel argued organizations know little about how to accomplish a project when they first embark on it, so learning and design must occur in parallel. (Piore & Sabel, 1984) Organizations that design complex engineered systems should organize themselves around the military staff model, but implementation will require cultural change, and that is no trivial task. Success will depend on having personnel capable of performing, and comfortable with, project and technical roles, and that capability must be developed and encouraged over time. Organizations that invest in such capabilities will reap rewards in terms of organizational resilience to congestion. An exploration should also examine ways to preferentially generate networks that decentralize the congested node, since such networks have improved robustness to congestion.

Opportunities
Second, actual military staff organizations could be characterized to evaluate their performance and confirm they are robust to congestion, especially compared to matrix organizations. Finally, the models developed for this study could be extended to evaluate other variations in network construction algorithm, task environment, and routing method, or even to other similar activities. For example, different artifact preference models could be explored. As implemented, artifact preference was shown not to be a significant factor affecting network performance, but different rules, especially those that choose preference dynamically, could yield different results. In addition, variations in worker capacity could be explored, including variations resulting from the number of team links a particular node has. Maintaining team links can be time and resource intensive and can detract from a worker's ability to get work done. Think, for example, of time spent in meetings and other collaborative activities. If team links had a capacity cost, then the network would balance worker capacity against cross-functional collaboration, as sketched below.
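A minimal NetLogo sketch of one way such a capacity cost might be represented; the reporter, the link-cost parameter, and the team-links link breed (which would provide my-team-links) are assumptions for illustration, not part of the study's models.

;; Minimal sketch (assumed names): each team link a worker maintains consumes
;; link-cost units of base capacity, so heavily connected workers get less done.
to-report effective-capacity [base-capacity link-cost]
  report max (list 0 (base-capacity - link-cost * count my-team-links))
end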

APPENDIX 1: Robust Networks Information Exchange Model
Elements of the Information Exchange, Version 1, Model
Agents: Workers, representing the individuals within the hierarchy. Depth (level); Department (major division of the hierarchy); Supervisor (immediate superior in the hierarchy); Team (team assignment, for matrix and military staff organizational networks); Capacity (the amount of work the worker can perform in one time step); RFI Queue (list of RFIs to be processed, i.e., the worker's "in box"); and RFI Count (number of RFIs processed by the worker).
Requests for Information (RFIs), representing messages passed between workers. Originator (worker who originated the RFI); Target (worker to whom the RFI was sent); Status (status of the RFI: open, answered, or complete); and Age (age of RFI).
Links: Organization Links, representing the hierarchical structure; and Team Links, representing the cross-functional team links added to the hierarchy.
Environment: The environment is defined by the backbone hierarchical network, the team links added to the hierarchical backbone, and the task environment. The DWS model describes the task environment in terms of the rate and distribution of messages to be exchanged. The information exchange rate, µ, is the average number of messages originated by each node at each time step, and µN is the total number of messages originated across the network at each time step. Message routing considers task decomposability. Tasks that are nearly decomposable require communication only within the same team, meaning nodes with the same immediate superior, whereas tasks that are not decomposable require communication across the network. For a given source node, s, a target node, t, is selected based on the distance between the two nodes, xst, using a stochastic rule in which the probability of selecting t decays exponentially with distance, P(xst) ∝ exp(−xst / ξ). When ξ = 0, local dependencies prevail; when ξ = ∞, global dependencies prevail. Information Exchange, Version 3-0 assumes global dependencies.
Time Behavior: At each time step, workers create and/or process RFIs. RFIs arrive according to a random Poisson process with mean equal to the user-specified RFI arrival rate. RFIs are assigned source (originator) and target nodes at random. Messages pass from source to target through a chain of intermediate nodes. At each time step, worker nodes pass RFIs they initiate or receive, up to their capacity, by selecting an immediate neighbor with the lowest common ancestor with the target node.
Inputs: Network parameters (levels, branching ratio); Network type (random, multiscale, matrix, or military staff (BCCWG)); Number of Teams, for matrix and military staff organizational networks; Dij Name, the file containing a matrix of the depths of lowest common ancestors; Team Links Added, m; RFI Arrival Rate; and Worker Capacity.
Outputs: The principal output and measure of performance is maximum congestion centrality, ρmax. Assuming each node can process up to Ri messages per time step, an organizational network will, on average, remain free of congestion when µNρi ≤ Ri for every node i, where ρi is the congestion centrality of node i.
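A minimal NetLogo sketch of this distance-weighted target selection is shown below, assuming the exponential form given above; the workers breed, the org-distance reporter for xst, and the roulette-wheel draw are illustrative assumptions rather than the model's actual code.

;; Minimal sketch (assumed names): pick a target for source s with probability
;; proportional to exp(- x_st / xi); org-distance reports x_st between workers.
to-report pick-target-dws [s xi]            ;; assumes xi > 0
  let candidates sort workers with [self != s]
  let weights map [ t -> exp (0 - (org-distance s t) / xi) ] candidates
  let pick random-float (sum weights)
  let i 0
  while [i < (length weights - 1) and pick >= item i weights] [
    set pick pick - item i weights
    set i i + 1
  ]
  report item i candidates
end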

Validation of the Information Exchange 1 Model Face Validation
Micro-Face Validation Macro-Face Validation Principal elements of the model are the agents representing workers in an organization, and the organizational and team links that connect them. Organizations with these characteristics are ubiquitous across any number of disciplines. The model uses values of λ and ζ corresponding to different classes of organizational structures. The model assumes RFIs arrive according to a random Poisson process, consistent with queueing theory. The model assumes tasks are not decomposable, which is a reasonable and limiting case.
The model realistically depicts the flow of information in organizational networks, combining both formal passing of information up and down a hierarchy, and informal passing through team relationships.
The model realistically represents matrix and military staff organizational networks, two networks found in real-world organizations.

Empirical Validation
Empirical Input Validation Empirical Output Validation The hierarchical backbone is described by number of levels and branching ratio. This is an idealization in that real organizations exhibit irregularities in both level and branching, but the idealization is reasonable.
For this model, empirical validation is accomplished by cross-validation against a model implemented in MATLAB, along with comparison of results to those previously published by Dodds, Watts and Sabel.
Conclusion: The multiscale and random network behavior is consistent with reference data and results obtained from an alternate implementation in MATLAB; the model is therefore considered valid for further development to explore the effectiveness of organizational networks.

Hypothesis Testing for Validation Experiment
The following hypothesis testing evaluates equality of maximum congestion centralities for random and multiscale networks using the paired data test:

Hypothesis
H0: µD = 0; H1: µD ≠ 0 (µD refers to the mean of differences)

Elements of the Information Exchange, Version 2, Model
Agents: Workers, representing the individuals within the hierarchy. Depth (level); Department (major division of the hierarchy); Supervisor (immediate superior in the hierarchy); Team (team assignment, for matrix and military staff organizational networks); Capacity (the amount of work the worker can perform in one time step); RFI Queue (list of RFIs to be processed, i.e., the worker's "in box"); and RFI Count (number of RFIs processed by the worker).
Requests for Information (RFIs), representing messages passed between workers. Originator (worker who originated the RFI); Target (worker to whom the RFI was sent); Status (status of the RFI: open, answered, or complete); and Age (age of RFI).
Links: Organization Links, representing the hierarchical structure; and Team Links, representing the cross-functional team links added to the hierarchy.
Environment: The environment is defined by the backbone hierarchical network, the team links added to the hierarchical backbone, and the task environment. The DWS model describes the task environment in terms of the rate and distribution of messages to be exchanged. The information exchange rate, µ, is the average number of messages originated by each node at each time step, and µN is the total number of messages originated across the network at each time step. Message routing considers task decomposability, which depends on complexity. When the system being designed is more complex, tasks are less decomposable and require greater cross-functional collaboration. Thus, at high complexity, message target nodes are selected at random from across the hierarchy. At low complexity, tasks are decomposable, and message target nodes are selected at random from among other nodes in the same major branch as the source.

Validation of the Information Exchange 2 Model Face Validation
Micro-Face Validation Macro-Face Validation The model implements complexity in a way that increases the need for crossfunctional communication as complexity increases, consistent with the notion that complexity decreases task decomposability.
Complex engineered systems are complex because they have numerous and varied elements whose interactions are poorly understood. As complexity increases, design tasks are likely to require greater cross-functional communication and collaboration.

Empirical Validation
Empirical Input Validation: The model rates complexity on a scale of 1 to 10. Use of a simple scale is not meant to represent a quantitative comparison of system complexity, but instead differentiates systems of low and high complexity in a numerical fashion that is easy to implement in a model.
Empirical Output Validation: Empirical output validation relies on stylized facts, i.e., the expectation that high complexity will increase congestion centralities because non-decomposable tasks require greater cross-functional routing.

Conclusion
The model provides a reasonable representation of the difference in information exchange network behavior at low and high complexity.

Hypothesis Testing-Information Exchange Characterization
The following hypothesis testing evaluates equality of maximum congestion centralities for random and multiscale networks compared to military staff and matrix networks at high complexity using the paired data test, as above.

Elements of the Artifacts Model
Agents: Workers, representing the individuals within the hierarchy. Depth (level); Department (major division of the hierarchy); Supervisor (immediate superior in the hierarchy); Team (team assignment, for matrix and military staff organizational networks); Capacity (the amount of work the worker can perform in one time step); Artifact Queue (list of artifacts to be processed); Artifact Count (number of artifacts processed by worker); Hold Queue (list of artifacts placed on hold while awaiting RFI response); RFI Queue (list of RFIs to be processed); and RFI Count (number of RFIs processed by worker).
Artifacts, representing work products. Originator (worker who originated the artifact); Status (status of the artifact: open, hold, or complete); and Age (age of artifact).
Requests for Information (RFIs), representing messages passed between workers. Artifact (the artifact to which the RFI is related); Originator (worker who originated the RFI); Target (worker to whom the RFI was sent); Status (status of the RFI: open, answered, or complete); and Age (age of RFI).
Links: Organization Links, representing the hierarchical structure; and Team Links, representing the cross-functional team links added to the hierarchy.
Environment: The environment is defined by the backbone hierarchical network, the team links added to the hierarchical backbone, and the task environment. The Artifact model describes the task environment in terms of the rate and distribution of artifacts to be processed and messages that must be exchanged to accomplish cross-functional collaboration. The artifact rate, µA, is the average number of artifacts originated by each node at each time step, and µAN is the total number of artifacts originated across the network at each time step. Artifact routing follows the functional hierarchy. Workers at the lowest level of the hierarchy originate artifacts and then pass them up the functional chain of command to a manager near the top of the hierarchy for approval. For simple tasks, the originating worker likely has sufficient information to complete the artifact without the need for cross-functional collaboration. For complex tasks, however, the worker likely lacks sufficient information and requires additional information from other workers. In this case, the originating worker places the artifact on hold and originates a request for information (RFI) to acquire the additional information required to complete the artifact. RFIs pass from source to target through a chain of intermediate nodes, as with messages in the information exchange model. Upon receipt, the RFI target provides the information requested and returns the RFI directly to the originator. When the originator receives an answered RFI, he completes the associated artifact and routes it for approval. Complexity affects the rate and distribution of RFIs. At low complexity, few RFIs are created, and because tasks are decomposable, RFIs are routed to other workers in the same functional organization. At high complexity, many RFIs are created. Since tasks are not decomposable, RFIs are routed to other workers across the organization. The Artifact model uses the same qualitative complexity scale used in the information exchange models implemented in phase one.
Time Behavior: At each time step, workers process artifacts and information requests up to their capacity. If a given node has only RFIs or artifacts available, it processes them, but if both are available, it decides which to process by comparing a random number to an artifact preference rating, in the range [0,1]. When the artifact preference rating is higher, it is more likely the node will select an artifact than an RFI. An artifact preference rating of 0.5 represents a "coin flip," with the node choosing RFIs half the time, and artifacts the other half.
Inputs: Network parameters (levels, branching ratio); Network type (random, multiscale, matrix, or military staff (BCCWG)); Number of Teams, for matrix and military staff organizational networks; Dij Name, the file containing a matrix of the depths of lowest common ancestors; Team Links Added, m; Task Arrival Rate; Worker Capacity; Complexity; and Artifact Preference.
Outputs: The principal output is artifact completion rate, defined as the number of artifacts completed divided by the total number of artifacts. If the organizational network is able to keep pace with the artifact and information processing work load, the artifact completion rate will tend to unity with a small deviation resulting from the artifacts in process at any given time step. Additional outputs include: RFI arrival and completion rates; mean age of RFIs and artifacts; RFI, artifact, and effective congestion centralities and congested nodes; and network parameters (mean path length and global clustering coefficient). Networks demonstrate comparable performance and are congestion free at low complexity but exhibit divergent behavior and experience congestion failure at high complexity.
Conclusion: The model correctly implements artifact creation and routing and also correctly implements the relationship between artifacts and RFIs. Elements common to the information exchange models were previously verified.
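A minimal NetLogo sketch of the artifact routing described above; the procedure name, the artifact-queue worker variable, and the use of level 1 as the approving manager are assumptions for illustration rather than the model's actual code.

;; Minimal sketch (assumed names), run in worker context: a manager at level 1
;; approves (completes) the artifact; any other worker passes it up one level.
to forward-artifact [a]
  ifelse depth = 1
    [ ask a [ set status "complete" ] ]
    [ ask supervisor [ set artifact-queue lput a artifact-queue ] ]
end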

Validation of the Artifacts Model Face Validation Micro-Face Validation
Macro-Face Validation Principal inputs to the model are the organizational networks and the task environment. The organizational networks are based on real-world organizations or ideal classes described in the literature (i.e., random and multiscale). The model implements the creation of artifacts and sharing of information in design organizations.
The model realistically depicts the flow of artifacts and information in organizational networks, combining both formal passing of information up and down a hierarchy, and informal passing through team relationships.

Empirical Validation Empirical Input Validation
Empirical Output Validation The hierarchical backbone is described by number of levels and branching ratio. This is an idealization in that real organizations exhibit irregularities in both level and branching, but the idealization is reasonable.
The model rates complexity on a scale of 1 to 10. Use of a simple scale is not meant to represent a quantitative comparison of system complexity, but instead differentiates systems of low and high complexity in a numerical fashion that is easy to implement in a model.
The model uses an artifact preference rating to control worker selection between artifacts and RFIs when both are present. This is a reasonable representation of real-world behavior.
Empirical validation relies on stylized facts, primarily the expectation that organizational networks will exhibit different performance at low and high complexity. A designed experiment confirms that all networks perform well at low complexity, but experience congestion failures at high complexity, with multiscale and military staff organizational networks out-performing random and matrix organizational networks.

Conclusion
The model is considered valid for the purpose of evaluating the factors and causes leading to the inability of design organizations to manage the complexity associated with the development of large engineered systems.

Hypothesis Testing for Artifact Characterization Experiment
The following hypothesis tests compare artifact completion rates for random and multiscale networks to military staff and matrix networks at high complexity using the paired data test, as before.

Multiscale-Military Comparison Hypothesis

Elements of the Smart Team Model
Requests for Information (RFIs), representing messages passed between workers. Artifact (the artifact to which the RFI is related); Originator (worker who originated the RFI); Target (worker to whom the RFI was sent); Status (status of the RFI: open, answered, or complete); and Age (age of RFI).
Links: Organization Links, representing the hierarchical structure; and Team Links, representing the cross-functional team links added to the hierarchy.
Environment: The environment is defined by the backbone hierarchical network, the team links added to the hierarchical backbone, and the task environment. For matrix and military staff organizational networks, the Smart Team model extends the Artifacts model to account for teams of different size. The model uses a stochastic rule to determine whether a new member will be added to a given team or a new link will be created between existing team members, where S is the size of the team and c is a scaling factor. As the size of a team increases relative to the scaling factor, it is more likely intra-team links will be added. The Smart Team model describes the task environment in terms of the rate and distribution of artifacts to be processed and messages that must be exchanged to accomplish cross-functional collaboration. The artifact rate, µA, is the average number of artifacts originated by each node at each time step, and µAN is the total number of artifacts originated across the network at each time step. Artifact routing follows the functional hierarchy. Workers at the lowest level of the hierarchy originate artifacts and then pass them up the functional chain of command to a manager near the top of the hierarchy for approval. The Smart Team model extends the Artifact model to account for decentralized approvals. When the decentralized approvals option is selected, artifacts can be approved by a supervisor, one level below the manager. The model includes an input called decentralized preference, which controls the probability a supervisor will approve an artifact. When the preference is higher, it is more likely a supervisor will approve an artifact. For simple tasks, the originating worker likely has sufficient information to complete the artifact without the need for cross-functional collaboration. For complex tasks, however, the worker likely lacks sufficient information and requires additional information from other workers. In this case, the originating worker places the artifact on hold and originates a request for information (RFI) to acquire the additional information required to complete the artifact. RFIs pass from source to target through a chain of intermediate nodes, as with messages in the information exchange model. Upon receipt, the RFI target provides the information requested and returns the RFI directly to the originator. When the originator receives an answered RFI, he completes the associated artifact and routes it for approval. Complexity affects the rate and distribution of RFIs. At low complexity, few RFIs are created, and because tasks are decomposable, RFIs are routed to other workers in the same functional organization. At high complexity, many RFIs are created. Since tasks are not decomposable, RFIs are routed to other workers across the organization. The Artifact model uses the same qualitative complexity scale used in the information exchange models implemented in phase one.
Time Behavior: At each time step, workers process artifacts and information requests up to their capacity. If a given node has only RFIs or artifacts available, it processes them, but if both are available, it decides which to process by comparing a random number to an artifact preference rating, in the range [0,1]. When the artifact preference rating is higher, it is more likely the node will select an artifact than an RFI. An artifact preference rating of 0.5 represents a "coin flip," with the node choosing RFIs half the time, and artifacts the other half.
Inputs: Network parameters (levels, branching ratio); Network type (random, multiscale, matrix, or military staff (BCCWG)); Number of Teams, for matrix and military staff organizational networks; Dij Name, the file containing a matrix of the depths of lowest common ancestors; Team Links Added, m; Task Arrival Rate; Worker Capacity; Complexity; and Artifact Preference.
Outputs: The principal output is artifact completion rate, defined as the number of artifacts completed divided by the total number of artifacts. If the organizational network is able to keep pace with the artifact and information processing work load, the artifact completion rate will tend to unity with a small deviation resulting from the artifacts in process at any given time step. Additional outputs include: RFI arrival and completion rates; mean age of RFIs and artifacts; RFI, artifact, and effective congestion centralities and congested nodes; difference between total and completed artifacts and delta slope; and network parameters (mean path length and global clustering coefficient). Networks demonstrate performance comparable to the Artifacts model.
Conclusion: The model correctly implements the stochastic rule for team links in matrix and military staff organizations, the selection of RFIs and artifacts for processing, and decentralized approvals. Elements common to the information exchange and artifact models were previously verified.
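A minimal NetLogo sketch of the decentralized-approval option described above; the procedure and variable names, and the assumption that managers sit at level 1 and supervisors at level 2, are illustrative rather than the model's actual code.

;; Minimal sketch (assumed names), run in worker context when an artifact
;; arrives: approve here if this worker is the manager (level 1) or, when
;; decentralized approvals are enabled, a supervisor (level 2) who passes the
;; draw against decentralized-preference; otherwise pass the artifact up one level.
to handle-artifact [a]
  ifelse depth = 1 or (decentralized-approvals? and depth = 2 and random-float 1 < decentralized-preference)
    [ ask a [ set status "complete" ] ]
    [ ask supervisor [ set artifact-queue lput a artifact-queue ] ]
end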

Validation of the Smart Team Model Face Validation
Micro-Face Validation Macro-Face Validation Principal inputs to the model are the organizational networks and the task environment. The organizational networks are based on real-world organizations or ideal classes described in the literature (i.e., random and multiscale). The model implements the creation of artifacts and sharing of information in design organizations.
The model realistically depicts the flow of artifacts and information in organizational networks, combining both formal passing of information up and down a hierarchy, and informal passing through team relationships.

Empirical Validation
Empirical Input Validation Empirical Output Validation The hierarchical backbone is described by number of levels and branching ratio. This is an idealization in that real organizations exhibit irregularities in both level and branching, but the idealization is reasonable.
The model rates complexity on a scale of 1 to 10. Use of a simple scale is not meant to represent a quantitative comparison of system complexity, but instead differentiates systems of low and high complexity in a numerical fashion that is easy to implement in a model.
Empirical validation relies on stylized facts, primarily the expectation that organizational networks will exhibit different performance for different combinations of smart team parameters. A designed experiment confirms Smart Team factors except artifact preference affect artifact completion rates.
The model uses an artifact preference rating to control worker selection between artifacts and RFIs when both are present. This is a reasonable representation of real-world behavior.
The model uses a selector to enable decentralized approvals. Organizations often allow supervisors and managers at different levels in the organization to approve work products.

Conclusion
The model is considered valid for the purpose of evaluating ways organizational networks can be modified to improve their performance.

Using Artifact Completion Rate to Identify Congestion
Cross-over from always congested to sometimes congested:
Cross-over from sometimes congested to free of congestion:

;; procedure to add levels (i.e., rows) to the hierarchical organizational structure
to make-level [row]
  let b Branching-Ratio                       ;; let b equal the branching ratio
  let W b ^ (row - 1)                         ;; let W equal the number of workers in the previous row
  let N (b ^ (row - 1) - 1) / (b - 1)         ;; let N equal the number of workers in all previous rows of the hierarchy
  foreach n-values W                          ;; for each of the workers in the previous row