SEH 6.0 Crosscutting Technical Management

This chapter describes the activities in the technical management processes listed in the systems engineering engine (Figure 2.1-1). Unlike the processes described in Chapters 4 and 5, which are performed through the design and realization phases of a product, the technical management processes can occur throughout the product life cycle, from concept through disposal, and may occur simultaneously with any of the other processes. The chapter is separated into sections corresponding to the technical management processes 10 through 17 listed in Figure 2.1-1. Each technical management process is discussed in terms of its inputs, activities, and outputs. Additional guidance is provided using examples that are relevant to NASA projects.

The technical management processes are the bridges between project management and the technical team. In this portion of the engine, eight processes provide the crosscutting functions that allow the design solution to be developed, realized, and operated. Even though every technical team member might not be directly involved with these eight processes, they are indirectly affected by these key functions. Every member of the technical team relies on technical planning; management of requirements, interfaces, technical risk, configuration, and technical data; technical assessment; and decision analysis to meet the project’s objectives. Without these crosscutting processes, individual members and tasks cannot be integrated into a functioning system that meets the ConOps within cost and schedule. These technical processes also support the project management team in executing project control.

The next sections describe each of the eight technical management processes and their associated products for a given NASA mission.

6.1 Technical Planning

The Technical Planning Process, the first of the eight technical management processes contained in the systems engineering engine, establishes a plan for applying and managing each of the common technical processes that will be used to drive the development of system products and associated work products. This process also establishes a plan for identifying and defining the technical effort required to satisfy the project objectives and life cycle phase success criteria within the cost, schedule, and risk constraints of the project.

This effort starts with the technical team conducting extensive planning early in Pre-Phase A. With this early planning, technical team members will understand the roles and responsibilities of each team member, and can establish cost and schedule goals and objectives. From this effort, the Systems Engineering Management Plan (SEMP) and other technical plans are developed and baselined. Once the SEMP and technical plans have been established, they should be synchronized with the project master plans and schedule. In addition, the plans for establishing and executing all technical contracting efforts are identified.

Crosscutting Technical Management Keys

  • Thoroughly understand and plan the scope of the technical effort by investing time upfront to develop the technical product breakdown structure, the technical schedule and workflow diagrams, and the technical resource requirements and constraints (funding, budget, facilities, and long-lead items) that will be the technical planning infrastructure. The systems engineer also needs to be familiar with the non-technical aspects of the project.
  • Define all interfaces and assign interface authorities and responsibilities to each, both intra- and inter-organizational. This includes understanding potential incompatibilities and defining the transition processes.
  • Control of the configuration is critical to understanding how changes will impact the system. For example, changes in design and environment could invalidate previous analysis results.
  • Conduct milestone reviews to enable a critical and valuable assessment to be performed. These reviews are not to be solely used to meet contractual or scheduling incentives. These reviews have specific entrance criteria and should be conducted when these are met.
  • Understand any biases, assumptions, and constraints that impact the analysis results.
  • Place all analysis under configuration control to be able to track the impact of changes and understand when the analysis needs to be reevaluated.

This is a recursive and iterative process. Early in the life cycle, the technical plans are established and synchronized to run the design and realization processes. As the system matures and progresses through the life cycle, these plans should be updated as necessary to reflect the current environment and resources and to control the project’s performance, cost, and schedule. At a minimum, these updates will occur at every Key Decision Point (KDP). However, if there is a significant change in the project, such as new stakeholder expectations, resource adjustments, or other constraints, all plans should be analyzed for the impact of these changes on the baselined project.

6.1.1 Process Description

Figure 6.1-1 provides a typical flow diagram for the Technical Planning Process and identifies typical inputs, outputs, and activities to consider in addressing technical planning.

Figure 6.1-1 Technical Planning Process

6.1.1.1 Inputs

Input to the Technical Planning Process comes from both the project management and technical teams as outputs from the other common technical processes. Initial planning, which utilizes external inputs from the project to determine the general scope and framework of the technical effort, is based on known technical and programmatic requirements, constraints, policies, and processes. Throughout the project’s life cycle, the technical team continually incorporates into the technical planning strategy and documentation any results and internal changes generated by the other processes of the SE engine, as well as any requirements and constraints mandated by the project.

  • Project Technical Effort Requirements and Project Resource Constraints: The program/project plan provides the project’s top-level technical requirements, the available budget allocated to the program/project from the program, and the desired schedule to support overall program needs. Although the budget and schedule allocated to the program/project serve as constraints, the technical team generates a technical cost estimate and schedule based on the actual work required to satisfy the technical requirements. Discrepancies between the allocated budget and schedule and the technical team’s actual cost estimate and schedule should be reconciled continuously throughout the life cycle.
  • Agreements, Capability Needs, Applicable Product Life Cycle Phase: The program/project plan also defines the applicable life cycle phases and milestones, as well as any internal and external agreements or capability needs required for successful execution. The life cycle phases and programmatic milestones provide the general framework for establishing the technical planning effort and for generating the detailed technical activities and products required to meet the overall milestones in each of the life cycle phases.
  • Applicable Policies, Procedures, Standards, and Organizational Processes: The program/project plan includes all programmatic policies, procedures, standards, and organizational processes that should be adhered to during execution of the technical effort. The technical team should develop a technical approach that ensures the program/project requirements are satisfied and that any technical procedures, processes, and standards to be used in developing the intermediate and final products comply with the policies and processes mandated in the program/project plan.
  • Prior Phase or Baseline Plans: The latest technical plans (either baselined or from the previous life cycle phase) from the Data Management or Configuration Management Processes should be used in updating the technical planning for the upcoming life cycle phase.
  • Replanning Needs: Technical planning updates may be required based on results from technical reviews conducted in the Technical Assessment Process, issues identified during the Technical Risk Management Process, or from decisions made during the Decision Analysis Process.

6.1.1.2 Process Activities

Technical planning as it relates to systems engineering at NASA is intended to define how the project will be organized, structured, and conducted and to identify, define, and plan how the 17 common technical processes in NPR 7123.1, NASA Systems Engineering Processes and Requirements will be applied in each life cycle phase for all levels of the product hierarchy within the system structure to meet product life cycle phase success criteria (see Section 6.1.2.1 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository). A key document capturing and updating the details from the technical planning process is the SEMP.

The SEMP is a subordinate document to the project plan. The project plan defines how the project will be managed to achieve its goals and objectives within defined programmatic constraints. The SEMP defines for all project participants how the project will be technically managed within the constraints established by the project. The SEMP also communicates how the systems engineering management techniques will be applied throughout all phases of the project life cycle.

Technical planning should be tightly integrated with the Technical Risk Management Process (see Section 6.4) and the Technical Assessment Process (see Section 6.7) to ensure corrective action for future activities will be incorporated based on current issues identified within the project.

Technical planning, as opposed to program or project planning, addresses the scope of the technical effort required to develop the system products. While the project manager concentrates on managing the overall project life cycle, the technical team, led by the systems engineer, concentrates on managing the technical aspects of the project. The technical team identifies, defines, and develops plans for performing decomposition, definition, integration, verification, and validation of the system while orchestrating and incorporating the appropriate concurrent and crosscutting engineering. Additional planning includes defining and planning for the appropriate technical reviews, audits, assessments, and status reports and determining crosscutting engineering discipline and/or design verification requirements.

This section describes how to perform the activities contained in the Technical Planning Process shown in Figure 6.1-1. The initial technical planning at the beginning of the project establishes the technical team members; their roles and responsibilities; and the tools, processes, and resources that will be utilized in executing the technical effort. In addition, the expected activities that the technical team will perform and the products it will produce are identified, defined, and scheduled. Technical planning continues to evolve as actual data from completed tasks are received and details of near-term and future activities are known.

6.1.1.2.1 Technical Planning Preparation

For technical planning to be conducted properly, the processes and procedures that are needed to conduct technical planning should be identified, defined, and communicated. As participants are identified, their roles and responsibilities and any training and/or certification activities should be clearly defined and communicated.

Team Selection

Teams engaged in the early part of the technical planning process need to identify the required skill mix for technical teams that will develop and produce a product. Typically, a technical team consists of a mix of both subsystem and discipline engineers. Considering a spacecraft example, subsystem engineers normally have cognizance over development of a particular subsystem (e.g., mechanical, power, etc.), whereas discipline engineers normally provide specific analyses (e.g., flight dynamics, radiation, etc.). The availability of appropriately skilled personnel also needs to be considered.

To an extent, determining the skill mix required for developing any particular product is a subjective process. Due to this, the skill mix is normally determined in consultation with people experienced in leading design teams for a particular mission or technical application. Some of the subjective considerations involved include the product and its requirements, the mission class, and the project phase.

Continuing with a spacecraft example, most teams typically share a common core of required skills, such as subsystem engineering for mechanical, thermal, power, etc. However, the particular requirements of a spacecraft and mission can cause the skill mix to vary. For example, as opposed to robotic space missions, human-rated systems typically add the need for human systems discipline engineering and environmental control and life support subsystem engineering. As opposed to near Earth space missions, deep space missions may add the need for safety and planetary protection discipline engineering specific to contamination of the Earth or remote solar system bodies. And, as opposed to teams designing spacecraft instruments that operate at moderate temperatures, teams designing spacecraft instruments that operate at cryogenic temperatures will need cryogenics subsystem support.

Mission class and project phase may also influence the required team skill mix. For example, with respect to mission class, certain discipline analyses needed for Class A and B missions may not be required for Class D (or lower) missions. And with respect to project phase, some design and analyses may be performed by a single general discipline in Pre-Phase A and Phase A, whereas the need to conduct design and analyses in more detail in Phases B and C may indicate the need for multiple specialized subsystem design and discipline engineering skills.

An example skill mix for a Pre-Phase A technical team tasked to design a cryogenic interferometer space observatory is shown in Table 6.1-1 for purposes of illustration. For simplicity, analysis and technology development are assumed to be included in the subsystem or discipline shown. For example, this means “mechanical subsystem” includes both loads and dynamics analysis and mechanical technology development.

Table 6.1-1 Example Engineering Team Disciplines in Pre-Phase A for Robotic Infrared Observatory

Once the processes, people, and roles and responsibilities are in place, a planning strategy may be formulated for the technical effort. A basic technical planning strategy should address the following:

  • The communication strategy within the technical team and for up and out communications;
  • Identification and tailoring of NASA procedural requirements that apply to each level of the Product Breakdown Structure (PBS);
  • The level of planning documentation required for the SEMP and all other technical planning documents;
  • Identifying and collecting input documentation;
  • The sequence of technical work to be conducted, including inputs and outputs;
  • The deliverable products from the technical work;
  • How to capture the work products of technical activities;
  • How technical risks will be identified and managed;
  • The tools, methods, and training needed to conduct the technical effort;
  • The involvement of stakeholders in each facet of the technical effort;
  • How the NASA technical team will be involved with the technical efforts of external contractors;
  • The entry and success criteria for milestones, such as technical reviews and life cycle phases;
  • The identification, definition, and control of internal and external interfaces;
  • The identification and incorporation of relevant lessons learned into the technical planning;
  • The team’s approach to capturing lessons learned during the project and how those lessons will be recorded;
  • The approach for technology development and how the resulting technology will be incorporated into the project;
  • The identification and definition of the technical metrics for measuring and tracking progress to the realized product;
  • The criteria for make, buy, or reuse decisions and incorporation criteria for Commercial Off-the-Shelf (COTS) software and hardware;
  • The plan to identify and mitigate off-nominal performance;
  • The “how-tos” for contingency planning and replanning;
  • The plan for status assessment and reporting;
  • The approach to decision analysis, including materials needed, skills required, and expectations in terms of accuracy; and
  • The plan for managing the human element in the technical activities and product.

By addressing these items and others unique to the project, the technical team will have a basis for understanding and defining the scope of the technical effort, including the deliverable products that the overall technical effort will produce, the schedule and key milestones for the project that the technical team should support, and the resources required by the technical team to perform the work.

A key element in defining the technical planning effort is understanding the amount of work associated with performing the identified activities. Once the scope of the technical effort begins to coalesce, the technical team may begin to define specific planning activities and to estimate the amount of effort and resources required to perform each task. Historically, many projects have underestimated the resources required to perform proper planning activities and have been forced into a position of continuous crisis management in order to keep up with changes in the project.

Identifying Facilities

The planning process also includes identifying the required facilities, laboratories, test beds, and instrumentation needed to build, test, launch, and operate a variety of commercial and Government products. A sample list of the kinds of facilities that might be considered when planning is illustrated in Table 6.1-2.

Table 6.1-2 Examples of Types of Facilities to Consider During Planning

6.1.1.2.2 Define the Technical Work

The technical effort should be defined commensurate with the level of detail needed for the life cycle phase. Realistic values for cost, schedule, and labor resources, whether extrapolated from historical databases or derived from interactive planning sessions with the project and stakeholders, should be calculated and provided to the project team. Contingency should be included in any estimate and should be based on the complexity and criticality of the effort. Contingency planning should be conducted; the following are examples:

  • Additional, unplanned-for software engineering resources are typically needed during hardware and systems development and testing to aid in troubleshooting errors and anomalies; for example, software engineers are frequently called upon to write additional test drivers to help pinpoint the source of hardware problems. Additional software staff should be planned into the project contingencies to accommodate inevitable component and system debugging and to avoid cost and schedule overruns.
  • Hardware-In-the-Loop (HWIL) should be accounted for in the technical planning contingencies. HWIL testing is typically accomplished as a debugging exercise where the hardware and software are brought together for the first time in the costly environment of HWIL. If upfront work is not done to understand the messages and errors arising during this test, additional time in the HWIL facility may result in significant cost and schedule impacts. Impacts may be mitigated through upfront planning, such as making appropriate debugging software available to the technical team prior to the test, etc.
  • Similarly, Human-In-The-Loop (HITL) evaluations identify contingency operational issues. HITL investigations are particularly critical early in the design process to expose, identify, and cost-effectively correct operational issues—nominal, maintenance, repair, off-nominal, training, etc.—in the required human interactions with the planned design. HITL testing should also be approached as a debugging exercise where hardware, software, and human elements interact and their performance is evaluated. If operational design and/or performance issues are not identified early, the cost of late design changes will be significant.

6.1.1.2.3 Schedule, Organize, and Budget the Technical Effort

Once the technical team has defined the technical work to be done, efforts can focus on producing a schedule and cost estimate for the technical portion of the project. The technical team should organize the technical tasks according to the project WBS in a logical sequence of events, taking into consideration the major project milestones, phasing of available funding, and timing of the availability of supporting resources.

Scheduling

Products described in the WBS are the result of activities that take time to complete. These activities have time precedence relationships among them that may be used to create a network schedule explicitly defining the dependencies of each activity on other activities, the availability of resources, and the receipt of receivables from outside sources. Use of a scheduling tool may facilitate the development and maintenance of the schedule.

Scheduling is an essential component of planning and managing the activities of a project. The process of creating a network schedule provides a standard method for defining and communicating what needs to be done, how long it will take, and how each element of the project WBS might affect other elements. A complete network schedule may be used to calculate how long it will take to complete a project; which activities determine that duration (i.e., critical path activities); and how much spare time (i.e., float) exists for all the other activities of the project.

“Critical path” is the sequence of dependent tasks that determines the longest duration of time needed to complete the project. The critical path may encompass only one task or a series of interrelated tasks. It is important to identify the critical path and the resources needed to complete the critical tasks along the path if the project is to be completed on time and within its resources. As the project progresses, the critical path will change as the critical tasks are completed or as other tasks are delayed. This evolving critical path with its identified tasks needs to be carefully monitored during the progression of the project.
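
To make the mechanics concrete, here is a minimal sketch, with invented task names and durations, of how a critical path falls out of a network schedule: a forward pass computes earliest finish times, a backward pass computes latest finish times, and tasks with zero float lie on the critical path. A real project would use a scheduling tool rather than hand-rolled code.

```python
# Minimal critical-path sketch; tasks and durations are hypothetical.
tasks = {
    # task: (duration_weeks, [predecessors])
    "requirements": (4, []),
    "design":       (8, ["requirements"]),
    "fabrication":  (10, ["design"]),
    "software":     (12, ["design"]),
    "integration":  (6, ["fabrication", "software"]),
}

early_finish = {}
def ef(task):
    # Forward pass: earliest finish = duration + latest predecessor finish.
    if task not in early_finish:
        dur, preds = tasks[task]
        early_finish[task] = dur + max((ef(p) for p in preds), default=0)
    return early_finish[task]

project_end = max(ef(t) for t in tasks)

# Backward pass: a predecessor must finish by the latest start of each successor.
late_finish = {t: project_end for t in tasks}
for t in sorted(tasks, key=ef, reverse=True):
    dur, preds = tasks[t]
    for p in preds:
        late_finish[p] = min(late_finish[p], late_finish[t] - dur)

floats = {t: late_finish[t] - ef(t) for t in tasks}
critical = [t for t, f in floats.items() if f == 0]
print(f"Duration: {project_end} weeks; critical path: {critical}; float: {floats}")
```

In this invented network the critical path runs through the software task, while fabrication carries two weeks of float, so a small slip there would not move the end date.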

Network scheduling systems help managers accurately assess the impact of both technical and resource changes on the cost and schedule of a project. Cost and technical problems often show up first as schedule problems. Understanding the project’s schedule is a prerequisite for determining an accurate project budget and for tracking performance and progress. Because network schedules show how each activity affects other activities, they assist in assessing and predicting the consequences of schedule slips or accelerations of an activity on the entire project.

For additional information on scheduling, refer to NASA/SP-2010-3403, NASA Schedule Management Handbook.

Budgeting

Budgeting and resource planning involve establishing a reasonable project baseline budget and the capability to analyze changes to that baseline resulting from technical and/or schedule changes. The project’s WBS, baseline schedule, and budget should be viewed as mutually dependent, reflecting the technical content, time, and cost of meeting the project’s goals and objectives. The budgeting process needs to take into account whether a fixed cost cap or fixed cost profile exists. When no such cap or profile exists, a baseline budget is developed from the WBS and network schedule. This specifically involves combining the project’s workforce and other resource needs with the appropriate workforce rates and other financial and programmatic factors to obtain cost element estimates (a small roll-up sketch follows the list). These elements of cost include

  • direct labor costs,
  • overhead costs,
  • other direct costs (travel, data processing, etc.),
  • subcontract costs,
  • material costs,
  • equipment costs,
  • general and administrative costs,
  • cost of money (i.e., interest payments, if applicable),
  • fee (if applicable), and
  • contingency (Unallocated Future Expenses (UFE)).
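
To illustrate how these elements combine, the sketch below rolls invented workforce needs and rates into a baseline estimate and adds contingency (UFE) as a percentage of the subtotal. All names, rates, and percentages are hypothetical; actual estimates come from the project’s cost models and historical databases.

```python
# Hypothetical baseline-budget roll-up; every number here is invented.
workforce_fte = {"systems_engineering": 3.0, "software": 5.0, "test": 2.0}
rate_per_fte = 180.0  # fully burdened $K per FTE-year (assumed)

direct_labor = sum(workforce_fte.values()) * rate_per_fte
elements = {
    "direct_labor": direct_labor,
    "overhead": 0.30 * direct_labor,           # assumed overhead rate
    "other_direct_costs": 120.0,               # travel, data processing, etc.
    "subcontracts": 400.0,
    "materials_and_equipment": 250.0,
    "general_and_administrative": 0.08 * direct_labor,  # simplified G&A basis
}

subtotal = sum(elements.values())
ufe = 0.20 * subtotal  # contingency percentage depends on complexity and risk
print(f"Subtotal: ${subtotal:,.0f}K  UFE: ${ufe:,.0f}K  Total: ${subtotal + ufe:,.0f}K")
```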

For additional information on cost estimating, refer to the NASA Cost Estimating Handbook and NPR 7120.5, NASA Space Flight Program and Project Management Requirements.

6.1.1.2.4 Prepare the SEMP and Other Technical Plans

Systems Engineering Management Plan

The SEMP is the primary, top-level technical management document for the project and is developed early in the Formulation Phase and updated throughout the project life cycle. The SEMP is driven by the type of project, the phase in the project life cycle, and the technical development risks and is written specifically for each project or project element. While the specific content of the SEMP is tailored to the project, the recommended content is discussed in Appendix J. It is important to remember that the main value of the SEMP is in the work that goes into the planning.

The technical team, working under the overall project plan, develops and updates the SEMP as necessary. The technical team works with the project manager to review the content and obtain concurrence. This allows for thorough discussion and coordination of how the proposed technical activities would impact the programmatic, cost, and schedule aspects of the project. The SEMP provides the specifics of the technical effort and describes the technical processes that will be used, how the processes will be applied using appropriate activities, how the project will be organized to accomplish the activities, and the cost and schedule associated with accomplishing the activities.

The physical length of a SEMP is not what is important. This will vary from project to project. The plan needs to be adequate to address the specific technical needs of the project. It is a living document that is updated as often as necessary to incorporate new information as it becomes available and as the project develops through the Implementation Phase. The SEMP should not duplicate other project documents; however, the SEMP should reference and summarize the content of other technical plans.

The systems engineer and project manager should identify additional required technical plans based on the project scope and type. Other plans, such as system safety, probabilistic risk assessment, and an HSI Plan, also need to be planned for and coordinated with the SEMP. Depending on the size and complexity of the project, these may be separate plans or they may be included within the SEMP. If a technical plan is a stand-alone, it should be referenced in the SEMP and coordinated with the SEMP’s development. Once identified, the plans can be developed, training on these plans established, and the plans implemented. Examples of technical plans in addition to the SEMP are listed in Appendix K.

The SEMP should be developed during pre-formulation. In developing the SEMP, the technical approach to the project’s life cycle is developed. This determines the project’s length and cost. The development of the programmatic and technical management approaches requires that the key project personnel develop an understanding of the work to be performed and the relationships among the various parts of that work. Refer to Sections 6.1.2.1 and 6.1.1.2 on WBSs and network scheduling, respectively. The SEMP then flows into the project plan to ensure the proper allocation of resources including cost, schedule, and personnel.

The SEMP’s development requires contributions from knowledgeable programmatic and technical experts from all areas of the project that can significantly influence the project’s outcome. The involvement of recognized experts is needed to establish a SEMP that is credible to the project manager and to secure the full commitment of the project team.

Role of the SEMP

The SEMP is the rule book that describes to all participants how the project will be technically managed. The NASA technical team on the project should have a SEMP to describe how it will conduct its technical management, and each contractor should have a SEMP to describe how it will manage in accordance with both its contract and NASA’s technical management practices. Since the SEMP is unique to a project and contract, it should be updated for each significant programmatic change or it will become outmoded and unused and the project could slide into an uncontrolled state. The lead NASA field Center should have its SEMP developed before attempting to prepare an initial cost estimate since activities that incur cost, such as technical risk reduction and human element accounting, need to be identified and described beforehand. The contractor should have its SEMP developed during the proposal process (prior to costing and pricing) because the SEMP describes the technical content of the project, the potentially costly risk management activities, and the verification and validation techniques to be used, all of which should be included in the preparation of project cost estimates. The SEMPs from the supporting Centers should be developed along with the primary project SEMP. The project SEMP is the senior technical management document for the project; all other technical plans should comply with it. The SEMP should be comprehensive and describe how a fully integrated engineering effort will be managed and conducted.

Verification Plan

The verification plan is developed as part of the Technical Planning Process and is baselined at PDR. As the design matures throughout the life cycle, the plan is updated and refined as needed. The task of preparing the verification plan includes establishing the method of verification to be performed, dependent on the life cycle phase; the position of the product in the system structure; the form of the product used; and the related costs of verification of individual specified requirements. The verification methods include analysis, inspection, demonstration, and test. In some cases, the complete verification of a given requirement might require more than one method. For example, verifying the performance of a product may require looking at many use cases. This might be accomplished by running a Monte Carlo simulation (analysis) and also running actual tests on a few of the key cases. The verification plan, typically written at a detailed technical level, plays a pivotal role in bottom-up product realization.
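
As a sketch of the Monte Carlo analysis mentioned above, the fragment below samples many use cases of an invented pointing-error model and estimates how often a hypothetical requirement limit is exceeded. The parameter names, distributions, and limit are made up for illustration; a real verification analysis would use validated models and documented environments.

```python
import random

# Hypothetical requirement: pointing error shall be < 0.1 deg across use cases.
LIMIT_DEG = 0.1
N_CASES = 100_000

def pointing_error(sensor_noise, thermal_drift, actuator_bias):
    # Stand-in error model; a real analysis would use a validated simulation.
    return abs(sensor_noise + thermal_drift + actuator_bias)

failures = 0
for _ in range(N_CASES):
    err = pointing_error(
        sensor_noise=random.gauss(0.0, 0.02),      # assumed distributions
        thermal_drift=random.uniform(-0.03, 0.03),
        actuator_bias=random.gauss(0.0, 0.01),
    )
    if err >= LIMIT_DEG:
        failures += 1

print(f"Estimated probability of exceeding the limit: {failures / N_CASES:.4%}")
```

A handful of the worst sampled cases could then be selected for actual test, combining the analysis and test methods as described above.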

Types of Hardware

Breadboard: A low fidelity unit that demonstrates function only without considering form or fit in the case of hardware or platform in the case of software. It often uses commercial and/or ad hoc components and is not intended to provide definitive information regarding operational performance.

Brassboard: A medium fidelity functional unit that typically tries to make use of as much operational hardware/software as possible and begins to address scaling issues associated with the operational system. It does not have the engineering pedigree in all aspects, but is structured to be able to operate in simulated operational environments in order to assess performance of critical functions.

Engineering Unit: A high fidelity unit that demonstrates critical aspects of the engineering processes involved in the development of the operational unit. Engineering test units are intended to closely resemble the final product (hardware/software) to the maximum extent possible and are built and tested so as to establish confidence that the design will function in the expected environments. In some cases, the engineering unit will become the final product, assuming proper traceability has been exercised over the components and hardware handling.

Prototype Unit: The prototype unit demonstrates form, fit, and function at a scale deemed to be representative of the final product operating in its operational environment. A subscale test article provides fidelity sufficient to permit validation of analytical models capable of predicting the behavior of full-scale systems in an operational environment.

Qualification Unit: A unit that is the same as the flight unit (form, fit, function, components, etc.) that will be exposed to the extremes of the environmental criteria (thermal, vibration, etc.). The unit will typically not be flown due to these off-nominal stresses.

Protoflight Unit: In projects that will not develop a qualification unit, the flight unit may be designated as a protoflight unit and a limited version of qualification test ranges will be applied. This unit will be flown.

Flight Unit: The end product that will be flown and will typically undergo acceptance level testing.

Note: The final, official verification of the end product should be on a controlled unit. Typically, attempting to “buy off” a “shall” on a prototype is not acceptable; it is usually completed on a qualification, flight, or other more final, controlled unit.

A phase product can be verified recursively throughout the project life cycle and on a wide variety of product forms. For example:

  • simulated (algorithmic models, virtual reality simulator);
  • mock-up (plywood, brassboard, breadboard);
  • concept description (paper report);
  • engineering unit (fully functional but may not be same form/fit);
  • prototype (form, fit, and function);
  • design verification test units (form, fit, and function is the same, but they may not have flight parts);
  • qualification units (identical to flight units but may be subjected to extreme environments); and
  • flight units (end product that is flown, including protoflight units).

Verification of the end product—that is, the official “run for the record” verification where the program/project takes credit for meeting a requirement—is usually performed on a qualification, protoflight, or flight unit to ensure its applicability to the flight system. However, with discussion and approval from the program/project and systems engineering teams, verification credit may be taken on lower fidelity units if they can be shown to be sufficiently like the flight units in the areas to be verified.

Any of these types of product forms may be in any of these states:

  • produced (built, fabricated, manufactured, or coded);
  • reused (modified internal non-developmental products or OTS product); or
  • assembled and integrated (a composite of lower-level products).

The conditions and environment under which the product is to be verified should be established and the verification should be planned based on the associated entrance/exit criteria that are identified. The Decision Analysis Process should be used to help finalize the planning details.

Procedures should be prepared to conduct verification based on the method (e.g., analysis, inspection, demonstration, or test) planned. These procedures are typically developed during the design phase of the project life cycle and matured as the design is matured. Operational use scenarios are thought through in order to explore all possible verification activities to be performed.

Note: Verification planning begins early in the project life cycle during the requirements development phase. (See Section 4.2.) The verification approach to use should be included as part of requirements development to plan for future activities, to establish special requirements derived from identified verification-enabling products, and to ensure that the requirements are verifiable. Updates to verification planning continue throughout logical decomposition and design development, especially as design reviews and simulations shed light on items under consideration. (See Section 6.1.)

As appropriate, project risk items are updated based on approved verification strategies that cannot duplicate fully integrated test systems, configurations, and/or target operating environments. Rationales, trade space, optimization results, and implications of the approaches are documented in the new or revised risk statements, along with references, to accommodate future design, test, and operational changes to the project baseline.

Validation Plan

The validation plan is one of the work products of the Technical Planning Process and is generated during the Design Solution Process to validate the end product against the baselined stakeholder expectations. This plan can take many forms. The plan describes the total Test and Evaluation (T&E) planning from development of lower-end through higher-end products in the system structure and through operational T&E into production and acceptance. It may combine the verification and validation plans into a single document. (See Appendix I for a sample Verification and Validation Plan outline.)

The methods of validation include test, demonstration, inspection, and analysis. While the name of each method is the same as the name of the methods for verification, the purpose and intent as described above are quite different.

Planning to conduct the product validation is a key first step. The method of validation to be used (e.g., analysis, demonstration, inspection, or test) should be established based on the form of the realized end product, the applicable life cycle phase, cost, schedule, resources available, and location of the system product within the system structure.

An established set or subset of expectations or behaviors to be validated should be identified and the validation plan reviewed (an output of the Technical Planning Process, based on design solution outputs) for any specific procedures, constraints, success criteria, or other validation requirements. The conditions and environment under which the product is to be validated should be established and the validation should be planned based on the relevant life cycle phase and associated success criteria identified. The Decision Analysis Process should be used to help finalize the planning details.

It is important to review the validation plans with relevant stakeholders and to understand the relationship between the context of the validation and the context of use (human involvement). As part of the planning process, validation-enabling products should be identified and scheduling and/or acquisition should be initiated.

Procedures should be prepared to conduct validation based on the method planned (e.g., analysis, inspection, demonstration, or test). These procedures are typically developed during the design phase of the project life cycle and matured as the design is matured. Operational and use-case scenarios are thought through in order to explore all possible validation activities to be performed.

Validation is conducted by the user/operator or by the developer as determined by NASA Center directives or the contract with the developers. Systems-level validation (e.g., customer Test and Evaluation (T&E) and some other types of validation) may be performed by an acquirer testing organization. For those portions of validation performed by the developer, appropriate agreements should be negotiated to ensure that validation proof-of-documentation is delivered with the product.

Regardless of the source (buy, make, reuse, assemble and integrate) and the position in the system structure, all realized end products should be validated to demonstrate/confirm satisfaction of stakeholder expectations. Variations, anomalies, and out-of-compliance conditions, where such have been detected, are documented along with the actions taken to resolve the discrepancies. Validation is typically carried out in the intended operational environment or a relevant environment under simulated or actual operational conditions, not necessarily under the tightly controlled conditions usually employed for the Product Verification Process.

Environments

Relevant Environment: Not all systems, subsystems, and/or components need to be operated in the operational environment in order to satisfactorily address performance margin requirements or stakeholder expectations. Consequently, the relevant environment is the specific subset of the operational environment that is required to demonstrate critical “at risk” aspects of the final product performance in an operational environment.

Operational Environment: The environment in which the final product will be operated. In the case of space flight hardware/software, it is space. In the case of ground-based or airborne systems that are not directed toward space flight, it is the environments defined by the scope of operations. For software, the environment is defined by the operational platform.

Validation of phase products can be performed recursively throughout the project life cycle and on a wide variety of product forms. For example:

  • simulated (algorithmic models, virtual reality simulator);
  • mock-up (plywood, brassboard, breadboard);
  • concept description (paper report);
  • engineering unit (functional but may not be same form/fit);
  • prototype (product with form, fit, and function);
  • design validation test units (form, fit, and function may be the same, but they may not have flight parts);
  • qualification unit (identical to flight unit but may be subjected to extreme environments); and
  • flight unit (end product that is flown).

Any of these types of product forms may be in any of these states:

  • produced (built, fabricated, manufactured, or coded);
  • reused (modified internal non-developmental products or off-the-shelf product); or
  • assembled and integrated (a composite of lower level products).

Note: The final, official validation of the end product should be for a controlled unit. Typically, attempting final validation against the ConOps on a prototype is not acceptable: it is usually completed on a qualification, flight, or other more final, controlled unit.

For additional information on technical plans, refer to the following appendices of this document and to Section 6.1.1.2.4 of the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository:

  • Appendix H Integration Plan Outline
  • Appendix I Verification and Validation Plan Outline
  • Appendix J SEMP Content Outline
  • Appendix K Technical Plans
  • Appendix L Interface Requirements Document Outline
  • Appendix M CM Plan Outline
  • Appendix R HSI Plan Content Outline
  • Appendix S Concept of Operations Annotated Outline

Note: In planning for validation, consideration should be given to the extent to which validation testing will be done. In many instances, both nominal and off-nominal operational scenarios should be utilized. Off-nominal testing offers insight into a system’s total performance characteristics and often assists in identifying the design issues and human-machine interface, training, and procedural changes required to meet the mission goals and objectives.

6.1.1.2.5 Obtain Stakeholder Commitments to Technical Plans

Stakeholder Roles in Project Planning

To obtain commitments to the technical plans from the stakeholders, the technical team should ensure that the appropriate stakeholders, including subject domain experts, have a method to provide inputs and to review the project planning for implementation of stakeholder interests.

During the Formulation Phase, the roles of the stakeholders should be defined in the project plan and the SEMP. Review of these plans and the agreements from the stakeholders to the content of these plans constitutes buy-in from the stakeholders to the technical approach. It is essential to identify the stakeholders and get their concurrence on the technical approach.

Later in the project life cycle, stakeholders may be responsible for delivering products to the project. Initial agreements regarding the responsibilities of the stakeholders are key to ensuring that the project technical team obtains the appropriate deliveries from stakeholders.

Stakeholder Involvement in Defining Requirements

The identification of stakeholders is one of the early steps in the systems engineering process. As the project progresses, stakeholder expectations are flowed down through the Logical Decomposition Process, and specific stakeholders are identified for all of the primary and derived requirements. A critical part of the stakeholders’ involvement is in the definition of the technical requirements. As requirements and the ConOps are developed, the stakeholders will be required to agree to these products. Inadequate stakeholder involvement leads to inadequate requirements and a resultant product that does not meet the stakeholder expectations. Status on relevant stakeholder involvement should be tracked and corrective action taken if stakeholders are not participating as planned.

Stakeholder Agreements

Throughout the project life cycle, communication with the stakeholders and commitments from the stakeholders may be accomplished through the use of agreements. Organizations may use an Internal Task Agreement (ITA), a Memorandum Of Understanding (MOU), or other similar documentation to establish the relationship between the project and the stakeholder. These agreements are also used to document the customer and provider responsibilities for defining products to be delivered. These agreements should establish the Measures of Effectiveness (MOEs) or Measures of Performance (MOPs) that will be used to monitor the progress of activities. Reporting requirements and schedule requirements should be established in these agreements. Preparation of these agreements will ensure that the stakeholders’ roles and responsibilities support the project goals and that the project has a method to address risks and issues as they are identified.

Stakeholder Support for Forums

During development of the project plan and the SEMP, forums are established to facilitate communication and document decisions during the life cycle of the project. These forums include meetings, working groups, decision panels, and control boards. Each of these forums should establish a charter to define the scope and authority of the forum and identify necessary voting or nonvoting participants. Ad hoc members may be identified when the expertise or input of specific stakeholders is needed when specific topics are addressed. It is important to ensure that stakeholders have been identified to support the forum.

6.1.1.2.6 Issue Technical Work Directives

The technical team provides technical work directives to Cost Account Managers (CAMs). This enables the CAMs to prepare detailed plans that are mutually consistent and collectively address all of the work to be performed. These plans include the detailed schedules and budgets for cost accounts that are needed for cost management and Earned Value Management (EVM).

Issuing technical work directives is an essential activity during Phase B of a project when a detailed planning baseline is required. If this activity is not implemented, then the CAMs are often left with insufficient guidance for detailed planning. The schedules and budgets that are needed for EVM will then be based on assumptions and local interpretations of project-level information. If this is the case, it is highly likely that substantial variances will occur between the baseline plan and the work performed. Providing technical work directives to CAMs produces a more organized technical team. This activity may be repeated when replanning occurs.

This “technical work directives” step produces: (1) planning directives to CAMs that result in (2) a consistent set of cost account plans. Where EVM is called for, it produces (3) an EVM planning baseline, including a Budgeted Cost of Work Scheduled (BCWS).
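
A minimal sketch of how cost account plans roll up into that baseline appears below; the accounts and monthly figures are invented. Cumulative BCWS is the planned cost of work scheduled through each period, and comparing it with earned value (BCWP) at a status point yields schedule variance.

```python
# Hypothetical time-phased EVM baseline built from cost account plans ($K/month).
cost_accounts = {
    "structures":  [10, 15, 20, 15],
    "avionics":    [5, 10, 25, 30],
    "integration": [0, 0, 10, 25],
}

months = max(len(plan) for plan in cost_accounts.values())
bcws, cumulative = [], 0.0
for m in range(months):
    cumulative += sum(plan[m] for plan in cost_accounts.values() if m < len(plan))
    bcws.append(cumulative)  # cumulative Budgeted Cost of Work Scheduled

print("Cumulative BCWS by month ($K):", bcws)

# Earned value (BCWP) comes from work actually completed at the status point.
bcwp_month2 = 32.5  # hypothetical earned value through month 2 (behind plan)
print("Schedule variance through month 2 ($K):", bcwp_month2 - bcws[1])
```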

This activity is not limited to systems engineering. This is a normal part of project planning wherever there is a need for an accurate planning baseline. For additional information on Technical Work Directives, refer to Section 6.1.1.2.6 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository.

6.1.1.2.7 Capture Technical Planning Work Products

The work products from the Technical Planning Process should be managed using either the Technical Data Management Process or the Configuration Management Process as required. Some of the more important products of technical planning (i.e., the WBS, the SEMP, the schedule, etc.) are kept under configuration control and captured using the CM process. The Technical Data Management Process is used to capture trade studies, cost estimates, technical analyses, reports, and other important documents not under formal configuration control. Work products, such as meeting minutes and correspondence (including e-mail) containing decisions or agreements with stakeholders, should also be retained and stored in project files for later reference.

6.1.1.3 Outputs

Typical outputs from technical planning activities are:

  • Technical work cost estimates, schedules, and resource needs: e.g., funds, workforce, facilities, and equipment (to the project) within the project resources;
  • Product and process measures: Those needed to assess progress of the technical effort and the effectiveness of processes (to the Technical Assessment Process);
  • SEMP and other technical plans: Technical planning strategy, WBS, SEMP, HSI Plan, V&V Plan, and other technical plans that support implementation of the technical effort (to all processes; applicable plans to technical processes);
  • Technical work directives: e.g., work packages or task orders with work authorization (to applicable technical teams); and
  • Technical Planning Process work products: Includes products needed to provide reports, records, and non-deliverable outcomes of process activities (to the Technical Data Management Process).

The resulting technical planning strategy constitutes an outline, or rough draft, of the SEMP and serves as a starting point for the overall Technical Planning Process after initial preparation is complete. When preparations for technical planning are complete, the technical team should have a cost estimate and schedule for the technical planning effort. The budget and schedule to support the defined technical planning effort can then be negotiated with the project manager to resolve any discrepancies between what is needed and what is available. The SEMP baseline then needs to be completed and approved by the appropriate level of authority, and planning for updates to the SEMP in response to programmatic changes needs to be developed and implemented.

6.1.2 Technical Planning Guidance

Refer to Section 6.1.2 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository for additional guidance on:

  • Work Breakdown Structure (WBS),
  • cost definition and modeling, and
  • lessons learned.

Additional information on the WBS can also be found in NASA/SP-2010-3404, NASA Work Breakdown Structure Handbook and on costing in the NASA Cost Estimating Handbook.

6.2 Requirements Management

Requirements management activities apply to the management of all stakeholder expectations, customer requirements, and technical product requirements down to the lowest level product component requirements (hereafter referred to as expectations and requirements). This includes physical, functional, and operational requirements, including those that result from interfaces between the systems in question and other external entities and environments. The Requirements Management Process is used to:

  • Identify, control, decompose, and allocate requirements across all levels of the WBS.
  • Provide bidirectional traceability.
  • Manage the changes to established requirement baselines over the life cycle of the system products.

Definitions

Traceability: A discernible association between two or more logical entities such as requirements, system elements, verifications, or tasks.

Bidirectional traceability: The ability to trace any given requirement/expectation to its parent requirement/expectation and to its allocated children requirements/expectations.

6.2.1 Process Description

Figure 6.2-1 provides a typical flow diagram for the Requirements Management Process and identifies typical inputs, outputs, and activities to consider in addressing requirements management.

Figure 6.2-1 Requirements Management Process

6.2.1.1 Inputs

There are several fundamental inputs to the Requirements Management Process.

  • Expectations and requirements to be managed: Requirements and stakeholder expectations are identified during the system design processes, primarily from the Stakeholder Expectations Definition Process and the Technical Requirements Definition Process.
  • Requirement change requests: The Requirements Management Process should be prepared to deal with requirement change requests that can be generated at any time during the project life cycle or as a result of reviews and assessments as part of the Technical Assessment Process.
  • TPM estimation/evaluation results: Technical Performance Measure (TPM) estimation/evaluation results from the Technical Assessment Process provide an early warning of the adequacy of a design in satisfying selected critical technical parameter requirements. Variances from expected values of product performance may trigger changes to requirements.
  • Product verification and validation results: Product verification and product validation results from the Product Verification and Product Validation Processes are mapped into the requirements database with the goal of verifying and validating all requirements.

6.2.1.2 Process Activities

6.2.1.2.1 Prepare to Conduct Requirements Management

Preparing to conduct requirements management includes gathering the requirements that were defined and baselined during the Requirements Definition Process. The identified sources/owners of each requirement should be checked for currency. The organization (e.g., change board) and procedures to perform requirements management are established.

6.2.1.2.2 Conduct Requirements Management

The Requirements Management Process involves managing all changes to expectations and requirements baselines over the life of the product and maintaining bidirectional traceability between stakeholder expectations, customer requirements, technical product requirements, product component requirements, design documents, and test plans and procedures. The successful management of requirements involves several key activities:

  • Establish a plan for executing requirements management.
  • Receive requirements from the system design processes and organize them in a hierarchical tree structure.
  • Maintain bidirectional traceability between requirements.
  • Evaluate all change requests to the requirements baseline over the life of the project and make changes if approved by the change board.
  • Maintain consistency between the requirements, the ConOps, and the architecture/design, and initiate corrective actions to eliminate inconsistencies.

6.2.1.2.3 Conduct Expectations and Requirements Traceability

As each requirement is documented, its bidirectional traceability should be recorded. Each requirement should be traced back to a parent/source requirement or expectation in a baselined document, or it should be identified as self-derived, with concurrence sought from the next-higher-level requirements sources. Examples of self-derived requirements are requirements that are locally adopted as good practices or are the result of design decisions made while performing the activities of the Logical Decomposition and Design Solution Processes.

Each requirement should be evaluated, independently if possible, to ensure that its trace is correct and that it fully addresses its parent requirement. If it does not, some other requirement(s) should complete fulfillment of the parent requirement and be included in the traceability matrix. In addition, ensure that all top-level parent document requirements have been allocated to the lower level requirements. If there is no parent for a particular requirement and it is not an acceptable self-derived requirement, it should be assumed either that the traceability process is flawed and should be redone or that the requirement is “gold plating” and should be eliminated. Duplication between levels should be resolved. If a requirement is simply repeated at a lower level and it is not an externally imposed constraint, it may not belong at the higher level. Requirements traceability is usually recorded in a requirements matrix or through the use of a requirements modeling application.
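
These checks lend themselves to automation. The sketch below, using hypothetical requirement IDs, walks a simple parent/child structure to flag orphans (no parent and not acceptably self-derived) and top-level requirements with no allocated children. Projects typically rely on a requirements management tool for this rather than ad hoc scripts.

```python
# Hypothetical traceability check; requirement IDs and structure are invented.
TOP_LEVEL = {"SYS-001"}

requirements = {
    # id: {"parent": parent id or None, "self_derived": bool}
    "SYS-001": {"parent": None, "self_derived": False},
    "SUB-010": {"parent": "SYS-001", "self_derived": False},
    "SUB-011": {"parent": "SYS-001", "self_derived": False},
    "SUB-020": {"parent": None, "self_derived": True},   # design decision
    "SUB-030": {"parent": None, "self_derived": False},  # should be flagged
}

children = {rid: [] for rid in requirements}
for rid, req in requirements.items():
    if req["parent"] is not None:
        children[req["parent"]].append(rid)

orphans = [rid for rid, req in requirements.items()
           if req["parent"] is None
           and not req["self_derived"]
           and rid not in TOP_LEVEL]
unallocated = [rid for rid in TOP_LEVEL if not children[rid]]

print("Orphans (flawed trace or gold plating):", orphans)
print("Top-level requirements with no allocated children:", unallocated)
```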

6.2.1.2.4 Managing Expectations and Requirement Changes

Throughout early Phase A, changes in requirements and constraints will occur as they are initially defined and matured. It is imperative that all changes be thoroughly evaluated to determine the impacts on the cost, schedule, architecture, design, interfaces, ConOps, and higher and lower level requirements. Performing functional and sensitivity analyses will ensure that the requirements are realistic and evenly allocated. Rigorous requirements verification and validation will ensure that the requirements can be satisfied and conform to mission objectives. All changes should be subjected to a review and approval cycle to maintain traceability and to ensure that the impacts are fully assessed for all parts of the system.

Once the requirements have been validated and reviewed in the System Requirements Review (SRR) in late Phase A, they are placed under formal configuration control. Thereafter, any changes to the requirements should be approved by a Configuration Control Board (CCB) or equivalent authority. The systems engineer, project manager, and other key engineers usually participate in the CCB approval process to assess the impacts of the change, including cost, performance, programmatic, and safety impacts.

Requirement changes during Phases B and C are more likely to cause significant adverse impacts to project cost and schedule. It is therefore even more important that these late changes be carefully evaluated to fully understand their impact on cost, schedule, and technical designs.

The technical team should also ensure that the approved requirements are communicated in a timely manner to all relevant people. Each project should have already established the mechanism to track and disseminate the latest project information. Further information on Configuration Management (CM) can be found in Section 6.5.

6.2.1.2.5 Key Issues for Requirements Management

Requirements Changes

Effective management of requirements changes requires a process that assesses the impact of the proposed changes prior to approval and implementation of the change. This is normally accomplished through the use of the Configuration Management Process. In order for CM to perform this function, a baseline configuration should be documented and tools used to assess impacts to the baseline. Typical tools used to analyze the change impacts are as follows:

  • Performance Margins: This tool is a list of key performance margins for the system and the current status of each margin. For example, the propellant performance margin compares the propellant available with the propellant necessary to complete the mission. Changes should be assessed for their impact on performance margins. (A minimal margin calculation is sketched after this list.)
  • CM Topic Evaluators List: This list is developed by the project office to ensure that the appropriate persons are evaluating changes and identifying their impacts. All changes need to be routed to the appropriate individuals so that every impact of a change is identified. This list will need to be updated periodically.
  • Risk System and Threats List: The risk system can be used to identify risks to the project and the cost, schedule, and technical aspects of the risk. Changes to the baseline can affect the consequences and likelihood of identified risk or can introduce new risk to the project. A threats list is normally used to identify the costs associated with all the risks for the project. Project reserves are used to mitigate the appropriate risk. Analyses of the reserves available versus the needs identified by the threats list assist in the prioritization for reserve use.
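
As one illustration of the propellant example in the first bullet, a margin calculation might look like the following sketch; the numbers and the margin convention (available versus required, as a fraction of required) are assumptions for illustration.

```python
# Hypothetical performance-margin check; values are made up.
def performance_margin(available: float, required: float) -> float:
    """Margin as a fraction of the required value."""
    return (available - required) / required

usable_propellant_kg = 2450.0   # current usable-propellant estimate (assumed)
mission_need_kg      = 2210.0   # propellant needed to complete the mission (assumed)

margin = performance_margin(usable_propellant_kg, mission_need_kg)
print(f"Propellant margin: {margin:.1%}")   # Propellant margin: 10.9%
```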

The process for managing requirements changes needs to take into account the distribution of information related to the decisions made during the change process. The Configuration Management Process needs to communicate the requirements change decisions to the affected organizations. During a board meeting to approve a change, actions to update documentation need to be included as part of the change package. These actions should be tracked to ensure that affected documentation is updated in a timely manner.

Requirements Creep

“Requirements creep” is the term used to describe the subtle way that requirements grow imperceptibly during the course of a project. The set of requirements tends to increase relentlessly during development, resulting in a system that is more expensive and complex than originally intended. Often the changes are quite innocent, and what appear to be changes to a system are really enhancements in disguise.

However, some of the requirements creep involves truly new requirements that did not exist, and could not have been anticipated, during the Technical Requirements Definition Process. These new requirements are the result of evolution, and if we are to build a relevant system, we cannot ignore them.

There are several techniques for avoiding or at least minimizing requirements creep:

  • The first line of defense is a good ConOps that has been thoroughly discussed and agreed-to by the customer and relevant stakeholders.
  • In the early requirements definition phase, flush out the conscious, unconscious, and undreamed-of requirements that might otherwise not be stated.
  • Establish a strict process for assessing requirement changes as part of the Configuration Management Process.
  • Establish official channels for submitting change requests. This will determine who has the authority to generate requirement changes and submit them formally to the CCB (e.g., a contractor-designated representative, project technical leads, customer/science team lead, or user).
  • Measure the functionality of each requirement change request and assess its impact on the rest of the system. Compare this impact with the consequences of not approving the change. What is the risk if the change is not approved?
  • Determine if the proposed change can be accommodated within the fiscal and technical resource budgets. If it cannot be accommodated within the established resource margins, then the change most likely should be denied. (A minimal sketch of such a gate follows this list.)
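
The last bullet amounts to a simple gate on resource margins. A minimal sketch, with hypothetical resource names and values:

```python
# Illustrative gate: deny a change that exceeds any remaining resource margin.
# Resource names and values are hypothetical.
RESOURCE_MARGINS = {"mass_kg": 12.0, "power_w": 40.0, "cost_k_usd": 150.0}

def can_accommodate(change_impacts: dict) -> bool:
    """True only if every impacted resource stays within its remaining margin."""
    return all(change_impacts.get(res, 0.0) <= margin
               for res, margin in RESOURCE_MARGINS.items())

proposed = {"mass_kg": 8.5, "power_w": 55.0}   # requested deltas (assumed)
print("approve" if can_accommodate(proposed) else "deny")   # deny: power exceeds margin
```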

6.2.1.2.6 Capture Work Products

These products include maintaining and reporting information on the rationale for, disposition of, and implementation of change actions; the current requirement compliance status; and the expectation and requirement baselines.

6.2.1.3 Outputs

Typical outputs from the requirements management activities are:

  • Requirements Documents: Requirements documents are submitted to the Configuration Management Process when the requirements are baselined. The official controlled versions of these documents are generally maintained in electronic format within the requirements management tool that has been selected by the project. In this way, they are linked to the requirements matrix with all of its traceable relationships.
  • Approved Changes to the Requirements Baselines: Approved changes to the requirements baselines are issued as an output of the Requirements Management Process after careful assessment of all the impacts of the requirements change across the entire product or system. A single change can have a far-reaching ripple effect, which may result in several requirement changes in a number of documents.
  • Various Requirements Management Work Products: Requirements management work products are any reports, records, and undeliverable outcomes of the Requirements Management Process. For example, the bidirectional traceability status would be one of the work products that would be used in the verification and validation reports.

6.2.2 Requirements Management Guidance

Refer to Section 6.2.2 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository for additional guidance on:

  • the Requirements Management Plan and
  • requirements management tools.

6.3 Interface Management

The definition, management, and control of interfaces are crucial to successful programs or projects. Interface management is a process to assist in controlling product development when efforts are divided among parties (e.g., Government, contractors, geographically diverse technical teams) and/or to define and maintain compliance among the products that should interoperate.

The basic tasks that need to be established involve the management of internal and external interfaces of the various levels of products and operator tasks to support product integration. These basic tasks are as follows:

  • Define interfaces;
  • Identify the characteristics of the interfaces (physical, electrical, mechanical, human, etc.);
  • Ensure interface compatibility at all defined interfaces by using a process documented and approved by the project;
  • Strictly control all of the interface processes during design, construction, operation, etc.;
  • Identify lower level products to be assembled and integrated (from the Product Transition Process);
  • Identify assembly drawings or other documentation that show the complete configuration of the product being integrated, a parts list, and any assembly instructions (e.g., torque requirements for fasteners);
  • Identify end-product, design-definition-specified requirements (specifications), and configuration documentation for the applicable work breakdown structure model, including interface specifications, in the form appropriate to satisfy the product life cycle phase success criteria (from the Configuration Management Process); and
  • Identify product integration-enabling products (from existing resources or the Product Transition Process for enabling product realization).

6.3.1 Process Description

Figure 6.3-1 provides a typical flow diagram for the Interface Management Process and identifies typical inputs, outputs, and activities to consider in addressing interface management.

Figure 6.3-1 Interface Management Process

6.3.1.1 Inputs

Typical inputs needed to understand and address interface management would include the following:

  • Interface Requirements: These include the internal and external functional, physical, and performance interface requirements developed as part of the Technical Requirements Definition Process for the product(s).
  • Interface Change Requests: These include changes resulting from program or project agreements or changes on the part of the technical team as part of the Technical Assessment Process.

Other inputs that might be useful are:

  • System Description: This allows the design of the system to be explored and examined to determine where system interfaces exist. Contractor arrangements will also dictate where interfaces are needed.
  • System Boundaries: Documented physical boundaries, components, and/or subsystems, which are all drivers for determining where interfaces exist.
  • Organizational Structure: The organizational structure will dictate some interfaces, particularly when different organizations need to jointly agree on shared interface parameters of a system. The program and project WBS will also provide organizational interface boundaries.
  • Boards Structure: Defined board structure that identifies organizational interface responsibilities.

6.3.1.2 Process Activities

6.3.1.2.1 Prepare or Update Interface Management Procedures

These procedures establish the interface management responsibilities, what process will be used to maintain and control the internal and external functional and physical interfaces (including human), and how the change process will be conducted. Training of the technical teams or other support may also be required and planned.

6.3.1.2.2 Conduct Interface Management during System Design Activities

During project Formulation, the ConOps of the product is analyzed to identify both external and internal interfaces. This analysis will establish the origin, destination, stimuli, and special characteristics of the interfaces that need to be documented and maintained. As the system structure and architecture emerges, interfaces will be added and existing interfaces will be changed and should be maintained. Thus, the Interface Management Process has a close relationship to other areas, such as requirements definition and configuration management, during this period.

6.3.1.2.3 Conduct Interface Management during Product Integration

During product integration, interface management activities would support the review of integration and assembly procedures to ensure interfaces are properly marked and compatible with specifications and interface control documents. The interface management process has a close relationship to verification and validation. Interface control documentation and approved interface requirement changes are used as inputs to the Product Verification Process and the Product Validation Process, particularly where verification test constraints and interface parameters are needed to set the test objectives and test plans. Interface requirements verification is a critical aspect of the overall system verification.

6.3.1.2.4 Conduct Interface Control

Typically, an Interface Working Group (IWG) establishes communication links between those responsible for interfacing systems, end products, enabling products, and subsystems. The IWG has the responsibility to ensure accomplishment of the planning, scheduling, and execution of all interface activities. An IWG is typically a technical team with appropriate technical membership from the interfacing parties (e.g., the project, the contractor, etc.). The IWG may work independently or as a part of a larger change control board.

6.3.1.2.5 Capture Work Products

Work products include the strategy and procedures for conducting interface management, rationale for interface decisions made, assumptions made in approving or denying an interface change, actions taken to correct identified interface anomalies, lessons learned and updated support and interface agreement documentation.

6.3.1.3 Outputs

Typical outputs needed to capture interface management would include:

  • Interface control documentation. This is the documentation that identifies and captures the interface information and the approved interface change requests. Types of interface documentation include the Interface Requirements Document (IRD), Interface Control Document/Drawing (ICD), Interface Definition Document (IDD), and Interface Control Plan (ICP). These outputs will then be maintained and approved using the Configuration Management Process and become a part of the overall technical data package for the project. (A sketch of a minimal interface record follows this list.)
  • Approved interface requirement changes. After the interface requirements have been baselined, the Requirements Management Process should be used to identify the need for changes, evaluate the impact of the proposed change, document the final approval/disapproval, and update the requirements documentation/tool/database. For interfaces that require approval from all sides, unanimous approval is required. Changing interface requirements late in the design or implementation life cycle is more likely to have a significant impact on the cost, schedule, or technical design/operations.
  • Other work products. These work products include the strategy and procedures for conducting interface management, the rationale for interface decisions made, the assumption made in approving or denying an interface change, the actions taken to correct identified interface anomalies, the lessons learned in performing the interface management activities, and the updated support and interface agreement documentation.
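
As an illustration of the kind of information interface control documentation captures, the following hypothetical record bundles an interface's identity, its two sides, its type, its controlling document, and its change authority. The field names and example values are assumptions, not a prescribed schema.

```python
# Hypothetical interface record; all identifiers and fields are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class InterfaceRecord:
    interface_id: str        # unique identifier, e.g., "ICD-PWR-014" (made up)
    side_a: str              # organization/product on one side of the interface
    side_b: str              # organization/product on the other side
    kind: str                # "physical" | "electrical" | "mechanical" | "human" | "data"
    controlling_doc: str     # the IRD/ICD/IDD that baselines this interface
    change_authority: str    # board responsible for approving changes

iface = InterfaceRecord("ICD-PWR-014", "Avionics Unit", "Power Bus",
                        "electrical", "ICD-PWR Rev C", "Vehicle IWG/CCB")
print(iface.interface_id, "->", iface.change_authority)
```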

6.3.2 Interface Management Guidance

Refer to Section 6.3.2 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository for additional guidance on:

  • interface requirements documents,
  • interface control documents,
  • interface control drawings,
  • interface definition documents,
  • the interface control plans, and
  • interface management tasks.

6.4 Technical Risk Management

The Technical Risk Management Process is one of the crosscutting technical management processes. Risk is the potential for performance shortfalls, which may be realized in the future, with respect to achieving explicitly established and stated performance requirements. The performance shortfalls may be related to institutional support for mission execution or related to any one or more of the following mission execution domains:

  • safety
  • technical
  • cost
  • schedule

Systems engineers are involved in this process to help identify potential technical risks, develop mitigation plans, monitor the progress of the technical effort to determine whether new risks arise or old risks can be retired, and be available to answer questions and resolve issues. The following is general guidance for implementing risk management. When implementing risk management on any given program/project, the responsible systems engineer should direct the effort accordingly; this may involve more or less rigor and formality than that specified in governing documents such as NPRs. Of course, if deviating from NPR “requirements,” the responsible engineer must follow the deviation approval process. The idea is to tailor the risk management process so that it meets the needs of the individual program/project being executed while working within the bounds of the governing documentation (e.g., NPRs). For detailed information on the Risk Management Process, refer to the NASA Risk Management Handbook (NASA/SP-2011-3422).

Risk is characterized by three basic components:

  1. The scenario(s) leading to degraded performance with respect to one or more performance measures (e.g., scenarios leading to injury, fatality, destruction of key assets; scenarios leading to exceedance of mass limits; scenarios leading to cost overruns; scenarios leading to schedule slippage);
  2. The likelihood(s) (qualitative or quantitative) of those scenario(s); and
  3. The consequence(s) (qualitative or quantitative severity of the performance degradation) that would result if the scenario(s) was (were) to occur.

Uncertainties are included in the evaluation of likelihoods and consequences.

Scenarios begin with a set of initiating events that cause the activity to depart from its intended state. For each initiating event, other events that are relevant to the evolution of the scenario may (or may not) occur and may have either a mitigating or exacerbating effect on the scenario progression. The frequencies of scenarios with undesired consequences are determined. Finally, the multitude of such scenarios is put together, with an understanding of the uncertainties, to create the risk profile of the system.
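
A minimal sketch of the risk triplet as a data structure, with interval-valued likelihood and consequence to carry uncertainty, is shown below. The fields, scales, and scoring are illustrative assumptions, not the NASA risk model.

```python
# Hypothetical risk-triplet record; scales and scoring are assumed.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class RiskTriplet:
    scenario: str                      # what could go wrong and how it unfolds
    likelihood: Tuple[float, float]    # (low, high) bounds to carry uncertainty
    consequence: Tuple[float, float]   # (low, high) severity on a 1-5 scale (assumed)

risk = RiskTriplet(
    scenario="Valve contamination leads to propellant mass exceedance",
    likelihood=(0.02, 0.10),
    consequence=(3.0, 4.0),
)

# A simple midpoint estimate; real assessments propagate the full uncertainty.
mid = lambda lo_hi: sum(lo_hi) / 2
print(f"expected score ~ {mid(risk.likelihood) * mid(risk.consequence):.2f}")  # ~0.21
```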

Figure 6.4-1 Risk Scenario Development

This “risk triplet” conceptualization of risk is illustrated in Figures 6.4-1 and 6.4-2.

Figure 6.4-2 Risk as an Aggregate Set of Risk Triplets

Undesired scenario(s) might come from technical or programmatic sources (e.g., a cost overrun, schedule slippage, safety mishap, health problem, malicious activities, environmental impact, or failure to achieve a needed scientific or technological objective or success criterion). Both the likelihood and consequences may have associated uncertainties.

Key Concepts in Risk Management

Risk: Risk is the potential for shortfalls, which may be realized in the future, with respect to achieving explicitly stated requirements. The performance shortfalls may be related to institutional support for mission execution or to any one or more of the following mission execution domains: safety, technical, cost, and schedule. Risk is characterized as a set of triplets:

  • The scenario(s) leading to degraded performance in one or more performance measures.
  • The likelihood(s) of those scenarios.
  • The consequence(s), impact, or severity of the impact on performance that would result if those scenarios were to occur. 

Uncertainties are included in the evaluation of likelihoods and consequences.

Cost Risk: This is the risk associated with the ability of the program/project to achieve its life-cycle cost objectives and secure appropriate funding. Two risk areas bearing on cost are (1) the risk that the cost estimates and objectives are not accurate and reasonable; and (2) the risk that program execution will not meet the cost objectives as a result of a failure to handle cost, schedule, and performance risks.

Schedule Risk: Schedule risks are those associated with the adequacy of the time estimated and allocated for the development, production, implementation, and operation of the system. Two risk areas bearing on schedule risk are (1) the risk that the schedule estimates and objectives are not realistic and reasonable; and (2) the risk that program execution will fall short of the schedule objectives as a result of failure to handle cost, schedule, or performance risks.

Technical Risk: This is the risk associated with the evolution of the design and the production of the system of interest affecting the level of performance necessary to meet the stakeholder expectations and technical requirements. The design, test, and production processes (process risk) influence the technical risk and the nature of the product as depicted in the various levels of the PBS (product risk).

Programmatic Risk: This is the risk associated with action or inaction from outside the project, over which the project manager has no control, but which may have significant impact on the project. These impacts may manifest themselves in terms of technical, cost, and/or schedule. This includes such activities as: International Traffic in Arms Regulations (ITAR), import/export control, partner agreements with other domestic or foreign organizations, congressional direction or earmarks, Office of Management and Budget (OMB) direction, industrial contractor restructuring, external organizational changes, etc.

Scenario: A sequence of credible events that specifies the evolution of a system or process from a given state to a future state. In the context of risk management, scenarios are used to identify the ways in which a system or process in its current state can evolve to an undesirable state.

6.4.1 Risk Management Process Description

Figure 6.4-3 provides a typical flow diagram for the Risk Management Process and identifies typical inputs, activities, and outputs to consider in addressing risk management.

Figure 6.4-3 Risk Management Process

6.4.1.1 Inputs

The following are typical inputs to risk management:

  • Project Risk Management Plan: The Risk Management Plan is developed under the Technical Planning Process and defines how risk will be identified, mitigated, monitored, and controlled within the project.
  • Technical Risk Issues: These will be the technical issues identified as the project progresses that pose a risk to the successful accomplishment of the project mission/goals.
  • Technical Risk Status Measurements: These are any measures that are established that help to monitor and report the status of project technical risks.
  • Technical Risk Reporting Requirements: Includes requirements of how technical risks will be reported, how often, and to whom.

Additional inputs that may be useful:

  • Other Plans and Policies: Systems Engineering Management Plan, form of technical data products, and policy input to metrics and thresholds.
  • Technical Inputs: Stakeholder expectations, concept of operations, imposed constraints, tracked observables, current program baseline, performance requirements, and relevant experience data.

6.4.1.2 Activities

6.4.1.2.1 Prepare a Strategy to Conduct Technical Risk Management

This strategy would include documenting how the program/project risk management plan (as developed during the Technical Planning Process) will be implemented, identifying any additional technical risk sources and categories not captured in the plan, identifying what will trigger actions and how these activities will be communicated to the internal and external teams.

6.4.1.2.2 Identify Technical Risks

On a continuing basis, the technical team will identify technical risks including their source, analyze the potential consequence and likelihood of the risks occurring, and prepare clear risk statements for entry into the program/project risk management system. Coordination with the relevant stakeholders for the identified risks is included. For more information on identifying technical risks, see Section 6.4.2.1.

6.4.1.2.3 Conduct Technical Risk Assessment

Until recently, NASA’s Risk Management (RM) approach was based almost exclusively on Continuous Risk Management (CRM), which stresses the management of individual risk issues during implementation. In December of 2008, NASA revised its RM approach in order to more effectively foster proactive risk management. The new approach, which is outlined in NPR 8000.4, Agency Risk Management Procedural Requirements and further developed in NASA/SP-2011-3422, NASA Risk Management Handbook, evolves NASA‘s risk management to entail two complementary processes: Risk-Informed Decision Making (RIDM) and CRM. RIDM is intended to inform direction-setting systems engineering (SE) decisions (e.g., design decisions) through better use of risk and uncertainty information in selecting alternatives and establishing baseline performance requirements (for additional RIDM technical information, guidance, and process description, see NASA/SP-2010-576 Version 1, NASA Risk-Informed Decision Making Handbook).

CRM is then used to manage risks over the course of the development and implementation phases of the life cycle to assure that requirements related to safety, technical, cost, and schedule are met. In the past, RM was considered equivalent to the CRM process; now, RM is defined as comprising both the RIDM and CRM processes, which work together to assure proactive risk management as NASA programs and projects are conceived, developed, and executed. Figure 6.4-4 illustrates the concept.

Figure 6.4-4 Risk Management as the Interaction of Risk-Informed Decision Making and Continuous Risk Management

6.4.1.2.4 Prepare for Technical Risk Mitigation

This includes selecting the risks that will be mitigated and more closely monitored, identifying the risk level or threshold that will trigger a risk mitigation action plan, and identifying for each risk which stakeholders will need to be informed that a mitigation/contingency action is determined as well as which organizations will need to become involved to perform the mitigation/contingency action.

6.4.1.2.5 Monitor the Status of Each Technical Risk Periodically

Risk status will need to be monitored periodically at a frequency identified in the risk plan. Risks that are approaching the trigger thresholds will be monitored on a more frequent basis. Reports of the status are made to the appropriate program/project management or board for communication and for decisions whether to trigger a mitigation action early. Risk status will also be reported at most life cycle reviews.
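
A minimal sketch of the threshold/trigger check described above, assuming risks are tracked with a hypothetical scalar score; real programs use richer likelihood/consequence information and defined escalation paths.

```python
# Hypothetical trigger check: escalate when a watched risk crosses its threshold.
def check_triggers(risks: dict, thresholds: dict) -> list:
    """risks: {risk_id: current_score}; thresholds: {risk_id: trigger_score}."""
    return [rid for rid, score in risks.items()
            if score >= thresholds.get(rid, float("inf"))]

current  = {"RSK-07": 0.35, "RSK-12": 0.62}   # current scores (assumed)
triggers = {"RSK-07": 0.50, "RSK-12": 0.60}   # trigger thresholds (assumed)

for rid in check_triggers(current, triggers):
    print(f"{rid}: threshold crossed -- implement mitigation/contingency plan")
# RSK-12: threshold crossed -- implement mitigation/contingency plan
```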

6.4.1.2.6 Implement Technical Risk Mitigation and Contingency Action Plans as Triggered

When the applicable thresholds are triggered, the technical risk mitigation and contingency action plans are implemented. This includes monitoring the results of the action plan implementation and modifying them as necessary, continuing the mitigation until the residual risk and/or consequence impacts are acceptable, and communicating the actions and results to the identified stakeholders. Action plan reports are prepared and results reported at appropriate boards and at life cycle reviews.

6.4.1.2.7 Capture Work Products

Work products include the strategy and procedures for conducting technical risk management; the rationale for decisions made; assumptions made in prioritizing, handling, and reporting technical risks and action plan effectiveness; actions taken to correct action plan implementation anomalies; and lessons learned.

6.4.1.3 Outputs

Following are key risk outputs from activities:

  • Technical Risk Mitigation and/or Contingency Actions: Actions taken to mitigate identified risks or contingency actions taken in case risks are realized.
  • Technical Risk Reports: Reports of the technical risk policies, status, remaining residual risks, actions taken, etc. Output at the agreed-to frequency and recipients.
  • Work Products: Includes the procedures for conducting technical risk management; rationale for decisions made; selected decision alternatives; assumptions made in prioritizing, handling, and reporting technical risks; and lessons learned.

6.4.2 Risk Management Process Guidance

For additional guidance on risk management, refer to NASA/SP-2010-576, NASA RIDM Handbook and NASA/SP-2011-3422, NASA Risk Management Handbook.

6.5 Configuration Management

Configuration management is a management discipline applied over the product’s life cycle to provide visibility into and to control changes to performance and functional and physical characteristics. Additionally, according to SAE Electronic Industries Alliance (EIA) 649B, improper configuration management may result in incorrect, ineffective, and/or unsafe products being released. Therefore, in order to protect and ensure the integrity of NASA products, NASA has endorsed the implementation of the five configuration management functions and the associated 37 underlying principles defined within SAE/EIA-649-2, Configuration Management Requirements for NASA Enterprises.

Together, these standards address what configuration management activities are to be done, when they are to happen in the product life cycle, and what planning and resources are required. Configuration management is a key systems engineering practice that, when properly implemented, provides visibility of a true representation of a product and attains the product’s integrity by controlling the changes made to the baseline configuration and tracking such changes. Configuration management ensures that the configuration of a product is known and reflected in product information, that any product change is beneficial and is effected without adverse consequences, and that changes are managed.

CM reduces technical risks by ensuring correct product configurations, distinguishes among product versions, ensures consistency between the product and information about the product, and avoids the embarrassment and cost of stakeholder dissatisfaction and complaints. In general, NASA adopts the CM principles defined by SAE/EIA 649B, Configuration Management Standard, as implemented by NASA CM professionals and approved by NASA management.

When applied to the design, fabrication/assembly, system/subsystem testing, integration, and operational and sustaining activities of complex technology items, CM represents the “backbone” of the enterprise structure. It instills discipline and keeps the product attributes and documentation consistent. CM enables all stakeholders in the technical effort, at any given time in the life of a product, to use identical data for development activities and decision-making. CM principles are applied to keep the documentation consistent with the approved product, and to ensure that the product conforms to the functional and physical requirements of the approved design.

6.5.1 Process Description

Figure 6.5-1 provides a typical flow diagram for the Configuration Management Process and identifies typical inputs, outputs, and activities to consider in addressing CM.

Figure 6.5-1 Configuration Management Process

6.5.1.1 Inputs

The inputs for this process are:

  • CM plan: This plan would have been developed under the Technical Planning Process and serves as the overall guidance for this process for the program/project.
  • Engineering change proposals: These are the requests for changes to the established baselines in whatever form they may appear throughout the life cycle.
  • Expectation, requirements and interface documents: These baselined documents or models are key to the design and development of the product.
  • Approved requirements baseline changes: The approved requests for changes will authorize the update of the associated baselined document or model.
  • Designated configuration items to be controlled: As part of technical planning, a list or philosophy would have been developed that identifies the types of items that will need to be placed under configuration control.

6.5.1.2 Process Activities

There are five elements of CM (see Figure 6.5-2):

  • configuration planning and management,
  • configuration identification,
  • configuration change management,
  • Configuration Status Accounting (CSA), and
  • configuration verification.

Figure 6.5-2 Five Elements of Configuration Management

6.5.1.2.1 Prepare a Strategy to Conduct CM

CM planning starts at a program’s or project’s inception. The CM office should carefully weigh the value of prioritizing resources into CM tools or into CM surveillance of the contractors. Reviews by the Center Configuration Management Organization (CMO) are warranted and will cost resources and time, but the correction of systemic CM problems before they erupt into losing configuration control are always preferable to explaining why incorrect or misidentified parts are causing major problems in the program/project.

One of the key inputs to preparing for CM implementation is a strategic plan for the project’s complete CM process. This is typically contained in a CM plan. See Appendix M for an outline of a typical CM plan.

This plan has both internal and external uses:

  • Internal: It is used within the program/project office to guide, monitor, and measure the overall CM process. It describes all the CM activities and the schedule for implementing those activities within the program/project.
  • External: The CM plan is used to communicate the CM process to the contractors involved in the program/project. It establishes consistent CM processes and working relationships.

The CM plan may be a standalone document or it may be combined with other program/project planning documents. It should describe the criteria for each technical baseline creation, technical approvals, and audits.

6.5.1.2.2 Identify Baseline to be Under Configuration Control

Configuration identification is the systematic process of selecting, organizing, and stating the product attributes. Identification requires unique identifiers for a product and its configuration documentation. The CM activity associated with identification includes selecting the Configuration Items (CIs), determining the CIs’ associated configuration documentation, determining the appropriate change control authority, issuing unique identifiers for both CIs and CI documentation, releasing configuration documentation, and establishing configuration baselines.

NASA has four baselines, each of which defines a distinct phase in the evolution of a product design. The baseline identifies an agreed-to description of attributes of a CI at a point in time and provides a known configuration to which changes are addressed. Baselines are established by agreeing to (and documenting) the stated definition of a CI’s attributes. The approved “current” baseline defines the basis of the subsequent change. The system specification is typically finalized following the SRR. The functional baseline is established at the SDR and will usually transfer to NASA’s control at that time for contracting efforts. For in-house efforts, the baseline is set/controlled by the NASA program/project.
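
As a sketch of configuration identification, the following hypothetical registry assigns a unique identifier to a CI and freezes baseline snapshots of its documentation set; it is illustrative only, not a CM system design.

```python
# Illustrative CI registry: unique identifiers plus frozen baseline snapshots.
class ConfigurationItem:
    def __init__(self, ci_id: str, docs: dict):
        self.ci_id = ci_id
        self.docs = dict(docs)        # {doc_id: revision}: the CI's documentation
        self.baselines = {}           # {baseline_name: snapshot of docs}

    def establish_baseline(self, name: str):
        """Freeze the agreed-to documentation set at a point in time."""
        self.baselines[name] = dict(self.docs)

ci = ConfigurationItem("CI-0042", {"SPEC-0042": "A", "ICD-0042": "A"})
ci.establish_baseline("functional")   # e.g., at SDR
ci.docs["SPEC-0042"] = "B"            # a later change, once approved by the CCB
print(ci.baselines["functional"])     # {'SPEC-0042': 'A', 'ICD-0042': 'A'}
```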

Figure 6.5-3 Evolution of Technical Baseline

The four baselines (see Figure 6.5-3) normally controlled by the program, project, or Center are the following:

  • Functional Baseline: The functional baseline is the approved configuration documentation that describes a system’s or top-level CI’s performance requirements (functional, interoperability, and interface characteristics) and the verification required to demonstrate the achievement of those specified characteristics. The functional baseline is established at the SDR by the NASA program/project. The program/project will direct, through contractual agreements, how the functional baselines are managed at the different functional levels (Levels 1–4).
  • Allocated Baseline: The allocated baseline is the approved performance-oriented configuration documentation for a CI to be developed that describes the functional, performance, and interface characteristics that are allocated from a higher level requirements document or a CI and the verification required to demonstrate achievement of those specified characteristics. The allocated baseline extends the top-level performance requirements of the functional baseline to sufficient detail for defining the functional and performance characteristics and for initiating detailed design for a CI. The allocated baseline is usually controlled by the design organization until all design requirements have been verified. The allocated baseline is typically established at the successful completion of the PDR. Prior to CDR, NASA normally reviews design output for conformance to design requirements through incremental deliveries of engineering data. NASA control of the allocated baseline occurs through review of the engineering deliveries as data items.
  • Product Baseline: The product baseline is the approved technical documentation that describes the configuration of a CI during the production, fielding/deployment, and operational support phases of its life cycle. The established product baseline is controlled as described in the configuration management plan that was developed during Phase A. The product baseline is typically established at the completion of the CDR. The product baseline describes:
    • Detailed physical or form, fit, and function characteristics of a CI;
    • The selected functional characteristics designated for production acceptance testing; and
    • The production acceptance test requirements.
  • As-Deployed Baseline: The as-deployed baseline occurs at the ORR. At this point, the design is considered to be functional and ready for flight. All changes will have been incorporated into the documentation.

6.5.1.2.3 Manage Configuration Change Control

Configuration change management is a process to manage approved designs and the implementation of approved changes. Configuration change management is achieved through the systematic proposal, justification, and evaluation of proposed changes followed by incorporation of approved changes and verification of implementation. Implementing configuration change management in a given program/project requires unique knowledge of the program/project objectives and requirements. The first step establishes a robust and well-disciplined internal NASA Configuration Control Board (CCB) system, which is chaired by someone with program/project change authority. CCB members represent the stakeholders with authority to commit the team they represent. The second step creates configuration change management surveillance of the contractor’s activity. The CM office advises the NASA program or project manager to achieve a balanced configuration change management implementation that suits the unique program/project situation. See Figure 6.5-4 for an example of a typical configuration change management control process.
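
The proposal, evaluation, approval, implementation, and verification flow described above can be pictured as a small state machine. A toy sketch follows; the state names are illustrative, and actual CCB dispositions vary by program.

```python
# Toy change-flow state machine; states and transitions are assumed.
ALLOWED = {
    "proposed":    {"evaluated"},
    "evaluated":   {"approved", "disapproved"},
    "approved":    {"implemented"},
    "implemented": {"verified"},
}

def advance(state: str, new_state: str) -> str:
    """Move a change request to a new state, rejecting illegal transitions."""
    if new_state not in ALLOWED.get(state, set()):
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

state = "proposed"
for step in ("evaluated", "approved", "implemented", "verified"):
    state = advance(state, step)
print(state)   # verified
```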

Figure 6.5-4 Typical Change Control Process

Types of Configuration Management Changes

  • Engineering Change: An engineering change is an iteration in the baseline. Changes can be major or minor. They may or may not include a specification change. Changes affecting an external interface must be coordinated and approved by all stakeholders affected.
    • A “major” change is a change to the baseline configuration documentation that has significant impact (i.e., requires retrofit of delivered products or affects the baseline specification, cost, safety, compatibility with interfacing products, or operator, or maintenance training).
    • A “minor” change corrects or modifies configuration documentation or processes without impact to the interchangeability of products or system elements in the system structure.
  • Waiver: A waiver is a documented agreement intentionally releasing a program or project from meeting a requirement. (Some Centers use deviations prior to Implementation and waivers during Implementation.) Authorized waivers do not constitute a change to a baseline.

6.5.1.2.4 Maintain the Status of Configuration Documentation

Configuration Status Accounting (CSA) is the recording and reporting of configuration data necessary to manage CIs effectively. An effective CSA system provides timely and accurate configuration information such as:

  • Complete current and historical configuration documentation and unique identifiers.
  • Status of proposed changes, deviations, and waivers from initiation to implementation.
  • Status and final disposition of identified discrepancies and actions identified during each configuration audit.

Some useful purposes of the CSA data include:

  • An aid for proposed change evaluations, change decisions, investigations of design problems, warranties, and shelf-life calculations
  • Historical traceability
  • Software trouble reporting
  • Performance measurement data

The following are critical functions or attributes to consider when designing or purchasing software to assist with the task of managing configuration. (A minimal checkout/check-in sketch follows this list.)

  • Ability to share data real time with internal and external stakeholders securely;
  • Version control and comparison (track history of an object or product);
  • Secure user checkout and check in;
  • Tracking capabilities for gathering metrics (i.e., time, date, who, time in phases, etc.);
  • Web based;
  • Notification capability via e-mail;
  • Integration with other databases or legacy systems;
  • Compatible with required support contractors and/or suppliers (i.e., can accept data from a third party as required);
  • Integration with drafting and modeling programs as required;
  • Provide neutral format viewer for users;
  • License agreement allows for multiple users within an agreed-to number;
  • Workflow and life cycle management;
  • Limited customization;
  • Migration support for software upgrades;
  • User friendly;
  • Consideration for users with limited access;
  • Ability to attach standard format files from desktop;
  • Workflow capability (i.e., route a CI as required based on a specific set of criteria); and
  • Capable of acting as the one and only source for released information.
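
A minimal sketch of the version-control and checkout/check-in attributes from the list above; the locking and history behavior shown is an assumption for illustration, not a product requirement.

```python
# Hypothetical checkout/check-in with retained version history.
class ControlledDocument:
    def __init__(self, doc_id: str, content: str):
        self.doc_id = doc_id
        self.history = [content]      # version 1 at index 0
        self.checked_out_by = None

    def check_out(self, user: str) -> str:
        if self.checked_out_by:
            raise RuntimeError(f"locked by {self.checked_out_by}")
        self.checked_out_by = user
        return self.history[-1]       # latest version for editing

    def check_in(self, user: str, content: str):
        if self.checked_out_by != user:
            raise RuntimeError("check out the document first")
        self.history.append(content)  # new version; prior versions are retained
        self.checked_out_by = None

doc = ControlledDocument("PLAN-001", "rev draft")
draft = doc.check_out("engineer_a")
doc.check_in("engineer_a", draft + " + approved edits")
print(len(doc.history))   # 2 versions retained for comparison and audit
```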

6.5.1.2.5 Conduct Configuration Audits

Configuration verification is accomplished by inspecting documents, products, and records; reviewing procedures, processes, and systems of operations to verify that the product has achieved its required performance requirements and functional attributes; and verifying that the product’s design is documented. This is sometimes divided into functional and physical configuration audits. (See Section 6.7.2.3 for more on technical reviews.)

6.5.1.2.6 Capture Work Products

These include the strategy and procedures for configuration management, the list of identified configuration items, descriptions of the configuration items, change requests, dispositions of the requests, rationale for dispositions, reports, and audit results.

6.5.1.3 Outputs

NPR 7120.5 defines a project’s life cycle in progressive phases. Beginning with Pre-Phase A, these steps in turn are grouped under the headings of Formulation and Implementation. Approval is required to transition between these phases. Key Decision Points (KDPs) define transitions between the phases. CM plays an important role in determining whether a KDP has been met. Major outputs of CM are:

  • List of configuration items under control (Configuration Status Accounting (CSA) reports): This output is the list of all the items, documents, hardware, software, models, etc., that were identified as needing to be placed under configuration control. CSA reports are updated and maintained throughout the program and project life cycle.
  • Current baselines: Baselines of the current configurations of all items that are on the CM list are made available to all technical teams and stakeholders.
  • CM reports: Periodic reports on the status of the CM items should be available to all stakeholders on an agreed-to frequency and at key life cycle reviews.
  • Other CM work products: Other work products include the strategy and procedures used for CM; descriptions, drawings and/or models of the CM items; change requests and their disposition and accompanying rationale; reports; audit results as well as any corrective actions needed.

6.5.2 CM Guidance

Refer to Section 6.5.2 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository for additional guidance on:

  • the impact of not doing CM,
  • warning signs when you know you are in trouble, and
  • when it is acceptable to use redline drawings.

6.6 Technical Data Management

The Technical Data Management Process is used to plan for, acquire, access, manage, protect, and use data of a technical nature to support the total life cycle of a system. Data Management (DM) includes the development, deployment, operations and support, eventual retirement, and retention of appropriate technical data (including mission and science data) beyond system retirement, as required by NPR 1441.1, NASA Records Retention Schedules.

DM is illustrated in Figure 6.6-1. Key aspects of DM for systems engineering include:

  • application of policies and procedures for data identification and control,
  • timely and economical acquisition of technical data,
  • assurance of the adequacy of data and its protection,
  • facilitating access to and distribution of the data to the point of use,
  • analysis of data use,
  • evaluation of data for future value to other programs/projects, and
  • providing access to information written in legacy software.

The Technical Data Management and Configuration Management Processes work side-by-side to ensure all information about the project is safe, known, and accessible. Changes to information under configuration control require a Change Request (CR) and are typically approved by a Configuration Control Board. Changes to information under Technical Data Management do not need a CR but still need to be managed by identifying who can make changes to each type of technical data.

Figure 6.6-1 Technical Data Management Process

6.6.1 Process Description

Figure 6.6-1 provides a typical flow diagram for the Technical Data Management Process and identifies typical inputs, outputs, and activities to consider in addressing technical data management.

6.6.1.1 Inputs

The inputs for this process are:

  • Technical data products to be managed: Technical data, regardless of the form or method of recording and whether the data are generated by the contractor or Government during the life cycle of the system being developed. (Electronic technical data should be stored with sufficient metadata to enable easy retrieval and sorting; a sketch of such a metadata record follows these inputs.)
  • Technical data requests: External or internal requests for any of the technical data generated by the program/project.
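
A sketch of a metadata record that could support the retrieval and sorting mentioned in the first input; every field name, label, and example value is an assumption for illustration.

```python
# Hypothetical technical-data catalog with searchable metadata.
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class TechnicalDataRecord:
    item_id: str
    title: str
    originator: str
    created: date
    keywords: List[str]
    distribution: str     # e.g., "public" or "limited" (labels are assumed)

CATALOG = [
    TechnicalDataRecord("TD-001", "Thermal test report", "Team X",
                        date(2018, 3, 2), ["thermal", "test"], "limited"),
    TechnicalDataRecord("TD-002", "Mass properties memo", "Team Y",
                        date(2018, 5, 9), ["mass"], "public"),
]

def search(keyword: str) -> list:
    """Retrieve matching records, sorted by creation date."""
    return sorted((r for r in CATALOG if keyword in r.keywords),
                  key=lambda r: r.created)

print([r.item_id for r in search("thermal")])   # ['TD-001']
```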

6.6.1.2 Process Activities

Each Center is responsible for policies and procedures for technical DM. NPR 7120.5 and NPR 7123.1 define the need to manage data but leave specifics to the individual Centers. However, NPR 7120.5 does require that DM planning be provided either as a section in the program/project plan or CM plan, or as a separate document. The program or project manager is responsible for ensuring that the required data are captured and stored, data integrity is maintained, and data are disseminated as required.

Other NASA policies address the acquisition and storage of data and not just the technical data used in the life cycle of a system.

6.6.1.2.1 Prepare for Technical Data Management Implementation

The recommended approach is for the DM plan to be a separate document from the program/project plan; DM issues are usually of sufficient magnitude to justify a standalone plan. The plan should cover the following major DM topics:

  • Identification/definition/management of data sets.
  • Control procedures—receipt, modification, review, and approval.
  • Guidance on how to access/search for data for users.
  • Data exchange formats that promote data reuse and help to ensure that data can be used consistently throughout the system, family of systems, or system of systems.
  • Data rights and distribution limitations such as export-control Sensitive But Unclassified (SBU).
  • Storage and maintenance of data, including master lists where documents and records are maintained and managed.

Prepare a technical data management strategy. This strategy can document how the program/project data management plan will be implemented by the technical effort or, in the absence of such a program-level plan, be used as the basis for preparing a detailed technical data management plan, including:

  • Items of data that will be managed according to program/project or organizational policy, agreements, or legislation;
  • The data content and format;
  • A framework for data flow within the program/project and to/from contractors, including the language(s) to be employed in technical effort information exchanges;
  • Technical data management responsibilities and authorities regarding the origin, generation, capture, archiving, security, privacy, and disposal of data products;
  • Establishing the rights, obligations, and commitments regarding the retention of, transmission of, and access to data items; and
  • Relevant data storage, transformation, transmission, and presentation standards and conventions to be used according to program/project or organizational policy, agreements, or legislative constraints.

Preparation also includes the following activities:

  • Obtain strategy/plan commitment from relevant stakeholders.
  • Prepare procedures for implementing the technical data management strategy for the technical effort and/or for implementing the activities of the technical data management plan.
  • Establish a technical database(s) to use for technical data maintenance and storage, or work with the program/project staff to arrange use of the program/project database(s) for managing technical data.
  • Establish data collection tools, as appropriate to the technical data management scope and available resources.
  • Establish electronic data exchange interfaces in accordance with international standards/agreements and applicable NASA standards.
  • Train appropriate stakeholders and other technical personnel in the established technical data management strategy/plan, procedures, and data collection tools, as applicable.

Data Identification/Definition

Each program/project determines data needs during the life cycle. Data types may be defined in standard documents. Center and Agency directives sometimes specify content of documents and are appropriately used for in-house data preparation. The standard description is modified to suit program/project-specific needs, and appropriate language is included in SOWs to implement actions resulting from the data evaluation. “Data suppliers” may be contractors, academia, or the Government. Procurement of data from an outside supplier is a formal procurement action that requires a procurement document; in-house requirements may be handled using a less formal method. Below are the different types of data that might be utilized within a program/project:

  • Data
    • “Data” is defined in general as “recorded information regardless of the form or method of recording.” However, the terms “data” and “information” are frequently used interchangeably. To be more precise, data generally should be processed in some manner to generate useful, actionable information.
    • “Data,” as used in SE DM, includes technical data; computer software documentation; and representation of facts, numbers, or data of any nature that can be communicated, stored, and processed to form information required by a contract or agreement to be delivered to, or accessed by, the Government.
    • Data include that associated with system development, modeling and simulation used in development or test, test and evaluation, installation, parts, spares, repairs, usage data required for product sustainability, and source and/or supplier data.
    • Data specifically not included in Technical Data Management would be data relating to general NASA workforce operations information, communications information (except where related to a specific requirement), financial transactions, personnel data, transactional data, and other data of a purely business nature.
  • Data Call: A solicitation in which Government stakeholders (specifically Integrated Product Team (IPT) leads and functional managers) identify and justify their data requirements from a proposed contracted procurement. Since data provided by contractors have a cost to the Government, a data call (or an equivalent activity) is a common control mechanism used to ensure that the requested data are truly needed. If approved by the data call, a description of each data item needed is then developed and placed on contract.
  • Information: Information is generally considered as processed data. The form of the processed data is dependent on the documentation, report, review formats, or templates that are applicable.
  • Technical Data Package: A technical data package is a technical description of an item adequate for supporting an acquisition strategy, production, engineering, and logistics support. The package defines the required design configuration and procedures to ensure adequacy of item performance. It consists of all applicable items such as drawings, associated lists, specifications, standards, performance requirements, quality assurance provisions, and packaging details.
  • Technical Data Management System: The strategies, plans, procedures, tools, people, data formats, data exchange rules, databases, and other entities and descriptions required to manage the technical data of a program/project.

6.6.1.2.2 Collect and Store Data

Subsequent activities collect, store, and maintain technical data and provide it to authorized parties as required. Some considerations that impact these activities for implementing Technical Data Management include:

  • Requirements relating to the flow/delivery of data to or from a contractor should be specified in the technical data management plan and included in the Request for Proposal (RFP) and contractor agreement.
  • NASA should not impose changes on existing contractor data management systems unless the program/project technical data management requirements, including data exchange requirements, cannot otherwise be met.
  • Responsibility for data inputs into the technical data management system lies solely with the originator or generator of the data.
  • The availability/access of technical data lies with the author, originator, or generator of the data in conjunction with the manager of the technical data management system.
  • The established availability/access description and list should be baselined and placed under configuration control.
  • For new programs/projects, a digital generation and delivery medium is desired. Existing programs/projects should weigh the cost/benefit trades of digitizing hard copy data.

Table 6.6-1 defines the tasks required to capture technical data.

Table 6.6-1 Technical Data Tasks

6.6.1.2.3 Provide Data to Authorized Parties

All data deliverables should include distribution statements and procedures to protect all data that contain critical technology information, as well as to ensure that limited distribution data, intellectual property data, or proprietary data are properly handled during systems engineering activities. This injunction applies whether the data are hard copy or digital.

As part of overall asset protection planning, NASA has established special procedures for the protection of Critical Program Information (CPI). CPI may include components; engineering, design, or manufacturing processes; technologies; system capabilities and vulnerabilities; and any other information that gives a system its distinctive operational capability.

CPI protection should be a key consideration for the technical data management effort and is part of the asset protection planning process.

Data Collection Checklist

  • Have the frequency of collection and the points in the technical and technical management processes when data inputs will be available been determined?
  • Has the timeline that is required to move data from the point of origin to storage repositories or stakeholders been established?
  • Who is responsible for the input of the data?
  • Who is responsible for data storage, retrieval, and security?
  • Have necessary supporting tools been developed or acquired?

6.6.1.3 Outputs

Outputs include timely, secure availability of needed data in various representations to those authorized to receive it. Major outputs from the Technical Data Management Process include the following (see Figure 6.6-1):

  • Form of Technical Data Products: How each type of data is held and stored such as textual, graphic, video, etc.
  • Technical Data Electronic Exchange Formats: Description and perhaps templates, models or other ways to capture the formats used for the various data exchanges.
  • Delivered Technical Data: The data that were delivered to the requester.

Other work products generated as part of this process include the strategy and procedures used for technical data management, request dispositions, decisions, and assumptions.

6.6.2 Technical Data Management Guidance

Refer to Section 6.6.2 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository for additional guidance on:

  • data security and
  • ITAR.

6.7 Technical Assessment

Technical assessment is the crosscutting process used to help monitor technical progress of a program/project through periodic technical reviews and through monitoring of technical indicators such as MOEs, MOPs, Key Performance Parameters (KPPs), and TPMs. The reviews and metrics also provide status information to support assessing system design, product realization, and technical management decisions.

NASA has multiple review cycle processes for both space flight programs and projects (see NPR 7120.5), and research and technology programs and projects. (See NPR 7120.8, NASA Research and Technology Program and Project Management Requirements.) These different review cycles all support the same basic goals but with differing formats and formalities based on the particular program or project needs.

6.7.1 Process Description

Figure 6.7-1 provides a typical flow diagram for the Technical Assessment Process and identifies typical inputs, outputs, and activities to consider in addressing technical assessment. Technical assessment is focused on providing a periodic assessment of the program/project’s technical and programmatic status and health at key points in the life cycle. Six criteria are considered in this assessment process: alignment with and contribution to Agency strategic goals; adequacy of the management approach; adequacy of the technical approach; adequacy of the integrated cost and schedule estimates and funding strategy; adequacy and availability of non-budgetary resources; and adequacy of the risk management approach.

Figure 6.7-1 Technical Assessment Process

6.7.1.1 Inputs

Typical inputs needed for the Technical Assessment Process would include the following:

  • Technical Plans: These are the planning documents that will outline the technical reviews/assessment process as well as identify the technical product/process measures that will be tracked and assessed to determine technical progress. Examples of these plans are the program (or project) plan, SEMP (if applicable), review plans (which may be part of the program or project plan), ILS plan, and EVM plan (if applicable). These plans contain the information and descriptions of the program/project’s alignment with and contribution to Agency strategic goals, its management approach, its technical approach, its integrated cost and schedule, its budget, resource allocations, and its risk management approach.
  • Technical Process and Product Measures: These are the identified technical measures that will be assessed or tracked to determine technical progress. These measures are also referred to as MOEs, MOPs, KPPs, and TPMs. (See Section 6.7.2.6.2 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository.) They provide indications of the program/project’s performance in key management, technical, cost (budget), schedule, and risk areas.
  • Reporting Requirements: These are the requirements on the methodology by which the status of the technical measures will be reported with regard to management, technical, cost (budget), schedule, and risk. The requirements apply internally to the program/project and are used externally by the Centers and Mission Directorates to assess the performance of the program or project. The methodology and tools used for reporting the status will be established on a project-by-project basis.

6.7.1.2 Process Activities

6.7.1.2.1 Prepare Strategy for Conducting Technical Assessments

As outlined in Figure 6.7-1, the technical plans provide the initial inputs into the Technical Assessment Process. These documents outline the technical reviews/assessment approach as well as identify the technical measures that will be tracked and assessed to determine technical progress. An important part of the technical planning is determining what is needed in time, resources, and performance to complete a system that meets desired goals and objectives. Project managers need visibility into the progress of those plans in order to exercise proper management control. Typical activities in determining progress against the identified technical measures include status reporting and assessing the data. Status reporting will identify where the project stands with regard to a particular technical measure. Assessing will analytically convert the output of the status reporting into a more useful form from which trends can be determined and variances from expected results can be understood. Results of the assessment activity then feed into the Decision Analysis Process (see Section 6.8) where potential corrective action may be necessary.

These activities together form the feedback loop depicted in Figure 6.7-2.

Figure 6.7-2 Planning and Status Reporting Feedback Loop

This loop takes place on a continual basis throughout the project life cycle. This loop is applicable at each level of the project hierarchy. Planning data, status reporting data, and assessments flow up the hierarchy with appropriate aggregation at each level; decisions cause actions to be taken down the hierarchy. Managers at each level determine (consistent with policies established at the next higher level of the project hierarchy) how often and in what form status data should be reported and assessments should be made. In establishing these status reporting and assessment requirements, some principles of good practice are as follows:

  • Use an agreed-upon set of well-defined technical measures. (See Section 6.7.2.6.2 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository.)
  • Report these technical measures in a consistent format at all project levels.
  • Maintain historical data for both trend identification and cross-project analyses.
  • Encourage a logical process of rolling up technical measures (e.g., use the WBS or PBS for project progress status).
  • Support assessments with quantitative risk measures.
  • Summarize the condition of the project by using color-coded (red, yellow, and green) alert zones for all technical measures (a minimal classification sketch follows this list).
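
As one way to make the last principle concrete, here is a minimal sketch of a color-coded classifier for a tracked measure. The variance thresholds and the example TPM are illustrative assumptions, not NASA policy.

```python
def alert_zone(measured: float, plan: float, tolerance: float) -> str:
    """Classify a technical measure into color-coded alert zones based on its
    fractional variance from plan. Thresholds are illustrative only."""
    variance = abs(measured - plan) / plan
    if variance <= tolerance:
        return "green"        # within plan
    if variance <= 2 * tolerance:
        return "yellow"       # watch item: trending out of tolerance
    return "red"              # corrective action likely needed

# A hypothetical dry-mass TPM planned at 1250 kg with a 4 percent tolerance:
print(alert_zone(measured=1322.0, plan=1250.0, tolerance=0.04))  # "yellow"
```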

6.7.1.2.2 Assess Technical Work Productivity and Product Quality and Conduct Progress Reviews

Regular, periodic (e.g., monthly) tracking of the technical measures is recommended, although some measures should be tracked more often when there is rapid change or cause for concern. Key reviews, such as PDRs and CDRs, or status reviews are points at which technical measures and their trends should be carefully scrutinized for early warning signs of potential problems. Should there be indications that existing trends, if allowed to continue, will yield an unfavorable outcome, corrective action should begin as soon as practical. Section 6.7.2.6.1 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository provides additional information on status reporting and assessment techniques for costs and schedules (including EVM), technical performance, and systems engineering process metrics.
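
As a hedged illustration of trend-based early warning, the sketch below fits a simple linear trend to periodic TPM reports and estimates how many reporting periods remain before a margin is exhausted. The history values and limit are hypothetical; a real assessment would use the EVM/TPM techniques cited above and account for measurement uncertainty.

```python
def linear_trend(history: list[float]) -> tuple[float, float]:
    """Least-squares slope and intercept over equally spaced reporting periods."""
    n = len(history)
    xs = list(range(n))
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history))
             / sum((x - x_mean) ** 2 for x in xs))
    return slope, y_mean - slope * x_mean

def periods_to_limit(history: list[float], limit: float) -> float | None:
    """Reporting periods from the latest report until the fitted trend reaches
    `limit`; None if the trend is flat, moving away, or already past it."""
    slope, intercept = linear_trend(history)
    if slope == 0:
        return None
    t = (limit - intercept) / slope        # period index where trend == limit
    remaining = t - (len(history) - 1)
    return remaining if remaining > 0 else None

# Hypothetical power-margin TPM (percent) reported monthly, with a 10 percent alert limit:
print(periods_to_limit([20.0, 18.5, 17.2, 15.8], limit=10.0))  # ~4.2 months of margin left
```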

The measures are predominantly assessed during the program and project technical reviews. Typical activities performed for technical reviews include (1) identifying, planning, and conducting phase-to-phase technical reviews; (2) establishing each review’s purpose, objective, and entry and success criteria; (3) establishing the makeup of the review team; and (4) identifying and resolving action items resulting from the review. Section 6.7.2.3 summarizes the types of technical reviews typically conducted for both space flight and research and technology programs/projects and the role of these reviews in supporting management decision processes. It also identifies some general principles for holding reviews, but leaves explicit direction for executing a review to the program/project team to define.

The process of executing technical assessment has close relationships to other areas, such as risk management, decision analysis, and technical planning. These areas may provide inputs to the Technical Assessment Process or receive its outputs.

Table 6.7-1 provides a summary of the types of reviews for a space flight project, their purpose, and timing.

Table 6.7-1 Purpose and Results for Life-Cycle Reviews for Spaceflight Projects

6.7.1.2.3 Capture Work Products

The work products generated during these activities should be captured along with key decisions made, supporting decision rationale and assumptions, and lessons learned in performing the Technical Assessment Process.

6.7.1.3 Outputs

Typical outputs of the Technical Assessment Process would include the following:

  • Assessment Results, Findings, and Recommendations: This is the collective data on the established measures from which trends can be determined and variances from expected results can be understood. Results then feed into the Decision Analysis Process where corrective action may be necessary.
  • Technical Review Reports/Minutes: This is the collective information coming out of each review that captures the results, recommendations, and actions with regard to meeting the review’s success criteria.
  • Other Work Products: These would include strategies and procedures for technical assessment, key decisions and associated rationale, assumptions, and lessons learned.

6.7.2 Technical Assessment Guidance

Refer to Section 6.7.2 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository for additional guidance on:

  • the basis of technical reviews,
  • audits,
  • Key Decision Points,
  • required technical reviews for space flight projects,
  • other reviews, and
  • status reporting and assessment (including MOEs, MOPs, KPPs, TPMs, EVM, and other metrics).

Additional information is also available in NASA/SP-2014-3705, NASA Space Flight Program and Project Management Handbook.

6.8 Decision Analysis

The purpose of this section is to provide an overview of the Decision Analysis Process, highlighting selected tools and methodologies. Decision Analysis is a framework within which analyses of diverse types are applied to the formulation and characterization of decision alternatives that best implement the decision-maker’s priorities given the decision-maker’s state of knowledge.

The Decision Analysis Process is used in support of decision making bodies to help evaluate technical, cost, and schedule issues, alternatives, and their uncertainties. Decision models have the capacity for accepting and quantifying human subjective inputs: judgments of experts and preferences of decision makers.

The outputs from this process support the decision authority’s difficult task of deciding among competing alternatives without complete knowledge; therefore, it is critical to understand and document the assumptions and limitation of any tool or methodology and integrate them with other factors when deciding among viable options.

6.8.1 Process Description

A typical process flow diagram is provided in Figure 6.8-1, including inputs, activities, and outputs. The first step in the process is understanding the decision to be made in the context of the system/mission, which requires knowledge of the intended outcome in terms of technical performance, cost, and schedule. The next step is defining the decision criteria: the measures that are important for characterizing the options. With these defined, a set of alternative solutions can be identified for evaluation; these solutions should cover the full decision space established by the understanding of the decision and the decision criteria. The need for specific decision analysis tools (defined in Section 6.8.3 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository) can then be determined, and the tools employed to support formulation of a solution. Following completion of the analysis, a description of how each alternative compares against the decision criteria can be captured for submission to the decision-making body or authority. A recommendation is typically provided from the decision analysis but, at the discretion of the decision-making body, is not always required. A decision analysis report should then be generated that includes the decision to be made, decision criteria, alternatives, evaluation methods, evaluation process and results, recommendation, and final decision.

Figure 6.8-1 Decision Analysis Process

Decision analysis covers a wide range of timeframes. Complex, strategic decisions may require weeks or months to fully assess all alternatives and potential outcomes. Decisions can also be made in hours or in a few days, especially for smaller projects or activities. Decisions are also made in emergency situations. Under such conditions, process steps, procedures, and meetings may be combined. In these cases, the focus of the systems engineer is on obtaining accurate decisions quickly. Once the decision is made, the report can be generated. The report is usually generated in an ongoing fashion during the decision analysis process. However, for quick or emergency decisions, the report information may be captured after the decision has been made.

Not all decisions require the same amount of analysis effort. The level and rigor required in a specific situation depend essentially on how clear-cut the decision is. If there is enough uncertainty in the alternatives’ performance that the decision might change if that uncertainty were to be reduced, then consideration needs to be given to reducing that uncertainty. A robust decision is one that is based on sufficient technical evidence and characterization of uncertainties to determine that the selected alternative best reflects decision-maker preferences and values given the state of knowledge at the time of the decision. This is suggested in Figure 6.8-2 below.

Figure 6.8-2 Risk Analysis of Decision Alternatives

Note that in Figure 6.8-2, the phrase “net beneficial” in the decision node “Net beneficial to reduce uncertainty?” is meant to imply consideration of all factors, including whether the project can afford any schedule slip that might be caused by additional information collection and additional analysis.

6.8.1.1 Inputs

The technical, cost, and schedule inputs need to be comprehensively understood as part of the general decision definition. Based on this understanding, decision making can range from a simple meeting to a formal analytical evaluation. As illustrated in Figure 6.8-2, many decisions do not require extensive analysis and can be readily made with clear input from the responsible engineering and programmatic disciplines. Complex decisions may require more formal decision analysis when contributing factors have complicated or poorly defined relationships. Because of this complexity, formal decision analysis can consume significant resources and time. Typically, its application to a specific decision is warranted only when some of the following conditions are met:

  • Complexity: The actual ramifications of alternatives are difficult to understand without detailed analysis;
  • Uncertainty: Uncertainty in key inputs creates substantial uncertainty in the ranking of alternatives and points to risks that may need to be managed;
  • Multiple Attributes: Greater numbers of attributes cause a greater need for formal analysis; and
  • Diversity of Stakeholders: Extra attention is warranted to clarify objectives and formulate TPMs when the set of stakeholders reflects a diversity of values, preferences, and perspectives.

Satisfaction of all of these conditions is not a requirement for initiating decision analysis. The point is, rather, that the need for decision analysis increases as a function of the above conditions. In addition, often these decisions have the potential to result in high stakes impacts to cost, safety, or mission success criteria, which should be identified and addressed in the process. When the Decision Analysis Process is triggered, the following are inputs:

  • Decision need, identified alternatives, issues, or problems and supporting data: This information would come from all technical, cost, and schedule management processes. It may also include high-level objectives and constraints (from the program/project).
  • Analysis support requests: Requests will arise from the technical, cost, and schedule assessment processes.

6.8.1.2 Process Activities

For the Decision Analysis Process, the following activities are typically performed.

It is important to understand the decision needed in the context of the mission and system, which requires knowledge of the intended outcome in terms of technical performance, cost, and schedule. Part of this understanding is the definition of the decision criteria, i.e., the measures that are important for characterizing the options. The specific decision-making body, whether the program/project manager, chief engineer, line management, or a control board, should also be well defined. Based on this understanding, the specific approach to decision making can be defined.

Decisions are based on facts, qualitative and quantitative data, engineering judgment, and open communications to facilitate the flow of information throughout the hierarchy of forums where technical analyses and evaluations are presented and assessed and where decisions are made. The extent of technical analysis and evaluation required should be commensurate with the consequences of the issue requiring a decision. The work required to conduct a formal evaluation is significant and applicability should be based on the nature of the problem to be resolved. Guidelines for use can be determined by the magnitude of the possible consequences of the decision to be made.

6.8.1.2.1 Define the Criteria for Evaluating Alternative Solutions

This step includes identifying the following:

  • The types of criteria to consider, such as customer expectations and requirements, technology limitations, environmental impact, safety, risks, total ownership and life cycle costs, and schedule impact;
  • The acceptable range and scale of the criteria; and
  • The rank of each criterion by its importance.

Decision criteria are requirements for individually assessing the options and alternatives being considered. Typical decision criteria include cost, schedule, risk, safety, mission success, and supportability. However, considerations should also include technical criteria specific to the decision being made. Criteria should be objective and measurable. Criteria should also permit differentiating among options or alternatives. Some criteria may not be meaningful to a decision; however, they should be documented as having been considered. Criteria may be mandatory (i.e., “shall have”) or enhancing. An option that does not meet mandatory criteria should be disregarded. For complex decisions, criteria can be grouped into categories or objectives.
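
A minimal sketch of how mandatory ("shall have") screening and weighted scoring might be combined in a decision matrix follows. The criteria, weights, 1-9 scale, and mandatory floor are illustrative assumptions, not prescribed values.

```python
# Hypothetical criteria: (name, weight, mandatory?) with scores on a 1-9 scale.
CRITERIA = [
    ("safety", 0.40, True),
    ("life cycle cost", 0.35, False),
    ("schedule", 0.25, False),
]

def score_alternatives(alternatives: dict[str, dict[str, float]],
                       floor: float = 3.0) -> dict[str, float]:
    """Disregard options that fail any mandatory criterion, then compute a
    weighted score for the rest. Weights and floor are illustrative."""
    results = {}
    for name, scores in alternatives.items():
        if any(mandatory and scores[crit] < floor
               for crit, _, mandatory in CRITERIA):
            continue  # fails a "shall have" criterion: excluded from ranking
        results[name] = sum(weight * scores[crit]
                            for crit, weight, _ in CRITERIA)
    return results

options = {
    "Option A": {"safety": 9, "life cycle cost": 3, "schedule": 5},
    "Option B": {"safety": 2, "life cycle cost": 9, "schedule": 9},
}
print(score_alternatives(options))  # Option B fails the safety floor; Option A scores 5.9
```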

6.8.1.2.2 Identify Alternative Solutions to Address Decision Issues

With the decision need well understood, alternatives can be identified that fit the mission and system context. There may be several alternatives that could potentially satisfy the decision criteria. Alternatives can be found from design options, operational options, cost options, and/or schedule options.

Almost every decision will have options to choose from. These options are often fairly clear within the mission and system context once the decision need is understood. In cases where the approach is uncertain, there are several methods to help generate options. Brainstorming decision options with those knowledgeable of the context and decision can produce a good list of candidate alternatives; a literature search of related systems and approaches may surface others. All possible options should be considered, though this can become unwieldy if a large number of variations is possible. A “trade tree” (discussed later) is an excellent way to prune the set of variations before extensive analysis is undertaken, and to convey to other stakeholders the basis for that pruning; a minimal sketch follows.
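
One way to sketch a trade tree is as a set of branch points whose combinations are enumerated and then pruned by screening rules before any detailed evaluation. The branch points and the screening rule below are hypothetical.

```python
from itertools import product

# Hypothetical trade tree: each branch point and its candidate options.
tree = {
    "propulsion": ["storable", "cryogenic"],
    "power": ["solar", "RTG"],
    "lander legs": [3, 4],
}

def enumerate_alternatives(tree: dict, infeasible) -> list[dict]:
    """Expand the tree into complete alternatives, pruning any combination the
    screening rule rejects before detailed evaluation begins."""
    keys = list(tree)
    return [dict(zip(keys, combo))
            for combo in product(*tree.values())
            if not infeasible(dict(zip(keys, combo)))]

# Illustrative screening rule: rule out cryogenic propulsion paired with RTG power.
pruned = enumerate_alternatives(
    tree, lambda alt: alt["propulsion"] == "cryogenic" and alt["power"] == "RTG")
print(len(pruned))  # 6 of the 8 raw combinations survive screening
```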

A good understanding of decision need and criteria will include the definition of primary and secondary factors. Options should be focused on primary factors in the decision as defined by the decision criteria. Non-primary factors (i.e., secondary, tertiary) can be included in evaluations but should not, in general, define separate alternatives. This will require some engineering judgment that is based on the mission and system context as well as the identified decision criteria. Some options may quickly drop out of consideration as the analysis is conducted. It is important to document the fact that these options were considered. A few decisions might only have one option. It is a best practice to document a decision matrix for a major decision even if only one alternative is determined to be viable. (Sometimes doing nothing or not making a decision is an option.)

6.8.1.2.3 Select Evaluation Methods and Tools

Based on the decision to be made, various approaches can be taken to evaluate identified alternatives. These can range from simple discussion meetings with contributing and affected stakeholders to more formal evaluation methods. In selecting the approach, the mission and system context should be kept in mind and the complexity of the decision analysis should fit the complexity of the mission, system, and corresponding decision.

Evaluation methods and tools/techniques to be used should be selected based on the purpose for analyzing a decision and on the availability of the information used to support the method and/or tool. Typical evaluation methods include: simulations; weighted trade-off matrices; engineering, manufacturing, cost, and technical opportunity trade studies; surveys; human-in-the-loop testing; extrapolations based on field experience and prototypes; user review and comment; and testing. Section 6.8.2 provides several options.

6.8.1.2.4 Evaluate Alternative Solutions with the Established Criteria and Selected Methods

The performance of each alternative with respect to each chosen performance measure is evaluated. In all but the simplest cases, some consideration of uncertainty is warranted. Uncertainty matters in a particular analysis only if there is a non-zero probability that uncertainty reduction could alter the ranking of alternatives. If this condition holds, then it is necessary to consider the value of reducing that uncertainty and act accordingly.

Regardless of the methods or tools used, results should include the following:

  • Evaluation of assumptions related to evaluation criteria and of the evidence that supports the assumptions; and
  • Evaluation of whether uncertainty in the values for alternative solutions affects the evaluation.

When decision criteria have different measurement bases (e.g., numbers, money, weight, dates), normalization can be used to establish a common base for mathematical operations. “Normalization” means constructing a common scale so that different kinds of criteria can be compared or added together. This can be done informally (e.g., low, medium, high), on a scale (e.g., 1-3-9), or more formally with a tool. However normalization is done, the most important thing is to have operational definitions of the scale, i.e., repeatable, measurable definitions of each scale point. For example, “high” could mean “a probability of 67 percent and above,” and “low” could mean “a probability of 33 percent and below.” For complex decisions, decision tools usually provide an automated way to normalize. It is important to question and understand the operational definitions behind the tool’s weights and scales.
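
A small sketch of normalization with operational definitions, mapping the probability bands from the example above onto a 1-3-9 scale; the band thresholds are the operational definitions and would be agreed with the decision-maker.

```python
def normalize(value: float, bands: list[tuple[float, int]]) -> int:
    """Map a raw measurement onto a common scale using operational definitions.
    `bands` are (lower_threshold, score) pairs ordered from best to worst."""
    for threshold, score in bands:
        if value >= threshold:
            return score
    return bands[-1][1]  # below every threshold: worst score

# Operational definitions from the example above, on a 1-3-9 scale:
# "high" = probability of 67 percent and above, "low" = 33 percent and below.
probability_bands = [(0.67, 9), (0.33, 3), (float("-inf"), 1)]

print(normalize(0.72, probability_bands))  # 9 ("high")
print(normalize(0.50, probability_bands))  # 3 ("medium")
print(normalize(0.20, probability_bands))  # 1 ("low")
```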

Note: Completing the decision matrix can be thought of as a default evaluation method. Completing the decision matrix is iterative. Each cell for each criterion and each option needs to be completed by the team. Use evaluation methods as needed to complete the entire decision matrix.

6.8.1.2.5 Select Recommended Solutions from the Alternatives Based on the Evaluation Criteria and Report to the Decision-Maker

Once the decision alternative evaluation is completed, recommendations should be brought back to the decision-maker, including an assessment of the robustness of the ranking (i.e., whether the uncertainties are such that reducing them could credibly change the ranking of the alternatives). Generally, a single alternative should be recommended. However, if the alternatives do not significantly differ, or if uncertainty reduction could credibly alter the ranking, the recommendation should include all closely ranked alternatives for a final selection by the decision-maker. In any case, the decision-maker is always free to select any alternative or ask for additional alternatives to be assessed (often with updated guidance on selection criteria). This step includes documenting the assumptions and limitations of the evaluation methods used and the analysis of uncertainty in the alternatives’ performance, so that the record justifies the recommendations made and gives the impacts of taking the recommended course of action, including whether further uncertainty reduction would be justifiable.
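
One way to assess the robustness of a ranking, sketched under assumed score distributions: draw each alternative’s score from a distribution representing evaluation uncertainty and count how often each ranks first. The means and sigmas below are illustrative; a real analysis would use the uncertainty characterizations from the evaluation step.

```python
import random

def ranking_robustness(alternatives: dict[str, tuple[float, float]],
                       trials: int = 10_000) -> dict[str, float]:
    """Draw each alternative's score from a normal(mean, sigma) distribution
    and report the fraction of trials in which each alternative ranks first."""
    wins = {name: 0 for name in alternatives}
    for _ in range(trials):
        draws = {name: random.gauss(mean, sigma)
                 for name, (mean, sigma) in alternatives.items()}
        wins[max(draws, key=draws.get)] += 1
    return {name: count / trials for name, count in wins.items()}

# Two closely ranked options whose uncertainty bands overlap:
print(ranking_robustness({"Option A": (5.9, 0.6), "Option B": (5.6, 0.8)}))
# e.g., {'Option A': 0.62, 'Option B': 0.38}: close enough that both should go forward
```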

The highest-scoring option (e.g., by percentage or total score) is typically the one recommended to management. If a different option is recommended, an explanation should be provided as to why the lower-scoring option is preferred. Usually, if a lower-scoring alternative is recommended, the “risks” or “disadvantages” of the highest-ranking alternative were too great, indicating that the scoring methods did not properly rank the alternatives. Sometimes the benefits and advantages of a lower or close score outweigh the highest score. If this occurs, the decision criteria should be reevaluated: not only the weights, but the basic definitions of what is being measured for each alternative. The criteria should be updated, with concurrence from the decision-maker, to more correctly reflect the suitability of each alternative.

6.8.1.2.6 Report Analysis Results

These results are reported to the appropriate stakeholders with recommendations, impacts, and corrective actions.

6.8.1.2.7 Capture Work Products

These work products may include the decision analysis guidelines, strategy, and procedures that were used; analysis/evaluation approach; criteria, methods, and tools used; analysis/evaluation assumptions made in arriving at recommendations; uncertainties; sensitivities of the recommended actions or corrective actions; and lessons learned.

6.8.1.3 Outputs

6.8.1.3.1 Alternative Selection and Decision Support Recommendations and Impacts

Once the technical team recommends an alternative to a NASA decision-maker (e.g., a NASA board, forum, or panel), all decision analysis information should be documented. The team should produce a report to document all major recommendations to serve as a backup to any presentation materials used. A report in conjunction with a decision matrix provides clearly documented rationale for the presentation materials (especially for complex decisions). Decisions are typically captured in meeting minutes and should be captured in the report as well. Based on the mission and system context and the decision made, the report may be a simple white paper or a more formally formatted document. The important characteristic of the report is the content, which fully documents the decision needed, assessments done, recommendations, and decision finally made.

This report includes the following:

  • mission and system context for the decision
  • decision needed and intended outcomes
  • decision criteria
  • identified alternative solutions
  • decision evaluation methods and tools employed
  • assumptions, uncertainties, and sensitivities in the evaluations and recommendations
  • results of all alternative evaluations
  • alternative recommendations
  • final decision made with rationale
  • lessons learned

Typical information captured in a decision report is shown in Table 6.8-1.

Table 6.8-1 Typical Information to Capture in a Decision Report

6.8.2 Decision Analysis Guidance

Refer to Section 6.8.2 in the NASA Expanded Guidance for Systems Engineering at https://nen.nasa.gov/web/se/doc-repository for additional guidance on decision analysis methods supporting all SE processes and phases including:

  • trade studies,
  • cost-benefit analysis,
  • influence diagrams,
  • decision trees,
  • analytic hierarchy process,
  • Borda counting, and
  • utility analysis.

Additional information on tools for decision making can be found in NASA Reference Publication 1358, System Engineering “Toolbox” for Design-Oriented Engineers located at https://nen.nasa.gov/web/se/doc-repository.