By Carol Woody and Christopher Alberts


Abstract.

Mission threads describe operational process steps required to perform organizational functions. Researchers from the Carnegie Mellon Software Engineering Institute (SEI) explored the use of mission threads to connect desired operational capability to the underlying technology for analysis of system and software qualities such as security. The SEI has successfully applied this approach to develop an integration strategy for alert originators who plan to use the Wireless Emergency Alerts (WEA) system, a public alerting service offered by the Department of Homeland Security. This paper describes the approach using the WEA example and the value gained in applying mission thread analysis for security.

Importance of Systems of Systems

Everything we do these days involves system and software technology: cars, planes, banks, restaurants, stores, telephones, appliances, and entertainment rely extensively on technology. Much of this capability is supported by systems of systems: independent, heterogeneous systems that work together to deliver desired functionality through complex network, data, and software interactions. The Wireless Emergency Alerts (WEA) service is a good example of a system of systems.

WEA enables local, tribal, state, territorial, and federal public safety officials to send geographically targeted text alerts to the public. The U.S. Department of Homeland Security Science and Technology (DHS S&T) Directorate partners with the Federal Emergency Management Agency (FEMA), the Federal Communications Commission, and commercial mobile service providers (CMSPs) to enhance public safety through the deployment of WEA, which permits emergency management organizations nationwide to submit alerts for public distribution by mobile carriers [1]. Alert originators can send three types of messages:

• Presidential Alerts, issued by the president of the United States to reach any region of the nation or the nation as a whole

• Imminent Threat Alerts

• AMBER (America’s Missing: Broadcast Emergency Response) Alerts

CMSPs relay these alerts from FEMA’s Integrated Public Alert and Warning System (IPAWS) to mobile phones using cell broadcast technology, which does not get backlogged during times of emergency, unlike wireless voice and data services. Customers who own WEA-capable mobile phones will automatically receive these alerts during an emergency if they are located in the affected geographic area.

Alert originators already have extensive alert dissemination capability through the Emergency Alert System, highway signage systems, Internet websites, and telephone dialing systems, to mention a few widely used alerting channels. The WEA system, which is diagrammed in Fig. 1, would expand these options to mobile devices. FEMA established the message structure and the approvals needed to have the Alert Aggregator system accept messages for dissemination to mobile devices. Many alert originators plan to integrate this capability with systems already in place for other dissemination channels.

The Systems Engineering Handbook describes the following challenges for the development (and sustainment) of systems of systems [2]:

• Each participating system operates independently.

• Each participating system has its own update, enhancement, and replacement cycle.

• Overall performance of the desired functionality depends on how the various participating systems can interact, which is not always known in advance.

• Missing or conflicting standards can make the design of data exchanges among the participating systems complex and difficult to sustain.

• Each participating system has its own management, and the coordination of requirements, budget constraints, schedules, interfaces, and upgrades can have a major impact on the expected capability of the system of systems.

• Fuzzy boundaries can cause confusion and error; no one really owns the interface, but one of the participants needs to take leadership to ensure some level of shared understanding.

• The system of systems is never finished because as each system grows, expands, and ages, there is a constant opportunity and need for adjustment.

Public safety officials and alert recipients want to be able to rely on WEA capabilities and need to have confidence that the alerts are accurate and timely. Effective security is required to support this confidence. The risk that an attacker could create false alerts or cause valid alerts to be delayed, destroyed, or modified was identified as a critical issue. This could place the alert-originating organization’s mission, and the lives and property of the citizens it serves, at risk.

DHS S&T asked a team of security experts at the SEI to research this problem and identify a means for evaluating WEA alert originator security concerns. The team selected mission thread analysis as a means for developing a view of the system of systems that could be used for evaluating security risks.

An analysis approach was needed to prepare alert originators to address the following critical security questions [3]:

• What do alert originators need to protect? Why does it need to be protected? What happens if it is not protected?

• What potential adverse consequences do alert originators need to prevent? At what cost? How much disruption can they stand before they take action?

• How do alert originators determine and effectively manage the residual risk?

In addition, alert originators needed to consider the local, state, and federal compliance standards that the organization must address to ensure that the planned choices for security also meet other mandated standards.

Preparing for Mission Thread Analysis

Drawing on SEI security expertise, initial questions were assembled to assist the alert originator in gathering information about the current environment and preparing for WEA (or any new technology capability). The alert-originating organization should compose answers to the following questions:

• What WEA capability do we plan to implement (types of alerts to issue, geographic regions to cover)?

• Can we expand existing capabilities to add WEA, or do we need new capabilities?

• Are good security practices in place for the current operational environment? Is there any history of security problems that can inform our planning?

• Will we use current resources (technology and people), or do we need to add resources?

Responses to these questions begin to frame the target operational context and the critical functionality that organizations must evaluate for operational security. Each organization will have a different mix of acquired technology and services, in-house development components, and existing operational capability into which the WEA capability will be woven. With the use of mission threads, responses to these questions can be described in a visually compelling form that management, system architects, system and software engineers, and stakeholders can share and refine.

A mission thread is an end-to-end set of steps that illustrates the technology and people resources needed to deliver expected behavior under a set of conditions; it provides a basis for identifying and analyzing potential problems that could represent risks. For each mission step, the expected actions, outcomes, and assets are assembled. Confirmation that the components respond appropriately to expected operational use increases confidence that the system will function as intended, even in the event of an attack [4].
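The per-step structure described above (actions, outcomes, assets) can be captured in a simple data structure. The sketch below is illustrative only; the field names and the two sample WEA steps are assumptions for demonstration, not an SEI artifact.

```python
# Illustrative sketch: a minimal structure for mission thread steps,
# recording the actors, assets, and expected outcome of each step.
from dataclasses import dataclass, field

@dataclass
class MissionStep:
    number: int                                   # position in the end-to-end thread
    description: str                              # the expected action
    actors: list = field(default_factory=list)    # people/roles involved
    assets: list = field(default_factory=list)    # technology assets the step relies on
    expected_outcome: str = ""

# Two WEA steps expressed in this form (content paraphrased from the article):
thread = [
    MissionStep(4, "AOS operator logs on to the AOS",
                actors=["AOS operator"], assets=["AOS"],
                expected_outcome="Authenticated operator session"),
    MissionStep(5, "AOS logon process activates auditing of the session",
                actors=["AOS operator"], assets=["AOS", "audit log"],
                expected_outcome="Session activity is recorded"),
]

for step in thread:
    print(step.number, step.description, "->", step.expected_outcome)
```

Enumerating the thread this way makes each step's assets explicit, which is exactly the input the later threat analysis consumes.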

Mission threads provide a means to identify and evaluate the ways, intentional or unintentional, that component system failures could occur and how these would impact the mission. A WEA example is provided to demonstrate how SEI used a mission thread to analyze security.

WEA Mission Thread Example

Mission thread analysis begins with the development of an operational mission thread. For WEA, 25 steps typically take place from the determination that an alert is needed to its receipt by mobile device users:

1. First responder contacts local alerting authority via an approved device (cell phone, email, radio, etc.) to state that an event meets criteria for using WEA to issue, cancel, or update an alert and provides information for a message.

2. Local alerting authority (person) determines that a call or email from a first responder is legitimate.

3. Local alerting authority instructs Alert Origination System (AOS) operator to issue, cancel, or update an alert using information provided by a first responder.1

4. AOS operator logs on to the AOS.

5. AOS logon process activates auditing of the operator’s session.

6. AOS operator enters alert, cancel, or update message.

7. AOS converts message to a format compliant with the Common Alerting Protocol (CAP, a WEA input standard).

8. CAP-compliant message is signed by a second person for local confirmation.

9. AOS transmits message to the IPAWS Open Platform for Emergency Networks (OPEN) Gateway.

10. IPAWS-OPEN Gateway verifies2 message and returns status message to AOS.

11. AOS operator reads status message and responds as needed.

12. If the message was verified, IPAWS-OPEN Gateway sends message to WEA Alert Aggregator.

13. WEA Alert Aggregator verifies message and returns status to IPAWS-OPEN Gateway.

14. IPAWS-OPEN Gateway processes status and responds as needed.

15. WEA Alert Aggregator performs additional message processing as needed.

16. If the message was verified, WEA Alert Aggregator transmits alert to Federal Alert Gateway.

17. Federal Alert Gateway verifies message and returns status to WEA Alert Aggregator.

18. WEA Alert Aggregator processes status and responds as needed.

19. If the message was verified, Federal Alert Gateway converts message to CMAC (Commercial Mobile Alert for Interface C) format.

20. Federal Alert Gateway transmits message to CMSP gateway.

21. CMSP Gateway returns status to Federal Alert Gateway.

22. Federal Alert Gateway processes status and responds as needed.

23. CMSP Gateway sends message to CMSP Infrastructure.

24. CMSP Infrastructure sends message via broadcast to mobile devices in the designated area(s).

25. Mobile device users (recipients) receive the message.

Although many of the steps do not involve technology, they can still represent security risks to the mission. Unlike techniques such as Failure Mode and Effect Analysis [5], mission thread analysis allows consideration of people and their interactions with technology in addition to the functioning of the system itself. Most security evaluations also consider only the execution of an individual system, yet effective operational execution of a mission must cross organizational and system boundaries to be complete. Using mission thread analysis for security provides a way of confirming that each participating system is secure and does not represent a risk to the others involved in mission execution.

Fig. 2 provides a picture of the WEA mission thread and includes step numbers from the list to link each step to the appropriate system area. Successful completion requires flawless execution of four major system areas—alert originator, FEMA IPAWS system, CMSPs, and cell phone recipients—each shown in a row of the figure. Each area operates independently, connected only through the transmission of an alert.

WEA Security Analysis

Using the mission thread illustrated in Fig. 2, potential security concerns can be identified by examining possible security threats. For the WEA example, the SEI selected the STRIDE threat method for threat evaluation. STRIDE, developed by Microsoft, considers six typical categories of security concerns: spoofing, tampering with data, repudiation, information disclosure, denial of service, and elevation of privilege [6]. The name of the method is derived from the first letter of each category [7]. As an illustration of how STRIDE can be applied, consider Steps 4–9 of the mission thread, which span the transition between two major system areas, from the alert originator to the FEMA system, and present an opportunity for mission failure if the interaction between the areas is not secure. Table 1 shows the result of the STRIDE analysis of the selected steps.
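Mechanically, the STRIDE pass produces a table with one cell per (step, asset, threat category) combination, each of which the analysts then fill in or mark as not applicable. The sketch below enumerates those cells for two of the steps discussed in the article; the asset names are illustrative assumptions, not the actual Table 1 content.

```python
# Sketch of the STRIDE enumeration: every asset of every analyzed step is
# examined against each of Microsoft's six STRIDE threat categories.
STRIDE = [
    "Spoofing",
    "Tampering with data",
    "Repudiation",
    "Information disclosure",
    "Denial of service",
    "Elevation of privilege",
]

# Hypothetical asset lists for two WEA mission thread steps:
step_assets = {
    4: ["AOS operator credentials", "AOS"],            # operator logon
    8: ["digital signature certificate", "CAP message"],  # local signing
}

# Enumerate every (step, asset, threat) cell of the analysis table.
analysis_cells = [
    (step, asset, threat)
    for step, assets in sorted(step_assets.items())
    for asset in assets
    for threat in STRIDE
]

print(len(analysis_cells))  # 4 assets x 6 categories = 24 cells to assess
```

The value of the enumeration is completeness: no asset in a step escapes consideration against any threat category, even if most cells end up marked as not applicable.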

For each step, the technology assets critical to step execution were analyzed to determine ways that STRIDE threats could compromise each asset used in that step [7]. Security and software experts, as well as individuals familiar with the operational mission, need to participate in this portion of the analysis. The security and software experts bring an understanding of what can go wrong and the potential impact of each possible failure; those knowledgeable about operational execution can ensure that the scenarios are realistic and valid. Available documentation can provide a starting point for developing the mission threads, but there is a tendency to document the desired operational environment rather than the real one. Effective security risk analysis requires access to realistic operational information.

Based on this input, security experts (individuals with operational security training and experience) identified at least two security risks that could lead to mission failure:

1. authentication of the individual using the AOS in Step 4

2. validation and protection of the digital signatures applied to the alert approved for submission to the Alert Aggregator in Step 8

To analyze these risks in greater detail and help alert originators understand how a security risk could materialize, mission threads for each specific risk were assembled. Fig. 3 provides a picture of the risk scenario that describes the second security risk (validity of the digital signature) noted from the analysis of the WEA operational mission thread. The following paragraphs provide a narrative that describes ways in which the security threat could materialize and why the alert origination organization should consider possible mitigations.

An outside attacker with malicious intent decides to obtain a valid certificate and use it to send an illegitimate CAP-compliant message. The attacker’s goal is to send people to a dangerous location, hoping to inflict physical and emotional harm on them. The key to this attack is capturing a valid certificate from an alert originator. The attacker develops two strategies for capturing a valid certificate. The first strategy targets an alert originator directly. The second strategy focuses on AOS vendors. Targeting a vendor could be a particularly fruitful strategy for the attacker. The number of vendors that provide AOS software is small. As a result, each vendor controls a large number of certificates. A compromised vendor could provide an attacker with many potential organizations to target.

No matter which strategy is pursued, the attacker looks for vulnerabilities (i.e., weaknesses) in technologies or procedures that can be exploited. For example, the attacker will try to find vulnerabilities that expose certificates to exploit, such as

• unmonitored access to certificates

• lack of encryption controls for certificates during transit and storage

• lack of role-based access to certificates
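Some of the certificate-exposure vulnerabilities listed above lend themselves to simple automated checks. The sketch below, which assumes a POSIX file system and PEM-encoded key material, uses file permissions as a rough proxy for access control and a naive marker test for at-rest encryption; the file paths and heuristics are illustrative assumptions, not a vetted audit procedure.

```python
# Hedged sketch: naive checks for two certificate-exposure vulnerabilities,
# unmonitored/over-broad access and lack of at-rest encryption (POSIX only).
import os
import stat
import tempfile

def world_accessible(path):
    """True if group or others can read the file (POSIX permission bits)."""
    mode = os.stat(path).st_mode
    return bool(mode & (stat.S_IRGRP | stat.S_IROTH))

def key_looks_encrypted(path):
    """Naive heuristic: PEM keys protected by a passphrase carry an
    ENCRYPTED marker in their header; unencrypted keys do not."""
    with open(path, "rb") as f:
        head = f.read(200)
    return b"ENCRYPTED" in head

# Demonstration with a throwaway, unprotected key file:
with tempfile.NamedTemporaryFile("wb", suffix=".pem", delete=False) as f:
    f.write(b"-----BEGIN RSA PRIVATE KEY-----\n...\n")
    path = f.name
os.chmod(path, 0o644)  # world-readable: both checks should flag this file

print("world accessible:", world_accessible(path))
print("encrypted at rest:", key_looks_encrypted(path))
os.remove(path)
```

Real certificate hygiene also requires the controls that no file check can see, such as monitored access and role-based authorization, which is why the article treats these as process questions rather than purely technical ones.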

The attacker might also explore social engineering techniques to obtain a certificate. Here, the attacker attempts to manipulate someone from the alert originator or vendor organization into providing access to a legitimate certificate or to get information that will be useful in the attacker’s quest to get a certificate.

Obtaining a certificate is not a simple endeavor. The attacker has to be sufficiently motivated and skilled to achieve this interim goal. However, once this part of the scenario is complete, the attacker is well positioned to send an illegitimate CAP-compliant message. The attacker has easy access to publicly documented information defining how to construct CAP-compliant messages.
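The reason certificate capture is the pivotal step can be shown in a few lines. In the sketch below, an HMAC stands in for the real signature scheme (which this example does not attempt to reproduce): once the attacker holds the signing secret, a forged message verifies exactly like a legitimate one, and the gateway's verification step offers no further defense.

```python
# Illustration (HMAC as a stand-in for the real signature scheme): whoever
# holds the signing secret can produce messages the verifier will accept.
import hashlib
import hmac

def sign(secret: bytes, message: bytes) -> bytes:
    return hmac.new(secret, message, hashlib.sha256).digest()

def gateway_accepts(secret: bytes, message: bytes, signature: bytes) -> bool:
    # Constant-time comparison of the expected and presented signatures.
    return hmac.compare_digest(sign(secret, message), signature)

secret = b"originator-signing-secret"  # the asset the attacker targets
forged = b"<alert>evacuate toward the flood zone</alert>"

# Without the secret, the forgery is rejected...
assert not gateway_accepts(secret, forged, sign(b"wrong-guess", forged))
# ...but with the captured secret, the forgery verifies like a real alert.
assert gateway_accepts(secret, forged, sign(secret, forged))
print("forged alert accepted once the secret is captured")
```

This is why the mitigations in the scenario center on protecting the credential itself rather than on hardening the verification step.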

The attacker’s goal in this risk is to send people to a location that will put them in harm’s way. To maximize the impact, the attacker takes advantage of an impending event (e.g., a weather event or natural disaster). Because people are likely to verify WEA messages through other channels, synchronizing the attack with a real impending event makes it more likely that people will follow the attacker’s instructions. This scenario could produce catastrophic consequences, depending on the severity of the event with which the attack is linked.

Through the use of mission thread analysis, security expertise can be integrated with operational execution to fully describe and analyze operational security risk situations. While there may be many variations of operational execution, an exhaustive study of all options is not necessary. Building a representative example that provides a detailed view of a real operational mission from start to finish has proven to be of value for security risk identification.

Conclusion

The process of developing a well-articulated mission thread that operational and security experts can share and analyze provides an opportunity to uncover missing or incomplete requirements as well as differences in understanding, faulty assumptions, and interactions across system and software boundaries that could contribute to security concerns and potential failure [4].

The mission thread analysis connects each mission step with the technology and human assets needed to execute that step and provides a framework to link potential security threats directly to mission execution. Mission thread diagrams and tables assemble information in a structure that can be readily reviewed and validated by operational and technology experts from various disciplines including acquisition, development, and operational support. Mission thread security analysis can be an effective tool for improved identification of security risks to increase confidence that the system of systems will function with appropriate operational security.

Acknowledgment

This material is based upon work funded and supported by the Department of Homeland Security under Contract No. FA8721-05-C-0003 with Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded research and development center sponsored by the United States Department of Defense. The Government of the United States has a royalty-free government-purpose license to use, duplicate, or disclose the work, in whole or in part and in any manner, and to have or permit others to do so, for government purposes pursuant to the copyright license under the clause at 252.227-7013 and 252.227-7013 Alternate I.

Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Department of Homeland Security or the United States Department of Defense.

Carnegie Mellon® and CERT® are registered in the U.S. Patent and Trademark Office by Carnegie Mellon University. DM-0000471


References and Notes

References:

1. United States. Federal Emergency Management Agency. “Wireless Emergency Alerts.” FEMA, 20 June 2013. Web. 7 June 2013.

2. Haskins, C., ed. Systems Engineering Handbook: A Guide for System Life Cycle Processes and Activities. Version 3.2. Revised by K. Forsberg, M. Krueger, and R. Hamelin. San Diego: International Council on Systems Engineering (INCOSE), 2010. Print.

3. Allen, J., S. Barnum, R. Ellison, G. McGraw, and N. Mead. Software Security Engineering: A Guide for Project Managers. Upper Saddle River: Addison-Wesley, 2008. Print.

4. Ellison, R., J. Goodenough, C. Weinstock, and C. Woody. Survivability Assurance for System of Systems (CMU/SEI-2008-TR-008). Pittsburgh: Software Engineering Institute, Carnegie Mellon University, May 2008. Web. 7 June 2013.

5. Stamatis, D. H. Failure Mode and Effect Analysis: FMEA from Theory to Execution. 2nd ed. Milwaukee: ASQ Quality Press, 2003. Print.

6. Microsoft. “The STRIDE Threat Model.” Microsoft Developer Network. Microsoft, 2005. Web. 7 June 2013.

7. Howard, M., and S. Lipner. The Security Development Lifecycle. Redmond: Microsoft Press, 2006. Print.

Notes:

1. In some cases, the alerting authority and the AOS operator may be the same person.

2. In this list of steps, message verification includes authentication and ensuring that the message is correctly formatted.


Carol Woody


Dr. Carol Woody has been a senior member of the technical staff at the Software Engineering Institute, Carnegie Mellon University since 2001. Currently she is the technical lead of the cyber security engineering team whose research focuses on building capabilities in defining, acquiring, developing, measuring, managing, and sustaining secure software for highly complex networked systems as well as systems of systems.

Carnegie Mellon University

Software Engineering Institute

4500 Fifth Avenue

Pittsburgh, PA 15213
Phone: 412-268-9137
E-mail: cwoody@cert.org

Christopher Alberts


Christopher Alberts is a Principal Engineer in the CERT® Program at the Software Engineering Institute, where he leads applied research projects in software assurance and cyber security. His research interests include risk analysis, measurement, and assessment. He has published two books and over 35 technical reports and articles. Alberts has BS and ME degrees in engineering from Carnegie Mellon University. Prior to the SEI, he worked at Carnegie Mellon Research Institute and AT&T Bell Laboratories.

Carnegie Mellon University

Software Engineering Institute

4500 Fifth Avenue

Pittsburgh, PA 15213

Phone: 412-268-3045

E-mail: cja@cert.org


