Written by: Mark Wireman, January 1, 2020
When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science. — William Thomson, Lord Kelvin, 1883
The next few blogs will focus on what I consider one of the most important and challenging aspects of DevSecOps: Governance. This important principle of DevSecOps is often treated as an afterthought, usually surfacing only when others outside of the Dev organization begin asking questions about
- security vulnerabilities identified and remediated;
- time and costs to push code into production;
- cost-benefit of integrating a new technology;
- the benefit of a new process or control;
- adherence to compliance and regulatory requirements;
- adherence to policies, standards, and guidelines established by the organization.
Governance is, and should be, at the forefront of any DevSecOps venture, and metrics are the key to operationalizing Governance within a DevSecOps program. As presented in the first blog, Governance drives the Processes, which influence the People, and determines what technology will be leveraged and how, so that the technology adheres to the Processes and keeps the People within the required guardrails of the Governance requirements.
This blog kicks off the Governance series with the importance of measuring, what should be measured, and how metrics are key to the Governance principle within DevSecOps.
Why Should We Measure?
The short answer is: you can't make a decision about what you don't know, and what you don't know can result in a poor decision. Poor decisions lead to overspending, blown budgets, overextended resources, and the same mistakes repeated again and again. Far too often, organizations spend millions of dollars on the latest technology in the belief that it will solve a perceived issue, yet the issue was never properly measured to determine whether it actually exists and whether the latest technology will solve it. Instead, millions of dollars are spent, only 25% of the technology is used, and the organization now has two problems: the original problem and the problem of how to use the new technology.
The act of measuring helps identify what resources are required, what needs to be improved and why, and how effectively a solution performs. If an organization is missing key technologies and skilled resources, it needs to know the depth and breadth of the missing elements. If an organization needs to improve a process or its performance, then that process and performance must be measured for efficiency, effectiveness, and gaps. And when the organization invests in technologies and skilled resources, it will need a way to state whether the desired technical functions and skillsets are actually being performed with the expected outcomes. Figure 1 shows examples of the What, Why, and Who of metrics.
Therefore, there are key qualities that help shape an effective measurement program:
- Enable important and key Decisions – the metric must allow for and enable decision-makers at the appropriate levels to take action.
- Objective – the metric must be definable using numbers and comprised of inputs that are numbers from known sources that are as void from manual calculation and subjectivity as possible.
- Foundation based – the metric should have a known and solid source that is well understood and can be explained to all consumers, decision-makers, and stakeholders.
- Repeatable – the metric must be simple to gather and kept up-to-date on a consistent and automated basis.
- Measure only what matters – start with an understanding of what to track based on the organization's business, risk, and security objectives, and be willing to adjust as needed.
- Less is the General Rule – the goal is a small number of key metrics rather than a large number of metrics with no meaning. Other metrics may be required over time; start with a select few key measurable objectives, then expand as needed.
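The qualities above can be made concrete by treating each metric as a catalog entry that must name the decision it enables and its automated, well-understood source. A minimal sketch in Python; the `MetricDefinition` class and its fields are illustrative, not drawn from any standard:

```python
from dataclasses import dataclass

@dataclass
class MetricDefinition:
    """One entry in a metrics catalog, modeled on the key qualities above."""
    name: str
    decision_enabled: str   # the decision this metric allows someone to make
    data_source: str        # known, explainable source system (Foundation based)
    automated: bool         # gathered consistently without manual input (Repeatable)
    objective_formula: str  # numeric definition, free of subjective inputs (Objective)

    def is_viable(self) -> bool:
        # A metric that enables no decision, or that relies on manual
        # collection, fails the "enable decisions" / "repeatable" tests.
        return bool(self.decision_enabled) and self.automated

# Hypothetical example entry.
mttr = MetricDefinition(
    name="Mean time to remediate (critical)",
    decision_enabled="Prioritize remediation staffing",
    data_source="Defect tracker API",
    automated=True,
    objective_formula="avg(closed_date - opened_date) for critical vulns",
)
print(mttr.is_viable())  # True
```

Keeping the catalog small and forcing every entry through a viability check like this is one way to enforce the "Less is the General Rule" quality in practice.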
Selecting the Right Metrics Model
To achieve these key metric qualities, the foundation of quality metrics lies in the model selected to drive the metrics program. In the realm of Software Risk this is no small task. The focus of Software Risk is on vulnerabilities – how many, their risk classification, time to remediation, etc. – yet this is only a small part of the larger metrics picture. If the only focus is on vulnerabilities, then how can an organization determine the effectiveness of training, staff, and skills? What types of threats and threat mitigation techniques are being used? Where are the security requirements, and how are they incorporated into the overall design? How effective and efficient is the patch management process?
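As a worked example of the vulnerability-centric measures just mentioned (how many, risk classification, time to remediation), here is a short sketch; the record layout and dates are hypothetical:

```python
from datetime import date
from statistics import mean
from collections import Counter

# Hypothetical vulnerability records: (severity, opened, closed-or-None).
vulns = [
    ("critical", date(2019, 10, 1), date(2019, 10, 8)),
    ("high",     date(2019, 10, 3), date(2019, 11, 2)),
    ("critical", date(2019, 11, 5), None),  # still open
    ("medium",   date(2019, 9, 20), date(2019, 10, 1)),
]

# How many, broken down by risk classification (open and closed alike).
by_severity = Counter(sev for sev, _, _ in vulns)

# Time to remediation, in days, for closed findings only.
ttr_days = [(closed - opened).days for _, opened, closed in vulns if closed]
mean_ttr = mean(ttr_days)

print(by_severity)  # Counter({'critical': 2, 'high': 1, 'medium': 1})
print(mean_ttr)     # 16.0
```

Useful as far as it goes, but as the paragraph above argues, this tells you nothing about training effectiveness, threat coverage, or the patch management process.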
There are two leading maturity models that include metrics as part of their maturity levels: openSAMM (open Software Assurance Maturity Model) (http://www.opensamm.org/) and BSIMM (Build Security In Maturity Model) (https://www.bsimm.com/about/). Yet when trying to digest the outcomes of the specific activities, there is little actual substance or specificity in how to establish the stated metrics in order to meet the objective from a maturity perspective. For example, an openSAMM Level 3 activity states "Collect metrics for historic spend" (https://www.owasp.org/index.php/SAMM_-_Strategy_&_Metrics_-_3) – yet what does this truly mean? How will the cost of historic spend help the organization make decisions? BSIMM is just as vague. A Level 2 activity, "Create or grow a satellite" (https://www.bsimm.com/framework/governance/software-security-metrics-strategy/), is about building and recognizing security champions across the development organization. From a metrics perspective, however, what information should be collected, and how will it help with a business decision? There is certainly value in growing a base of programmers who understand software risk, but this does not necessarily help with an understanding of a problem. So let's look at both activities from the key foundation perspective, as shown in Table 1.
|Key Metric Foundation||openSAMM: Collect metrics for historic spend||BSIMM: Create or grow a satellite|
|Enable important and key Decisions||What are the Key Decisions to be made? This information should be captured as part of the overall program, not as an after-the-fact activity. Metrics captured in the present can be used to drive decisions 3 to 6 months in advance while simultaneously helping a program, versus looking at potential mistakes that paint a negative image of previous work.||What are the Key Decisions to be made? This could potentially be used to underscore the importance of growing a community of security-aware developers; however, that should already be an integral part of the overall software risk effort. Other measures can be leveraged to determine awareness and foster more immediate decisions that occur as part of the program, versus a measure that is a bolt-on after the fact.|
|Objective||This requires an extensive amount of manual input with numbers from known and unknown sources.||This requires an extensive amount of manual input with numbers from known and unknown sources.|
|Foundation based||The underlying source is not well understood, as it will likely come from a mix of individuals as well as some technology, e.g. time tracking, resource tracking, etc.||The underlying source is likely from individuals only and, to some extent, may also come from skills-based testing from training sources. Other sources may include defect tracking and other technologies; however, the nature of these sources is not well understood.|
|Repeatable||Because of the vagueness in the Objective and Foundation, a simple and consistent metric that is kept updated is unknown.||Because of the vagueness in the Objective and Foundation, a simple and consistent metric that is kept updated is unknown.|
|Measure only what matters||The underlying business, risk, and security objectives are not completely known. While a historical view of spend is a talking point, there isn't a link to a decision point on what the metric shows.||The underlying business, risk, and security objectives are not completely known. While a historical view of spend is a talking point, there isn't a link to a decision point on what the metric shows.|
|Less is the General Rule||This metric does not meet the Less is the General Rule principle because it does not derive from a well-understood base that will help decision-makers at all levels make a decision now.||This metric does not meet the Less is the General Rule principle because it does not derive from a well-understood base that will help decision-makers at all levels make a decision now.|
Table 1: Evaluating the openSAMM and BSIMM metrics activities against the key metric qualities.
Therefore, it is difficult and often daunting for an organization not only to understand the context of each activity and how it correlates to business and risk needs, but equally daunting to understand where the data will come from to meet the metric's objective. Equally confusing, the metrics activities in both openSAMM and BSIMM give the appearance that the two are not only competing but also attempting to introduce a new paradigm for how security is measured within an organization. These challenges are, in the author's view, one of the main reasons organizations see measuring Software Risk as a hard problem.
A More Effective Metrics Model
A fair question at this point is: if openSAMM and BSIMM are not effective models for measuring Software Risk, then what is? As stated in "How DevSecOps Should be Defined", the 20 critical security controls offer an excellent model for the important elements to measure within Software Risk. Fifteen of the 20 controls lend themselves to a naturally measurable activity, while the other five are an extrapolation blending those 15 with other well-known metrics in the security industry. Since Software Risk is, at its core, a way to identify and mitigate risk, the same risk mitigation techniques and processes used in the industry today can be leveraged and applied. There is no need to reinvent the wheel, as openSAMM and BSIMM appear to do. Let's start with a high-level view of a Software Security Risk Framework derived from the Categories/Domains at the heart of openSAMM and BSIMM, as shown in Table 2.
|Governance||Threat Management||Controls||Release||Software Supply Chain|
|Processes, activities, and technologies to help an organization manage Software Risk, including Policy, Compliance, and Regulatory enforcement.||Processes and activities used in the categorization and identification of Threats and Threat Actors, Threat Modeling, Security Requirements, and development of Standards and Requirements for Secure Design and Architecture.||Processes, activities, and technologies to support building security into the Development and Testing phases. This includes the integration and implementation of a centralized framework, APIs, Secure Coding Guidelines, Application Risk Assessments, Security Testing, Training, etc.||Processes, activities, and technologies to support periodic testing of applications and systems, change management, logging, incident response, and hardening of applications and dependent systems.||Processes, activities, and technologies to understand risks to the organization with the adoption and use of 3rd Party / open source components, outsourced development of software, vendor-supplied software, acquired software, etc.|
The categories in Table 2 will be discussed in greater detail throughout the remainder of the book to demonstrate the important concepts and goals of Software Risk adoption, implementation, and metrics, aligned with the key qualities of the metrics program. Table 2 is also the basis for shaping the metrics methodology and model, onto which the 20 critical security controls are now layered to identify the specific measures to capture. As shown in Figure 7 of "How DevSecOps Should Be Defined", Table 3 is an overlay of the Software Risk Framework onto the key areas of the 20 critical security controls that can be used as the key metrics.
|#||Critical Security Control||Security Risk Framework Category||Description|
|1, 2||Inventory of Authorized and Unauthorized Devices and Software||Software Supply Chain||All known and approved services, software, and dependent components must be properly documented and their risks identified. Any exceptions must be properly noted and reviewed periodically. Unknown components will introduce risk.|
|3||Secure Configurations for Hardware and Software|| ||The proper configurations to mitigate against the identified threats must be centralized and integrated into the design of any system. The use of policies, procedures, compliance, and regulatory requirements is also included to meet compliance as well as reduce risk.|
|4||Continuous Vulnerability Assessment and Remediation||Release, Software Supply Chain||A program that identifies and remediates vulnerabilities throughout the development process is key to the overall risk mitigation and risk management program. Catching vulnerabilities as early as possible protects the organization from costly remediation activities post-release.|
|5||Controlled Use of Administrative Privileges||Threat Management, Release, Software Supply Chain||The requirement for and monitoring of administrative-level users within a system will protect an organization from intentional or unintentional compromise. The number of administrative users must be kept to a minimum and safeguards provided to mitigate the potential threat.|
|6||Maintenance, Monitoring, and Analysis of Audit Logs||Release, Software Supply Chain||Application security logging and monitoring is an important part of the incident management and response process. Understanding function-level behavior can be a trigger to, or an indication of, a compromise in progress. This behavior extends beyond the service and infrastructure layer components.|
|7||Email and Web Browser Protections||Release||Determining access to and configuration of email and web browser services as part of the functional requirements of the application is key to protecting those services from being used as avenues of attack. Additionally, the service load and expected service load are important parameters to know in order to determine requirements and monitoring.|
|8||Malware Defenses||Release, Software Supply Chain||Backdoors, unknown or undocumented pathways, ports, and the like can be used to compromise an organization. Detecting and reporting on the state of potential malware within a system allows the organization to act quickly before the application is released.|
|9||Limitation and Control of Network Ports, Protocols, and Services||Software Supply Chain||Using the Least Privilege concept at the network layer, identifying and allowing only those ports that are absolutely necessary is key to reducing the attack surface of an organization. Additionally, an understanding of the number of open ports gives an organization clear visibility into areas of potential compromise and helps it deploy appropriate assets to mitigate exposure.|
|10||Data Recovery Capability||Release, Software Supply Chain||An important concept is for an application to fail safely, meaning the application fails in a state that does not subject the organization to compromise and allows for data recovery in a restored state. The use of appropriate error handling and a decision matrix with supporting services will provide insight into a failure state and how recovery is processed after a failure.|
|11||Secure Configurations for Network Devices|| ||The security requirements of a particular application may require configurations to mitigate against known threats. Additionally, compliance and regulatory requirements may require unique configurations to protect the information. The number and type of configurations assist in the consistent implementation of these requirements.|
|12||Boundary Defense||Release||The type and number of boundary defenses will assist in establishing the security requirements used to protect information. This can also determine whether additional defense mechanisms are needed to mitigate identified threats.|
|13||Data Protection||Threat Management, Controls, Release||The type of data; where the data is used, stored, and processed; and the required protection mechanisms based on requirements are key elements in protecting the information from compromise. This also identifies the resources that require the use of the data and what type of access is required.|
|14||Controlled Access Based on the Need to Know|| ||Knowing who the users of a system are, what data those users require, and the access level necessary to the data will determine overall exposure and risk to the organization.|
|15||Wireless Access Control||Release||Other avenues of access into the application are important elements to know, understand, and mitigate against in order to reduce the attack surface to a minimal and controlled level.|
|16||Account Monitoring and Control|| ||Monitoring all levels of access to information, how often the information is accessed, what actions are performed during access, and whether the user has elevated access helps identify potential misuse and abuse of the system. In addition, login attempts, successes, and failures also identify potential issues as well as provide indications of peak usage, etc.|
|17||Security Skills Assessment and Appropriate Training to Fill Gaps|| ||Having trained and skilled staff allows a continued focus on implementing the important aspects of software risk. The type of training can be determined by the frequency of vulnerabilities, compliance and regulatory requirements, and turnover.|
|18||Application Software Security||All Categories||The monitoring of the activities identified in the application software security controls as part of the overall SDLC can be measured to determine effectiveness and efficiencies and to drive continuous improvement into the SDLC.|
|19||Incident Response and Management||Release||Logging and monitoring are an integral part of the Incident process, aligning application security logging activities into the Incident Response and Management flow. Alerts can be generated and threat intelligence used to determine whether an active threat is underway or whether a system is under stress from an unexpectedly high number of sessions.|
|20||Penetration Tests and Red Team Exercises||Release||A final test to determine whether any remaining vulnerabilities exist, along with adherence to security requirements, policies, procedures, and compliance and regulatory requirements, rounds out the risk mitigation process. This can also be used to identify new threats and align existing requirements with new ones as part of an organic threat management system.|
Table 3: The 20 critical security controls mapped to the Software Risk Framework.
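One practical use of the Table 3 mapping is rolling per-control measures up to the framework categories, so a Governance dashboard can report at the category level. A minimal sketch, assuming the mapping is held as a dictionary (only a subset of controls shown; names follow the critical security controls):

```python
from collections import defaultdict

# Illustrative subset of the control-to-category mapping from Table 3.
CONTROL_CATEGORIES = {
    "Continuous Vulnerability Assessment and Remediation": ["Release", "Software Supply Chain"],
    "Boundary Defense": ["Release"],
    "Data Protection": ["Threat Management", "Controls", "Release"],
}

def rollup_by_category(per_control_counts):
    """Aggregate per-control metric values (e.g. open findings)
    up to the Software Risk Framework categories."""
    totals = defaultdict(int)
    for control, value in per_control_counts.items():
        for category in CONTROL_CATEGORIES.get(control, []):
            totals[category] += value
    return dict(totals)

# Hypothetical finding counts per control.
counts = {"Boundary Defense": 3, "Data Protection": 5}
print(rollup_by_category(counts))
# {'Release': 8, 'Threat Management': 5, 'Controls': 5}
```

Because a control can map to several categories, one measure feeds multiple category totals, which mirrors how a single control supports more than one part of the framework.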
Putting it All Together
Now that the metrics methodology is taking shape, it is time to map the Software Risk Framework to the Agile methodology discussed in "How DevSecOps Should be Defined" by way of the DevSecOps processes and procedures. Table 4 overlays the Software Risk Framework onto the DevSecOps stages, which together provide the concept for DevSecOps and show how the 20 critical security controls will drive the metrics captured and used throughout the various stages.
|DevSecOps Stage||Software Risk Framework Categories|
|Plan & Measure||Governance, Threat Management|
|Develop & Test||Controls|
|Release & Deploy||Release, Software Supply Chain|
|Monitor||Controls, Release, Software Supply Chain|
Table 4: Software Risk Framework categories overlaid onto the DevSecOps stages, with Continuous Innovation, Feedback, and Improvements spanning the entire cycle.
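Encoded as a lookup, the Table 4 overlay lets each pipeline stage pull only the metric categories relevant to it. A minimal sketch; the stage names follow Table 4, and the function name is illustrative:

```python
# Table 4 as a lookup: DevSecOps stage -> Software Risk Framework categories.
STAGE_CATEGORIES = {
    "Plan & Measure":   ["Governance", "Threat Management"],
    "Develop & Test":   ["Controls"],
    "Release & Deploy": ["Release", "Software Supply Chain"],
    "Monitor":          ["Controls", "Release", "Software Supply Chain"],
}

def categories_for(stage: str) -> list:
    """Return the framework categories whose metrics apply at this stage."""
    return STAGE_CATEGORIES.get(stage, [])

print(categories_for("Develop & Test"))  # ['Controls']
```

A CI/CD job at the "Develop & Test" stage, for instance, would collect and report only the Controls-category measures, keeping each stage's metric set small in line with the "Less is the General Rule" quality.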
With the framework established, overlaid with the key security controls, and with an understanding of how the controls and framework will drive metrics collection during the development of systems, Figure 3 summarizes the concept of DevSecOps and the adoption and implementation of the Software Risk metrics methodology.