The first draft of the Consensus Audit Guidelines (CAG) was published on February 23, 2009, according to the SANS press release. As represented in the press release, the CAG includes 20 controls, 15 of which can be automated. Further, the CAG defines tests that can be used to determine whether each control is effectively implemented.
According to the CAG, a "Consortium of US Federal Experts" identified the 20 Most Critical Controls "essential for blocking known high-priority attacks":
Critical Controls Subject to Automated Measurement and Validation:
1: Inventory of Authorized and Unauthorized Hardware
2: Inventory of Authorized and Unauthorized Software
3: Secure Configurations for Hardware and Software For Which Such Configurations Are Available
4: Secure Configurations of Network Devices Such as Firewalls and Routers
5: Boundary Defense
6: Maintenance and Analysis of Complete Security Audit Logs
7: Application Software Security
8: Controlled Use of Administrative Privileges
9: Controlled Access Based On Need to Know
10: Continuous Vulnerability Testing and Remediation
11: Dormant Account Monitoring and Control
12: Anti-Malware Defenses
13: Limitation and Control of Ports, Protocols and Services
14: Wireless Device Control
15: Data Leakage Protection
Additional Critical Controls (not directly supported by automated measurement and validation):
16: Secure Network Engineering
17: Red Team Exercises
18: Incident Response Capability
19: Assured Data Back-Ups
20: Security Skills Assessment and Training to Fill Gaps
The CAG also defined the scope of each control description to "include how attackers would exploit the lack of the control, how to implement the control, and how to measure if the control has been properly implemented, along with suggestions regarding how standardized measurements can be applied."
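To make the idea of automated measurement concrete, here is a minimal sketch of the kind of test an agency might script for Control 1 (hardware inventory). The file names and identifier format are my own illustrative assumptions, not anything specified in the CAG:

```python
# Hypothetical sketch of an automated test for Control 1 (hardware inventory).
# Assumes two input files: an authorized asset inventory and a list of devices
# discovered by a network scan. Neither format comes from the CAG itself.

def load_ids(path):
    """Read one device identifier (e.g., a MAC address) per line."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

authorized = load_ids("authorized_inventory.txt")  # hypothetical file
discovered = load_ids("scan_results.txt")          # hypothetical file

unauthorized = discovered - authorized  # on the network but not approved
missing = authorized - discovered       # approved but not seen on the network

# A simple measurement: the control is only fully effective when every
# discovered device appears in the authorized inventory.
coverage = 100.0 * len(discovered & authorized) / max(len(discovered), 1)
print(f"Inventory coverage: {coverage:.1f}%")
print(f"Unauthorized devices: {sorted(unauthorized)}")
print(f"Authorized but unseen: {sorted(missing)}")
```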
As part of my review, I took into consideration existing resources available to the federal government, such as NIST guidance and GAO audit guidelines. After an extensive review, I found the CAG document missed several critical components that would be necessary for it to effectively augment the existing guidance and "best practices" already available to organizations, both private and public, in managing their security programs. Although setting priorities is important in determining where agencies should focus their efforts, it could increase risk if agencies do not consider how all the controls required in the existing NIST SP 800-53 control catalog would protect their information and information systems.
The NIST control baselines, although subject to review and revision to account for changes in the federal government's mission and priorities and in current threat exposure and trends, have a broad scope and include most, if not all, of the controls in the CAG. The CAG was advertised as a subset of NIST SP 800-53 Rev. 3 and addressed only a few of the baseline controls (excluding enhancements); however, it did not fill many of the gaps that would, in the long run, better prepare agencies across the U.S. Government to benchmark the security of their organizations with "acceptable measures." Federal initiatives such as the National Checklist Program (NCP), the Security Content Automation Protocol (SCAP), the National Vulnerability Database (NVD), the Common Vulnerability Scoring System (CVSS), and DHS's "Build Security In" have specific goals and objectives, and they rely on a partnership between private and public organizations to achieve results. The CAG fails to tie itself directly to these initiatives, which already have existing roadmaps.
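To illustrate the kind of standardized measurement these initiatives already provide, the following sketch computes a CVSS v2 base score from a vector string. The weights and equations come from the CVSS v2 specification; the parsing helper and the sample vector are my own illustration:

```python
# Minimal CVSS v2 base-score calculator. Metric weights and equations are
# taken from the CVSS v2 specification; the sample vector is illustrative.

WEIGHTS = {
    "AV": {"L": 0.395, "A": 0.646, "N": 1.0},    # Access Vector
    "AC": {"H": 0.35, "M": 0.61, "L": 0.71},     # Access Complexity
    "Au": {"M": 0.45, "S": 0.56, "N": 0.704},    # Authentication
    "C":  {"N": 0.0, "P": 0.275, "C": 0.660},    # Confidentiality impact
    "I":  {"N": 0.0, "P": 0.275, "C": 0.660},    # Integrity impact
    "A":  {"N": 0.0, "P": 0.275, "C": 0.660},    # Availability impact
}

def base_score(vector):
    """Compute the CVSS v2 base score from a vector like 'AV:N/AC:L/Au:N/C:P/I:P/A:P'."""
    metrics = dict(part.split(":") for part in vector.split("/"))
    w = {name: WEIGHTS[name][metrics[name]] for name in WEIGHTS}
    impact = 10.41 * (1 - (1 - w["C"]) * (1 - w["I"]) * (1 - w["A"]))
    exploitability = 20 * w["AV"] * w["AC"] * w["Au"]
    f_impact = 0.0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact, 1)

print(base_score("AV:N/AC:L/Au:N/C:P/I:P/A:P"))  # prints 7.5
```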
Initially, I thought the CAG would focus on system-level control auditing recommendations that would assist agencies in selecting appropriate "organization-defined" or system-specific control parameters, which agencies are otherwise left to generate or define on their own through policy. Additionally, I would have expected the CAG, as an initial draft, to supply data points and scenarios supporting the selection of the controls characterized as "critical" for agencies to "defend against external attack." This data would also be critical in helping agencies determine whether their threat exposure is consistent with, or different from, the baseline used to select the CAG control set. Compliance is not "one size fits all"; it needs to be based on multiple factors and considerations that help agencies effectively align their current environment with a set of controls that cost-effectively achieves the control objectives for protecting an organization's information and IT assets. Another key component that was left undefined, and would have benefited agencies, is a list of measurable metrics that organizations could use as a baseline for determining whether these controls are in fact implemented effectively and appropriately, or whether the organization should tailor the metrics to achieve a security posture more realistic for its budget and assessment of risk.
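As a hypothetical illustration (not drawn from the CAG), such a metrics baseline could be as simple as a target threshold per control, compared against measured values, with tailoring expressed as agency-specific overrides of the thresholds:

```python
# Hypothetical illustration of a tailorable metrics baseline. The metric
# names, thresholds, and measured values are invented for this sketch.

BASELINE = {
    "hardware_inventory_coverage_pct": 95.0,  # Control 1
    "patched_within_sla_pct": 90.0,           # Control 10
    "dormant_accounts_disabled_pct": 100.0,   # Control 11
}

def evaluate(measured, baseline, overrides=None):
    """Compare measured values against baseline thresholds,
    applying any agency-specific (tailored) overrides."""
    thresholds = {**baseline, **(overrides or {})}
    return {metric: measured.get(metric, 0.0) >= target
            for metric, target in thresholds.items()}

measured = {
    "hardware_inventory_coverage_pct": 97.2,
    "patched_within_sla_pct": 83.5,
    "dormant_accounts_disabled_pct": 100.0,
}

# An agency with a constrained budget might tailor the patching target down
# after assessing its own risk, rather than failing the baseline outright.
results = evaluate(measured, BASELINE, overrides={"patched_within_sla_pct": 85.0})
for metric, ok in results.items():
    print(f"{metric}: {'meets target' if ok else 'below target'}")
```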
Although readers of the CAG may vary in their analysis, I do feel the document highlights some very important points that agencies should consider in selecting controls for the areas they feel are weakest in their environment or most relevant to their threat exposure. However, I do not believe the initial draft of the CAG meets the goals it set out to achieve, and it should be adjusted accordingly. It is important to note that the CAG is still in a public review period; I am therefore hopeful that the authors will review the comments they receive, including the points I have noted and supplied as part of my feedback, and make the document a useful tool that focuses agencies' efforts without replacing or substituting the existing requirements agencies have already invested resources to implement or have planned as improvements to their own security programs.
Reference:
Consensus Audit Guidelines, Draft 1.0, February 23, 2009 (http://www.sans.org/cag/)