How the Common Criteria Really Works
While Shapiro gets the general idea, you should understand that the Protection Profile (PP) hasn't caught on like most of those involved in the project probably expected. It's optional, first of all, and it only works if groups independent of the security product companies (e.g., consumer rights groups) put in a lot of time and effort. There are exceptions (e.g., the Smart Card Protection Profile), but, for the most part, the consumers, or their independent representatives, must define their own security needs for this step to be of some utility.
The reason is that if the consumer doesn't understand the PP - the Controlled Access Protection Profile is a good example - then they won't understand the company's security target that claims conformance to that PP, nor what the evaluation - and its associated assurance level - really means.
The Security Target (ST) is the required document that Shapiro should be referencing. It scopes the evaluation, detailing all the security requirements the product satisfies, how it satisfies them, and what assurance the consumer should have that they really are satisfied (that's the evaluation assurance level or EAL). The ST can get those security requirements from one or more PPs, but that isn't required, or, like I said, what normally happens.
Now, that all changed to some degree in January 2000, when what's now the U.S. Committee on National Security Systems issued the National Security Telecommunications and Information Systems Security Policy No. 11 (or NSTISSP #11), requiring that all products dealing with national security information (read anything sold to the U.S. government) be evaluated against the CC. Oh, and here's a bunch of PPs that you can conform to.
So, just because a lot of companies - like Microsoft - chose to conform to those PPs doesn't mean it's the only route; it doesn't even mean it's the best route. It all depends on what you, the consumer, are looking for. Concerned about what access control mechanisms are in your operating system? Well, pick up the ST (available at the CC Web site), flip to the security functional requirements section and see what it claims from the Access Control Policy family (i.e., FDP_ACC; you can find your whole shopping list in the second part of the CC).
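To give you an idea of what you'd find on that shopping list: the simplest component in that family is FDP_ACC.1 (Subset access control), whose single element reads roughly as follows, with the bracketed assignments filled in by the ST author (I'm quoting from the version 2.x text, so check Part 2 itself for the exact wording):

```text
FDP_ACC.1.1  The TSF shall enforce the [assignment: access control SFP] on
             [assignment: list of subjects, objects, and operations among
             subjects and objects covered by the SFP].
```

An ST claiming conformance completes those assignments, so you can see exactly which subjects, objects and operations the product's access control policy actually covers.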
But Shapiro's bang on when he says that the EAL, the PP (if there is one) and the ST don't mean diddly-squat if you haven't made sure that your shopping list is covered off.
The Controlled Access Protection Profile
Shapiro is correct in saying that the requirements in the Controlled Access Protection Profile (CAPP) aren't enough to protect an Internet-connected system; but who said they were? No one. (At least I hope Microsoft didn't say that.) To quote the Common Criteria Evaluation and Validation Scheme Web site:
The CAPP was derived from the requirements of the C2 class of the U.S. Department of Defense (DoD) Trusted Computer System Evaluation Criteria (TCSEC), dated December, 1985...
1985, people! And, I would argue, the CAPP wasn't chosen because it was the most complete validated PP. Shapiro's right when he implies that Microsoft couldn't hope to satisfy that other "ported" set of requirements, the Labelled Security Protection Profile (the B1 replacement); and it's those very mandatory access requirements that we need on the Internet. This is where Shapiro's going when he talks about EROS (SELinux is another example), and how it won't work well with the ubiquitous discretionary access operating systems of today.
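To make that discretionary/mandatory distinction concrete, here's a toy sketch (invented names, labels and functions; not any real OS API): under discretionary access control the object's owner can widen access at will, while under mandatory access control a system-wide label policy has the final say and the owner can't override it.

```python
# Toy model of DAC vs. MAC decisions. All names and labels are
# illustrative -- this is not how any real operating system stores them.

# DAC: the object's owner controls the permissions, so a careless or
# compromised owner process can grant access to anyone.
dac_acl = {"/secret.txt": {"owner": "alice", "readers": {"alice"}}}

def dac_read_allowed(user, path):
    entry = dac_acl[path]
    return user == entry["owner"] or user in entry["readers"]

def dac_grant(granter, path, user):
    # Ownership alone is enough to widen access -- the "discretion" in DAC.
    if granter == dac_acl[path]["owner"]:
        dac_acl[path]["readers"].add(user)

# MAC: a system-wide policy compares subject and object labels; the
# owner cannot override it (the heart of the LSPP's mandatory access
# control requirements).
LEVELS = {"unclassified": 0, "secret": 1, "top_secret": 2}
subject_label = {"alice": "secret", "bob": "unclassified"}
object_label = {"/secret.txt": "secret"}

def mac_read_allowed(user, path):
    # Simple Bell-LaPadula-style "no read up" rule.
    return LEVELS[subject_label[user]] >= LEVELS[object_label[path]]

dac_grant("alice", "/secret.txt", "bob")  # alice hands out access...
print(dac_read_allowed("bob", "/secret.txt"))  # True: DAC obliges
print(mac_read_allowed("bob", "/secret.txt"))  # False: the label policy wins
```

The point of the sketch is the last two lines: on a pure DAC system, alice's grant is the end of the story; on a MAC system, bob's clearance is checked against the file's label no matter what alice says.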
Evaluation Assurance Level 4
To quote the Common Methodology for Information Technology Security Evaluation (i.e., the CEM: the document that states how to conduct CC evaluations at the various EALs):
EAL4 provides a moderate to high level of assurance. The security functions are analysed using a functional specification, guidance documentation, the high-level and low-level design of the TOE, and a subset of the implementation to understand the security behaviour...
Shapiro compares this analysis to an audit of a company's business practices. The CEM is not ISO 9001; it demands, among other things, that the product's design be sound. For example, to quote the high-level design content element ADV_HLD.2.3C:
The evaluator should make an assessment as to the appropriateness of the number of subsystems presented by the developer, and also of the choice of grouping of functions within subsystems. The evaluator should ensure that the decomposition of the [Target of Evaluation's (i.e., the product) Security Functions or TSF] into subsystems is sufficient for the evaluator to gain a high-level understanding of how the functionality of the TSF is provided.
Shapiro states that "essentially none of the code is inspected," and that such an examination isn't even required at EAL4. This is false. Notice the last input to the analysis quoted above: a subset of the implementation (i.e., some of the code). The size of this subset is left to the evaluator, with the understanding that if concerns arise while examining the initial sample, sampling should continue until the evaluator is confident in the product's ability to provide its stated security functions. To quote the implementation representation content element ADV_IMP.1.1C:
Other factors that might influence the determination of the subset include:
- the complexity of the design (if the design complexity varies across the [Target of Evaluation or TOE], the subset should include some portions with high complexity);
- the results of other design analysis sub-activities (such as work units related to the low-level or high-level design) that might indicate portions of the TOE in which there is a potential for ambiguity in the design; and
- the evaluator’s judgement as to portions of the implementation representation that might be useful for the evaluator’s independent vulnerability analysis (sub-activity AVA_VLA.2).
To continue the CEM quote on EAL4:
... The analysis is supported by independent testing of a subset of the TOE security functions, evidence of developer testing based on the functional specification and the high level design, selective confirmation of the developer test results, analysis of strengths of the functions, evidence of a developer search for vulnerabilities, and an independent vulnerability analysis demonstrating resistance to low attack potential penetration attackers. Further assurance is gained through the use of an informal model of the TOE security policy and through the use of development environment controls, automated TOE configuration management, and evidence of secure delivery procedures.
I'm not sure what Shapiro means by "no quantifiable measurements... of the software." No lines-of-code measurement? Something related to the number of system calls? While these properties certainly affect the keep-it-simple principle of security, I don't know that there are specific thresholds we could use in evaluating security products. (Well, beyond the extremes: I'm thinking of Schneier's Secrets & Lies... an estimated 60 million LOC in Windows 2000, over 3000 system calls in Windows NT 4.0!)
Certainly the CEM isn't that prescriptive. Things like the number of subsystems in the product's high-level design are left to the evaluator's judgment. Is the developer's choice of subsystems useful in understanding the product’s intended operation? Well then, whatever the number, it's served its purpose.
However, I do not believe, as Shapiro states, that an EAL4 evaluation "says absolutely nothing about the quality of the software itself;" in my opinion, the associated analysis is qualitative. I've mentioned the design requirements, but there's also ensuring that the developer has tested to that design, there's a verification of that testing and additional independent testing... And this is focused on the security functions claimed in the ST, remember. This evaluation says nothing about those functions that aren't security related. (Since they're where the money is, it's a safe bet that the developer's tested them.)
Documents related to the software development process are evaluated (e.g., configuration management and delivery), but the processes themselves are verified during a development site visit. And the quality of these processes is just as important as the quality of the software itself. If you can't guarantee that the evaluated software is what the customer is actually installing, what have you achieved?
Well, if you've made it this far, all I'm saying is that the CC is a tool; you, the consumer, can use it to specify your security needs (i.e., in a PP), and you, the developer, can use it to specify what your product secures and how (i.e., in an ST). That people compare products solely based on the EALs they were evaluated at is not the fault of the framework (use the STs, Luke!). Similarly, the levels themselves give the developer (or their sponsor) a choice in the amount of time and money they want to commit to the enterprise. How much assurance are your customers looking for? Do they simply want to know that the guidance documents you supply with the product will actually help them get it to a secure state (e.g., EAL1)? Or do they want to know that you've tested every single security function identified in your functional specification (e.g., EAL3)?
Having said all that, I would love to see a mandatory access operating system like SELinux go through a CC evaluation. If we could get that kind of security in the U.S. government, maybe, just maybe, it would get enough momentum to spill out into the U.S. population. And then? Oh, then the world, baby! :-)