My buddy's thoughts on Phillip Henry Mann's acquittal got me started... :-)
First, I don't agree that the story raises the question: should a criminal have a reasonable expectation of privacy? If the police have reasonable grounds for suspecting that someone's a criminal, then they can take that person downtown for questioning. So, yes, if they have reasonable grounds for suspecting someone's trafficking drugs, by all means, bring him in. The key point in this story is that after the initial pat-down and questioning (which I completely agree with; I don't want to see a cop wounded or killed by some scared kid any more than the next guy), they got the guy to empty his pockets because they were curious about the soft object they'd felt (as I read it). If they were surprised by what it was (i.e., they didn't suspect they'd find it when they approached the individual), and they'd convinced themselves that they weren't in any danger before asking to see the contents of the pocket, then there were no reasonable grounds for the search.
See, as with any aspect of the law, you've gotta go out to the boundary cases... those improbable, and often really scary, situations. In this case, yeah, you know, I'm not saying it'd be the end of democracy as we know it if this guy lost his weed, paid a fine, etc. But what about the 911: The Road to Tyranny footage of the woman being pulled over and eventually charged with obstruction of justice? For those who haven't seen that excellent documentary, think of ticket quotas or cop surliness taken to an unreasonable extreme. Legislation is our only protection against these thankfully rare sorts of abuse.
Now, on racial profiling, I'll just tackle a few of its many facets. First and foremost, one should always be mindful of the biases and mind-sets of the people involved; the concept of racial profiling should never be separated from the people involved in the real-world situation, because their biases will have a huge impact on how the concept is applied. For example, if it's obviously a crutch propping up sloppy work, then we, as a society, have a problem.
But, for the sake of argument, let's say that, from a counterterrorism standpoint, some degree of racial profiling makes sense strategically (as in your example of al-Qaida). That is, it makes sense to look for people of a specific ethnicity in the search for the rest of that particular terrorist group. Now, how that strategy is applied tactically - in certain American and Canadian cities, for example - is another kettle of fish entirely. We must compare the two very carefully (as I mentioned), keeping in mind the freedoms that we enjoy and the concept of being innocent until proven guilty in a court of law.
It's scary stuff, man. If you're on the right side of the law today, then, yeah, sure, pull over, let them search your car, your home, your pockets; you've got nothin' to hide. But, s**t, look out if you happen to be on the wrong side of things tomorrow.
Saturday, July 17, 2004
On wanting to believe versus believing...
I was doing my part to make Slashdot a better place, metamoderating away, when I came to a comment on Pascal's Wager: an argument for believing in God, basically. Well, reading the comment in context, I found a reply by Dunbar the Inept that echoed my thoughts on belief.
My religious experiences didn't touch on this wager, or any other argument for believing or not. It's as if it was assumed that I believed because I was in Sunday school, because I was confirmed (O.K., maybe there was an argument for the assumption in that case) and because I read some of the Bible, when, in fact, I was conflicted.
And these experiences aren't limited to my United Church of Christ days: as I briefly discussed last month, I quickly became discouraged as I read the Qur'an, with its strong language against anyone who doesn't hold belief in their heart. What's amusing is that, were I able to flick belief on like a switch, I would be insane (as Dunbar pointed out).
Tuesday, July 13, 2004
On kids and the Web...
HaloScan's basic account restricts comments on the owner's site to 1000 characters. Who knew I'd have so much to say on this post by Deirdre?
Brian captured my thoughts on the first point; security through obscurity shouldn't be your only line of defense.
On the second point, I'll bite. From the MacCentral article on the ruling:
If the law hadn't been challenged, a workable solution would now be in place... Parents wouldn't be afraid to leave their kids alone in the room with the computer on.
I disagree. If I were a parent - and I'm not, so, yes, take that into account - I wouldn't be relying on legislation or technology to assuage my fears about what's going on behind closed doors.
Depending on the age and maturity of the child, I would rely on either supervised Web time only, or my child's judgment and our open relationship. Gone are the days of unsupervised research with the children's encyclopedia or Britannica. If my kid needed to do research on tadpoles, filtering software (even the ICRA functionality in Internet Explorer seems to work well, although it requires some work on the Web site author's part) plus my supervision would be the only way to go. As they got older, some unsupervised time could be introduced... It comes down to being my responsibility.
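For what it's worth, ICRA labelling works by the site author embedding a PICS label in each page's head; Internet Explorer's Content Advisor then checks it against the thresholds the parent has set. Quoting from memory - so treat the descriptor codes and rating-service URL as illustrative, not gospel - a label declaring a clean site looks something like this:

<meta http-equiv="PICS-Label" content='(PICS-1.1 "http://www.icra.org/ratingsv02.html" l gen true for "http://www.example.com" r (nz 1 sz 1 vz 1 lz 1 oz 1))'>

That's the work on the Web site author's part I mentioned: unlabelled pages either slip through or get blocked wholesale, depending on how strict you've made the Content Advisor settings.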
Having the government determine what the "average person, applying contemporary community standards, would find obscene" scares me, to be honest. We're talking about getting rid of the artistic merit defense up here in Canada too, and I just keep thinking that, yes, it sounds reasonable in many, even most, scenarios that proponents bring up. But what about the cases at the extremes of the spectrum? What about the filmmaker who shies away from the story that needs to be told for fear of going to jail?
Tread lightly, people. We have to live with these decisions.
Card Skimming
My bank disabled my banking card over the weekend; apparently, I used it at a location that's under investigation for card skimming. When I first got the call Sunday morning, I thought I'd pegged the compromised automated banking machine, but today the teller told me it could've been any location - including stores - I'd banked at in the last month. (They don't give out the location to avoid compromising the investigation.)
Now, one scheme I've heard of reads the information off the magnetic stripe on your banking card while a camera in the pamphlet holder records you entering your PIN. I learned today that a common scheme ignores the PIN entirely: it copies your card and then forces a reset of the PIN with a master PIN, using the same kind of machine your branch uses when you change your PIN. Schneier would love it! Foiled again by a global secret (in all card writers, in this case).
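To make the "ignores the PIN" part concrete: a skimmer captures only what's encoded on the stripe, and the clear PIN isn't part of it. Here's a rough sketch - made-up card number, simplified fields - of what's sitting on Track 2 (the ISO 7813 track payment systems read):

    # Rough sketch: the data a skimmer reads off Track 2 of the magnetic
    # stripe (ISO 7813). The card number below is made up.
    def parse_track2(raw):
        data = raw.strip(';?')          # drop the start/end sentinels
        pan, rest = data.split('=')     # '=' separates the PAN from the rest
        return {
            'pan': pan,                 # primary account number
            'expiry': rest[:4],         # YYMM
            'service_code': rest[4:7],
            'discretionary': rest[7:],  # issuer data; the clear PIN is not here
        }

    print(parse_track2(';4520123456789012=0707101123456789?'))

Everything needed to clone the card is right there; the PIN has to come from somewhere else - hence the camera, or the master-PIN reset.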
The good news is that so long as I notify the bank within 24 hours of learning my banking card has been lost or stolen, I'm not liable for any of the subsequent charges. Same goes for the scenario where the bank informs me of the compromise, obviously (which is why I still suspect that it was that ABM I used on Friday; the bank disabled the card right away because they knew they'd be footing any bill the skimmers or their friends racked up). Now, this all assumes that I haven't contributed to the compromise (e.g., helpfully writing my PIN on a Post-it stuck to my card, or giving my card and PIN to my long-lost Uncle Bob so he can buy some smokes); should the bank be able to prove otherwise, I could be liable for even more than my account balance!
Saturday, July 10, 2004
On Shapiro's Understanding the Windows EAL4 Evaluation...
I should've done this a long time ago. It seems like every time a Common Criteria (CC) story hits Slashdot, some would-be expert hyperlinks to Jonathan S. Shapiro's Understanding the Windows EAL4 Evaluation, like it's the CC's Brutus. However, as often happens in discussions involving the CC, people misunderstand (and often overstate) what the CC and an EAL say about the security a product provides.
How the Common Criteria Really Works
While Shapiro gets the general idea, you should understand that the Protection Profile (PP) hasn't caught on like most of those involved in the project probably expected. It's optional, first of all, and, to work, it requires a lot of time and effort from groups independent of the security product companies (e.g., consumer rights groups). There are exceptions (e.g., the Smart Card Protection Profile), but, for the most part, the consumers, or their independent representatives, must define their own security needs for this step to be of any utility.
The reason is that if the consumer doesn't understand the PP - the Controlled Access Protection Profile is a good example - then they won't understand the company's security target that claims conformance to that PP, nor what the evaluation - and its associated assurance level - really means.
The Security Target (ST) is the required document that Shapiro should be referencing. It scopes the evaluation, detailing all the security requirements the product satisfies, how it satisfies them, and what assurance the consumer should have that they really are satisfied (that's the evaluation assurance level, or EAL). The ST can draw those security requirements from one or more PPs, but that isn't required - nor, as I said, is it what normally happens.
Now, that all changed to some degree in January 2000, when what's now the U.S. Committee on National Security Systems issued the National Security Telecommunications and Information Systems Security Policy No. 11 (or NSTISSP #11), requiring that all products dealing with national security information (read: anything sold to the U.S. government) be evaluated against the CC. Oh, and here's a bunch of PPs you can conform to.
So, just because a lot of companies - like Microsoft - chose to conform to those PPs doesn't mean it's the only route; it doesn't even mean it's the best route. It all depends on what you, the consumer, are looking for. Concerned about what access control mechanisms are in your operating system? Well, pick up the ST (available at the CC Web site), flip to the security functional requirements section and see what it claims from the Access Control Policy family (i.e., FDP_ACC; you can find your whole shopping list in the second part of the CC).
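To give a taste of what you'd find there - quoting from memory, so check CC Part 2 for the real wording - the subset access control component in that family reads something like:

FDP_ACC.1.1 The TSF shall enforce the [assignment: access control SFP] on [assignment: list of subjects, objects, and operations among subjects and objects covered by the SFP].

The ST author completes the assignments for their product, and the evaluator verifies that the product actually does what the completed requirement says.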
But Shapiro's bang on when he says that the EAL, the PP (if there is one) and the ST don't mean diddly-squat if you haven't made sure that your shopping list is covered off.
The Controlled Access Protection Profile
Shapiro is correct in saying that the requirements in the Controlled Access Protection Profile (CAPP) aren't enough to protect an Internet-connected system; but who said they were? No one. (At least I hope Microsoft didn't say that.) To quote the Common Criteria Evaluation and Validation Scheme Web site:
The CAPP was derived from the requirements of the C2 class of the U.S. Department of Defense (DoD) Trusted Computer System Evaluation Criteria (TCSEC), dated December, 1985...
1985, people! And, I would argue, it wasn't that the CAPP was the most complete validated PP. Shapiro's right when he implies that Microsoft couldn't hope to satisfy another "ported" set of requirements - the Labelled Security Protection Profile, or B1 replacement - and it's those very mandatory access requirements that we need on the Internet. This is where Shapiro's going when he talks about EROS (SELinux is another example), and how such a system won't work well with the ubiquitous discretionary access operating systems of today.
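If you've never seen mandatory access control up close, SELinux's type enforcement gives the flavour. A hypothetical policy rule (the type names are illustrative) looks something like this:

    # Processes in the httpd_t domain may read web content files - and
    # nothing here lets them touch, say, files in users' home directories.
    allow httpd_t httpd_sys_content_t:file { read getattr };

Anything not explicitly allowed by some rule is denied, and - unlike the discretionary model - a file's owner doesn't get to hand out access on a whim.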
Evaluation Assurance Level 4
To quote the Common Methodology for Information Technology Security Evaluation (i.e., the CEM: the document that states how to conduct CC evaluations at the various EALs):
EAL4 provides a moderate to high level of assurance. The security functions are analysed using a functional specification, guidance documentation, the high-level and low-level design of the TOE, and a subset of the implementation to understand the security behaviour...
Shapiro compares this analysis to an audit of a company's business practices. The CEM is not ISO 9001; it demands, among other things, that the product's design be sound. For example, to quote the CEM's guidance for the high-level design content element ADV_HLD.2.3C:
The evaluator should make an assessment as to the appropriateness of the number of subsystems presented by the developer, and also of the choice of grouping of functions within subsystems. The evaluator should ensure that the decomposition of the [Target of Evaluation's (i.e., the product) Security Functions or TSF] into subsystems is sufficient for the evaluator to gain a high-level understanding of how the functionality of the TSF is provided.
Shapiro states that "essentially none of the code is inspected," and that such an examination isn't even required at EAL4. This is false. Notice the last input to the analysis quoted above: a subset of the implementation (i.e., some of the code). The size of this subset is left to the evaluator, with the understanding that if concerns arise while examining the initial sample, sampling should continue until the evaluator is confident in the product's ability to provide its stated security functions. To quote the CEM's guidance for the implementation representation content element ADV_IMP.1.1C:
Other factors that might influence the determination of the subset include:
- the complexity of the design (if the design complexity varies across the [Target of Evaluation or TOE], the subset should include some portions with high complexity);
- the results of other design analysis sub-activities (such as work units related to the low-level or high-level design) that might indicate portions of the TOE in which there is a potential for ambiguity in the design; and
- the evaluator’s judgement as to portions of the implementation representation that might be useful for the evaluator’s independent vulnerability analysis (sub-activity AVA_VLA.2).
To continue the CEM quote on EAL4:
... The analysis is supported by independent testing of a subset of the TOE security functions, evidence of developer testing based on the functional specification and the high level design, selective confirmation of the developer test results, analysis of strengths of the functions, evidence of a developer search for vulnerabilities, and an independent vulnerability analysis demonstrating resistance to low attack potential penetration attackers. Further assurance is gained through the use of an informal model of the TOE security policy and through the use of development environment controls, automated TOE configuration management, and evidence of secure delivery procedures.
I'm not sure what Shapiro means by "no quantifiable measurements... of the software." No lines-of-code measurement? Something related to the number of system calls? While these properties certainly affect the keep-it-simple principle of security, I don't know that there are specific thresholds we could use in evaluating security products. (Well, beyond the extremes: I'm thinking of Schneier's Secrets & Lies... an estimated 60 million LOC in Windows 2000, over 3000 system calls in Windows NT 4.0!)
Certainly the CEM isn't that prescriptive. Things like the number of subsystems in the product's high-level design are left to the evaluator's judgment. Is the developer's choice of subsystems useful in understanding the product’s intended operation? Well then, whatever the number, it's served its purpose.
However, I do not believe, as Shapiro states, that an EAL4 evaluation "says absolutely nothing about the quality of the software itself"; in my opinion, the associated analysis is qualitative. I've mentioned the design requirements, but there's also ensuring that the developer has tested to that design, verification of that testing, and additional independent testing... And this is focused on the security functions claimed in the ST, remember. This evaluation says nothing about the functions that aren't security related. (Since they're where the money is, it's a safe bet that the developer's tested them.)
Documents related to the software development process are evaluated (e.g., configuration management and delivery), but the processes themselves are verified during a development site visit. And the quality of these processes is just as important as the quality of the software itself. If you can't guarantee that the evaluated software is what the customer is actually installing, what have you achieved?
Conclusion
Well, if you've made it this far, all I'm saying is that the CC is a tool; you, the consumer, can use it to specify your security needs (i.e., in a PP), and you, the developer, can use it to specify what your product secures and how (i.e., in an ST). That people compare products solely based on the EALs they were evaluated at is not the fault of the framework (use the STs, Luke!). Similarly, the levels themselves give the developer (or their sponsor) a choice in the amount of time and money they want to commit to the enterprise. How much assurance are your customers looking for? Do they simply want to know that the guidance documents you supply with the product will actually help them get it to a secure state (e.g., EAL1)? Or do they want to know that you've tested every single security function identified in your functional specification (e.g., EAL3)?
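For reference - again from memory, so check CC Part 3 for the official definitions - the seven levels are:
- EAL1: functionally tested
- EAL2: structurally tested
- EAL3: methodically tested and checked
- EAL4: methodically designed, tested and reviewed
- EAL5: semiformally designed and tested
- EAL6: semiformally verified design and tested
- EAL7: formally verified design and tested
Each level piles assurance requirements on top of the previous one; none of them adds a single security function to the product.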
Having said all that, I would love to see a mandatory access operating system like SELinux go through a CC evaluation. If we could get that kind of security in the U.S. government, maybe, just maybe, it would get enough momentum to spill out into the U.S. population. And then? Oh, then the world, baby! :-)