Showing posts with label infosec.

Thursday, August 18, 2011

Elections Canada on Internet voting

Chief Electoral Officer Marc Mayrand recently published his report on the 41st General Election, held earlier this year. It includes one reference to Internet voting:
Elections Canada has been examining Internet voting as a complementary and convenient way to cast a ballot. The Chief Electoral Officer is committed to seeking approval for a test of Internet voting in a by-election held after 2013.
The CBC headlined their article on the report with it: Elections Canada lobbies for test of online voting. Clearly the topic has gone mainstream. Overall, I see reasons for optimism: first, note that the press is making the distinction between electronic voting and online voting, an old lament of mine. Second, they've highlighted the proper implementation of the secret ballot as one of the concerns about voting online. And, finally, Elections Canada isn't racing ahead on this -- note that the statement I quoted doesn't include a deadline. They are also eliciting informed opinions, and remaining far more technology-agnostic than most folks would expect, I imagine:
Strategic initiatives
Our key strategies to support [the Accessibility] objective in the next five years are to:
... with the prior approval of Parliament, test a secure voting process during a by-election that allows electors to vote by telephone or Internet
Strategic Plan 2008-2013 (the emphasis is mine)

It isn't perfect, of course: that workshop made but one reference to the risk of coerced voting, as far as I could tell. Also, the public discourse -- well, such as it is in comments on press articles, and the questions raised at that workshop -- hasn't adequately quashed that old argument celebrating online banking (and tax filing, I've seen recently) as proof that the nut of Internet security has been cracked. As I've stated previously, that argument is based on a false premise. Still, I'm hopeful that the trials to come will be well run and their results thoroughly examined before any Internet-facilitated process débuts in an election on our national stage.

Saturday, March 06, 2010

21st century vote

Michael Geist linked to a Sun story about Alberta considering on-line/Internet voting for the province at some point in the future, using the term e-voting in the title of the post. This muddying of terms really worries me, and I maintain it has huge implications for this issue: while e-voting machines may one day be a viable option for elections in Canada, I have serious doubts about the same ever being true of on-line/Internet voting.

This quote from Alberta's Chief Electoral Officer highlights a few of my concerns:
I can do my banking online, but I can’t do my voting online... Once it has been proven to be effective, that the votes can be certified, all that security stuff can be looked after, I certainly see that as something that’s coming. Anything that we can do to make the process more accessible to electors is obviously a good thing.
First, the security requirements associated with on-line banking differ significantly from those associated with any Internet voting system. I would also suggest that the latter are much more complex: consider that, under the current system, a voter cannot be directly linked with his or her specific vote and is therefore free from being coerced to vote a certain way. Similarly, banks accept a certain level of fraud (including on-line fraud) as the price of doing business; I don't think the same can be said of any voting system we would consider using to determine the leadership of the country.

This brings me to my second point: there are complexities here that we shouldn't simply leave to other trials to sort out, be they in the EU, the US, or wherever. When officials in power use phrases like "security stuff" and imply that other smart people are doing things, so why aren't we, well, again, I get nervous. He uses the term certified. What does that mean to him, or to the people conducting the trial? If part of it includes proving that a particular user cast a particular vote -- certainly part of a plausible definition -- that would obviously have enormous privacy implications (it is completely unnecessary, and just asking for problems, however careful the government is with that information).

Finally, in addition to confusing e-voting machines with Internet voting -- I'm sure someone in power thinks trials of one have some bearing on the suitability of the other -- voter turn-out, or the lack thereof in recent years, always seems to come up in these discussions. And while I'll be the first to admit that it's an important issue, it's for that very reason that it should be divorced from any discussion about the voting systems to be used. Otherwise, the implication is that the advent of one-click Internet voting will bring the young voters in droves. On this point, I like the provincial NDP leader's comment (i.e., look at mandatory voting, as they have in Australia); while one could question the merit of the suggestion, the idea that voter engagement need not be synonymous with Internet voting is spot on.

Update: Geist on why thoughts of using Internet voting in provincial and federal elections are premature.

Thursday, April 09, 2009

More on my driver rating system idea

I witnessed an unusual traffic accident yesterday: what began as a typical rear-ending took a bizarre turn when the 'rear-ender' reversed at high speed and rammed the 'rear-endee' again!

As I navigated the Ottawa Police Web site later in the day, their instructions for submitting traffic complaints got me thinking about my driver rating system idea again: specifically, note how much information is required regarding the timing, the other driver, their vehicle, etc. Now, admittedly, I glossed over how a particular identifier would be singled out from the many potential vehicles in proximity to you, but, setting that aside for the moment, a lot of the information the police are looking for could be automatically generated; the process would also be more timely, and possibly even safer, if you compare it with the scenario where a person is trying to relay all that information over a handheld cell-phone while driving.

On the point of representing particular identifiers on a person's appliance, the balance between the cost and complexity (and safety) is at the crux of the problem: ideally, the system would visually represent the makes, models and positions involved, updating the information every few seconds, on a sizable screen that can be centrally located in the vehicle's dash. However, at a minimum, a multi-line text display of license plate number, make, model, colour and direction (with respect to your vehicle: so, front, back, left, right, etc.), updated regularly, would suffice. (You wouldn't want to rely on the license plate alone, since the vehicle could be screaming through an intersection on a path perpendicular to yours; also, straining to read a plate in your rearview mirror could be a serious distraction.)
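
For concreteness, here's a minimal sketch of the data that minimum display would cycle through; the VehicleSighting type and every field name are my inventions, not part of any real system:

    from dataclasses import dataclass

    @dataclass
    class VehicleSighting:
        plate: str      # read from the vehicle's RFID tag
        make: str
        model: str
        colour: str
        direction: str  # relative to your vehicle: front, back, left, right

    def render_display(sightings):
        # One text line per nearby vehicle, refreshed every few seconds.
        return "\n".join(
            f"{i + 1}: {s.plate} {s.make} {s.model} {s.colour} [{s.direction}]"
            for i, s in enumerate(sightings))

    print(render_display([
        VehicleSighting("ABCD 123", "Ford", "F-150", "black", "back"),
        VehicleSighting("WXYZ 789", "Honda", "Civic", "blue", "left")]))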

One final point on the subject of this system being a target: none of the contact information that the police require from the person reporting an incident should be included in this system. Depending on how drivers are issued their RFID tag and associated identifier, the DMV system or a separate system could be queried by police using the reporting appliance's identifier. That way, people who want to know the address of everyone who drives down their street need access to more than this system.
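
To illustrate the separation I have in mind (all names hypothetical): the rating system would hold only opaque identifiers, and mapping an identifier back to a person would require a query against a second, access-controlled system:

    RATING_DB = {}   # identifier -> ratings; no personal information here

    DMV_DB = {       # held by a separate, access-controlled system
        "TAG-0001": {"name": "J. Doe", "address": "123 Main St."}}

    def submit_rating(identifier, rating):
        RATING_DB.setdefault(identifier, []).append(rating)

    def police_lookup(identifier, authorization):
        # Only an authorized request (stubbed here) crosses the two systems.
        if not authorization:
            raise PermissionError("authorization required")
        return DMV_DB[identifier]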

Saturday, February 14, 2009

How's my driving? on a municipal scale

After being tailgated for a dozen blocks or so by a big black pick-up with tinted windows, a thought occurred to me: as I saw him cut off two other vehicles in less than a block of the double-lane road we were all travelling on, I wondered about a world where the three of us could present our combined assessment of that person's driving in some sort of public forum. This led me to further speculate on a system that went beyond public shaming, where enough poor assessments could affect someone's insurance rating or the number of points on their driver's licence.

At its most basic, the system would combine RFID tags and readers, and some simple appliance that would provide little beyond Internet access. As a practical aside, maybe the system could be partially financed by asking people to buy the optional appliance (that would include the RFID reader), while the RFID tag would be universally deployed in the licence plate. In other words, rating how others drive would be optional, but everyone within a certain radius would be able to rate your driving regardless.

As I see it, a city-run, Internet-accessible system would allocate a certain (small, at least initially) number of slots to each citizen on a periodic basis (say, monthly). It would have to be signature-based, but it would also need to scale well -- I'm thinking at least a million users (municipal in the sense of Ottawa, in other words... Manhattan would need a different system entirely) -- so I'm wondering if it would have to be session-based also, as opposed to some sort of asymmetric system that piggy-backs off the driver's licence renewal. Either way, the goal would be to make it somewhat difficult to spoof another person's identity, keeping in mind that cryptographic complexity is at odds with the 'simple' appliance I mentioned earlier.
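
As a rough sketch of the session-based option, assuming each appliance holds a per-period secret shared with the city system (the names and message format are mine):

    import hashlib, hmac, json, time

    SESSION_KEY = b"per-appliance secret issued by the city"  # hypothetical

    def sign_rating(rater_id, target_id, score):
        # Attach a keyed MAC so the server can reject spoofed submissions.
        body = {"rater": rater_id, "target": target_id,
                "score": score, "ts": int(time.time())}
        payload = json.dumps(body, sort_keys=True).encode()
        body["mac"] = hmac.new(SESSION_KEY, payload, hashlib.sha256).hexdigest()
        return body

    def verify_rating(msg):
        msg = dict(msg)  # don't mutate the caller's copy
        mac = msg.pop("mac")
        payload = json.dumps(msg, sort_keys=True).encode()
        good = hmac.new(SESSION_KEY, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(mac, good)

The appeal of a shared-key scheme is that it keeps the appliance cheap; the cost is that the city's system becomes a store of every secret, which feeds directly into the target problem I get to below.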

With these slots, a person could choose to rate any other identifier that their RFID reader is picking up at that moment. Obviously, the more complex the rating system, the less safe it would be to operate while driving, so I'm thinking that each identifier around you (i.e., other drivers) is assigned a number, and once you press that number, you then press '1' through '5' to rate that person's driving. (And maybe you have a different set of buttons for the rating system, so that it's clear that '1' is poor or '1' is stellar -- colours introduce other problems... maybe smiley faces and frowns -- since no two surveys are ever the same in that regard.)
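
In code, the two-press flow might look something like this (identifiers and key layout invented for the sketch):

    nearby = {1: "TAG-4413", 2: "TAG-0922", 3: "TAG-7008"}  # from the reader

    def handle_keys(vehicle_key, score_key):
        # Two presses total, to keep eyes-off-the-road time to a minimum.
        if score_key not in range(1, 6):
            raise ValueError("score must be 1 through 5")
        return nearby[vehicle_key], score_key

    target, score = handle_keys(2, 1)  # rate vehicle #2 as '1'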

So, to go back to my earlier example, if I pick up and rate that truck on my rating system appliance, and the two others who were cut off do the same, this city-run system would pick up three ratings of one identifier's driving with very similar timestamps. At this point, some sort of reputation system would qualify each of our ratings based on a number of factors: how often we submit ratings; how often those ratings are corroborated, both by drivers around us and by other drivers at other times of the day; how other drivers rate our driving; the number of years we've been driving; how many accidents we've had; etc. I'm going beyond the basic system with some of these factors, but the idea is that you would vary the number of slots each person gets, and the factors considered by the reputation system, over time, studying whether there were any appreciable benefits to introducing any of these complexities.
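
A sketch of how the corroboration part might work, with an illustrative time window and weighting (none of this comes from a real system):

    from collections import defaultdict

    WINDOW = 60  # seconds within which ratings count as corroborating

    def corroborated_scores(ratings, reputation):
        # ratings: (rater_id, target_id, score, timestamp) tuples
        # reputation: rater_id -> weight in [0, 1] from the reputation system
        by_target = defaultdict(list)
        for rater, target, score, ts in ratings:
            by_target[target].append((rater, score, ts))
        scores = {}
        for target, entries in by_target.items():
            total = weights = 0.0
            for rater, score, ts in entries:
                # ratings of the same identifier at nearly the same moment
                # reinforce each other
                peers = sum(1 for _, _, t in entries
                            if abs(t - ts) <= WINDOW) - 1
                w = reputation.get(rater, 0.5) * (1 + peers)
                total += w * score
                weights += w
            scores[target] = total / weights if weights else 0.0
        return scores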

One of the many unspoken costs behind this idea is the potential for abuse. It's fine to speculate on a secure, city-run system, but if we tie in too much information, or use the rating that pops out to impose serious penalties on people, the system would become too valuable a target to reasonably secure. However, if it's used to augment the systems we already have in place, I think it could work: if I knew that running this yellow light could get me my second poor rating of the day (and a strongly-corroborated poor rating, if the intersection's busy), I'd probably think twice about doing it.

And that's where the real strength of this system would be: you would want there to be very little incentive to damage a person's reputation, either by falsely submitting many uncorroborated ratings of others in their name, or by falsely submitting many poor ratings of their driving. The idea would be that identifiers that repeatedly came up as poor drivers, as rated by many different people, both at the same time and over a significant period of time, would have that reflected in a permanent record of some sort, keeping in mind that the most recent year's record would carry more weight than the one before it (much like accident records now).

As a side-note, this is my hundredth post; and in just six short years! ;-)

Friday, May 11, 2007

Conservatives propose to extend voting period

Two thoughts come to mind: 1) Has the government determined that a significant portion of the people who aren't voting cite polling booth hours when asked why they don't? And 2) Have they considered how this will change the polling booth security environment?

On the first point, I believe government employees are guaranteed a break to vote if their shift spans the polling booth hours. Can anyone confirm this? Or shed light on any private-sector policies?

On the second point, the longer they have to ensure the integrity of those ballot boxes, the greater their vulnerability.

Finally, if the answer to the first question is no, then the government could be wasting a lot of money, in areas related to the second question and beyond.

Thursday, May 10, 2007

The U.S. Genetic Information Nondiscrimination Act

Sharon Terry, president of the Genetic Alliance:
The American public can now access genetic tests, feel safe about their genetic information not being misused and participate in research that involves genetic information.

This is certainly a step down that path, but there are still many more to take: genetic information has many uses beyond screening related to employment and insurance. And the bigger problem is collecting, using and retaining these data properly.

Monday, October 04, 2004

Kernighan, debugging and assurance

I read a great Kernighan quote at Marquee de Sells today. For those who aren't familiar with Kernighan, he helped design the awk programming language and coauthored the first book on the C programming language.

I was struck by the security implications of programmers writing computer programs that they aren't smart enough to debug. The analysis and testing that makes up a security evaluation is comparable to debugging, and yet many of the evaluators I've worked with are not as smart as many of the programmers I know. In my experience, evaluator qualification requirements hold that a college or university degree in engineering or computer science, plus a few years of apprenticing, is sufficient for evaluating source code. Is this realistic?

Thinking about debugging and security brought me to the concept of a reference monitor: a reference monitor enforces the authorized access relationships (i.e., the policy) between the subjects and the objects of a system.[1]
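
A toy illustration of that definition, with an invented policy table (a real reference monitor sits in the kernel, but the enforcement shape is the same):

    POLICY = {
        ("alice", "payroll.db"): {"read"},
        ("bob",   "payroll.db"): {"read", "write"}}

    def reference_monitor(subject, obj, access):
        # Invoked on every access; small enough to analyze in full.
        return access in POLICY.get((subject, obj), set())

    assert reference_monitor("alice", "payroll.db", "read")
    assert not reference_monitor("alice", "payroll.db", "write")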

One implementation of the reference monitor concept was called a reference validation mechanism. Early examples of reference validation mechanisms were called security kernels, or that combination of hardware, firmware and software which implements the reference monitor concept.[2] Three design requirements of these reference validation mechanisms (and security kernels) were: 1) It must be tamper proof, 2) it must always be invoked, and 3) it must be small enough to analyze and test with complete assurance.[3]

It would seem to me that Kernighan's point speaks to a caveat on the third requirement: it must be small and simple enough to analyze and test with complete assurance.

Footnotes

1. Anderson, J. P., Computer Security Technology Planning Study, ESD-TR-73-51, vol. I, ESD/AFSC, Hanscom AFB, Bedford, Mass., October 1972 (NTIS AD-758 206).

2. Computer Security Technology Planning Study.

3. U.S. Department of Defense, Department of Defense Trusted Computer System Evaluation Criteria, December 1985.

Tuesday, July 13, 2004

Card Skimming

My bank disabled my banking card over the weekend; apparently, I used it at a location that's under investigation for card skimming. When I first got the call Sunday morning, I thought I'd pegged the compromised automated banking machine, but today the teller told me it could've been any location - including stores - I'd banked at in the last month. (They don't give out the location to avoid compromising the investigation.)

Now, one scheme I've heard of reads the information off the magnetic stripe on your banking card while a camera in the pamphlet holder records you entering your PIN. I learned today that a common scheme ignores the PIN, making a copy of your card and forcing a reset of the PIN with a master PIN, using the same machine you use to change your PIN at your branch. Schneier would love it! Foiled again by a global secret (in all card writers, in this case).

The good news is that so long as I notify the bank within 24 hours of learning my banking card has been lost or stolen, I'm not liable for any of the subsequent charges. Same goes for the scenario where the bank informs me of the compromise, obviously (which is why I still suspect that it was that ABM I used on Friday; the bank disabled the card right away because they knew they'd be footing any bill the skimmers or their friends racked up). Now, this all assumes that I haven't contributed to the compromise (e.g., helpfully writing my PIN on a Post-it stuck to my card, giving my card and PIN to my long-lost Uncle Bob so he can buy some smokes); should the bank be able to prove otherwise, I could be liable for even more than my account balance!

Saturday, July 10, 2004

On Shapiro's Understanding the Windows EAL4 Evaluation...

I should've done this a long time ago. It seems like every time a Common Criteria (CC) story hits Slashdot, some would-be expert hyperlinks to Jonathan S. Shapiro's Understanding the Windows EAL4 Evaluation, like it's the CC's Brutus. However, as often happens in discussions involving the CC, people misunderstand (and often overstate) what the CC and an EAL say about the security a product provides.

How the Common Criteria Really Works


While Shapiro gets the general idea, you should understand that the Protection Profile (PP) hasn't caught on like most of those involved in the project probably expected. It's optional, first of all, and, to work, it requires a lot of time and effort from groups who are independent of the security product companies (e.g., consumer rights groups). There are exceptions (e.g., the Smart Card Protection Profile), but, for the most part, the consumers, or their independent representatives, must define their own security needs for this step to be of some utility.

The reason is that if the consumer doesn't understand the PP - the Controlled Access Protection Profile is a good example - then they won't understand the company's security target that claims conformance to that PP, nor what the evaluation - and its associated assurance level - really means.

The Security Target (ST) is the required document that Shapiro should be referencing. It scopes the evaluation, detailing all the security requirements the product satisfies, how it satisfies them, and what assurance the consumer should have that they really are satisfied (that's the evaluation assurance level, or EAL). The ST can get those security requirements from one or more PPs, but that isn't required and, like I said, isn't what normally happens.

Now, that all changed to some degree in January 2000, when what's now the U.S. Committee on National Security Systems issued the National Security Telecommunications and Information Systems Security Policy No. 11 (or NSTISSP #11), requiring that all products dealing with national security information (read: anything sold to the U.S. government) be evaluated against the CC. Oh, and here's a bunch of PPs that you can conform to.

So, just because a lot of companies - like Microsoft - chose to conform to those PPs doesn't mean it's the only route; it doesn't even mean it's the best route. It all depends on what you, the consumer, are looking for. Concerned about what access control mechanisms are in your operating system? Well, pick up the ST (available at the CC Web site), flip to the security functional requirements section and see what it claims from the Access Control Policy family (i.e., FDP_ACC; you can find your whole shopping list in the second part of the CC).

But Shapiro's bang on when he says that the EAL, the PP (if there is one) and the ST don't mean diddly-squat if you haven't made sure that your shopping list is covered off.

The Controlled Access Protection Profile


Shapiro is correct in saying that the requirements in the Controlled Access Protection Profile (CAPP) aren't enough to protect an Internet-connected system; but who said they were? No one. (At least I hope Microsoft didn't say that.) To quote the Common Criteria Evaluation and Validation Scheme Web site:
The CAPP was derived from the requirements of the C2 class of the U.S. Department of Defense (DoD) Trusted Computer System Evaluation Criteria (TCSEC), dated December, 1985...

1985, people! And, I would argue, the CAPP wasn't even the most complete validated PP. Shapiro's right when he implies that Microsoft couldn't hope to satisfy the other "ported" set of requirements: the Labelled Security Protection Profile, or B1 replacement; and it's those very mandatory access requirements that we need on the Internet. This is where Shapiro's going when he talks about EROS (SELinux is another example), and how it won't work well with the ubiquitous discretionary access operating systems of today.

Evaluation Assurance Level 4


To quote the Common Methodology for Information Technology Security Evaluation (i.e., the CEM: the document that states how to conduct CC evaluations at the various EALs):
EAL4 provides a moderate to high level of assurance. The security functions are analysed using a functional specification, guidance documentation, the high-level and low-level design of the TOE, and a subset of the implementation to understand the security behaviour...

Shapiro compares this analysis to an audit of a company's business practices. The CEM is not ISO 9001; it demands, among other things, that the product's design be sound. For example, to quote the high-level design content element ADV_HLD.2.3C:
The evaluator should make an assessment as to the appropriateness of the number of subsystems presented by the developer, and also of the choice of grouping of functions within subsystems. The evaluator should ensure that the decomposition of the [Target of Evaluation's (i.e., the product) Security Functions or TSF] into subsystems is sufficient for the evaluator to gain a high-level understanding of how the functionality of the TSF is provided.


Shapiro states that "essentially none of the code is inspected," and that such an examination isn't even required at EAL4. This is false. Notice the last input to the analysis quoted above: a subset of the implementation (i.e., some of the code). The size of this subset is left to the evaluator, with the understanding that if concerns arise while examining the initial sample, sampling should continue until the evaluator is confident in the product's ability to provide its stated security functions. To quote the implementation representation content element ADV_IMP.1.1C:
Other factors that might influence the determination of the subset include:
  1. the complexity of the design (if the design complexity varies across the [Target of Evaluation or TOE], the subset should include some portions with high complexity);

  2. the results of other design analysis sub-activities (such as work units related to the low-level or high-level design) that might indicate portions of the TOE in which there is a potential for ambiguity in the design; and

  3. the evaluator’s judgement as to portions of the implementation representation that might be useful for the evaluator’s independent vulnerability analysis (sub-activity AVA_VLA.2).


To continue the CEM quote on EAL4:
... The analysis is supported by independent testing of a subset of the TOE security functions, evidence of developer testing based on the functional specification and the high level design, selective confirmation of the developer test results, analysis of strengths of the functions, evidence of a developer search for vulnerabilities, and an independent vulnerability analysis demonstrating resistance to low attack potential penetration attackers. Further assurance is gained through the use of an informal model of the TOE security policy and through the use of development environment controls, automated TOE configuration management, and evidence of secure delivery procedures.


I'm not sure what Shapiro means by "no quantifiable measurements... of the software." No lines-of-code measurement? Something related to the number of system calls? While these properties certainly affect the keep-it-simple principle of security, I don't know that there are specific thresholds we could use in evaluating security products. (Well, beyond the extremes: I'm thinking of Schneier's Secrets & Lies... an estimated 60 million LOC in Windows 2000, over 3000 system calls in Windows NT 4.0!)

Certainly the CEM isn't that prescriptive. Things like the number of subsystems in the product's high-level design are left to the evaluator's judgment. Is the developer's choice of subsystems useful in understanding the product’s intended operation? Well then, whatever the number, it's served its purpose.

However, I do not believe, as Shapiro states, that an EAL4 evaluation "says absolutely nothing about the quality of the software itself;" in my opinion, the associated analysis is qualitative. I've mentioned the design requirements, but there's also the requirement that the developer test to that design, the verification of that testing, and additional independent testing... And this is focused on the security functions claimed in the ST, remember. This evaluation says nothing about those functions that aren't security related. (Since they're where the money is, it's a safe bet that the developer's tested them.)

Documents related to the software development process are evaluated (e.g., configuration management and delivery), but the processes themselves are verified during a development site visit. And the quality of these processes is just as important as the quality of the software itself. If you can't guarantee that the evaluated software is what the customer is actually installing, what have you achieved?

Conclusion


Well, if you've made it this far, all I'm saying is that the CC is a tool; you, the consumer, can use it to specify your security needs (i.e., in a PP), and you, the developer, can use it to specify what your product secures and how (i.e., in an ST). That people compare products solely based on the EALs they were evaluated at is not the fault of the framework (use the STs, Luke!). Similarly, the levels themselves give the developer (or their sponsor) a choice in the amount of time and money they want to commit to the enterprise. How much assurance are your customers looking for? Do they simply want to know that the guidance documents you supply with the product will actually help them get it to a secure state (e.g., EAL1)? Or do they want to know that you've tested every single security function identified in your functional specification (e.g., EAL3)?

Having said all that, I would love to see a mandatory access operating system like SELinux go through a CC evaluation. If we could get that kind of security in the U.S. government, maybe, just maybe, it would get enough momentum to spill out into the U.S. population. And then? Oh, then the world, baby! :-)

Monday, December 22, 2003

In his latest Crypto-Gram, Bruce Schneier talks about the value of quantum cryptography:
I don't have any hope for this sort of [quantum-cryptographic] product. I don't have any hope for the commercialization of quantum cryptography in general; I don't believe it solves any security problem that needs solving. I don't believe that it's worth paying for, and I can't imagine anyone but a few technophiles buying and deploying it.

While I see his point, my understanding of the value of quantum cryptography - based almost solely on Simon Singh's The Code Book - is that it's yet to be seen. Secure communications that can only be broken today by an infeasible number of calculations will be broken in the time it takes to perform one such calculation in the age of quantum computing. This will be a new security problem that quantum cryptography can solve.
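
To put rough numbers on that (standard asymptotic results, quoted from memory rather than from Singh): factoring an RSA modulus n with the best known classical algorithm, the general number field sieve, takes sub-exponential time, while Shor's algorithm on a quantum computer is polynomial in the bit length:

    T_{\text{GNFS}}(n) = \exp\!\left( \left( \sqrt[3]{64/9} + o(1) \right)
        (\ln n)^{1/3} (\ln \ln n)^{2/3} \right)
    \qquad
    T_{\text{Shor}}(n) = O\!\left( (\log n)^{3} \right)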

Update: 8:12:00 PM: Bruce responded to my message:
My point is that software and network security are so lousy that breaking communications never comes down to the calculations, feasible or otherwise. It makes no sense to put a third lock on your front door if your windows are wide open.

Tuesday, November 25, 2003

Well, Pete Lindstrom is at it again. This time, he's quoted in a Register article on some Diebold ATMs that were infected with the Nachi worm. While not as boneheaded as his comparing virus writing and sex, this quote is still a beaut:
"I think of ATMs as a relative of SCADA systems, as those things not really being on the Internet, but being on some network," says Pete Lindstrom, an analyst with Spire Security. "In some ways, it's kind of ironic, that I think standardization across the board has created some of the issues."

Merriam-Webster defines irony (sense 3 a (1)) as "incongruity between the actual result of a sequence of events and the normal or expected result," or "an event or result marked by such incongruity."

So, what exactly is incongruous about standardization causing security problems? It may not be intuitive, but, as security professionals know, it's one of the disadvantages of homogeneous systems, to be balanced against their many advantages. Defense-in-depth ring a bell, Pete? If your network design calls for layered firewalls, use different products at each layer. Exploits that work against one layer will likely fail against the other.

Man, this guy is really starting to bug me.

Thursday, October 30, 2003

The fact that inaccurate voter lists are high on the list of reasons (if not the #1 reason) for not offering on-line voting to Ottawa voters does not reassure me in the least. Presumably, we'd all be receiving plain white envelopes marked Important Municipal Election Information if they could iron out that little wrinkle.

Out of curiosity, a friend of mine (one of the voters in this apparent field test in Prescott and Russell, and Stormont, Dundas and Glengarry) held his envelope in front of a common light bulb, easily reading his six-digit identification number and his four-digit authentication number (CanVote calls it a PIN).

This is truly scary. I thought these discussions (let alone implementations) were years off.

Thursday, June 26, 2003

Boy, if nothing else, this controversy over the University of Calgary's malware-writing course has certainly put the institution on the international map! The Risks Digest Volume 22: Issues 76 and 77 continue the debate.

Wednesday, June 18, 2003

I'm pleased to say that Bruce Schneier's opinion on malware-writing courses is in line with my own (if a bit harsher). :-) No matter what you think of him, there's no arguing the weight his opinion carries in security circles.

If you're interested in my opinion (and you must be somewhat interested; you're reading this, after all), read my posts from around the beginning of the month.

Tuesday, June 03, 2003

There's a fiery series (Chapter 2, Chapter 3 and Chapter 4 - I have no idea what happened to the first chapter) on the University of Calgary's virus-writing course at vmyths.com.

I was irate after reading the second chapter, ready to fill this blog with liberal doses of "there shouldn't be any restrictions on registering for university courses." Then the third chapter took the wind out of my sails. I still believe it; it's just a relatively small issue in the face of the United States' evolving view of Canada.

Monday, June 02, 2003

Is it unethical to teach students how to write computer viruses? Part 3 of this saga follows:

From: "John Jarvis"
To: efc-talk@efc.ca
Subject: Re: [EFC-Talk] University of Calgary going to teach virus writing
Date: Monday, June 02, 2003 12:04 AM

----- Original Message -----
From: "M Taylor"
To: efc-talk@efc.ca
Sent: Sunday, June 01, 2003 1:37 PM
Subject: Re: [EFC-Talk] University of Calgary going to teach virus writing

Prove it. Explain how a student (of Dr. Brunnstein's) who has not written a malicious piece of software is less equipped to deal with new security threats than the student who wrote a file virus or macro virus (in Dr. Aycock's class).


Obviously I can't prove that. All other education and experience being equal, Dr. Aycock's graduates will have one extra tool in their belts. Will that advantage amount to anything? I don't know, but I don't think it's unethical to teach it.

Explain how having written some malware will help any professional deal with unknown malware in the wild. I argue that having had additional time to study reverse engineering of unknown code/executables is far more useful for dealing with new threats in the wild.

And I would agree that reverse engineering capabilities would be invaluable to these students; why is it a question of one or the other? Again, it's just one more option available to the student. You're inundated with information at university, most of which has little direct correlation to your future profession. However, on occasion you surprise yourself with an indirect application, something only clear in hindsight. One of Dr. Aycock's graduates may have one of those moments down the road, but I'm certainly not going to sit here and tell you I can map that connection.

There are basically about 20 categories of security vulnerabilities, the bulk of which were known about in the 1970s, and virtually all by 1990. I believe viruses and malware have around 10 attack vectors, the majority of which were written about by Fred Cohen in 1983-86. As a senior undergrad level course, I do not expect a lot of novel research to be done within this course.

I'll be the first to admit that I've been surprised by how much ground we're rediscovering in this field; however, I find your expectations very presumptuous.

Rehashing old, well-understood malicious software by writing their own implementation does not look towards the horizon; it will be an exercise in programming, possibly even at the scripting / macro language level (e.g. VB, VB for Apps).

I don't believe that this course will "rehash" anything. That's exactly what university courses avoid by teaching theory; it's up to the student to apply it to the state of the art. They may indeed use dated examples or assignments (that's to be seen), but to drive home concepts, not to teach them to write Uber-nimda.

I spend far too much of my professional and personal time fixing "experiments in software", and I do not see a good risk/reward benefit from such an unproven method of teaching that warrants a possibly reckless course of action.

I don't doubt your experience, but it seems that the UoC does see the benefit. How could they go about proving it to your satisfaction? Is it the lab safeguards that you're concerned about, or would nothing short of eras[ing] the student's brains at the end of the course satisfy you?

Saturday, May 31, 2003

The argument over teaching computer virus and malware writing continues on the EFC's talk mailing list:

From: "John Jarvis"
To: efc-talk@efc.ca
Subject: Re: [EFC-Talk] University of Calgary going to teach virus writing
Date: Saturday, May 31, 2003 7:18 PM

----- Original Message -----
From: "M Taylor"
To: efc-talk@efc.ca
Sent: Saturday, May 31, 2003 6:24 PM
Subject: Re: [EFC-Talk] University of Calgary going to teach virus writing

Dr. Brunnstein claims that teaching the writing of malicious software is unethical, not teaching about malicious software, which is something he himself does.


Yes, I realize that and I don't agree with him. Graduates who do not know how to write malware will not be as effective at combatting it as those who do. It's a question of the level of knowledge we're equipping these students with, and raising that level, as a goal, is not unethical in my mind.

I tend to think that spending time writing malicious software may not be the best way to learn and understand.

But it may be. There's a world of difference between being able to speak to something and being able to do it. Most people learn through application.

Just as most security professionals do not write new exploits, I do not think anyone would seriously argue that all security experts should publish new exploits into the public knowledge, especially while the vulnerability is not fixed in the target software. Even the full disclosure movement, such as some authors on the Bugtraq mailing list, has moved to the better security researchers giving reasonable lead times to affected vendors/authors before publishing the mere fact that a vulnerability exists. Fewer researchers publish actual exploits, and most are more concerned with reducing the threat of vulnerable systems.

So, what you're saying is, these graduates will only be good for writing exploits? Or they'll be more inclined than others? Who said anything about publishing exploits in the public domain? None of the course work leaves the lab.

Most security professionals are out there helping organizations defend themselves as best they can today. Of course those guys aren't writing the latest stuff. We need people looking ahead, developing the systems that will protect us from threats that are on the horizon. I don't know that these graduates will be any better at that than Dr. Brunnstein's graduates, but it's worth a try.

As for full disclosure vs. lead time, that's a tangent, and one that we certainly haven't figured out yet. There've been prominent cases of companies squandering reasonable lead times in inter-departmental blame wars.

Understanding malware is something the entire computing/IT community needs more of, but I am not certain that to get there we need more practicing (academic or otherwise) virus writers.

Yes, but should we shut the whole thing down because you aren't certain? This is a worthwhile experiment, in my mind.

I don't think Arson Investigators spend a lot of time setting fires, but they do practice examining fires.

My point was that spin doctors *could* have a field day describing the courses, not that the association actually teaches that stuff.

I'm glad the University of Calgary isn't backing down on its decision to teach computer virus and malware writing. I particularly liked this quote from the statement they issued, defending their decision:
Is there another way to teach about stopping viruses without providing adequate knowledge so that the students could write a virus? The answer is simple: No. Anyone who claims they can fight a virus but could not write one is either uninformed or trying to mislead for other reasons.

And then there are the naysayers (from an InformationWeek article on the statement):
"That is utterly ridiculous," says Pete Lindstrom, research director for Spire Security. "There are plenty of ways to gain the same level of knowledge other than the destructive knowledge of having students create new viruses. We don't teach sex education by having students have sex in class."

I'll tell you what's utterly ridiculous: comparing malware to sexual intercourse. Any couple, and I do mean any couple, can have sex. Understanding malware well enough to design defenses against it? Not so much. Sex education protects our kids; we're pretty sure they've figured out most of the act by the time they're sitting in the class. If I complete that analogy, we're talking about a course that teaches students the dangers of executable e-mail attachments. :-/ I'm sorry, we're expecting a whole lot more from these graduates.

I sent the following message to Electronic Frontier Canada's talk mailing list in response to an attack on the new computer virus and malware writing course to be offered at the University of Calgary:

From: "John Jarvis"
To: efc-talk@efc.ca
Subject: Re: [EFC-Talk] University of Calgary going to teach virus writing
Date: Saturday, May 31, 2003 5:41 PM

I think it's important to challenge the axioms of IT security, and I was intrigued by Dr. Aycock's ideas. Setting aside his SARS analogy, I agree with him on the game of catch-up that security professionals are playing today. By dismissing most of the proposal as unethical, I feel that Mr. Brunnstein is doing it a disservice.

Yes, the idea of "thinking like an attacker" doesn't leave me with a warm feeling (both as an IT security professional and a connected citizen), but I didn't have a problem with it, given the context of the Web page, when I first read it. What students take away from that course will depend upon the professor, just like any other course. Knowing how to write malware *does* give you a weapon, just like knowing how to set fires well gives you a weapon. I'll bet the International Association of Arson Investigators, Inc. could make some of their course descriptions look pretty menacing too. In both cases, you *choose* what to do with that knowledge.

People appreciate knowledgeable and trustworthy professionals informing them about flaws in their home security system, regardless of whether that professional learned his or her trade in the classroom or first hand.

I'm not saying we shouldn't be concerned about teaching this sort of material; on the contrary, I think the course should be heavily audited to get some *informed* discussion going amongst academics and security professionals alike. All the absolutes thrown around in Mr. Brunnstein's message truly struck me as fear mongering.

John L. Jarvis, BCS

----- Original Message -----
The original message was forwarded verbatim from The Risks Digest Volume 22: Issue 75.