[E-voting] Testing REV systems. Was: UK shelves plans for e-voting trials

Craig Burton caburt at alphalink.com.au
Fri Oct 28 05:40:32 IST 2005


Brian, I haven't tried to argue that our system would withstand the 
attacks you propose.  Some attacks, however, might not yield useful 
results, and I discuss those.  I suggest others.  I noted, and have 
removed from this reply, the other tests you suggest, as the mail is long.

>
>Did these groups include or consult people with expertise in voting? 
>(Perhaps Rebecca Mercuri or even some esteemed members of this and 
>other citizen action groups). I presume they included people with 
>information security expertise.
>  
>
I can say Netcraft probably did not have that depth of voting-application 
expertise.  They were looking purely at the voting client application, 
with voter privileges, and at the public interfaces of the servers.

>Were these groups given a broad remit (find a flaw, any flaw) or were 
>they asked to test certain sections of the system?
>  
>
Netcraft: the voter interface, for anything awry and for any holes into 
the server.  They tried a list of known exploits against the servers 
just before the election.  Prior to that, they examined the interfaces 
of the machines from within the data centre.

>If you ask such a group to look at the technology there is a strong 
>chance they will miss attacks that can be effected physically or 
>socially. The system as a whole must be open to attack, not just the 
>technology.
>  
>
The non-technical, non-transactional security was not Netcraft's brief.  
I expect an auditor could cover this; Actica did examine broader aspects 
of the security.

If you are interested in the pilots, and in whether any of the tests in 
this email were considered by the UK Electoral Commission, here is the 
Stratford report (not technical):
http://www.electoralcommission.org.uk/templates/search/document.cfm/8267

The security approach we took is covered in detail on page 38 of the 
technical report on all pilots, at the link below.  Also search for 
"Strand", as that was the UK consortium lead: a UK company was required 
as lead, and mine is Australian.
http://www.electoralcommission.org.uk/templates/search/document.cfm/8944

I worked directly with Nedap/Powervote, who incorporated internet-vote 
EML totals (as totalled and issued by VoteHere.com, who were my "back 
office") with their pollsite machines.  You will read that our system 
was hobbled to make it fit the then EML2 requirements: seals, but no 
encryption.  Overall, I think EML is a good thing, but modularisation 
of voting components calls for other kinds of tests on the interaction 
between the components.
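
To make "seals, but no encryption" concrete, here is a minimal sketch of 
how a receiving component might verify a detached seal on an EML totals 
file.  The file names, the key handling and the SHA-256-with-RSA choice 
are my assumptions for illustration, not the pilot's actual mechanics.

import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.KeyFactory;
import java.security.PublicKey;
import java.security.Signature;
import java.security.spec.X509EncodedKeySpec;

public class SealCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical file names: the totals document, its detached
        // seal, and the issuing authority's DER-encoded public key.
        byte[] totals = Files.readAllBytes(Paths.get("eml-totals.xml"));
        byte[] seal   = Files.readAllBytes(Paths.get("eml-totals.sig"));
        byte[] keyDer = Files.readAllBytes(Paths.get("issuer-public.der"));

        PublicKey key = KeyFactory.getInstance("RSA")
                .generatePublic(new X509EncodedKeySpec(keyDer));

        // The seal authenticates the totals but, unlike encryption,
        // does nothing to hide their content in transit.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(key);
        verifier.update(totals);
        System.out.println(verifier.verify(seal)
                ? "seal OK" : "seal FAILED");
    }
}

This is exactly the kind of component boundary that interaction tests 
should hammer: take the output one module seals, vary it, and see what 
the next module accepts.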

>Were the groups asked to work within a set of assumptions (eg: assume 
>the voter is vigilant enough to read dialogs about certified code)?
>  
>
Not those assumptions: I can't comment on QinetiQ or Actica.

Consider also that we need only a random minority of voters to check 
signatures, not all of them.  We do have the ability to instruct voters 
to do this via the paper voting information (as opposed to relying on 
voters' vigilance alone).  So the attack would have to have a 
significant, measurable effect.
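
As a back-of-the-envelope illustration of why a random minority 
suffices: if each voter checks independently with probability p, an 
attack that alters k ballots escapes every check with probability 
(1 - p)^k.  The 5% checking rate below is my assumption, not a pilot 
figure.

public class DetectionOdds {
    public static void main(String[] args) {
        double p = 0.05;  // assumed fraction of voters who check signatures
        int[] tamperedCounts = {1, 10, 100, 1000};
        for (int k : tamperedCounts) {
            // the attack goes unnoticed only if all k affected voters skip checking
            double undetected = Math.pow(1.0 - p, k);
            System.out.printf("k=%4d  P(detected)=%.4f%n", k, 1.0 - undetected);
        }
    }
}

Even at 5%, altering 100 ballots is caught with better than 99% 
probability; an attack small enough to stay hidden is also too small to 
swing much.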

>>>A few ground rules:
>>>- Announce the testing program well before the election (at least
>>>six months). Remember in the real world people can anticipate
>>>elections years in advance.
>>>      
>>>
>>They also get the source codes for the client application (which is
>>a java applet).  But this can only be published after close of
>>withdrawals as all candidate information is embedded in the applet.
>>    
>>
>
>Am I right in suspecting that the applet is largely coded before the 
>close of withdrawals, and that the embedded candidate information is 
>a relatively small or self-contained piece? In this case the code 
>from one election can be used to devise attacks on the next election.
>  
>
There are common parts of the applet that are re-used, with the following 
considerations:
1. We change the applet code frequently as part of maintenance.
2. We run an "obfuscator" on the compiled Java bytecodes, which changes 
the applet bytecodes.  The reasons for this are that the applet is 
smaller afterward; that the obfuscator replaces variable names with 
noise, so the bytecodes cannot be predicted exactly from the source 
(which breaks an attempted bytecode trojan, or a trojan listening for a 
certain sequence of bytecodes in the JVM); and that decompiling renders 
very messy code.  The obvious disadvantages are that it breaks the 
option of comparing the published applet with another compile of the 
source (illustrated below), and that we have to trust the obfuscator.
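
As an illustration of that comparison problem (not our actual build 
chain): an obfuscator in the mould of, say, ProGuard renames identifiers 
into noise, so recompiling the published source can never reproduce the 
published class files byte for byte.

// Illustrative only.  Conceptually the obfuscator turns
//   public void castBallot(Candidate choice) { sendEncrypted(choice); }
// into something like
//   public void a(b c) { d(c); }
// so digests of "published" and "rebuilt from source" classes differ.
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class CompareCompiles {
    public static void main(String[] args) throws Exception {
        // Hypothetical paths: the applet class as served to voters,
        // and the same class compiled locally from the published source.
        System.out.println(digest("published/VotingApplet.class"));
        System.out.println(digest("mybuild/VotingApplet.class"));  // differs
    }

    static String digest(String path) throws Exception {
        byte[] hash = MessageDigest.getInstance("SHA-256")
                .digest(Files.readAllBytes(Paths.get(path)));
        StringBuilder hex = new StringBuilder();
        for (byte b : hash) hex.append(String.format("%02x", b));
        return hex.toString();
    }
}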

We have considered a code-writing routine that re-arranges the flow of 
the applet each time one is made.  This means "dumb" trojans looking for 
bytecodes will not find them in the expected sequence, but it doesn't 
fix a vulnerability in the code.  A vulnerability would have to make it 
past the auditors and others looking on; the auditors may request that 
an erudite or obscure section of code be re-written by us.  I accept 
that a "strong" applet relies on "strong" execution of its bytecodes by 
the JVM.

Attackers should try to trojan a JVM.
Attackers should also try to trojan a PC and then attempt a window 
overlay on the voting application.
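
For the overlay test, here is a minimal Swing sketch of the kind of 
window an attacker would float over the applet.  It is purely 
illustrative and assumes nothing about our codebase; tracking the target 
window and capturing keystrokes are the hard parts a real test would 
have to add.

import javax.swing.JLabel;
import javax.swing.JWindow;
import javax.swing.SwingUtilities;

public class OverlayProbe {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JWindow overlay = new JWindow();  // undecorated: no title bar
            overlay.setAlwaysOnTop(true);     // floats above other windows
            overlay.add(new JLabel("  fake ballot UI would render here  "));
            overlay.pack();
            overlay.setLocation(200, 200);    // a real attack tracks the target
            overlay.setVisible(true);
        });
    }
}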

>
>>There might be other attacks against the "secure printer" or voter
>>credential delivery mechanism, against the certificate verification
>>/ revocation service (Thawte) for the voting client and the server
>>SSL service.
>>    
>>
>
>Absolutely, now you are getting into the spirit of it.
>  
>
The hard aspect of this is estimating the likelihood of an attack 
succeeding in an ongoing way, and of the attack having a significant 
impact on the result.  The tests are made harder to execute and measure 
by the fact that they would be executed in a real election.  Schneier 
describes how something approximating a real bomb was put in real 
luggage as a test of baggage security, and then the bag was lost.  We 
would have to defend testing against examples like that.

I would like to see these tests done.  I'd hope they would be part of a 
virtuous cycle of improvement, which is the point of technology pilots.

Even with improvements, there would have to be a practical, affordable 
amount of ongoing testing.  The systems may have to become simpler to 
facilitate practical testing, yet remain strong.
Best,
Craig.

>  
>
>>All the tests are valid and should be done.  I will try to have
>>them done; I can't pay the testers, so I have to persuade the
>>Electoral Commission, a university or the ODPM to pay, if the next
>>pilot is in the UK.
>>
>>Thanks again,
>>Craig.
>>    
>>
>
>Brian.
>  
>