[Fwd: Re: [E-voting] About Estonian e-voting]

Craig Burton caburt at alphalink.com.au
Mon Oct 24 14:47:04 IST 2005


>> The point of the mobile code, the signing, etc., is to make it so 
>> that the server code cannot affect votes.  If it can't affect votes, 
>> then it need not be trusted as much.  The same is true for the data 
>> centre operators, sysadmins, etc.
>>
>
> This doesn't make sense.  Say there are two components in a system, 
> A and B.  A is audited and its code is signed by someone; B is not.  
> A does some processing and passes the data to B.  B can do whatever 
> it wants with the data.  The fact that A was signed doesn't help at 
> all.

If A is audited and signed, it does what it is meant to do and is 
difficult to modify.  One of the things it is meant to do is correctly 
encrypt our data.  It is far better to protect data than it is to 
protect machines.  The encrypted data are impossible to read.  We then 
sign the encrypted data so they too are difficult to modify.  The data 
can then pass through points B, C, D, E and F safely.  The remaining 
risks are that the encrypted votes are either copied or deleted.  
Copies are easy to detect, as they are identical, and can be ignored.  
Deleted votes are harder to detect: we need our sample of voters to 
use the VVAT system to notice that a vote has gone missing.  This 
assumes all other processes for preventing deletion have failed.
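
Here is a minimal sketch of the encrypt-then-sign idea in Python, 
using the "cryptography" package.  All the names are illustrative (a 
single client signing key stands in for per-voter credentials); this 
is not the actual election software:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

election_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
client_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def cast(vote: bytes):
    # A encrypts the vote, then signs the ciphertext.
    ciphertext = election_key.public_key().encrypt(vote, OAEP)
    signature = client_key.sign(ciphertext, PSS, hashes.SHA256())
    return ciphertext, signature

def relay(ciphertext: bytes, signature: bytes):
    # B, C, D, E, F can only pass the pair along: they cannot read the
    # vote, and any modification makes this check raise InvalidSignature.
    client_key.public_key().verify(signature, ciphertext, PSS, hashes.SHA256())
    return ciphertext, signature

def dedupe(pairs):
    # Identical copies of an encrypted vote are trivially detected.
    seen, unique = set(), []
    for ct, sig in pairs:
        if ct not in seen:
            seen.add(ct)
            unique.append((ct, sig))
    return unique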

Perhaps I haven't explained this very well.  A massive piece of 
software certainly is opaque.  What the above does is abstract out the 
parts that are required to make or modify data (the votes).  If those 
parts can be protected, and the votes they create can be protected, 
then the remaining 99% of the software can do naught.

>
>>>
>>> Auditing and compiling code by a third party doesn't seem practical 
>>> to me.
>>> First, auditing is not guaranteed to find all bugs. 
>>
>> The real threat is not bugs, it's malware.  The Linux kernel hack of a 
>> missing "=" sign is typical of a bug but it was very unusual as a 
>> piece of malware.  With a very small piece of software of a few 
>> thousand lines, one that has no external dependencies (except 
>> compiled-in signed libraries), I think it would be hard to hide 
>> malware, especially malware we know is going to try to perform some 
>> very specific actions on vote data.
>>
>
> Malware is another threat. It's not *the* threat.

Well, I can't argue the virtues of bugs over malware, but bugs don't 
try to hide.  Bugs are impartial.  Bugs may be caught by common tests 
such as regression tests or boundary tests; malware requires parallel 
testing.  I think malware will be a lot harder to hide in a small, 
self-contained software application.  If the software is simple enough 
in its construction, it may be possible to execute exhaustive 
automated testing against it.  It may even be practical to use 
program-proving techniques.  These would give us great confidence that 
bugs were insubstantial or non-existent.
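
As a sketch of what exhaustive automated testing could look like for a 
routine that small, here is a hypothetical ballot-encoding function 
checked over its entire input domain (the function and candidate list 
are stand-ins, not real election code):

# Hypothetical ballot-encoding routine, small enough to test over its
# whole input domain rather than by sampling.
CANDIDATES = ["Alice", "Bob", "Carol"]

def encode_choice(i: int) -> bytes:
    if not 0 <= i < len(CANDIDATES):
        raise ValueError("invalid choice")
    return CANDIDATES[i].encode("ascii")

def decode_choice(b: bytes) -> int:
    return CANDIDATES.index(b.decode("ascii"))

# Exhaustive check: every legal input round-trips and every illegal
# one is rejected.  No sampling, no luck involved.
for i in range(-10, len(CANDIDATES) + 10):
    if 0 <= i < len(CANDIDATES):
        assert decode_choice(encode_choice(i)) == i
    else:
        try:
            encode_choice(i)
        except ValueError:
            pass
        else:
            raise AssertionError("accepted illegal choice %d" % i)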

>
>>> Second, the process of
>>> auditing, building and signing a piece of software is too complicated,
>>> too hard to observe, and too easy to interfere with.
>>
>> Agreed.  We have to trust someone somewhere.  Even in a paper election 
>> no one observer can be everywhere at once.  We have to trust that the 
>> other observers saw nothing bad.  For the software, the best way to do 
>> this is to give out the source code (signed by us) to several groups 
>> for inspection.  If no group complains, one group compiles the code on 
>> a known clean compiler, and they and the others then sign the compiled 
>> object.  I concede that, unlike watching ballot boxes being sealed, 
>> someone who wants to watch software signing will actually see very 
>> little, but we should rely on several groups' satisfaction with the code.
>>
>> I got a quote from CSC.com to perform this audit-compile-sign task 
>> for an election.  They said it would take a person-week.  We had this 
>> done by another software firm in 2003; it took four days.  For a 
>> major election, this seems like a reasonable task.
>>
>
> I'm sure you would find some companies who would do it in an hour or 
> two :)

> Unfortunately, code reviews just don't catch very many bugs.  Most 
> bugs are found in testing and actual live use of a system.  And if 
> people cannot verify their votes then there is no way of detecting 
> them.

It's not really a code review.  Ideally, the process should be like 
program proving.
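
Whatever we call the review, the sign-off at the end of it can be 
checked mechanically by anyone holding the reviewing groups' public 
keys.  A minimal sketch, again in Python with the "cryptography" 
package (the group keys and the binary are hypothetical):

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.exceptions import InvalidSignature

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

def count_signoffs(binary: bytes, signoffs):
    # signoffs: (group_public_key, detached_signature) pairs, one per
    # reviewing group.  Anyone can re-run this against the deployed
    # object and the published signatures.
    approving = 0
    for public_key, signature in signoffs:
        try:
            public_key.verify(signature, binary, PSS, hashes.SHA256())
            approving += 1
        except InvalidSignature:
            pass
    return approving  # e.g. require every group to approve before use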

I would hope a good-sized election would employ parallel testing.  We 
might ask a group of non-voters (outside the jurisdiction, or whatever) 
to use the service and individually record how they voted.  We make 
sure they get credentials that look like the real thing but are not in 
the register.  We don't have the person->credential link scrubbed for 
these voters.  We decrypt their votes and compare them to the voters' 
own records.  Only a small sample of people would be needed, say 100 
for 10,000 real votes.
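
A sketch of the comparison step, assuming a ballot store keyed by 
credential and a decrypt_vote function (both hypothetical names):

def check_parallel_test(decrypt_vote, ballots, test_voters):
    # ballots: {credential: ciphertext} as received by the server.
    # test_voters: {credential: recorded_vote} for the seeded
    # non-voters, whose person->credential links were kept on purpose.
    missing, altered = [], []
    for credential, recorded in test_voters.items():
        if credential not in ballots:
            missing.append(credential)      # vote deleted
        elif decrypt_vote(ballots[credential]) != recorded:
            altered.append(credential)      # vote modified
    return missing, altered

With 100 seeded voters, a process that silently deletes even a few 
percent of votes is very likely to hit at least one of them.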

>
>>>
>>>> Only one voter needs to spot this signature not being intact and 
>>>> the game's up.  Probably 1% of voters might do this; that's 
>>>> plenty.  Other parts of the system need to be audited and signed 
>>>> as well.  These are any parts that can make or modify votes.
>>>
>>> Who is going to know which parts can modify votes or not? Ultimately,
>>> you end up having to trust the developers of the system.
>>
>> There are three reasons this can be done:
>> 1. Developers have to "out" the system (to certification authorities, 
>> academics, media, whomever).  Just saying it's proprietary doesn't 
>> cut it.  It has to be an open system.
>> 2. Examining the mobile code confirms it encrypts and signs votes.  
>> Only the private key and the voter credential lists matter now.  No 
>> other software can intervene.
>> 3. Developers of voting software had better make a demonstrably 
>> strong system lest they get their kneecaps broken by those who still 
>> think the operators can be coerced.  I have a young family; why would 
>> I provide a system I couldn't demonstrate was very hard to game?  Bev 
>> Harris found 15 Diebold techos' home addresses...  it's a lot cheaper 
>> to threaten and coerce than it is to bribe someone.
>>
>
> I'm afraid that doesn't really answer the question.  Software running 
> in a data center (or on a PC) cannot be seen by the voters.  If some 
> part of that software is audited and signed, then that is really a 
> meaningless assurance for the voters.  In fact it's worse than that.  
> It creates an impression of security, when there is really none.

My answer is that you don't just have to trust the developers.  Their 
work is exposed, and it is within the scope of the election to have 
that work seriously examined, at the code level.

The solution is to rely on less software, have it properly audited, 
have it use modern security techniques, and keep it simple.  That is 
in fact a good foundation for a secure system.



