
On Sat, 7 Nov 2015 02:37:11 PM Rohan McLeod wrote:
> But then I remembered mention of automotive OS hackers taking over control of a vehicle, http://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway/
> and all of a sudden the nightmare possibility of 'driving' in a vehicle which has the reliability of a Windows PC, augmented with the capacity to crash and kill me, struck full force. [Apologies to those who retain the original usage of 'hacking' as 'DIY OS and IT enthusiasm'; this is obviously the more modern denigrating usage: 'OS cracker, vandal, etc.']
There are already many computers controlling essential parts of cars. If the engine control system (which has been a standard feature of every car since about 1990) were to stop working on a freeway it could be fatal, and ABS and ESC add further computer-controlled systems whose failure increases the risk.

For large-scale risks, the computers at all nuclear power plants are probably just as vulnerable to a Stuxnet-type attack as the Iranian centrifuges were. The risk of wholesale death from a series of Chernobyl-type events is probably greater than that of small numbers of deaths from cars.

It would be possible to implement an emergency stop system in autonomous cars. Press a button and a secondary computer, which has no direct connection to the primary computer, takes over and slows the vehicle to a stop while broadcasting a warning to surrounding vehicles.
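To make that concrete, here is a minimal sketch in C of what the secondary computer's control loop might look like. All of the I/O functions (estop_button_pressed, apply_brake, broadcast_warning) are hypothetical stand-ins for the real hardware interfaces of such a controller, simulated here so the sketch compiles and runs on its own:

#include <stdio.h>
#include <stdbool.h>

static double speed_kmh = 100.0;  /* simulated wheel-speed sensor reading */

/* Hypothetical: poll the dedicated e-stop button (simulated as pressed). */
static bool estop_button_pressed(void)
{
    return true;
}

/* Hypothetical: command the brake actuator.  The secondary computer owns
 * this path outright; the primary computer cannot block or override it. */
static void apply_brake(double decel_kmh_per_tick)
{
    speed_kmh -= decel_kmh_per_tick;
    if (speed_kmh < 0.0)
        speed_kmh = 0.0;
}

/* Hypothetical: radio broadcast to surrounding vehicles. */
static void broadcast_warning(void)
{
    printf("WARNING: vehicle performing emergency stop\n");
}

int main(void)
{
    /* The loop takes input only from the button and the speed sensor and
     * never accepts commands from the primary computer, so a compromised
     * primary has no path by which to suppress the stop. */
    if (estop_button_pressed()) {
        while (speed_kmh > 0.0) {
            broadcast_warning();
            apply_brake(10.0);  /* fixed, gentle deceleration per tick */
            printf("speed now %.0f km/h\n", speed_kmh);
        }
        printf("vehicle stopped\n");
    }
    return 0;
}

The design choice that matters is the isolation: the stop logic depends on nothing the primary computer provides, which is exactly why it remains usable after the primary is compromised.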
> I guess the question which is bothering me is this: in these automotive and perhaps aeronautical applications of OSs, where bugs, hacks, and security holes really are 'life and death' issues, are the security holes which have been found just programmer carelessness, or are they reflective of the seemingly never-ending generation and discovery of bugs and security glitches?
A microkernel OS would probably be a good option, as drivers and services run as isolated user-space processes communicating by message passing, so a single faulty or compromised component can't take down the rest of the system.

--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/