Malware, Microsoft and virtualization-based security

[Moving this to luv-talk as I'm taking it off-topic for luv-main] Craig Sanders via luv-main <luv-main@luv.asn.au> wrote:
many are still "happily" using XP (i.e. they don't know any better) - as the recent virus fiasco at RMH shows. workstations across the entire hospital, from pharmacy to the wards taken out by really ancient XP viruses. I know, i've been stuck in here for much of it. some sections (fortunately, the transplant clinic was one) had upgraded to win7 but many were still running XP.
Just wondering what is currently considered best practice for protecting a modern Microsoft Windows machine against malware and exploitation? I read an interesting article over the weekend: http://www.malwaretech.com/2015/09/device-guard-beginning-of-end-for.html according to which MS have implemented new security measures based on the virtualization instructions of the CPU. If I understand correctly, parts of the kernel responsible for verifying signed executables are compartmentalized using virtualization. Malware can compromise the remainder of the kernel without compromising the hardware-protected code. Only signed executables can be run, and UEFI "secure boot" is used. Thus they reduce the size of the Trusted Computing Base considerably. Apparently, under Microsoft NT-derived systems, the windowing and graphics code all runs in kernel mode - surely providing plenty of opportunity for attackers.

On 2/02/2016 11:59 PM, Jason White via luv-talk wrote:
[Moving this to luv-talk as I'm taking it off-topic for luv-main] Craig Sanders via luv-main <luv-main@luv.asn.au> wrote:
Just wondering what is currently considered best practice for protecting a modern Microsoft Windows machine against malware and exploitation?
I'm not sure there is a good recommendation. Education is the best thing: when people don't do things they are not sure about, that is good; many of those people won't allow anything to be installed, and that is better. The best advice is to make sure that people operate without admin privileges; create a separate admin user, make sure it works, then revoke admin from the normal-use account. If ANY machine gets compromised with a virus or with malware, it is game over -- time for a re-install, or image replacement if you made one. The "never use root" rule for normal use in the Linux/Unix world is good, but it is many times more important in the Windows world.

Any AV or other software that isn't from Microsoft (in this area) is an avenue for adding attack surface. Certain components of Trend Micro made the news recently, doing things very badly -- but I think it depends on the version of the software. The AV software has admin rights, so any security issue means the computer is owned BECAUSE of the AV product, in spite of it. Using AV and other "Internet Security" products gives users a very false sense of security. Of course the only safe computer is one that never goes online, but that isn't much use for most people.

I stick with MSE and Defender on Windows boxen; then I use judgement before installing anything -- and when something is installed, be very careful with possible "extras" of any kind. The less junk, the better. Any AV solution is only as good as the software itself and the definitions that are "known" at a point in time.
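For what it's worth, a minimal sketch of the create-a-separate-admin-then-revoke approach mentioned above, from an elevated command prompt; the account names here are made up, and you should log in as the new admin account and confirm it works before revoking anything:

    rem create a dedicated admin account (prompts for a password)
    net user backupadmin * /add
    net localgroup Administrators backupadmin /add
    rem once backupadmin is confirmed working, drop admin rights
    rem from the day-to-day account
    net localgroup Administrators dailyuser /delete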
I read an interesting article over the weekend: http://www.malwaretech.com/2015/09/device-guard-beginning-of-end-for.html according to which MS have implemented new security measures based on the virtualization instructions of the CPU. If I understand correctly, parts of the kernel responsible for verifying signed executables are compartmentalized using virtualization. Malware can compromise the remainder of the kernel without compromising the hardware-protected code. Only signed executables can be run, and UEFI "secure boot" is used. Thus they reduce the size of the Trusted Computing Base considerably.
Malware can happen at all levels, even the BIOS or equivalent. Dell and Lenovo have pre-loaded very dangerous software on to newer machines. Trust is the biggest problem here, but what can you really do except practice safe computing and hope that any hardware/software vendor is a good actor, weighs up anything that might risk YOUR security (pros and cons), and only ships it if it is secure....

Never use IE .... almost every single month there are updates for IE of one sort or another.

Boot verification via UEFI -- we need to be careful of machines becoming Windows appliances that can't run any other OS of any kind. This is getting worse. Heck, even Skylake machines will end up unable to run Windows 7 due to Intel / Microsoft support limitations (I think mostly M$, but I'm not sure).

Anyway, I'm getting off topic, so I'll quit now. Cheers A.

Andrew McGlashan via luv-talk wrote:
Never use IE .... almost every single month, there are updates for IE of one sort or another.
If that's your reason to avoid IE, you should also avoid iceweasel and chromium. Iceweasel has had 13 CVEs so far this year (i.e. in ~1mo); Chromium has had 13 CVEs so far this year (i.e. in ~1mo). https://security-tracker.debian.org/tracker/source-package/iceweasel https://security-tracker.debian.org/tracker/source-package/chromium-browser

On 3/02/2016 10:05 AM, Trent W. Buck via luv-talk wrote:
Andrew McGlashan via luv-talk wrote:
Never use IE .... almost every single month, there are updates for IE of one sort or another.
If that's your reason to avoid IE, you should also avoid iceweasel and chromium.
The CVE list for 2015 had IE highest in the browser list too, with Chrome next and Firefox after that. It's also a /feature/ of Winblows that IE is closely integrated into the system, so an IE bug can bring down the whole system. Oh, and many of those "happily" using XP will be responsible for the never-ending spam junk that traverses the Net.... A.

On Tue, Feb 02, 2016 at 07:59:55AM -0500, Jason White wrote:
Just wondering what is currently considered best practice for protecting a modern Microsoft Windows machine against malware and exploitation?
dunno, but in my experience, a lot of applications that require windows or specific versions of windows will run just fine on either wine or windows in virtualbox or vmware (vbox's and vmware's graphics support was better than kvm's last time i did this, but kvm's has improved greatly since and might be a viable option now). in fact, some that require specific versions of windows work better with wine (where you can set the version to "emulate") than on newer windows.

i'd be willing to bet that most of the XP apps that the hospital was depending on would work perfectly well in wine on linux or mac.... much of it looks like fairly simple late-90s, early-2000s web apps from what i get to see on screen as a patient. the rest looks like non-descript windows apps.

even specialised scientific instruments work fine like this - i remember a few (brand-new, latest models, selling for many tens of thousands of dollars) instruments that absolutely required NT 2000 or something similarly ancient at a $previous_employer. We installed NT under vbox, gave it access to the right PCI-e etc ports, and it just worked. It was configured to boot vbox in full screen mode.

the idea was that a) it had no direct access to the rest of the network, the linux host acted as a firewall/bastion host, and b) storage of capture data was to a samba share, so if it got compromised, we'd just blow away the NT vm and re-image it. presumably the same can be done with many medical diagnostic instruments.

i don't think there is any product or specific procedures that will protect windows machines - you need skilled IT staff who know what they're doing and are able to come up with appropriate solutions for the task at hand.
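roughly, the virtualbox side of that looks like this -- the VM name, disk size, USB IDs and so on below are made up, so check "VBoxManage list usbhost" and "VBoxManage list ostypes" for the real values on your host:

    # create a host-only network (usually comes up as vboxnet0) and the guest;
    # the OS type here is a guess
    VBoxManage hostonlyif create
    VBoxManage createvm --name instrument-nt --ostype WindowsNT4 --register
    VBoxManage createmedium disk --filename instrument-nt.vdi --size 4096
    VBoxManage storagectl instrument-nt --name IDE --add ide
    VBoxManage storageattach instrument-nt --storagectl IDE --port 0 --device 0 \
        --type hdd --medium instrument-nt.vdi
    # keep it off the corporate network, and enable USB so the instrument's
    # interface can be passed straight through to the guest
    # (serial instruments can use --uart1 / --uartmode1 instead)
    VBoxManage modifyvm instrument-nt --memory 512 \
        --nic1 hostonly --hostonlyadapter1 vboxnet0 --usb on
    VBoxManage usbfilter add 0 --target instrument-nt --name instrument \
        --vendorid 1234 --productid 5678
    VBoxManage startvm instrument-nt --type gui
    # once the OS and instrument software are installed, snapshot it so a
    # compromised guest can simply be rolled back / re-imaged
    VBoxManage snapshot instrument-nt take clean-install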
I read an interesting article over the weekend: http://www.malwaretech.com/2015/09/device-guard-beginning-of-end-for.html
I really can't help reading anything to do with Microsoft and "Trusted Computing" as "hardware-enforced vendor lock-in". what they're doing may incidentally benefit some of their customers, but mostly it's misfeatures that benefit MS at the expense of their customers. craig -- craig sanders <cas@taz.net.au>

Craig Sanders via luv-talk <luv-talk@luv.asn.au> wrote:
I really can't help reading anything to do with Microsoft and "Trusted Computing" as "hardware-enforced vendor lock-in"
what they're doing may incidentally benefit some of their customers, but mostly it's misfeatures that benefit MS at the expense of their customers.
I would put it slightly differently. I think the security features described in the article that I cited would in fact be very effective. On the other hand, they could also be described as what in another context is called a "dual-use technology". The security benefits are undeniable, but so are the potential restrictions on the user's freedom if he or she doesn't have keys with which to sign applications. There may be reasons founded in competition regulations why "secure boot" cannot be made mandatory in the x86 world; it's required by the specification, as I understand it, that the user can disable this feature. The ARM world is notoriously different, of course.

On Tue, Feb 02, 2016 at 06:47:53PM -0500, Jason White wrote:
I would put it slightly differently. I think the security features described in the article that I cited would in fact be very effective. On the other hand, they could also be described as what in another context is called a "dual-use technology". The security benefits are undeniable, but so are the potential restrictions on the user's freedom if he or she doesn't have keys with which to sign applications.
and if the user has a/the key, that pretty much invalidates most of the security benefits - which are predicated on the machine only running code known and signed by a central authority, such as Microsoft. Or maybe a corporation locking down their own PCs. There may be some hardware that allows us plebs to install and manage our own keys, but it will be rare. Allow the general public to install keys and there will be all sorts of apps and/or dodgy web sites telling people to install this key to get your amazing dancingsheep.exe screensaver working.
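for reference, on hardware that does let you enrol your own keys, the shim/MOK route looks roughly like this -- the file names are placeholders and the exact flow varies by distro:

    # generate a signing key and certificate (10 years, no passphrase)
    openssl req -new -x509 -newkey rsa:2048 -nodes -days 3650 \
        -subj "/CN=my local signing key/" -keyout MOK.key -out MOK.crt
    # convert to DER and queue it for enrolment; shim's MokManager
    # prompts for confirmation at the next reboot
    openssl x509 -in MOK.crt -outform DER -out MOK.der
    mokutil --import MOK.der
    # sign a locally-built kernel (or out-of-tree module) with that key
    sbsign --key MOK.key --cert MOK.crt --output vmlinuz.signed vmlinuz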
There may be reasons founded in competition regulations why "secure boot" cannot be made mandatory in the x86 world; it's required by the specification, as I understand it, that the user can disable this feature. The ARM world is notoriously different, of course.
Those are business decisions, not competition regulations. There's nothing about x86 that makes it any more or less subject to competition regulations than ARM cpus - nor should there be, such laws are and should be technology-neutral. MS can't/couldn't get away with locking down existing x86 designs because there's too long a history of people being able to install and run whatever they want on them. ARM is new, has all sorts of boot-time oddities anyway, very little standardisation as yet, and can be so locked down. when the precedent has been well and truly established on ARM, x86 etc will be next. In fact, it's already starting to happen. so, now is the time to try to get the ACCC interested in such matters, before the momentum is too hard to stop. craig -- craig sanders <cas@taz.net.au>

On 3 February 2016 at 08:00, Craig Sanders via luv-talk <luv-talk@luv.asn.au> wrote:
even specialised scientific instruments work fine like this - i remember a few (brand-new, latest models, selling for many tens of thousands of dollars) instruments that absolutely required NT 2000 or something similarly ancient at a $previous_employer. We installed NT under vbox, gave it access to the right PCI-e etc ports, and it just worked. It was configured to boot vbox in full screen mode. the idea was that a) it had no direct access to the rest of the network, the linux host acted as a firewall/bastion host, b) storage of capture data was to a samba share, so if it got compromised, we'd just blow away the NT vm and re-image it.
presumably the same can be done with many medical diagnostic instruments.
Hi Craig,

I'm interested in your solution above. My team and I support research at Monash Uni and are increasingly supporting legacy hardware and Windows OSs used to drive specialist instruments. Often, as you'd be familiar, the PC is supplied by the instrument vendor with instructions like "we won't support this if you deviate from our installation" or "we won't support it if you use a different PC". A competing tension then comes from the researchers: "we want to access the corporate network", etc. Then comes a time when the PC has aged/failed and we can't get the ancient OS (XP, NT) to run on modern hardware... and so on.

So I assume by vbox you mean VirtualBox? I'm not overly familiar with it (I've run it for curiosity's sake but haven't looked into it at depth), but your post suggests that it's more apt to be given or configured for deeper access to hardware than, say, VMware? -- Colin Fee tfeccles@gmail.com

On Wed, 3 Feb 2016 02:31:06 PM Colin Fee via luv-talk wrote:
Then comes a time when the PC has aged/failed and we can't get the ancient OS (XP, NT) to run on modern hardware... and so on.
So I assume by vbox you mean VirtualBox? I'm not overly familiar with it (I've run it for curiosity's sake but haven't looked into it at depth), but your post suggests that it's more apt to be given or configured for deeper access to hardware than, say, VMware?
Some time ago I was asked to look at a special PC used for running diagnostics on expensive German cars (I think it was BMW). It was SCO Unix in a VM on Linux (I think). It was so awful that I gave up and didn't charge the client anything. -- My Main Blog http://etbe.coker.com.au/ My Documents Blog http://doc.coker.com.au/

On Wed, Feb 03, 2016 at 02:31:06PM +1100, Colin Fee wrote:
Hi Craig,
sorry to take so long to reply, just got back home from hospital (after 14 days! getting my T-cells wiped out so they stop treating my shiny new kidney as a foreign invader to be attacked). if you've got any specific questions or just want to pick my brains for some general tips, feel free to ask.
I'm interested in your solution above. My team and I support research at Monash Uni and are increasingly supporting legacy hardware and Windows OSs used to drive specialist instruments. Often, as you'd be familiar, the PC is supplied by the instrument vendor with instructions like "we won't support this if you deviate from our installation" or "we won't support it if you use a different PC".
well, support from the supplier is a different story. fortunately in this particular case we (IT @ chemistry, unimelb) got involved early in the purchase (which was quite unusual, it was far more common for the researchers to say "here's our shiny new machine, get it on the network for us") and were able to have some input into, and control over, how the instrument could be connected to the network. the supplier wanted to make the sale, so worked with us (or at least, didn't obstruct us, which was good enough).

the easiest instruments to deal with use a standard interface. ethernet-connected devices are great - just give them a private network. otherwise GPIB or something "reasonably" standard is good.

education of the researchers is vital - they need to know to ask about interfaces and connectivity, otherwise they'll spend a fortune on something with a proprietary interface which will be obsolete and deprecated by the supplier long before the useful life of the instrument is over.... and when you're spending $100K on an instrument you want to get more than 3-5 years out of it.
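the "give it a private network" part is just a spare NIC on the linux host with an RFC1918 subnet and no forwarding -- the interface name and addresses below are made up:

    # put the instrument-facing NIC on its own private subnet
    ip addr add 192.168.77.1/24 dev eth1
    ip link set eth1 up
    # don't route between the instrument subnet and the corporate network
    sysctl -w net.ipv4.ip_forward=0
    # and drop anything that asks to be forwarded anyway
    iptables -A FORWARD -i eth1 -j DROP
    iptables -A FORWARD -o eth1 -j DROP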
A competing tension then comes from the researchers: "we want to access the corporate network", etc.
that's why we used a linux box as a buffer - a relatively safe way of providing network access to the instrument. As well as access via the console, VNC and RDP clients also provided remote access to the virtualbox windows desktop, which had direct access to the instrument and access to the samba file shares.
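roughly, the remote-access side was just virtualbox's built-in RDP server plus a samba share on the linux host -- the VM name and paths below are placeholders, and VRDE needs the virtualbox extension pack installed:

    # let people RDP to the guest's desktop via the linux host
    VBoxManage modifyvm instrument-nt --vrde on --vrdeport 3389

and a minimal share stanza in /etc/samba/smb.conf on the host:

    [capture]
       path = /srv/instrument/capture
       read only = no
       valid users = @researchers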
Then comes a time when the PC has aged/failed and we can't get the ancient OS (XP, NT) to run on modern hardware... and so on.
one of the problem instruments I had to keep going was an ancient IFIR in one of the student teaching labs, connected via an ISA-bus interface to a machine running win95 - for obvious reasons this was NOT connected to the rest of the network, so sneakernet was used to transfer files. the IFIR still worked perfectly (for something that was bought in the early 90s) and replacing it with something new would have cost $60K+, so keeping it going as long as possible was highly desirable.

I have no idea if it's still there or not (it was when I moved on to Nectar a few years ago). it is doomed to eventually be scrapped, though, because our attempts to replace the win95 box with an industrial PC (about the only things still having ISA buses these days) kept running into all sorts of failures and problems (mostly that even the ind. PCs had CPUs and RAM and other built-in components that were too new for win95 or even win98).
So I assume by vbox you mean VirtualBox? I'm not overly familiar with it (I've run it for curisoty sake but haven't looked into at depth) but your post suggests that it's more apt to be given or configured for deeper access to hardware than say VMware?
yep, virtualbox. chosen mostly because it was free (i.e. all the required features were available in the free version), whereas vmware isn't. kvm at the time didn't allow direct pass-through to GPUs, USB ports, etc.

for this sort of job, i'd probably still choose virtualbox - it just seems better suited to the task, as the instrument-controller PC is a specialised single-purpose machine. for server-type VMs, though, i'm inclined to use KVM, which is great for running dozens or hundreds of VMs on one server.

i've got nothing against vmware in particular, just don't see any compelling reason to use it instead of either kvm or virtualbox. its management tools aren't anything special - perhaps prettier and simpler for novices than virsh etc, but that's not so important for someone experienced.
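for the KVM side, a typical server VM ends up being not much more than this (the name, sizes and image path below are just for illustration):

    # import an existing disk image as a new libvirt/KVM guest
    virt-install --name web01 --memory 2048 --vcpus 2 \
        --disk path=/var/lib/libvirt/images/web01.qcow2 \
        --import --os-variant debian8 --graphics none --noautoconsole
    # day-to-day management with virsh
    virsh list --all
    virsh start web01
    virsh autostart web01

virsh and friends scale to the dozens-or-hundreds-of-VMs case far more comfortably than clicking through a GUI. craig -- craig sanders <cas@taz.net.au>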

Quoting Craig Sanders (cas@taz.net.au):
for server-type VMs, though, i'm inclined to use KVM, which is great for running dozens or hundreds of VMs on one server.
I concur, and, shortly after my wife and I get back from being on holiday, I'm going to finish constructing a KVM-based replacement for my antique server (the PIII-based machine at my house in Menlo Park, California that runs linuxmafia.com aka unixmercenary.net, etc.). In testing, KVM has proven to have stunningly low overhead for server use. My main effort before completing migration will actually be on the host-OS side: I need to test and see how much hardening, starting with as much as feasible of grsecurity and PaX, is compatible with supporting KVM.

(When I say 'on holiday', I mean from *cough* Australia -- but sadly not Victoria, this time: My trans-Pacific cruise ship arrived in Sydney this morning. My wife and I will be visiting NSW until the 20th. Still, in solidarity with my Melb. friends, I hoisted a VB after completing the Sydney Harbour Bridge climb this morning. Excellent as always.)

On 16 Feb 2016 7:56 pm, "Rick Moen via luv-talk" <luv-talk@luv.asn.au> wrote:
(When I say 'on holiday', I mean from *cough* Australia -- but sadly not Victoria, this time: My trans-Pacific cruise ship arrived in Sydney this morning. My wife and I will be visiting NSW until the 20th. Still, in solidarity with my Melb. friends, I hoisted a VB after completing the Sydney Harbour Bridge climb this morning. Excellent as always.)
The bridge or the beer? If the latter I'm afraid I'll have to politely disagree.
participants (7)
- Andrew McGlashan
- Colin Fee
- Craig Sanders
- Jason White
- Rick Moen
- Russell Coker
- Trent W. Buck