
On Wed, Jul 04, 2012 at 12:11:59AM +1000, Russell Coker wrote:
On Sun, 1 Jul 2012, Mark Trickett <marktrickett@bigpond.com> wrote:
But then the rest of us would not have had the news. I have been reading about leap seconds, and knowing that they can have a real impact is valuable, along with the fact that you confirmed the matter by using date.
The message below, which was sent out by Hetzner.de (a hosting company that provides excellent value for money and a quality service, and which incidentally owns the server that runs my blog), should be of interest.
1MW for a couple of bugs which didn't even affect all servers!
Another take on this would be that it's a shocking waste that these machines aren't drawing 1MW all the time - it means that they are basically idle and wasted. Cloud is highly inefficient. Those of us in HPC expect all machines to be running at ~90% of max power all the time; if they're not, then something is wrong.
That statement and most of the rest of what you said only makes sense if the hosting company is running an HPC cluster.
I guess virtualisation doesn't work now any better than it ever has done.
That's the stupidest thing I've heard today. Virtualisation allows you to take a mostly idle workload off (say) 100 servers and run it on a single server [1] which is well under 100x the cost and power consumption of the 100 servers. Even if that single server is still mostly idle, it still represents a massive saving in hardware and power, and certainly doesn't equate to virtualisation "not working".

James

[1] I know you wouldn't use a single server - I'm illustrating the ratio that a data centre might use.
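For anyone who wants to see the arithmetic behind that consolidation argument, here is a rough back-of-envelope sketch in Python. All of the power figures and the 100-onto-5 ratio are illustrative assumptions of mine, not measurements from Hetzner or anyone else:

    # Back-of-envelope consolidation arithmetic. All figures are
    # illustrative assumptions, not measured data.

    IDLE_WATTS = 100.0     # assumed draw of a mostly idle physical server
    LOADED_WATTS = 250.0   # assumed draw of a well-loaded virtualisation host

    def consolidation_saving(n_servers: int, hosts_after: int) -> float:
        """Fraction of power saved by consolidating n_servers mostly idle
        machines onto hosts_after virtualisation hosts."""
        before = n_servers * IDLE_WATTS
        after = hosts_after * LOADED_WATTS
        return 1.0 - after / before

    if __name__ == "__main__":
        # 100 idle servers consolidated onto 5 hosts (not literally one,
        # as the footnote says) still cuts power use dramatically.
        saving = consolidation_saving(100, 5)
        print(f"Approximate power saving: {saving:.0%}")  # ~88%

Even with deliberately conservative numbers the saving stays large, which is the point: the consolidated host being "mostly idle" doesn't undo the benefit of having switched off the other 95 boxes.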