I am having periodic ntp synchronisation problems.
ntpd doesn't log anything directly, but I am using nagios3 to track
its synchronisation, and I periodically see problems - several times a
day. I can't figure out what is causing the loss of synchronisation.
Perhaps a burst of dropped ntp packets? But wouldn't ntpd log something
about that?
Any suggestions on logging more info to find the cause or deeper
insights into what is going wrong would be appreciated.
Andrew
Here's the output of commands when things are bad:
config(0)# check_ntp_peer -H 127.0.0.1 -w 1.0 -c 2.0
NTP WARNING: Server has the LI_ALARM bit set, Offset 0.210925
secs|offset=0.210925s;1.000000;2.000000;
LI_ALARM apparently means the leap indicator alarm is set, i.e. not in sync???
config(1)# ntpq -c rl
associd=0 status=c618 leap_alarm, sync_ntp, 1 event, no_sys_peer,
version="ntpd 4.2.6p2(a)1.2194-o Sun Oct 17 13:35:13 UTC 2010 (1)",
processor="x86_64", system="Linux/2.6.32-5-amd64", leap=11, stratum=3,
precision=-23, rootdelay=95.696, rootdisp=263.117, refid=192.189.54.33,
reftime=d2bccf2f.4854b34f Sun, Jan 15 2012 15:06:07.282,
clock=d2bcd1d6.e9ffea4c Sun, Jan 15 2012 15:17:26.914, peer=16519,
tc=10, mintc=3, offset=0.000, frequency=500.000, sys_jitter=35.804,
clk_jitter=0.000, clk_wander=91.828
It thinks its maximum error is 16s???
config(0)# ntpdc -c kerninfo
pll offset: 0 s
pll frequency: 500.000 ppm
maximum error: 16 s
estimated error: 16 s
status: 4041 pll unsync mode=fll
pll time constant: 10
precision: 1e-06 s
frequency tolerance: 500 ppm
ntptime gives the same info:
config(0)# ntptime
ntp_gettime() returns code 5 (ERROR)
time d2bcd2be.08d3a000 Sun, Jan 15 2012 15:21:18.034, (.034479),
maximum error 16000000 us, estimated error 16000000 us
ntp_adjtime() returns code 5 (ERROR)
modes 0x0 (),
offset 0.000 us, frequency 500.000 ppm, interval 1 s,
maximum error 16000000 us, estimated error 16000000 us,
status 0x4041 (PLL,UNSYNC,MODE),
time constant 10, precision 1.000 us, tolerance 500 ppm,
Then mysteriously everything is okay:
config(0)# ntpdc -c kerninfo
pll offset: 0.00998 s
pll frequency: 500.000 ppm
maximum error: 1.6291 s
estimated error: 0.004771 s
status: 0001 pll
pll time constant: 10
precision: 1e-06 s
frequency tolerance: 500 ppm
The leap indicator becomes none (no leap_alarm) and things are OK?
config(0)# ntpq -c rl
associd=0 status=0618 leap_none, sync_ntp, 1 event, no_sys_peer,
version="ntpd 4.2.6p2(a)1.2194-o Sun Oct 17 13:35:13 UTC 2010 (1)",
processor="x86_64", system="Linux/2.6.32-5-amd64", leap=00, stratum=3,
precision=-23, rootdelay=95.272, rootdisp=983.007, refid=192.189.54.33,
reftime=d2bcd33b.bbc10580 Sun, Jan 15 2012 15:23:23.733,
clock=d2bcd852.ec6be1b9 Sun, Jan 15 2012 15:45:06.923, peer=16519,
tc=10, mintc=3, offset=13.497, frequency=500.000, sys_jitter=7.251,
clk_jitter=4.772, clk_wander=151.809
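In the meantime I'm capturing ntpd's state every minute from cron, so I
can at least see what changes around the time the alarm comes and goes
(a rough stopgap; the log path is arbitrary):
* * * * * (date; ntpq -c rl; ntpq -pn) >> /var/log/ntp-state.log 2>&1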
Hi everyone,
An update on Secure Boot from the Melbourne Free Software mailing list.
---------- Forwarded message ----------
From: Chris Samuel <chris(a)csamuel.org>
Date: 31 May 2012 16:43
Subject: [free-software-melb] Draft Fedora plan to cope with Secure Boot on
x86 hardware
To: Melbourne Free Software Interest Group <
free-software-melb(a)lists.softwarefreedom.com.au>
Hi all,
Matthew Garrett has just posted a draft plan on how Fedora 18 plans to
cope with Windows 8 certified x86 hardware that has Secure Boot
enabled in EFI.
http://mjg59.dreamwidth.org/12368.html
Basically it involves signing up with Microsoft, paying a $99 one-off
fee and then getting them to sign a boot shim that will boot Grub2
that has been signed by a Fedora key. From there it has to be signed
code all the way down to user space, so no loading of out-of-tree
drivers, filesystems or other modules, whether FLOSS or proprietary (and
certainly not a custom kernel), whilst Secure Boot is enabled.
For those who've not come across what this means, he has a nice
summary:
# Secure boot is built on the idea that all code that can touch the
# hardware directly is trusted, and any untrusted code must go through
# the trusted code. This can be circumvented if users can execute
# arbitrary code in the kernel. So, we'll be moving to requiring
# signed kernel modules and locking down certain aspects of kernel
# functionality. The most obvious example is that it won't be possible
# to access PCI regions directly from userspace, which means all
# graphics cards will need kernel drivers. Userspace modesetting will
# be a thing of the past. Again, disabling secure boot will disable
# these restrictions.
#
# Signed modules are obviously troubling from a user perspective.
# We'll be signing all the drivers that we ship, but what about out
# of tree drivers? We don't have a good answer for that yet. As
# before, we don't want any kind of solution that works for us
# but doesn't work for other distributions. Fedora-only or
# Ubuntu-only drivers are the last thing anyone wants, and this
# really needs to be handled in a cross-distribution way.
Interestingly he also shows that you can use Secure Boot to ensure
that your system will only be able to boot Fedora (etc) and never boot
a proprietary OS:
# A system in custom mode should allow you to delete all existing keys
# and replace them with your own. After that it's just a matter of
# re-signing the Fedora bootloader (like I said, we'll be providing
# tools and documentation for that) and you'll have a computer that
# will boot Fedora but which will refuse to boot any Microsoft code.
# It may be a little more awkward for desktops because you may have
# to handle the Microsoft-signed UEFI drivers on your graphics and
# network cards, but this is also solvable. I'm looking at ways to
# implement a tool to allow you to automatically whitelist the
# installed drivers. Barring firmware backdoors, it's possible to
# configure secure boot such that your computer will only run software
# you trust. Freedom means being allowed to run the software you want
# to run, but it also means being able to choose the software you
# don't want to run.
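For reference, the re-signing step he describes would presumably be done
with something like sbsign from the sbsigntools package (a sketch only -
the key and binary file names here are made up):
# sign the bootloader with your own db key
sbsign --key my-db.key --cert my-db.crt \
    --output grubx64.efi.signed grubx64.efi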
Interesting times!
cheers,
Chris
--
Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC
This email may come with a PGP signature as a file. Do not panic.
For more info see: http://en.wikipedia.org/wiki/OpenPGP
_______________________________________________
Free-software-melb mailing list
Free-software-melb(a)lists.softwarefreedom.com.au
http://lists.softwarefreedom.com.au/mailman/listinfo/free-software-melb
Hi to all,
I ordered some stuff from itspot (specifically a bunch of Corsair USB
drives) more than 2 weeks ago. It has never been delivered and there has
been no response to e-mails. Has anybody else had a similar experience?
Can you suggest another online shop?
cheers
Linux has used demand-loading of executables and shared objects for ages.
This means that if an executable or shared object has some pages that happen
to never get called (i.e. for corner conditions that don't happen in the usual
case) then they never get read from disk. The same applies to debugging data
when the object isn't being debugged. Even if unused data shares pages with
executable data, thus increasing the number of executable pages loaded into
RAM, it shouldn't be a big deal: RAM keeps getting bigger, so only a small
portion of the multi-gigabytes of RAM in a low-end system is used for
executable pages.
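You can see this on any running system: in pmap's extended output the
"Kbytes" column shows how much of a file is mapped while "RSS" shows how
much has actually been paged in, and for a large library the two usually
differ considerably:
$ pmap -x $$ | grep libc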
Disks are getting bigger all the time. Nowadays it would be silly to consider
purchasing a disk for a desktop system that's less than 2TB in size, and
laptops have had hundreds of GB for ages. Currently the biggest root
filesystem on a system I run is 12G, which is 0.6% of the space on a desktop
disk you might purchase (*) and less than 10% of the low-end laptop disks that
were on sale a couple of years ago. If I had to make that root filesystem 24G
it wouldn't be a big deal.
In terms of reducing binary size there has been some discussion about a port
of Linux that uses 32-bit pointers with 64-bit registers and instructions on
the AMD64 architecture (the "x32" ABI). The idea is to save RAM and TLB
entries by not using 64/64 while still getting some of the performance
benefits of a 64-bit CPU. But there is little interest in this and it seems
that Debian won't support it due to no-one caring.
So the question is, why strip binaries? Back in the days when we ran servers
with 100MB hard drives there was a real need to save space. When a 128Kb/64Kb
ADSL link was considered fast there was a real need to reduce download time.
But now most of us have ADSL links that allow 100KB/s (800Kb/s) UPLOAD speeds
and significantly faster download speeds, so neither concern really applies.
The thing about the debugging symbol table is that you never know when you
will need it. Having it always there seems to have no cost that matters but
it can provide significant benefits. So why ship a program or shared object
that's stripped?
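For anyone who still wants the binary small, the usual middle ground is
to detach the symbols rather than throw them away, which is roughly what
distribution -dbg packages do:
objcopy --only-keep-debug prog prog.debug
strip --strip-debug prog
objcopy --add-gnu-debuglink=prog.debug prog
gdb will then find prog.debug via the debuglink, and comparing the sizes
of prog and prog.debug shows exactly what stripping saves.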
(*) The system in question has a 160G disk I got from a junk pile. But as
half the disk space is on my /junk filesystem and there is still unallocated
space, I think this supports my general point.
--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/
On Wed, 30 May 2012, Trent W. Buck wrote:
> I suppose there are caveats: it would have to be pleasant, like
> OpenWRT, not like a Thecus NAS; and I'd have a strong preference for a
Funny you should say that. I ended up giving up on the godawful hacked-up
scripts (which recursively run chmod/chown immediately on any external
drive plugged in, without asking the user) in my Thecus N4200, and
installed Debian on it.
I've been meaning to write a blog post on it, if nothing else to say
"don't buy Thecus unless you're prepared to put in a lot of work".
--
Tim Connors
I have a Linksys SRW2048 switch whose web UI fails under IE[789] because it opens too many concurrent connections for the switch to handle. I can use IETab in Firefox, which works because it indirectly limits the number of concurrent HTTP requests, but that isn't always a solution.
I have a squid proxy running on a Linux router, so I was hoping I could limit the number of concurrent connections there. The docs tell me I can limit them with a deny rule, but that would make the excess connections fail rather than simply be delayed until other connections complete.
Any suggestions? I'll use netfilter to drop new connection requests if I can't figure it out using squid.
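If I do end up doing it with netfilter, I'm thinking of something along
these lines on the router (an untested sketch; the switch's address is a
placeholder):
iptables -I FORWARD -p tcp --syn -d 192.0.2.48 --dport 80 \
    -m connlimit --connlimit-above 4 -j DROP
Dropping rather than rejecting the excess SYNs should make the browser
retry and effectively wait, instead of failing outright.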
Thanks
James
Hi,
I have installed VirtualBox using the package manager for Suse 12.1,
and I want to load a Win 7 64-bit version, so I set up the Win7 VM, but
when I try to run it I get this error:
Could not start the VM
0x80bb0005 (Could not launch a process for the machine 'Windows 7 (64
bit)' (VERR_ACCESS_DENIED))
What do I need to do to overcome this problem, please?
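From some searching it looks like it may just be a permissions problem
on /dev/vboxdrv, which is apparently only accessible to the vboxusers
group - is something like this the right fix?
ls -l /dev/vboxdrv
sudo usermod -aG vboxusers $USER
(then log out and back in)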
Andrew Greig
Hi all,
I am using ZFS and "zfs snapshot/send/receive" to send ZFS snapshots
to a "mirroring" server - just the incremental changes instead of the
full snapshot every time.
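For reference, the ZFS side of what I do now is roughly this (the pool
and dataset names are invented for the example):
zfs snapshot tank/data@2012-06-02
zfs send -i tank/data@2012-06-01 tank/data@2012-06-02 | \
    ssh mirror zfs receive tank/data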
I am aware of LVM snapshots but I have never used them for incremental
mirroring (and did not use them a lot anyway, so I never looked too much
under the hood - maybe I should have...)
The kernel documentation makes me wonder whether it is possible to achieve
the same with LVM:
http://www.mjmwired.net/kernel/Documentation/device-mapper/snapshot.txt
According to that, you have access to the COW storage as well:
# lvcreate -L 1G -n base volumeGroup
# lvcreate -L 100M --snapshot -n snap volumeGroup/base
# ls -lL /dev/mapper/volumeGroup-*
brw------- 1 root root 254, 11 29 Aug 18:15 /dev/mapper/volumeGroup-base-real
brw------- 1 root root 254, 12 29 Aug 18:15 /dev/mapper/volumeGroup-snap-cow
brw------- 1 root root 254, 13 29 Aug 18:15 /dev/mapper/volumeGroup-snap
brw------- 1 root root 254, 10 29 Aug 18:14 /dev/mapper/volumeGroup-base
Is it possible to send copies of it to a remote server and apply the
changes as described in the doc, using the merge options?
The idea is to establish a "rotation regime" and just to send the -cow
block device for continuous mirroring.
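Something like this is what I imagine, though it is completely untested,
and I am not even sure the -cow area contains what I need - as far as I
understand it holds the pre-change blocks rather than the new data:
lvcreate -L 100M -s -n snap volumeGroup/base    # start an epoch
# ... changes accumulate on the origin ...
dd if=/dev/mapper/volumeGroup-snap-cow | ssh mirror 'dd of=/backup/snap-cow.img'
lvremove volumeGroup/snap                       # end the epoch, start a new one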
Volker Grabsch, on a German mailing list, wrote about his improved
"block-level rsync", https://github.com/vog/bscp
He plans to use it for LVM mirroring.
I just wondered whether he really has to search the whole block device
for changes when LVM already knows about them.
Regards
Peter