I just realized (too late, unfortunately) that Mutt's reply to list feature is
now generating a "To" field like this:
I think the cause may be this header in messages distributed via the list:
whereas the "To" header gives luv-main(a)luv.asn.au as the address.
Would it be possible for our hard-working and diligent administrators to get
rid of the lists.luv.asn.au domain in the headers?
I don't know enough about mailman configuration to appreciate how much work
this would require.
I might be able to work around this in my Mutt configuration, but I think it's
best fixed at the source, if possible.
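In case it helps anyone in the meantime, a possible client-side workaround in ~/.muttrc (untested on my part) is to tell Mutt that both forms of the address are the same subscribed list, so reply-to-list picks a sane one:

```
# treat both the bare and the lists. variants as subscribed list addresses
subscribe luv-main@luv.asn.au
subscribe luv-main@lists.luv.asn.au
```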
On 12/09/2011, at 6:50 PM, Trent W. Buck wrote:
>> this has to be illegal.
> Under what law?
Depending on the claims being made (and perhaps where/how), it could be a form of misleading or deceptive conduct under the Competition and Consumer Act 2010 (formerly Trade Practices Act 1974).
There is some information on misleading and deceptive conduct under Australian consumer law at:
Does anyone know what the default grub options on a RHEL6 box are? Do they
include acpi=off? If so, is that conditional upon something that
Anaconda/kickstart detects, or something that must be explicitly selected?
(Got a friend in the US who has found a box with acpi=off set,
which is bad for NUMA hardware if the kernel is relying on ACPI
to get that info).
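For what it's worth, a quick way for your friend to check both the running kernel and the boot config (the grub.conf path is the usual RHEL6 location, so treat it as an assumption):

```shell
# does the currently running kernel have acpi=off on its command line?
if grep -qw 'acpi=off' /proc/cmdline; then
    echo "kernel booted with acpi=off"
else
    echo "kernel booted without acpi=off"
fi

# and is it baked into the grub config? (RHEL6 normally keeps the menu in
# /boot/grub/grub.conf; -s suppresses the error if the file is elsewhere)
grep -s 'acpi=off' /boot/grub/grub.conf || true
```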
Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC
This email may come with a PGP signature as a file. Do not panic.
For more info see: http://en.wikipedia.org/wiki/OpenPGP
Resending this now the LUV list seems to be back up, as I never saw it
come through yesterday.
Apologies if this is a duplicate.
-------- Original Message --------
Some months ago, I ran some (probably naive) benchmarks looking at how
pgbench performed on an identical system with differing filesystems and
configurations.
Since then some of you have pointed out that ZFS is looking pretty good
on Linux now, and I'm sure there's been a bunch of btrfs fixes too, and
no doubt various updates in the Linux kernel and PostgreSQL that should
make a difference.
I ran the tests on Ubuntu 11.04 with Pg 9.0 first, then upgraded the
system to Ubuntu 11.10 (beta) with Pg 9.1 and ran them again.
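For anyone wanting to reproduce, the runs were along these lines (the scale, client count and duration here are illustrative defaults, not necessarily the exact figures I used):

```shell
# sketch of a pgbench run; the final "tps = ..." line of its output is the
# transactions-per-second figure quoted in the results
DB=pgbench_test
SCALE=100      # pgbench_accounts gets SCALE * 100000 rows
CLIENTS=8
DURATION=120   # seconds per run

run_bench() {
    pgbench -i -s "$SCALE" "$DB"                         # initialise tables
    pgbench -c "$CLIENTS" -j "$CLIENTS" -T "$DURATION" "$DB"   # timed run
}
```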
The latter combination showed a considerable performance improvement
overall - although I didn't investigate to find out whether this was due
to kernel improvements, postgres improvements, or virtio improvements.
The results are measured in transactions-per-second, with higher numbers
being better.
natty: didn't test
natty: didn't test
natty: didn't test
Last time I ran these tests, xfs and ext4 pulled very similar results,
and both were miles ahead of btrfs. This time around, ext4 has managed
to get a significantly faster result than xfs.
However we have a new contender - ZFS performed *extremely* well on the
latest Ubuntu setup - achieving triple the performance of regular ext4!
I'm not sure how it achieved this, and whether we're losing some kind of
data protection (eg. like the "barrier" options in XFS and ext4).
If ext4 has barriers disabled, it surpasses even ZFS's high score.
Perhaps some of you can shed some light on this for me?
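To be clear about which options I mean (device and mount point here are just placeholders):

```
# ext4: barriers are on by default; disabled with
mount -o barrier=0 /dev/sdX1 /mnt/pgdata
# xfs: barriers are on by default; disabled with
mount -o nobarrier /dev/sdX1 /mnt/pgdata
```

Disabling barriers trades crash safety (journal write ordering) for speed, which is why I wonder whether ZFS's result is comparable like-for-like.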
Oddly, ZFS performed wildly differently on Ubuntu 11.04 vs 11.10b. I
can't explain this, as the ZFS kernel module was identical (coming from
a third-party apt repository). Any ideas?
I've raised this on the strongSwan list over the last several weeks, but with
no reply so far - hence I'm raising the question here as well to gain a
wider audience.
I have been experimenting with strongSwan as an implementation of IKEv2 (my
ultimate interest is also in its implementation of mobile IPv6, but that's not
of immediate concern).
If I set up an IPSec tunnel with StrongSwan 4.5.2 between my laptop and an
external host (or a virtual machine and another host with a single network
interface such as eth0) all appears to work; but my desktop machine has both
eth0 and ppp0 interfaces.
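For context, the conn definition is of this general shape (names, addresses and subnets below are placeholders, not my real config):

```
# /etc/ipsec.conf (sketch)
conn mytunnel
        keyexchange=ikev2
        left=%defaultroute
        leftid=@desktop.example.org
        leftsubnet=10.1.0.0/24
        right=203.0.113.10
        rightid=@gateway.example.org
        rightsubnet=10.2.0.0/24
        auto=start
```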
The tunnel appears to be established correctly on both sides and the IPSec
policy appears to be correct, but my machine can't send packets over the
tunnel. The kernel log contains messages regarding pmtu discovery, and packet
monitoring shows that neighbour discovery packets are being sent out the eth0
interface rather than ppp0, i.e., if I try to ping the remote host over the
tunnel, I get a lot of neighbour discovery packets on eth0, whereas the
traffic needs to be routed through the ESP encapsulation to ppp0 and onward to
the remote host.
Obviously I can provide much more detail; the main problem at this stage is
how to bring the problem to the attention of someone sufficiently familiar
with Linux IPsec to identify the cause.
I may yet get a response on the strongSwan list, of course.
I have a command that has output like:
I want to execute that command in a shell script such that those
statements are executed in my environment, so $VAR1 is then set accordingly.
I can just put the output in a temporary file and then source that file,
but is there a more direct way?
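Assuming the command emits plain VAR=value lines, you can eval its output directly in the current shell; here's a sketch with a stand-in command in place of the real one:

```shell
# stand-in for the real command: it prints shell assignments on stdout
get_vars() {
    echo 'VAR1=hello'
    echo 'VAR2=world'
}

# evaluate the output in the *current* shell - no temp file needed
eval "$(get_vars)"
echo "$VAR1 $VAR2"   # prints "hello world"
```

In bash you can also use process substitution, `source <(get_vars)`, which behaves like sourcing a temp file without creating one.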
Has anyone used iet or tgt and can comment on the performance? I am
using iet at the moment because it's what I've used before; tgt doesn't
have 'iscsi' in the package name, so I missed it when I was first
searching the Debian archives. But tgt seems to have a few features that
might be nice.
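For anyone else who missed it the same way: tgt is configured via /etc/tgt/targets.conf, and a minimal target looks something like this (the IQN and backing device are made up):

```
<target iqn.2011-09.net.example:storage.disk1>
    backing-store /dev/vg0/lun1
</target>
```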
At 07:29 PM 9/15/2011, Joel W Shea wrote:
>On 15 September 2011 17:40, Peter Lieverdink <me(a)cafuego.net> wrote:
><...>
>> Earlier this week LUV's mail server broke down and a current backup
>> of the list subscribers was not available. As I set up new lists
>> yesterday, I temporarily used an older backup of list subscribers to
>> populate these new lists.
>>
>> Today I rescued current subscriber data from the old server and used
>> that to populate the new lists. Before I did that, I had to clear the
>> lists, and I forgot to make the system NOT send notifications when
>> doing that. You were all (silently) re-subscribed approximately a
>> minute later.
>
>I know this kind of work can usually go underappreciated, so here's a
>big THANK YOU!
>
>> Apologies if I caused any unnecessary freak-outs :-)
>
>If it weren't for that, or the slight change in the domain in the
>headers, I think we'd barely even notice there was an issue.
Thank you for such a smooth changeover. If it
wasn't for the change of domain and that little
slip with the unsubscribe, I'd have not really
noticed. Good job and much appreciated. :)
73 de VK3JED / VK3IRL
On 15 September 2011 17:40, Peter Lieverdink <me(a)cafuego.net> wrote:
> Earlier this week LUV's mail server broke down and a current backup of the list subscribers was not available. As I set up new lists yesterday, I temporarily used an older backup of list subscribers to populate these new lists.
> Today I rescued current subscriber data from the old server and used that to populate the new lists. Before I did that, I had to clear the lists, and I forgot to make the system NOT send notifications when doing that. You were all (silently) re-subscribed approximately a minute later.
I know this kind of work can usually go underappreciated, so here's a
big THANK YOU!
> Apologies if I caused any unnecessary freak-outs :-)
If it weren't for that, or the slight change in the domain in the
headers, I think we'd barely even notice there was an issue.
> The list archives are not yet available on the new system, but rest assured these archives were backed up properly and are not lost. You'll be notified when they come back online.
Once again, thanks for the recovery effort.
(Apologies to everyone for the slight cross-post)
Joel Shea <jwshea(a)gmail.com>
I'm hoping somebody can help me understand what has changed in recent
kernels, with regard to routing. I'm having a problem with Centos 6 hosts
(and the same problem I had with an Ubuntu 10.10 host) that I don't have
with Centos 5 hosts. I can only conclude that it's down to changes in IP
handling in these later kernels (or changes in the kernel's IP
configuration in these later distros?).
I have this configuration:

* Firewall Router
  PPPoE server (running over eth0) - handing out public addresses to clients
* Centos 5 hosts (many)
  eth0 10.10.3.0/26 - (with a route 10/8 via this interface)
  ppp0 (running over eth0) - getting an A.B.C.x address (this is a
  publicly accessible address)
* My laptop
  eth0/wlan0 10.10.3.128/26 (a different interface on the router)
My laptop has been very successful in communicating with C5 hosts via eth0
address and ppp0 address. Specifically, on my laptop, when I connect to
A.B.C.x, the packet arrives on the host on the ppp0 interface, and the
reply goes out the eth0 interface (because of a default 10/8 route via that
interface). Obviously if I talk to the host on its 10.10.3.0/26 address,
the packet arrives and leaves via the eth0 interface.
I implemented both Ubuntu 10.10 and Centos 6 servers with exactly the same
configuration:
  ppp0 (running over eth0) - getting an A.B.C.x address
However, I cannot talk to the hosts via the ppp0 interface, when I'm on a
network that the reply would go via the eth0 interface. Specifically, if
the packet arrives on the ppp0 interface and *would* leave via the eth0
interface, it appears as if the host is not even processing the packet.
iptables confirms that it doesn't count the packets coming in.
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0            all  --  ppp0   *       0.0.0.0/0            0.0.0.0/0
If I communicate with the host, where the packet arrives and leaves on the
same interface - no problem.
If I change my routing so that access to my laptop (10.10.3.128/26) is
back via the ppp0 interface, then all is good (but not what I want). If I
access the host via its 10.10.3.0/26 address, then there are no problems
either - again, not really what I want.
Here's some network tracing to show what's going on...
(And yes, all this is done with firewall rules turned off - to make sure
that there wasn't an offending rule.)
Here is what I see (pinging the host via its A.B.C.x address):

Router (PPPoE server interface):
10:41:45.461588 IP 10.10.3.130 > A.B.C.28: ICMP echo request, id 38224, seq
1, length 64
10:41:46.461205 IP 10.10.3.130 > A.B.C.28: ICMP echo request, id 38224, seq
2, length 64
10:41:47.460768 IP 10.10.3.130 > A.B.C.28: ICMP echo request, id 38224, seq
3, length 64
10:41:48.461881 IP 10.10.3.130 > A.B.C.28: ICMP echo request, id 38224, seq
4, length 64
Host: (ppp0) (ICMP echo, but no reply)
listening on ppp0, link-type LINUX_SLL (Linux cooked), capture size 65535
10:41:44.220727 IP 10.10.3.130 > A.B.C.28: ICMP echo request, id 38224, seq
1, length 64
10:41:45.219741 IP 10.10.3.130 > A.B.C.28: ICMP echo request, id 38224, seq
2, length 64
10:41:46.218733 IP 10.10.3.130 > A.B.C.28: ICMP echo request, id 38224, seq
3, length 64
10:41:47.219735 IP 10.10.3.130 > A.B.C.28: ICMP echo request, id 38224, seq
4, length 64
Host: (eth0) (only the PPPoE packets - no reply)
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
10:41:44.219726 PPPoE [ses 0x4b] IP 10.10.3.130 > A.B.C.28: ICMP echo
request, id 38224, seq 1, length 64
10:41:45.219005 PPPoE [ses 0x4b] IP 10.10.3.130 > A.B.C.28: ICMP echo
request, id 38224, seq 2, length 64
10:41:46.218227 PPPoE [ses 0x4b] IP 10.10.3.130 > A.B.C.28: ICMP echo
request, id 38224, seq 3, length 64
10:41:47.218992 PPPoE [ses 0x4b] IP 10.10.3.130 > A.B.C.28: ICMP echo
request, id 38224, seq 4, length 64
And for info, here's the host's routing table:

Destination     Gateway         Genmask         Flags Metric Ref    Use
172.31.3.1      0.0.0.0         255.255.255.255 UH    0      0        0
10.10.3.0       0.0.0.0         255.255.255.224 U     0      0        0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0
10.0.0.0        10.10.3.1       255.0.0.0       UG    0      0        0
0.0.0.0         172.31.3.1      0.0.0.0         UG    0      0        0
What's changed in the new kernels - or the new distro versions?
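One guess, and it is only a guess: the reverse-path filter defaults got stricter in some newer distros, and with strict rp_filter the kernel silently drops packets whose reply would leave via a different interface - before they ever reach the iptables INPUT chain, which would match the zero counters above. Worth checking:

```shell
# show the rp_filter setting per interface:
# 1 = strict (drops asymmetrically-routed packets), 2 = loose, 0 = off
for f in /proc/sys/net/ipv4/conf/*/rp_filter; do
    printf '%s = %s\n' "$f" "$(cat "$f")"
done
```

If the ppp0 entry (or "all") shows 1, setting it to 2 or 0 via sysctl would be the thing to try.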