Just got what looks to be a pretty good deal on a device that also appears to
have a GPL version of its code made available.
Got this D-Link modem - hopes weren't high but it appears to be a pretty
reasonable 802.11bgn wireless + 8-port LAN switch + WAN router.
I noticed that it contained a "D-LINK GPL Code Statement" with link
http://tsd.dlink.com.tw/GPL.asp
which turns out to be their GPL download page, listing masses of
products with download links to source code
which includes my DIR-632 (took a little time to find it amongst the
massive number of items).
I am downloading it now - 185MB. We'll see what you actually get in it.
The modem itself is rather nice - it has most features you could think
of (no CLI though, only a web interface) and has a
built-in manual that explains each feature in detail!
I suspect they are letting them go at that price because the new
802.11ac modems have arrived.
See http://www.msy.com.au/images/ADbanner/eletter/04042013/online/html.html
for the original promotion.
Anyway, I thought others might be interested in this, and perhaps in an
example of GPL compliance that seems pretty good...
Andrew
Hi all,
I've just been playing around with Samba 4.
At the moment, there is no official support for external LDAP (e.g.
OpenLDAP).
My original understanding was: We want Samba 4 "out" as fast as possible
so we concentrate on "core functionality" (e.g. using internal LDAP, DNS
etc.), and look at issues related to external sources later.
Yesterday I found this:
----
http://us.generation-nt.com/re-samba-windows-8-pro-no-domain-logon-possible…
(20th Sep 2012)
We spent considerable effort over a period of years in attempting to
make this possible. It is not. Even if it was, it would not involve
'simply' reading the companies LDAP server, it would be a very intrusive
change no more acceptable than using our own built-in LDAP server.
Andrew Bartlett
----
I wonder whether it means Samba will not support external LDAP at all (that
would rule it out for me here).
"Very intrusive changes"... to Samba or to LDAP?
Does anybody have insight into the "roadmap", especially regarding the future
of external LDAP sources?
I know that you can make it work, somehow, now. But if it is not supported by
the Samba team it will be fiddly and fragile, and you have to worry about
future releases all the time. I am not really keen on that.
Regards
Peter
If anyone has a few seconds spare, could you please run the following and post the results:
dd if=/dev/zero of=test.bin bs=512 count=128 oflag=sync
along with the kernel version and arch, filesystem, layers (lvm, md, etc), and underlying hardware?
When I do it on a bare 7200RPM SATA disk on a modern server running XFS I get 10-15 kbytes/second. I've repeated this on 3 other servers on different hardware with similar results, but when I do it on a ~10yo PC with ext3 I get around 500 kbytes/second - 50x faster. I suspect it might be the kernel version (3.8 on the new servers, 2.6 on the old PC) and the implementation of O_SYNC in older kernels wrt metadata, but I don't have enough data points to form any conclusions...
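If the difference really is metadata flushing, a couple of dd variants might help isolate it (this is only my guess at a useful test; oflag=dsync and conv=fdatasync are standard GNU dd flags):
# O_DSYNC flushes only the file data on each write, not the metadata:
dd if=/dev/zero of=test.bin bs=512 count=128 oflag=dsync
# Buffered writes with a single fdatasync at the end, as a baseline:
dd if=/dev/zero of=test.bin bs=512 count=128 conv=fdatasync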
Thanks
James
Thought this might be of interest to Linux people (security talks are
often about Windows issues) and anyone else needing to analyse strange
binaries under Linux.
Andrew
-------
Ruxmon Melbourne is on this Friday. As usual, we will adjourn to the
Oxford Scholar for a meal and a bit of a meet and greet.
Ruxmon is a free monthly community security event organised and run by
the Ruxmon team.
We are currently looking for some new Ruxmon speakers. Please email me
if you would like to speak on anything security related for 10-30
minutes in front of a small and non-intimidating audience.
For more information please visit our website: http://www.ruxmon.com
Presentations
Introductory level hooking in Linux - Ryan Platt
This talk will walk through methods of tracking execution of Linux
binaries, including an example of a simple Linux Kernel Module hooking
the system call table and usermode hooking by patching binaries on
disk.
Introducing Daisho: an open source multi-protocol network tap - Dominic Spill
Every communication technology should have a widely available network
tap, preferably operating as close to the physical layer as possible.
If we can't tap the comms, assessing their security is a much harder
task. As we try to squeeze more bandwidth out of our links the problem
gets even tougher.
Our solution to this problem is project Daisho; an open source
hardware and software project to build a device that can monitor high
speed communication links and pass all of the data back to a host
system for analysis. Daisho will include a modular, high-bandwidth
design that can be extended to monitor future technologies. The
project will also produce the first open source USB 3.0 FPGA core,
bringing high speed data transfer to any projects that build on the
open platform.
As a proof of concept at this early stage, Dominic will demonstrate
monitoring of a low bandwidth RS-232 connection using our first round
of hardware and discuss the challenges involved with the high speed
targets that we will take on later this year.
http://www.ruxmon.com/Melbourne/
Details
Date: Friday, 26th April
Time: 6:00PM
Location: RMIT University, City Campus
https://my.rmit.edu.au/portal/page/portal/RMITPortal/campusmaps?dsize=max
Room 080.09.012 (Building 80, Level 9, Room 12)
RMIT Building 80 entrance is off Swanston Street (just past Swanston
and A'Beckett St) and next door to the Oxford Scholar Hotel. Please
take the lift to Level 9 and make your way to Room 12.
I've posted the question on the bacula-users list too but everyone is asleep there I think.
My Bacula installation has suddenly slowed to a crawl, and I've tracked it down (I think) to temporary table inserts. I've switched Bacula to use attribute spooling, so it's easy to see that the backup job itself runs really fast (up to the limit of the USB2-connected backup medium), but at the end of the job, when it inserts the records into the database, it almost stops. The backup job took about 2 minutes to copy a few GB of data, but the database operation is still running a few hours later - it is making progress, just slowly.
It seems that Bacula has created a temporary table called batch and is inserting records into that, and then at the end I think it inserts the records into the main job table.
The system load is ~1.15, and mysqld is highest on the list of processes but is only using 1-2% CPU, which I assume means it is blocked somewhere on IO (?).
Disk write performance inside the VM where Bacula and mysqld run is around 90MB/second, which should be more than sufficient for inserting a few hundred thousand rows.
iostat while all this is going on shows pretty much nothing happening.
I think this has happened before but then came good before I got time to look at it properly.
Can anyone please recommend how I can figure out where it's all going wrong? I've used the general log in mysql to determine that it's only the INSERT operations running, and iostat and strace to look for anything obviously loaded, but have come up empty.
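For what it's worth, these are the next things on my list to check (the theory that InnoDB is fsyncing on every transaction is only a guess):
mysql -e "SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit'"  # 1 = fsync on every commit
mysql -e "SHOW FULL PROCESSLIST"  # confirm it's still the batch table INSERTs
vmstat 1  # watch the 'b' (blocked) and 'wa' (iowait) columns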
Thanks
James
On Sat, 27 Apr 2013, David Zuccaro <david.zuccaro(a)optusnet.com.au> wrote:
> Also I don't have a video card, I'm using an onboard graphics
> controller:
>
> 00:02.0 VGA compatible controller: Intel Corporation 82945G/GZ
> Integrated Graphics Controller (rev 02)
The Intel video chipsets have a history of being well supported for the common
functions. So while 3D etc. will usually perform badly, the basic functions
including scaling of videos will usually perform well. Every time I've
compared Intel video on a motherboard to an AGP or PCIe video card that
performs better for 3D, I've found the Intel hardware to work better for
playing movies.
> This is the motherboard:
>
> http://www.gigabyte.com/products/product-page.aspx?pid=2304#sp
>
> I put in an extra 2GB stick so now its memory capacity has been maxed
> out.
http://en.wikipedia.org/wiki/List_of_Intel_chipsets
According to the above Wikipedia page you are limited to 2G or 4G of RAM with
that motherboard as you have a 945GZ chipset.
> Using mplayer seems to give much better performance than totem.
As a general rule, if there are multiple programs that perform a task, trying
another one when the first doesn't work well enough is a good strategy.
Video players are very complex and there are lots of different options; you
really don't want to know about all the work that goes into optimising them.
But they're complex enough that there are plenty of ways for a player to end
up poorly optimised for a particular system.
> > When iceweasel performs poorly how much swap is in use and is the hard
> > drive light on a lot?
>
> No. This is not a swap issue. No swap is in use, just complex (buggy,
> resource intensive, badly put together) web pages. I think I will make do
> with what I have at this stage. I will definitely need a bigger HD and
> as you say swapping to a new HD is going to be a PITA but it needs to be
> done. top also gives memory usage.
Last time I tested, Chromium seemed to handle slow JavaScript better.
But Chromium does use more RAM. As you've upgraded your RAM you should try
Chromium and see how it goes on the slow sites. Even if you don't like
Chromium it's still good to know where the problems are.
> In summary I think I will do some upgrading before I buy a new system;
> and look into getting a graphics card. Thanks everyone for your
> comments.
http://www.graysonline.com/lot/0001-611545/computers-and-it-equipment/dell-optiplex-780-small-form
As an aside, the fastest CPU supported by your motherboard is the type of CPU
in systems that some people put out as e-waste. Buying a faster system at
auction isn't going to cost much. The above auction has a system with a CPU
that's considerably faster than the fastest that is supported on your current
motherboard and the current price is $100 with less than a day to go.
--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/
I have a printer that's being strange. Since the printer itself runs
IPP, I'm trying to send it jobs directly with
lp/lpstat/lpoptions/cancel &c, without talking to cupsd.
I get some responses, but nothing very helpful. Are the cups client
utilities supposed to be able to talk generic IPP, or do they assume
the far end is cups?
Since IPP is based on HTTP, I also tried just using netcat, but
RFC 2910 §13.1 seems to be saying that it has a bunch of byte-encoded
stuff, rather than just VERB LOCATION PROTO/VERSION.
Google and IRC haven't helped, so my next step I guess is to actually
read the RFCs instead of just skimming them.
$ ssh root@printserver grep DeviceURI /etc/cups/printers.conf
DeviceURI ipp://mfd
[....]
$ ping -c1 mfd
PING mfd.cyber.com.au (203.7.155.91) 56(84) bytes of data.
64 bytes from MFD.cyber.com.au (203.7.155.91): icmp_seq=1 ttl=254 time=0.723 ms
--- mfd.cyber.com.au ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.723/0.723/0.723/0.000 ms
Putting the server in ~/.cups/client.conf saves adding -h every time.
Also I found that lpstat -t -h foo will STILL talk to the default
printserver (not foo) for some of its queries -- WTF?
$ cat ~/.cups/client.conf
ServerName mfd
$ lpstat -t
scheduler is running
no system default destination
lpstat: server-error-operation-not-supported
lpstat: server-error-operation-not-supported
lpstat: server-error-operation-not-supported
lpstat: server-error-operation-not-supported
$ date | lp -d canthappen
request id is canthappen-498 (0 file(s))
$ date | lp -d foo
request id is foo-500 (0 file(s))
$ date | lp -d ''
request id is -502 (0 file(s))
...printer still hasn't printed anything.
$ lpstat -t
scheduler is running
no system default destination
lpstat: server-error-operation-not-supported
lpstat: server-error-operation-not-supported
lpstat: server-error-operation-not-supported
lpstat: server-error-operation-not-supported
MFD speaks PS (I think), so try that as well as text/plain.
Doesn't help.
$ date | a2ps -o- | lp -d '' # the mfd speaks PS (I think), so try that.
[stdin (plain): 1 page on 1 sheet]
[Total: 1 page on 1 sheet] sent to the standard output
request id is -504 (0 file(s))
$ nc mfd ipp
GET /
HTTP/1.1 400 Bad Request
Server: gSOAP/2.7
Content-Length: 0
Connection: close
$ nc mfd ipp
Print-Job 1 IPP/1.1
HTTP/1.1 400 Bad Request
Server: gSOAP/2.7
Content-Length: 0
Connection: close
$
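(Later: rereading RFC 2910, the request has to be an HTTP POST with a
binary application/ipp body, not a bare verb, which presumably explains
the 400s above. CUPS 1.5 and later apparently ship ipptool(1), which
speaks raw IPP and comes with canned test files, so something like the
following might work; I haven't tried it against the MFD yet:)
$ ipptool -tv ipp://mfd get-printer-attributes.test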
Recently the micro SD card in my Samsung Galaxy S3 stopped working. As I
wasn't sure whether the card or the phone was broken I tried connecting it to
my Thinkpad T61 running Debian/Unstable. It gave the following log entries
when I inserted the card (in a holder to make it fit the full size SD socket in
the Thinkpad). Is this proof that the micro SD card is broken?
I will of course try other cards in the phone (which needs more storage) and
try the card in another phone (it probably has data I want to keep). But I
lack enough hardware to be sure that there isn't some strange partial
compatibility issue.
Apr 25 20:03:30 linux kernel: [ 599.446191] mmc0: error -110 whilst initialising SD card
Apr 25 20:03:30 linux kernel: [ 599.449766] sdhci-pci 0000:15:00.2: Will use DMA mode even though HW doesn't fully claim to support it.
Apr 25 20:03:30 linux kernel: [ 599.514728] sdhci-pci 0000:15:00.2: Will use DMA mode even though HW doesn't fully claim to support it.
Apr 25 20:03:30 linux kernel: [ 599.584550] sdhci-pci 0000:15:00.2: Will use DMA mode even though HW doesn't fully claim to support it.
Apr 25 20:03:30 linux kernel: [ 599.659497] sdhci-pci 0000:15:00.2: Will use DMA mode even though HW doesn't fully claim to support it.
As an aside, where can I get a cheap micro SD card?
--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/
On Mon, 22 Apr 2013, Craig Sanders <cas(a)taz.net.au> wrote:
> if your m/b takes DDR2 RAM then it's still possible that upgrading RAM
> alone may be enough, but it's a bigger gamble....if it doesn't work,
> you'll just be throwing the money away unless you have a use for a 2nd
> computer after you buy a new one. DDR2 is pretty much obsolete, and
> more expensive than newer DDR3 - 4GB of DDR2 costs $56.
Of course if they are going to keep using the old system then upgrading the
RAM won't be such a waste. Having a second PC is a really good idea for
anyone who's serious about a computer hobby and it could also be given to a
friend or relative.
> Alternatively, if you do go for a new system with an i7 CPU, you may
> find that the built-in Intel GPU is good enough - they're low-end if
> you're a gamer, but more than adequate for video playing, and they have
> good open source drivers.
My experience of Intel video cards is that they are quite good for playing
video with the free Linux drivers. So far the only time I've found a built-in
Intel video controller to be inadequate for my use (which is occasional video
playing and other tasks that aren't particularly challenging - I'm not a
serious gamer) is when I got a monitor with a resolution higher than FullHD.
> The FX-8350 @ $209 is $100 cheaper than the i7-3770, and a decent AM3
> motherboard is also about $50 to $100 cheaper (e.g. Sabertooth Z77 for
> i7 @ $244 vs Sabertooth 990FX for AMD CPUs at $197)
http://www.graysonline.com
When I'm buying new hardware I visit Grays Online. You can get entire systems
for less than the cost of the CPUs you list. Sure they won't be as fast, but
I'd rather buy a new system every year or so than wait years for a big
upgrade.
> as Russell suggested, a RAID-1 array is better/safer. but if you don't
> need the extra terabyte of storage, 2x2TB drives is a lot cheaper than
> the 2x3TB drives he suggested.
http://www.tecs.com.au/shop/storage/desktop-sata-hard-drives.html?limit=30
At the above there's a Seagate 2TB disk for $119 and a Seagate 3TB disk for
$139. Not a lot cheaper, only $20 for the extra TB.
There's a WD Green 2TB disk for $99 and a WD Green 3TB disk for $159, so if
you want the WD Green series then there's a significant price difference. But
generally you won't want WD Green for a RAID array due to the head parking
issues.
--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/
On Mon, 22 Apr 2013, "Trent W. Buck" <trentbuck(a)gmail.com> wrote:
> I assume you'd have a non-negligible amount of zlib CPU churn when
> paging out to zram -- is that noticeable? I suppose it is, but only
> when you have pegged BOTH CPU and RAM.
Using zram is a trade-off of CPU performance vs disk IO performance. As CPU
performance has been increasing at an exponential rate for the last 20 years
while disk performance has been increasing at a slow linear rate over that
period, such a trade-off becomes increasingly beneficial.
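For anyone who wants to experiment, setting up a zram swap device looks
something like the following (the sysfs interface is from the staging driver,
so the exact knobs may vary between kernel versions):
modprobe zram num_devices=1
echo 512M > /sys/block/zram0/disksize
mkswap /dev/zram0
swapon -p 100 /dev/zram0  # higher priority than any disk-backed swap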
Pegging both CPU and RAM is something that's difficult to do given that running
out of RAM results in processes being blocked on disk IO (regular file IO
that's not in cache or swap).
A quick test on an Intel T7500 CPU (which is far from the fastest CPU
available today and wasn't even the fastest laptop CPU when I bought it) shows
that gzip -9 on a 10MB file takes half a second of CPU time. While this is
quite a bit of time (more than it would take to do a direct contiguous write
of the same data), once you count the random seeks for small accesses it's
going to average a much higher rate than HDD access. Rates lower than 3MB/s
are often seen on modern disks in real-world use due to seek overhead.
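To repeat the test on your own CPU, something like the following should do,
keeping in mind that the result depends heavily on how compressible the data
is (/dev/zero is the best case, /dev/urandom close to the worst):
dd if=/dev/zero of=test10m bs=1M count=10
time gzip -9 -c test10m > /dev/null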
Whether zram is better than SSD is another issue. With the Intel SSD I've
tried, the contiguous read and write speeds are quite a bit lower than those
of hard disks, and while the overall performance is a significant improvement
over hard disks it's still not nearly as good as I'd hoped for. If SSD was as
good as some people claim then zram might not offer much benefit. But with the
120G Intel SSDs I've tried there is plenty of scope for zram to boost
performance.
--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/