Is it possible to find out when errors occurred on a RAID-Z other than just
monitoring the output of "zpool status" regularly and looking for changes?
I have a RAID-Z that I just discovered has between 3 and 7 checksum errors on
each of 7 disks. I want to know why disks that had worked without errors on
ZFS since 6TB was a big disk have got such errors in the past couple of weeks.
If I knew the date and time of the errors it might give me a clue. The system
in question has 9*6TB and 9*10TB disks in 2 RAID-Z arrays. None of the 10TB
disks had a problem while 7/9 of the 6TB disks reported errors. The 6TB disks
are a recent addition to the pool and the 9*10TB RAID-Z was almost full before
I added them, so maybe the checksum errors are related to which disks had the
most data written.
If I knew which day the errors happened on I might be able to guess at the
cause. But ZFS doesn't seem to put anything in the kernel log.
Any suggestions about what I can do?
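Failing a built-in answer, the "monitor zpool status regularly" approach can at least be automated so that a change gets a timestamp. This is only a sketch of that idea: the parsing assumes the usual `zpool status` device-line layout (name, state, READ, WRITE, CKSUM columns), and the surrounding cron/state-file plumbing is left out.

```python
#!/usr/bin/env python3
"""Sketch: poll `zpool status`, compare against the previous snapshot,
and record a timestamp when any error counter changes."""
import re

def error_counts(status_text):
    """Parse per-device READ/WRITE/CKSUM counters from `zpool status` output."""
    counts = {}
    for line in status_text.splitlines():
        # device lines look like: "  sda  ONLINE  0  0  3"
        m = re.match(r"\s+(\S+)\s+\S+\s+(\d+)\s+(\d+)\s+(\d+)\s*$", line)
        if m:
            counts[m.group(1)] = tuple(int(x) for x in m.group(2, 3, 4))
    return counts

def changed_devices(old_text, new_text):
    """Return devices whose counters differ between two status snapshots."""
    old, new = error_counts(old_text), error_counts(new_text)
    return [dev for dev in new if old.get(dev) != new[dev]]
```

Run from cron every few minutes, keeping the previous output in a state file and logging the current time whenever `changed_devices()` returns a non-empty list. That narrows an error down to one polling interval rather than "some time in the past couple of weeks".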
--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/
Hi there,
I have tried both a Netgear A6210 and a TP-LINK Archer T2UH using a few
different sources from GitHub. All of them fail at the *make* step.
This is a common error for some of the sources: ‘IEEE80211_NUM_BANDS’
undeclared here (not in a function)
I am clutching at straws here, but is there a library that I am missing
that is needed?
I am on CentOS 7 / 3.10.0-514.16.1.el7.x86_64 #1 SMP Wed Apr 12 15:04:24
UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
Thanks
Piers
#1
[root@qld Netgear-A6210]# make
export DBGFLAGS
*** Building driver with debug messages ***
cp -f os/linux/Makefile.6 /home/webgen/Downloads/Netgear-A6210/os/linux/Makefile
make -C /lib/modules/3.10.0-514.16.1.el7.x86_64/build DBGFLAGS=-DDBG SUBDIRS=/home/webgen/Downloads/Netgear-A6210/os/linux modules
make[1]: Entering directory `/usr/src/kernels/3.10.0-514.16.1.el7.x86_64'
  CC [M]  /home/webgen/Downloads/Netgear-A6210/os/linux/../../sta/assoc.o
In file included from /home/webgen/Downloads/Netgear-A6210/include/os/rt_linux.h:77:0,
                 from /home/webgen/Downloads/Netgear-A6210/include/rtmp_os.h:30,
                 from /home/webgen/Downloads/Netgear-A6210/include/rtmp_comm.h:64,
                 from /home/webgen/Downloads/Netgear-A6210/include/rt_config.h:34,
                 from /home/webgen/Downloads/Netgear-A6210/os/linux/../../sta/assoc.c:28:
/home/webgen/Downloads/Netgear-A6210/include/cfg80211.h:45:49: error: ‘IEEE80211_NUM_BANDS’ undeclared here (not in a function)
 struct ieee80211_supported_band Cfg80211_bands[IEEE80211_NUM_BANDS];
                                                ^
make[2]: *** [/home/webgen/Downloads/Netgear-A6210/os/linux/../../sta/assoc.o] Error 1
make[1]: *** [_module_/home/webgen/Downloads/Netgear-A6210/os/linux] Error 2
make[1]: Leaving directory `/usr/src/kernels/3.10.0-514.16.1.el7.x86_64'
make: *** [debug] Error 2
[root@qld Netgear-A6210]#
#2
[root@qld rtl8812AU-driver-4.3.20]# make
make ARCH=x86_64 CROSS_COMPILE= -C /lib/modules/3.10.0-514.16.1.el7.x86_64/build M=/home/webgen/Downloads/rtl8812AU-driver-4.3.20 modules
make[1]: Entering directory `/usr/src/kernels/3.10.0-514.16.1.el7.x86_64'
  CC [M]  /home/webgen/Downloads/rtl8812AU-driver-4.3.20/os_dep/linux/ioctl_cfg80211.o
/home/webgen/Downloads/rtl8812AU-driver-4.3.20/os_dep/linux/ioctl_cfg80211.c:92:12: error: ‘IEEE80211_BAND_2GHZ’ undeclared here (not in a function)
  .band = IEEE80211_BAND_2GHZ, \
          ^
/home/webgen/Downloads/rtl8812AU-driver-4.3.20/os_dep/linux/ioctl_cfg80211.c:150:2: note: in expansion of macro ‘CHAN2G’
  CHAN2G(1, 2412, 0),
  ^
/home/webgen/Downloads/rtl8812AU-driver-4.3.20/os_dep/linux/ioctl_cfg80211.c:101:12: error: ‘IEEE80211_BAND_5GHZ’ undeclared here (not in a function)
  .band = IEEE80211_BAND_5GHZ, \
          ^
I am using sendxmpp to send notifications of system errors. That requires the
XMPP client to keep running. Xabber on Android sometimes stops for no apparent
reason (I have it configured to always show a notification, so it should never
stop).
Is there a command-line program to check presence on XMPP so that I could have
it notify me by some other method if it detects that I haven't been on for a
while?
There are plenty of text mode XMPP clients that use curses. But I want a
script to just check if an account has logged in recently.
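The alerting half of that is simple enough to sketch. The last-seen timestamp would have to come from an XMPP library or a server-side presence log (the log format below is made up for illustration); the decision logic is then:

```python
#!/usr/bin/env python3
"""Sketch: decide when an XMPP account has been offline too long.
Assumes presence events are logged one per line with an ISO timestamp."""
from datetime import datetime, timedelta

def last_seen(presence_log):
    """Return the newest 'online' timestamp from lines like
    '2017-05-25T14:59:47 presence jid@example.org online'."""
    times = [datetime.strptime(line.split()[0], "%Y-%m-%dT%H:%M:%S")
             for line in presence_log.splitlines()
             if " online" in line]
    return max(times) if times else None

def needs_alert(seen, now, max_absence=timedelta(hours=1)):
    """True if the account has not been online within max_absence."""
    return seen is None or now - seen > max_absence
```

When `needs_alert()` fires, the notification would have to go out by some non-XMPP channel (email, SMS), since the whole point is that the XMPP client may be dead.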
--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/
Some time back I spent a great deal of time looking at the progress of a
major engineering project and took something like 400 shots showing most
of what was done, and I have been requested to put them up on the
internet. Just for local consumption, i.e. a personal showing of the
project, I have set up a series of slide shows using S5. All one does is
point a browser at the directory and it comes up with the title page, and
one can step through it at will with the space bar or the up/down arrow
keys.
To keep it manageable I have broken up the project into 2-month intervals,
each 2-month set containing 35 to 40 photographs with suitable
descriptions.
Is this reasonable for a web site on such a project? So far only a
handful of people have seen these slide shows, all with quite
glowing comments.
Note: I have no experience with web page development, and in fact it does
not bother me if no one ever sees them, but the few who have seen them
say it would be a waste for them to disappear.
Ray
On Thursday, 25 May 2017 2:59:47 PM AEST Erik Christiansen via luv-main wrote:
> > As long as they're committing their changes regularly to the version
> > control system,
>
> I had the developers working on individual branches, then performed all
> main branch commits myself, after checking that there were no CRLFs or
> other nonsense, and that it compiled. Adding a useful version tag and
> management-relevant commit message is also best done with an overview.
One large project that I once worked on had an automated build checking system
that would email the developer and their team leader about any potential build
issues with code they checked in. This happened regularly due to conflicting
changes and some limitations of the language and development environment (an
in-house language with some deficiencies, though I can't remember the details
clearly). The standard practice was to work on such issues as soon as you
arrived at work and expect the team leader to ask for a progress report just
before lunch. It wasn't ideal in some ways, but it worked reasonably well.
> > built-in CI tools (e.g. you can configure it to automatically try to
> > compile the software on every commit and report the outcome to the
> > developer), and more.
>
> Maybe things are different in the embedded world, but I can't remember a
> developer submitting code which didn't build - that would be a
> professional embarrassment never lived down. And the makefile could
> readily support a "make commit", to automate a pre-commit build. I have
> to admit to relying on healthy paranoia to ensure I checked that it
> built, before the commit.
That depends on the scope of the project. If you have a project where a full
build takes a few hours then developers tend to just compile the bits that
they are working on, which has some potential for conflicting commits. If a
project depends on external libraries and the developers don't all have the
same versions of those libraries, you can have commits that don't compile for
others.
--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/
On Thu, May 25, 2017 at 02:59:47PM +1000, luv-main(a)luv.asn.au wrote:
> > and following the required coding style,
>
> As for "required", I let the team drive the coding style requirement, as
> my only needs were consistency and readability. Since the team had set
> the style standard (mostly pilfered clauses), it was self-enforcing. Any
> cowboy was soon lassoed by annoyed colleagues.
by style requirements, i was mostly referring to formatting like spaces,
tabs, end-of-line markers etc that you mentioned. also things like where
braces belong - stupidly wasting a whole screen line on a brace or
putting it on the end of the line starting the block (e.g. subroutine
definition) where it belongs :)
and sometimes other stuff like self-documenting code with common
templates for functions etc (input, output, notes including algorithm
summary, known bugs and limitations, etc)
> > built-in CI tools (e.g. you can configure it to automatically try to
> > compile the software on every commit and report the outcome to the
> > developer), and more.
>
> Maybe things are different in the embedded world, but I can't remember
> a developer submitting code which didn't build - that would be a
> professional embarrassment never lived down. And the makefile could
> readily support a "make commit", to automate a pre-commit build. I
> have to admit to relying on healthy paranoia to ensure I checked that
> it built, before the commit.
i've seen lots of developers write and submit code that compiles
perfectly on their machine in their heavily customised (idiosyncratic
mess) environment but fails to build anywhere else. automated tools
that build and run a test suite on every checkin are amazingly useful
for catching such problems early.
craig
--
craig sanders <cas(a)taz.net.au>
Hi all,
We run CentOS on our servers and our dev machines are Linux or Windows
(and probably a Mac somewhere but we don't like to talk about that guy!)
We have grown quite a bit, and having each dev running their pet dev
environment seems eclectic and difficult to manage (i.e. hard to manage
when you need to help a colleague and it takes you 5 minutes to work out
how their IDE / screen is set up).
I use the Eclipse for PHP IDE, and one of the functions I like about it
is that /** @todo ... **/ creates a task, and this task can be managed on some
central service providers.
I am seeking advice about cross-platform IDEs that work well with
shared tasks. Ideally something that can import messages from Gmail as
tasks would be great (I see another plugin that does this in Eclipse).
It doesn't have to be Eclipse, but the main aim is to manage individual
coding tasks better. If we can get some metrics from a management point
of view that would be great. If a platform can support multiple IDEs
efficiently then great.
Currently we use FreshDesk as our KB platform.
Any ideas appreciated (as are rants/feedback)
Have a great weekend.
Thanks
Piers
Earlier today I changed the IPv6 address used for the LUV server. I didn't
keep the old address working because almost no-one uses IPv6 and IPv6 clients
can generally fall back to IPv4 if necessary, so the time that the old address
remains in DNS caches shouldn't be a problem. Hetzner had decided to remove
the address range they had assigned to that server and I had to apply for a
new /64.
On Friday afternoon the hardware hosting the LUV VM crashed for unknown
reasons. A couple of hours later it booted again of its own accord and there
was nothing in the logs. So that I can fix such things faster in future I have
set up 2 monitoring systems with Jabber that monitor each other. Now I just
need to monitor the Jabber client on my phone (see my previous message).
Some time recently an update to Dovecot caused it to stop working
correctly. I didn't notice, as I don't have an IMAP account on the LUV server.
To prevent that sort of thing happening again I have written a monitoring
script for IMAP that will alert if there is no new mail for more than 10
minutes in a test account. I believe that some mail to the LUV president was
delayed because of this, but it won't happen again.
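A monitor of that kind can be sketched with Python's standard imaplib. This is not the script actually running on the LUV server, just one plausible shape for it; note that IMAP's SEARCH SINCE criterion only has day granularity, so a real 10-minute check would need to FETCH the INTERNALDATE of the newest message.

```python
#!/usr/bin/env python3
"""Sketch: alert when a test IMAP account has received no new mail
for too long.  Connection details are left to the caller."""
import imaplib
from datetime import datetime, timedelta

def no_mail_since(conn, cutoff):
    """True if no message in INBOX arrived on or after cutoff's date.
    conn is a logged-in imaplib.IMAP4/IMAP4_SSL connection."""
    conn.select("INBOX", readonly=True)
    typ, data = conn.search(None, "SINCE", cutoff.strftime("%d-%b-%Y"))
    return typ == "OK" and not data[0].split()

def is_stale(latest_arrival, now, max_age=timedelta(minutes=10)):
    """Pure decision: alert if the newest message is older than max_age."""
    return latest_arrival is None or now - latest_arrival > max_age
```

Paired with a cron job that sends a test message to the account every few minutes, `is_stale()` going true means mail delivery or Dovecot has broken.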
--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/
The traditional way of limiting memory use by users is to set a per-process
limit on address space ("as") in /etc/security/limits.conf and also limit the
number of processes. The total amount of memory that may be used is then the
address-space limit multiplied by the number of processes. This isn't very
good. For example, if a user is compiling C++ software the g++ program can
take quite a lot of RAM, and there will also often be plenty of shell scripts
etc running. The RAM requirement of a g++ process multiplied by the number
of processes reasonably needed for a couple of login sessions and all the
scripts for building may be more than you want to allocate to them. As an
aside, I'm thinking of how I killed some Unix servers while doing assignments
for Unix Systems Programming when I was at university. :-#
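As a concrete illustration of the multiplication problem (user name and numbers here are hypothetical, and "as" values are in KB):

```
# /etc/security/limits.conf fragment
# 1 GB of address space per process, up to 100 processes:
# together these still permit roughly 100 GB in aggregate.
student  hard  as     1048576
student  hard  nproc  100
```

Tightening "as" enough to bound the aggregate makes single large processes like g++ fail, which is exactly the dilemma described above.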
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=860410
An additional problem is that there's a bug in recent versions of sshd in that
limits.conf applies to the sshd process.
https://manpages.debian.org/testing/systemd/logind.conf.5.en.html
When you use systemd, systemd-logind creates a new cgroup named
user-$UID.slice.
# cat \
/sys/fs/cgroup/memory/user.slice/user-506.slice/memory.max_usage_in_bytes
99999744
http://tinyurl.com/mhjb8ct
I've set max_usage_in_bytes to 100M (see the above Red Hat URL for an
explanation of this). But it doesn't seem to work: I've written a test
program that allocates memory and writes to it via memset(), and it gets to
the ulimit setting without being stopped by the cgroup limit.
Any suggestion on how to get this going properly?
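For reference, the test program described above can be sketched like this (the original is presumably C with malloc()/memset(); this Python version dirties a byte per page to get the same effect of actually committing the memory, and the chunk size is arbitrary):

```python
#!/usr/bin/env python3
"""Sketch: allocate and touch memory in steps until something
(ulimit, cgroup limit, or the OOM killer) stops the process."""

CHUNK = 10 * 1024 * 1024  # 10 MB per step
PAGE = 4096

def touch_memory(total_bytes, chunk=CHUNK):
    """Allocate and dirty memory chunk by chunk; return bytes touched."""
    held = []
    touched = 0
    while touched < total_bytes:
        buf = bytearray(chunk)      # raises MemoryError at a hard limit
        for i in range(0, chunk, PAGE):
            buf[i] = 1              # write each page, like memset() in C
        held.append(buf)
        touched += chunk
    return touched
```

Running `touch_memory(200 * 1024 * 1024)` under the 100M cgroup setting would be expected to be stopped around 100 MB if the limit were being enforced; surviving to the ulimit instead reproduces the problem described above.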
The next problem of course will be having systemd-logind set the limit when it
creates the cgroup. Any suggestions on that will be appreciated.
--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/
Hello All,
I am wanting to cut the file size of photos from my phone. I have
tried opening in GIMP, but takes a bit of mousing and clicking around,
and even saving/exporting several times to get the size down. I think
the imagemagik suite should be able to do, but my reading of the man
pages does not make it apparent to me. They talk of resizing, but it
looks like the linear extent, rather than loosing some detail of the
same extent of image. I would appreciate any contributions.
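ImageMagick's convert(1) can do both kinds of shrinking: `-quality` re-encodes the JPEG with more loss at the same pixel dimensions, while `-resize` changes the linear extent. A small wrapper sketch (file names and the quality value are just examples):

```python
#!/usr/bin/env python3
"""Sketch: shrink photos by shelling out to ImageMagick's convert."""
import subprocess

def convert_args(src, dst, quality=75, scale=None):
    """Build a convert(1) command line: lower JPEG quality keeps the
    same extent but loses detail; scale like "50%" also shrinks pixels."""
    args = ["convert", src, "-quality", str(quality)]
    if scale:
        args += ["-resize", scale]
    return args + [dst]

def shrink(src, dst, **kw):
    """Run the conversion (requires ImageMagick installed)."""
    subprocess.run(convert_args(src, dst, **kw), check=True)
```

For a whole directory of phone photos, looping `shrink()` over the files (or using ImageMagick's mogrify, which edits in place) avoids the repeated GIMP export cycle.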
regards,
Mark Trickett