# free -m
              total        used        free      shared  buff/cache   available
Mem:           7962        2212         498         533        5251        4942
Swap:         10719        1732        8986
The above is from my workstation. It's running KDE, Chrome, KTorrent, and not
much else. My understanding of the above is that most RAM is being used for
cache and it's quite likely that this achieves the goal of reducing the number
of storage accesses.
The problem is that I don't want to reduce the number of storage accesses, I
want to improve the performance of interactive tasks. KTorrent is configured
to only upload 60KB/s so a lack of caching of the torrents shouldn't prevent
it from uploading at the maximum speed I permit. When large interactive
programs like Chrome and Kmail get paged out it causes annoying delays when I
want to perform what should be quick tasks like replying to a single message
or viewing a single web page.
Any suggestions as to how to optimise for this use case? I already have swap
on one of the fastest SSDs I own and don't feel like buying NVMe for this
purpose or buying a system with more RAM, so software changes are required.
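For reference, the sysctl knobs I know of for this are vm.swappiness and
vm.vfs_cache_pressure (defaults 60 and 100); the values below are only a
starting point for experimentation, not a recommendation:
# sysctl vm.swappiness=10
# sysctl vm.vfs_cache_pressure=200
Putting the same settings in a file under /etc/sysctl.d/ makes them persist
across reboots.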
When replying please feel free to diverge from the topic. I think this is an
area where most Linux users know less than they would like so randomly
educational replies will be appreciated even if they don't help me with this
problem.
--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/
https://en.wikipedia.org/wiki/DDR4_SDRAM
At the LUV meeting today someone asked me how servers can have so much RAM.
Firstly, the above page is worth reading. DDR3 (which is in most desktop PCs
in use now) can have a maximum of 16G per DIMM; some PCs only have 2 slots, 4
is very common, and 6 is reasonably common for high-end systems (I own a
couple of those). 6*16G=96G is the theoretical maximum in a high-end DDR3
desktop system and 6*64G=384G is the theoretical maximum of a DDR4 desktop,
though of course the BIOS or chipset might not support that much.
https://en.wikipedia.org/wiki/List_of_Intel_chipsets
The above page lists Intel chipsets and gives the maximum RAM supported. A
lot of the Core 2 family were limited to 4G of address space. NB that's NOT
4G of RAM - it's 4G including address space for video cards etc, so 3.25G was
a common amount of usable RAM on such systems. The Wikipedia page doesn't give
information on the RAM limits of the i3/i5/i7 systems.
The DDR4 Wikipedia page says that one of the benefits of DDR4 is a maximum
module size of 64G. A quick check of my favorite PC parts store showed that
they don't sell DDR3 larger than 8G modules or DDR4 larger than 16G (presumably
I could get larger via mail order). If I got a motherboard that took 6*DDR4
DIMMs then I could have 96G of RAM, but that would cost me 6*$295=$1770 so I'm
not about to do it.
http://www.dell.com/en-au/work/shop/povw/poweredge-t640
The Dell page for the PowerEdge T640 says that it has 24*DDR4 slots for up to
3TB of RAM with a caveat that the 128G modules aren't available yet (as an
aside it seems like the DDR4 Wikipedia page needs an update in this regard).
https://en.wikipedia.org/wiki/Fully_Buffered_DIMM
There are significant engineering issues related to supporting large numbers
of DIMM sockets. The above Wikipedia page is a good place to start if you
want to learn about how server RAM is different from (and incompatible with)
desktop RAM because of such issues.
https://en.wikipedia.org/wiki/ECC_memory
https://en.wikipedia.org/wiki/Hamming_code
Another difference with server RAM is the use of Error Correcting Codes (see
the above Wikipedia pages). Server RAM has extra bits in each word to store
codes that allow single bit errors to be detected and corrected and double bit
errors to be detected. It's a really good feature to have if you don't want
your data to be corrupted. It's also supported in high-end workstation
systems and some systems have support for both ECC and non-ECC RAM. ECC RAM
is usually "buffered" and won't work in systems that take "unbuffered" RAM (IE
all the desktop systems that don't use ECC RAM). There is unbuffered ECC RAM
for low-end server systems. Apart from buffered vs unbuffered there's
apparently no reason why ECC RAM shouldn't work in non-ECC systems, although
when I tried this back in the P4 days (before buffered RAM was invented) it
didn't work.
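If you want to check whether ECC is actually in use on a system, something
like the following usually tells you (needs root, and the exact wording of
the output varies by BIOS/board):
# dmidecode --type memory | grep -i 'error correction'
On systems where the EDAC kernel drivers are loaded, 'edac-util -v' (from the
edac-utils package) will also report corrected and uncorrected error counts.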
One of the more dedicated members of this list got a free server system from
LUV and uses it as his personal workstation. It has something like 96G of RAM
but makes more noise than most people want in the same building they are in.
There's no reason why you couldn't design a system with 24 DIMM sockets that
doesn't sound like an aircraft taking off, but most people who want so much
RAM have a soundproofed server room.
--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/
On Thu, May 24, 2018 at 03:38:15PM +1000, Paul van den Bergen wrote:
> I currently take the approach that unless I have specific IO needs for a
> volume, I will work with one partition for OS and data as it is the most
> efficient use of disk space.
This is true for standard partitioning, or LVM logical volumes. It's not
true for either btrfs or zfs. Disk usage efficiency for them is completely
unaffected by using multiple sub-volumes/datasets.
With partitions or LVs you have to decide how big you want them at the time
you create them. Changing their size is a moderately complicated task - not
terribly difficult once you know how to do it, but it does require care and
attention to detail to ensure you don't screw it up. And depending on the
filesystem the partition or LV is formatted with, you may be restricted to
only growing the partition, never shrinking it (which makes, e.g., shrinking
/home to grow / even more of a PITA).
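For example, growing an LV and the ext4 filesystem on it looks something like
this (the VG/LV names are made up):
# lvextend -L +10G /dev/vg0/home
# resize2fs /dev/vg0/home
Shrinking is the risky direction: the filesystem has to be shrunk first (for
ext4, unmounted), and only then the LV, and getting the order or the sizes
wrong can destroy the filesystem.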
With sub-volumes on btrfs and datasets on zfs, they just share space on the
entire pool. Unless you set entirely optional quotas or reservations, you will
never have to resize anything. And if you do set a quota or reservation,
it's trivially easy and risk-free to change them at any time...they're "soft"
limits, not hard.
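e.g. (dataset names here are just examples):
# zfs set quota=100G tank/home
# zfs set reservation=10G tank/var/log
and you can change or remove them at any time with another 'zfs set'
(quota=none, reservation=none).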
I used to do one big partition for everything - same as you, for the same
reason. Now I use zfs datasets so that I can enable different attributes
(like compression type, acl types, quotas, recordsize, etc) for specific
needs - e.g. mysql and postgres perform better if their files are stored on a
dataset where the recordsize is 8K rather than the ZFS default of 128K. And
systemd's journald complains if it can't use posix acls, so I'm getting into
the habit of setting 'acltype=posixacl' and 'xattr=sa' on /var/log for my zfs
machines. And using gzip rather than lz4 for /var/log too. Videos, music,
and deb files are already compressed so their datasets have 'compression=off'.
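A rough sketch of that kind of setup (pool and dataset names are
hypothetical):
# zfs create -o recordsize=8K tank/var/lib/mysql
# zfs set acltype=posixacl tank/var/log
# zfs set xattr=sa tank/var/log
# zfs set compression=gzip tank/var/log
# zfs set compression=off tank/media
All of these can be changed later, though like compression most of them only
affect newly-written data.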
Having /home and /var and other directories separated from / is useful - but
in the old days of fixed partition sizes, it just wasn't worth the hassle
or the risk of running out of space on one partition while there's plenty
available on other partitions. Now it's no hassle or risk at all.
> synology takes the first slice (~2-3GB) of every disk in the device and
> makes a RAID 1 volume for the operating system, then does the same with the
> second slice to make a swap partition. You can lose all but one disk and
> still have a bootable working machine. the rest of the disk is available
> to make volumes out of.
yep, this is a good idea. it's similar to what I do on all my machines.
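A rough mdadm sketch of that kind of layout, with device names as
placeholders:
# mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sd[abcd]1
# mdadm --create /dev/md1 --level=1 --raid-devices=4 /dev/sd[abcd]2
i.e. md0 (mirrored across every disk) for the OS, md1 for swap, and the
remaining space on each disk left free for data.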
craig
--
craig sanders <cas(a)taz.net.au>
After reading the discussion about printer "drivers" (I use quotes due to the
different definitions of the term - obviously we aren't talking about kernel
drivers here) I've been thinking about how to manage such things. I've used
proprietary printer drivers in the past myself. Printers are things that
sometimes just get added to a network without the sysadmin being consulted
("it works for all the Windows systems"), and then we are stuck with making
them work for so many years that the manufacturer stops providing software
updates and the driver needs shared objects that aren't even supported any
more.
It seems to me that Docker and similar technologies are a good solution to
this. They can encapsulate the shared objects needed (a driver from a badly
made .deb or from a .tar.gz won't stop "apt autoremove" from removing things
it needs), and deal with architectural issues (I probably don't really need
the full overhead of multi-arch just to have an i386 printer driver running on
an AMD64 system). Docker etc all have security features that cups lacks which
are needed to prevent the (presumably badly written) printer driver from
having exploitable security flaws or from just using all memory or disk space.
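As a very rough sketch of what I mean (the image, paths and filter name here
are hypothetical, and proper CUPS integration would need more thought than
this):
# docker run --rm --network none --memory 256m \
    -v /var/spool/cups-driver:/spool i386/debian:stretch \
    /opt/epson/filter /spool/job.pdf
The point is that the i386 userland, the obsolete shared objects, and the
resource limits all live inside the container instead of on the host.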
--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/
On Wednesday, 23 May 2018 1:10:08 PM AEST Craig Sanders via luv-main wrote:
> far too much RAM to be worth doing. It's a great way to minimise use of
> cheap disks ($60 per TB or less) by using lots of very expensive RAM ($15
> per GB or more).
>
> A very rough rule of thumb is that de-duplication uses around 1GB of RAM per
> TB of storage. Definitely not worth it. About the only good use case I've
> seen for de-duping is a server with hundreds of GBs of RAM providing
> storage for lots of mostly-duplicate clone VMs, like at an ISP or other
> hosting provider. It's only worthwhile there because of the performance
> improvement that comes from NOT having multiple copies of the same
> data-blocks (taking more space in the ARC & L2ARC caches, and causing more
> seek time delays if using spinning rust rather than SSDs). Even then, it's
> debatable whether just adding more disk would be better.
http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-113-si…
Some Google results suggest it's up to 5G of RAM per TB of storage, the above
URL seems to suggest 2.4G/TB. At your prices 2.4G of RAM costs $36 so if it
could save you 600G of disk space (IE 1.6TB of regular storage deduped to 1TB
of disk space which means 38% of blocks being duplicates) it would save money
in theory. In practice it's probably more about which resource you run out of
and which you can easily increase. Buying bigger disks generally seems to be
easier than buying more RAM due to limited number of DIMM slots and
unreasonable prices for the larger DIMMs.
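For anyone who wants to check before committing, 'zdb -S poolname' simulates
the dedup table on an existing pool and prints the ratio that enabling dedup
would achieve (it can take a long time and a fair amount of RAM to run); the
pool name below is just an example:
# zdb -S tank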
> Compression's worth doing on most filesystems, though. lz4 is a very fast,
> very low cpu usage algorithm, and (depending on what kind of data) on
> average you'll probably get about 1/3rd to 1/2 reduction of space used by
> compressible files. e.g. some of the datasets on the machine I just built
> (called "hex"):
>
> # zfs get compressratio hex hex/home hex/var/log hex/var/cache
> NAME           PROPERTY       VALUE  SOURCE
> hex            compressratio  1.88x  -
> hex/home       compressratio  2.00x  -
> hex/var/cache  compressratio  1.09x  -
> hex/var/log    compressratio  4.44x  -
>
> The first entry is the overall compression ratio for the entire pool.
> 1.88:1 ratio. So compression is currently saving me nearly half of my disk
> usage. It's a new machine, so there's not much on it at the moment.
Strangely I never saw such good compression when storing email on ZFS. One
would expect email to compress well (for starters anything like Huffman coding
will give significant benefits) but it seems not.
> I'd probably get even better compression on the logs (at least 6x, probably
> more) if I set it to use gzip for that dataset with:
>
> zfs set compression=gzip hex/var/log
I never knew about that, it would probably have helped the mail store a lot.
> (note that won't re-compress existing data. only new data will be
> compressed with the new algorithm)
If you are storing logs on a filesystem that supports compression you should
turn off your distribution's log compression. Otherwise a cron job will read
and rewrite the log files to compress them while providing little further
reduction in size.
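On Debian-style systems that means editing /etc/logrotate.conf (and any files
under /etc/logrotate.d/ that override it) to remove the 'compress' directive
or add:
nocompress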
--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/
On Mon, May 21, 2018 at 05:23:39PM +1000, pushin.linux wrote:
> Reply to list wasn't offered in my phone. Apologies.
no problem. I'll reply back to the list, so it goes to the right place.
> My photographic data is critical. Music, videos etc are unimportant.
You've probably heard this before but:
******************************************
******************************************
** **
** RAID IS NOT A SUBSTITUTE FOR BACKUP! **
** **
******************************************
******************************************
RAID is convenient, and it allows your system to keep going without having to
restore from backup but you will still need to backup your photos and other
important data regularly.
> I could buy another 2Tb drive, but what to do with the 1Tb drive. I thought
> I could have a bare system running on the 1Tb and all storage on a RAID pair.
What you have will work fine, there's nothing wrong with it. I just think
that you're better off having your OS disk on some form of RAID as well.
The easiest way to do that is to just get another 1TB drive (approx $60).
Then you'd have two mirrored drives, one for OS & home dir and other stuff,
and one for your photos.
If you're using ZFS, you could even set it up so that you have a combined pool
with two mirrored pairs (2 x 1TB drives and 2 x 2TB), giving a total of 3TB
shared between OS and your photos. This is probably the most flexible setup.
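e.g. (disk names are placeholders, and booting from ZFS needs extra steps
that I'm skipping over here):
# zpool create tank mirror ata-1TB-DISK-A ata-1TB-DISK-B \
                    mirror ata-2TB-DISK-A ata-2TB-DISK-B
That gives one 3TB pool made up of two mirrored vdevs.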
LVM would also allow you to combine the storage, but it's quite a bit more
work and more complicated to set up.
BTW, just to state the obvious - each mirrored pair of drives should be the
same size (if they're different, you'll only get the capacity of the smallest
drive in the pair), but you can have multiple mirrors of different sizes in a
pool.
e.g. here's the root zfs pool on my main system. It has the OS, my home
directories, and some other stuff on it. Most of my data is on a second 4 TB
pool, and this machine also has an 8TB pool called "backup" which has regular
(hourly, daily, weekly, monthly) snapshotted backups of every machine on my
home network.
# zpool status ganesh
  pool: ganesh
 state: ONLINE
  scan: scrub repaired 0B in 0h10m with 0 errors on Sat Apr 28 02:10:27 2018
config:

        NAME                                               STATE     READ WRITE CKSUM
        ganesh                                             ONLINE       0     0     0
          mirror-0                                         ONLINE       0     0     0
            ata-Crucial_CT275MX300SSD1_163313AADD8A-part5  ONLINE       0     0     0
            ata-Crucial_CT275MX300SSD1_163313AAEE5F-part5  ONLINE       0     0     0
          mirror-1                                         ONLINE       0     0     0
            ata-Crucial_CT275MX300SSD1_163313AAF850-part5  ONLINE       0     0     0
            ata-Crucial_CT275MX300SSD1_163313AB002C-part5  ONLINE       0     0     0
That has two mirrored pairs of drives (roughly equivalent to RAID-10 in mdadm
terms), called mirror-0 and mirror-1. BTW, "mirror-0" and "mirror-1" are
what are known as "vdev"s or "virtual devices". A vdev can be a mirrored
set of drives as above, or a raid-z, or even a single drive (but there's no
redundancy for a single drive and adding one to a pool effectively destroys
the entire pool's redundancy, so never do that). A ZFS pool is made up of
one or more vdevs. Also BTW, a mirrored set can be pairs as I have, or (just
like RAID-1 mirrors) you can mirror to three or four or more drives if you
want extra redundancy (and extra read speed).
Anyway, the vdevs here happen to be both the same size because I bought 4
identical SSDs to set it up with (4 x 256 GB was slightly more expensive than
2 x 512GB, but by spreading the IO over 4 drives rather than just two, I get
about double the read performance), but there's no reason at all why they
couldn't be different sizes.
e.g. the easiest and fastest way for me to double the capacity of that pool
would be to just add a pair of 512 GB SSDs to it. It's nowhere near full, so
I won't be doing that any time in the foreseeable future.
In fact, that's one of the advantages of using mirrored-pairs - you can
upgrade the pool two drives at a time. Either by adding another pair of
drives, or by replacing both drives in a pair with larger drives.
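e.g. adding another mirrored pair to an existing pool looks like this (device
names are placeholders again):
# zpool add ganesh mirror ata-NEW-SSD-1 ata-NEW-SSD-2
and replacing the drives in an existing mirror with bigger ones, one at a
time:
# zpool set autoexpand=on ganesh
# zpool replace ganesh ata-OLD-SSD-1 ata-BIGGER-SSD-1
(wait for the resilver to finish, repeat for the second drive, and the extra
capacity appears once both have been replaced).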
For comparison, here's the main storage pool "export" of my MythTV box. It
has one vdev called "raidz1-0", with 4 x 2TB drives.
# zpool status export
  pool: export
 state: ONLINE
  scan: scrub repaired 0B in 15h14m with 0 errors on Sat May 19 18:55:01 2018
config:

        NAME                                          STATE     READ WRITE CKSUM
        export                                        ONLINE       0     0     0
          raidz1-0                                    ONLINE       0     0     0
            ata-ST2000DL003-9VT166_5YD1QFAG           ONLINE       0     0     0
            ata-WDC_WD20EARS-00MVWB0_WD-WCAZA5379164  ONLINE       0     0     0
            ata-WDC_WD20EARX-008FB0_WD-WCAZAJ827116   ONLINE       0     0     0
            ata-WDC_WD20EARS-00MVWB0_WD-WCAZA5353040  ONLINE       0     0     0
If I wanted to upgrade it, I could either add a second vdev to the pool (a
mirrored pair, or another raid-z vdev), OR I could replace each of the 2 TB
drives with, say, 4TB drives. I'd only see the extra capacity when ALL drives
in the vdev had been replaced.
In practice, that would take so long that it would be much faster to just
create a new pool with 4 x 4 TB drives and use 'zfs send' to copy everything
to the new pool, then retire the old pool.
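The send/receive part of that is roughly (pool names are made up):
# zfs snapshot -r export@migrate
# zfs send -R export@migrate | zfs receive -F newpool
then export the old pool and import the new one under the old name.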
BTW, you can see that I've had to replace one of the Western Digital drives
with a Seagate at some time in the past. If I wanted to know when that
happened, I could run "zpool history" - ZFS stores a history of every
significant thing that happens to the pool. e.g.
# zpool history export | grep ata-ST2000DL003-9VT166_5YD1QFAG
2016-06-05.15:00:03 zpool replace -f export ata-WDC_WD20EARX-00PASB0_WD-WCAZA8430027 /dev/disk/by-id/ata-ST2000DL003-9VT166_5YD1QFAG
and if I wanted to know when I created the pool:
# zpool history export | head -2
History for 'export':
2012-07-15.09:13:43 zpool create -f -o ashift=12 export raidz scsi-SATA_WDC_WD20EARX-00_WD-WCAZA8436337 scsi-SATA_WDC_WD20EARS-00_WD-WCAZA5379164 scsi-SATA_WDC_WD20EARX-00_WD-WCAZA8430027 scsi-SATA_WDC_WD20EARS-00_WD-WCAZA5353040
The ashift=12 option tells 'zpool' to create the pool aligned for 4K sectors
(2^12 = 4096) instead of the default 512 byte sectors (ashift=9, 2^9 = 512).
And The "scsi-SATA-" and "ata-" prefixes refer to the same drives. I expect
that I exported the pool and then re-imported it at some point and the drive
names changed slightly. or maybe after an upgrade the kernel stopped caring
about the fact that the SATA drives were on a SAS scsi controller. Don't know,
don't care, not important...the model and serial numbers identify the drives,
and I have sticky labels with the serial numbers on the hot-swap bays.
if you carefully compare the zpool create command with the status output
above, you'll notice that the drives listed in the create command aren't the
same as those in the status output. I've had to replace a few of those WD
EARX drives in that pool.
# zpool history export | grep replace
2012-07-18.09:27:30 zpool replace -f export scsi-SATA_WDC_WD20EARX-00_WD-WCAZA8430027 scsi-SATA_WDC_WD20EARX-00_WD-WMAZA9502728
2013-01-03.21:15:57 zpool replace -f export scsi-SATA_WDC_WD20EARX-00_WD-WMAZA9502728 scsi-SATA_WDC_WD20EARX-00_WD-WCAZAJ827116
2016-05-18.22:24:28 zpool replace -f export ata-WDC_WD20EARX-00PASB0_WD-WCAZA8436337 /dev/disk/by-id/ata-WDC_WD20EARX-00PASB0_WD-WCAZA8430027
2016-06-05.15:00:03 zpool replace -f export ata-WDC_WD20EARX-00PASB0_WD-WCAZA8430027 /dev/disk/by-id/ata-ST2000DL003-9VT166_5YD1QFAG
In fact, you can see that on 18 May 2016, I tried to replace one of the drives
(WCAZA8436337) with one I'd previously removed and replaced (WCAZA8430027),
then about three weeks later on 5 June 2016 replaced it with a Seagate drive.
That's what happens when you leave dead/dying drives just lying around without
writing "dead" on them.
> I really appreciate the enormous amount of support. The only issue now is
> how to create a roadmap from it all.
Again, no problem. And, like I said, the best thing you can do is to start
playing with this stuff in some virtual machines. practice with it until it's
completely familiar, and until you understand it well enough to be able to
make informed decisions that suit your exact needs.
VMs are great for trying stuff out in a safe environment that won't mess
with your real system. You can take some stupid risks and learn from them
- in fact, that's one of the great things about VMs for learning, you can
deliberately do all the things you've read not to do so you understand WHY
you shouldn't and also hopefully learn how you can recover from making such
disastrous mistakes.
In this case, VMs are also a good way to compare the differences between LVM
alone, mdadm alone, LVM+mdadm, btrfs, ZFS, and more. Learn what each is
capable of and how to use the tools to control them. Learn what happens to
an array or pool when you tell KVM to detach one or more of the virtual disks
from the running VM, or when you write garbage data to one or more of the
vdisks.
I've got a few VMs for doing that. One of them, called "ztest" (because it
started out being just for ZFS testing) has a 5GB boot disk (debian sid) plus
another 12 virtual disks attached to it, each about 200MB in size. These get
combined in various configurations for zfs, btrfs, lvm, mdadm depending on
what I want to experiment with at the time.
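You don't even need a VM for some of this - ZFS (and mdadm, via losetup) will
happily use plain files as practice devices, e.g.:
# for i in 1 2 3 4; do truncate -s 200M /var/tmp/disk$i.img; done
# zpool create scratch raidz /var/tmp/disk1.img /var/tmp/disk2.img \
      /var/tmp/disk3.img /var/tmp/disk4.img
# zpool destroy scratch
(the pool name and file paths are arbitrary - just don't point it at files or
devices you care about).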
It's still worth doing this even if you've already finished setting up your
new drives. as long as you've got somewhere to make a complete fresh backup of
your data, you can always rebuild your system if you find a better way to set
things up for your needs.
And also worth it because the better you know mdadm or lvm or zfs or whatever
you end up using, the less likely you are to panic and make a terrible mistake
if a drive dies and you have to replace it, or deal with some other problem.
the best time to learn system recovery techniques is before you need them, not
at the exact moment that you need them :)
craig
--
craig sanders <cas(a)taz.net.au>
Hi all
This topic branched out of another subject, namely how to find out if a
certain package was installed in Ubuntu.
Andrew Greig wrote
I need to install some drivers for an Epson XP 6000 and it depends on Linux
Standard Base 3.2
Craig replied
It's always a bad idea to install proprietary drivers from manufacturers.
IMO, that should only be done if there aren't any open source drivers and
you've already bought the hardware - if you haven't bought it yet, look for an
alternative with open source drivers.
anyway:
http://www.openprinting.org/driver/epson-escpr
That lists support for several other Epson XP models and the Stylus CX6000,
but doesn't explicitly mention the "XP 6000". Dunno if it supports an XP 6000.
Worth checking out, anyway.
There are .deb packages for both 32-bit and 64-bit available there for
download.
craig
After I installed the lsb, my printer was loaded and configured automatically in my desktop computer.
I did not install the lsb in my laptop Ubuntu 16.04 last week, but I did load the Epson drivers without issue.
And when I printed one of my images in tiff (75Mb) on 10"x8" high quality
inkjet paper I obtained a gorgeous output: no sign of banding, and the skin
tone was perfect. The Epson output from my desktop Ubuntu 18.04 was awful -
the colour was wrong, the gamma was wrong, and it suffered from banding. I
have to get some sleep now, but tomorrow I shall print the same tiff from my
laptop, and then I will know if the problem is the Linux driver.
Hi All,
I was a very happy RPMer but now I am on the other side.
Before, I could enter:
# rpm -q lsb
and get a result. How do I make that sort of query in Ubuntu, please?
I need to install some drivers for an Epson XP 6000 and it depends on
Linux Standard Base 3.2
Andrew Greig
On Sunday, 20 May 2018 2:01:14 PM AEST Craig Sanders via luv-main wrote:
> > In the morning I will install the 2 new 2Tb HDDs , and load the DVD to
> > launch myself into unfamiliar territory, so when I get to the partition
> > stage of the process I will have 1 x 1Tb HDD for the system and /home and
> > the 2 x 2Tb drives for the RAID.
>
> Is there any reason why you want your OS on a single separate drive with no
> RAID?
Some people think that it's only worth using RAID for things that you can't lose. But RAID
also offers convenience. If your system with RAID has one disk die you would probably
like it to keep running while you go to the shop to buy a new disk.
> If I were you, I'd either get rid of the 1TB drive (or use it as extra
> storage space for unimportant files) or replace it with a third 2TB drive
> for a three-way mirror - or perhaps RAID-5 (mdadm) or RAID-Z1 (zfs) if
> storage capacity is more important than speed.
I expect that if he's just starting out with RAID then he doesn't even have 2TB of data to
store.
> One thing I very strongly recommend is that you get some practice with mdadm
> or LVM or ZFS before you do anything to your system. If you have KVM or
> Virtualbox installed, this is easy. If not, install & configure libvirt +
> KVM and it will be easy. BTW, virt-manager is a nice GUI front-end for
> KVM.
https://etbe.coker.com.au/2015/08/18/btrfs-training/
A few years ago I ran a LUV training session on BTRFS and ZFS which included deliberately
corrupting disks to be prepared for real life corruption. I think this is worth doing.
Everyone knows that backups aren't much good unless they are tested and the same
applies to data integrity features of filesystems.
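On a throwaway test pool backed by plain files (never on a real one) the
corruption part can be as simple as:
# dd if=/dev/urandom of=/var/tmp/disk2.img bs=1M seek=50 count=20 conv=notrunc
# zpool scrub testpool
# zpool status -v testpool
where disk2.img is one of the file vdevs of "testpool"; from there you can
practice whatever repair or replace steps are needed.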
> A /boot filesystem isn't really necessary these days, but I like to have
> one. It gives me a standard, common filesystem type (ext4) to put ISOs
> (e.g. a rescue disk or gparted or clonezilla) that can be booted directly
> from grub with memdisk.
If you want to have a Linux software RAID-1 for the root filesystem then a separate
filesystem for /boot doesn't give much benefit. If you want to use BTRFS or ZFS for root
then you want a separate /boot. You can have /boot on BTRFS but that seems likely to
give you more pain than you want for no apparent benefit.
On Sunday, 20 May 2018 10:27:53 AM AEST Mike O'Connor via luv-main wrote:
> I suggest the following.
> 1. Do not use ZFS unless you have ECC ram
If you use a filesystem like BTRFS or ZFS and have memory errors then it will become
apparent fairly quickly so you can then restore from backups. If you use a filesystem like
Ext4 and have memory errors then you can have your data become increasingly corrupt
for an extended period of time before you realise it.
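e.g. a periodic
# btrfs scrub start -B /
# btrfs device stats /
(or 'zpool scrub' and 'zpool status' on ZFS) will tell you fairly quickly if
checksums are failing, which is the cue to check the hardware and reach for
the backups.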
> 2. btrfs has real issues in a number of areas so unless you are very
> experienced I would not use it.
If you use the basic functionality and only RAID-1 (not RAID-5 or RAID-6) then it's pretty
solid. I've been running my servers on BTRFS for years without problems.
> So why do it this way ?
> Well LVMs give a lot of options which are not available if they're not
> there. This site has only a very simple example but give it a read
> http://tldp.org/HOWTO/LVM-HOWTO/benefitsoflvmsmall.html
That document says "Joe buys a PC with an 8.4 Gigabyte disk". I just checked the MSY
pricelist; the smallest disk they sell is 1TB and the smallest SSD they sell is 120G. Any
document referencing 8G disks is well out of date.
https://www.tldp.org/HOWTO/Large-Disk-HOWTO-4.html
The above document explains the 8.4G limit.
--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/
Craig Sanders via luv-main wrote:
> On Sun, May 20, 2018 at 01:17:24AM +1000, Russell Coker wrote:
>
>> One of the more dedicated members of this list got a free server system from
>> LUV and uses it as his personal workstation. It has something like 96G of
>> RAM but makes more noise than most people want in the same building they are
>> in. There's no reason why you couldn't design a system with 24 DIMM sockets
>> that doesn't sound like an aircraft taking off, but most people who want so
>> much RAM have a soundproofed server room.
> Or they could replace the shitty server fans with high-quality low noise fans.
Or for $50 - $100 they could just replace the "shitty server fans" with a
proprietary water-cooler, removing both the noise and the possibility that
the clunky aluminium heat sink loosens from the CPU.
eg http://www.msy.com.au/453-water-cooling-
regards Rohan McLeod