
On Saturday, 26 December 2020 9:32:43 PM AEDT Mark Trickett via luv-main wrote:
> I still remember effective wordprocessing with well under a megabyte of memory on the early PCs, whether IBM or quasi compatible. My first efforts were on a DEC Rainbow, under CP/M-86/80 with "WPS". There was an equivalent package for the PDP8A running with something like 64 K of RAM, supporting two concurrent users and a daisy wheel printer for true letter quality printing. I can appreciate that the
Fixed width fonts made things a lot simpler. Whether the output went to a dot-matrix printer (very ugly in the early 7 pin versions but not too bad in the later ones with something like 24 pins) or a daisy wheel (which always looked nice, but changing fonts was difficult and couldn't be done part way through a page), it didn't take much RAM. But people always wanted the basic functions that printers offered: varying widths for letters and varying space sizes so that the line ends match up. That was always going to take more RAM.

Even so, Describe on OS/2 did all the DTP functions most people wanted in 16M of RAM. I don't think I've ever done any sort of "wordprocessing" task that I couldn't have done on Describe in 16M in 1993. At the same time at uni we were using some HPPA Unix workstations, I don't know their exact specs but I'm certain they had significantly less than 256M of RAM, and they ran a wordprocessor with more features than Describe nicely on diskless workstations over a 10baseT network.
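To make the "more RAM" point concrete, justification with a proportional font means the formatter has to track a width for every character and then spread the leftover space across the gaps in each line. Here's a rough Python sketch of that arithmetic; the width table, default width, and 500-unit line width are made-up numbers for illustration, not taken from any real printer or wordprocessor:

# Rough sketch of justification with a proportional font: know a width
# for every character, then pad the gaps between words so the line ends
# match up.  Widths are arbitrary units, invented for illustration.
CHAR_WIDTHS = {"i": 3, "l": 3, "j": 3, "m": 9, "w": 9, " ": 4}
DEFAULT_WIDTH = 6   # width used for any character not in the table
LINE_WIDTH = 500    # target width of a justified line

def text_width(s):
    return sum(CHAR_WIDTHS.get(c, DEFAULT_WIDTH) for c in s)

def justify(words):
    """Return (leading_gap, word) pairs padded out to LINE_WIDTH."""
    if not words:
        return []
    gaps = len(words) - 1
    slack = LINE_WIDTH - sum(text_width(w) for w in words)
    if gaps == 0 or slack <= 0:
        # nothing to stretch: fall back to a plain single space per gap
        return [(0, words[0])] + [(CHAR_WIDTHS[" "], w) for w in words[1:]]
    per_gap, extra = divmod(slack, gaps)
    # spread any leftover units one at a time over the first few gaps
    return [(0, words[0])] + [(per_gap + (1 if i < extra else 0), w)
                              for i, w in enumerate(words[1:])]

With fixed width fonts none of that bookkeeping is needed, which is part of why the old setups got away with so little memory.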
> current software "frameworks" or libraries make it relatively quick and easy to put together larger software packages, and mostly enforce good multithreading and memory overrun protection practices, but they tend to produce significantly larger code and data memory footprints, and sometimes quite slow execution.
The size of libc and the use of 64-bit pointers make things bigger. I note that on one of my servers getty is taking 1980KB of resident data even though it will never get a console login. Of the machines I currently run and have most convenient access to, the workstations range from 3M to 6M of CPU cache and the servers from 12M to 16M. The latest kernel package from Debian/Unstable is 280MB installed. My first Linux system had a 100M hard drive, which allowed me to cross-compile GCC for HPPA.
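For anyone who wants to check the getty figure on their own system, resident set sizes can be read straight out of /proc. A quick Python sketch follows; it matches on "getty" by default (or whatever name is given as the first argument), and note that VmRSS is the whole resident set rather than just the data segment, with newer kernels also breaking it down into RssAnon/RssFile/RssShmem in the same file:

#!/usr/bin/env python3
# Print the resident set size (VmRSS) of every process whose name
# matches the string given as the first argument (default "getty"),
# read from /proc/<pid>/status and /proc/<pid>/comm.
import os
import sys

def rss_kb(pid):
    """Return VmRSS in KB for a pid, or None if it can't be read."""
    try:
        with open(f"/proc/{pid}/status") as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])  # value is reported in kB
    except OSError:
        pass
    return None

name = sys.argv[1] if len(sys.argv) > 1 else "getty"
for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        comm = open(f"/proc/{pid}/comm").read().strip()
    except OSError:
        continue
    if name in comm:
        kb = rss_kb(pid)
        print(pid, comm, f"{kb} KB" if kb is not None else "no VmRSS")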
> I prefer the way that open source does it, where it can be seen and considered, rather than closed source software where it becomes necessary to just trust the competence of the programmers and their CVS practices.
Yes, but there's no shortage of bloated free software code. As RAM and storage get bigger there isn't a lot of incentive to do otherwise.

-- 
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/