
On Thu, 9 Oct 2014, Andrew McGlashan <andrew.mcglashan@affinityvision.com.au> wrote:
On 9/10/2014 7:46 PM, Erik Christiansen wrote:
ISTM that if neither upstart nor systemd delivers the goods once finished, then a third new offering will arise. Parallel starting of services and effective handling of events will be provided, one way or another, I expect.
Perhaps. What makes a monolithic piece of software like systemd the answer? Traditional Linux/Unix is built around processes that each do ONE thing very well, without re-inventing the wheel. Sure, we have choices where multiple wheels are available, but usually those wheels can stand alone from competing wheels without locking the car in to one specific wheel ... so to speak.
You may be thinking of GNU/HURD. Linux has always had a monolithic kernel because it performs better.
I'm quite prepared to blow raspberries at systemd too, but would need a real-world reason to do so.
There is real-world experience in other systems that can count as well. The enormity of the systemd change should not be understated. There is often a case for no change, or for more limited change, even when change is actually necessary. There is also a place for other modular solutions to the perceived problems of sysvinit, as well as for /fixing/ the broken scripts that are to blame for pushing forward a replacement when none is really warranted.
But when developing software, real-world experience in developing such software counts for a lot. I've been a DD for ~14 years. I know how Debian works.

--
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/