In article <vdvvae$1k931$2@dont-email.me>, ldo@nz.invalid (Lawrence D'Oliveiro) wrote:
Dave Cutler came from DEC (where he was one of the resident
Unix-haters) to mastermind the Windows NT project in 1988. When did
the OS/2-NT pivot take place?
1990, after the release of Windows 3.0, which was an immediate commercial success. It was the first version that you could get serious work out of. It's been compared to a camel: a vicious brute at times, but capable of
doing a lot of carrying.
<https://en.wikipedia.org/wiki/OS/2#1990:_Breakup>
Funny, you'd think they would use that same _personality_ system to
implement WSL1, the Linux-emulation layer. But they didn't.
They were called subsystems in Windows NT, and ran on top of the NT
kernel. The POSIX one came first, and was very limited, followed by the Interix one that was called Windows Services for Unix. Programs for both
of these were in PE-COFF format, not ELF. There was also the OS/2
subsystem, but it only ran text-mode programs.
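As an aside, the PE-COFF vs ELF difference is visible right in the file
headers. Here is a toy C sketch of telling the two formats apart by
magic bytes alone (my own illustration, nothing from the actual NT
loader):

    /* ELF images start with 0x7F 'E' 'L' 'F'.  PE-COFF images start
       with the MS-DOS "MZ" stub, whose 32-bit field at offset 0x3C
       (e_lfanew) points at the "PE\0\0" signature. */
    #include <stdio.h>
    #include <string.h>

    static const char *image_format(const unsigned char *b, size_t n)
    {
        if (n >= 4 && memcmp(b, "\x7f" "ELF", 4) == 0)
            return "ELF";
        if (n >= 0x40 && b[0] == 'M' && b[1] == 'Z') {
            unsigned long pe = b[0x3c] | (b[0x3d] << 8) |
                               (b[0x3e] << 16) |
                               ((unsigned long)b[0x3f] << 24);
            if (pe + 4 <= n && memcmp(b + pe, "PE\0\0", 4) == 0)
                return "PE-COFF";
            return "plain MS-DOS (MZ)";
        }
        return "unknown";
    }

    int main(int argc, char **argv)
    {
        unsigned char buf[4096];
        FILE *f;
        size_t n;
        if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }
        f = fopen(argv[1], "rb");
        if (!f) { perror(argv[1]); return 1; }
        n = fread(buf, 1, sizeof buf, f);
        fclose(f);
        printf("%s: %s\n", argv[1], image_format(buf, n));
        return 0;
    }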
The POSIX subsystem was there to meet US government purchasing
requirements, not to be used for anything serious. I can't imagine Dave Cutler was keen on it.
WSL1 seems to have been something odd: rather than a single subsystem, a bunch of mini-subsystems. However, VMS-style kernels like NT's just make different assumptions about programs than Unix-style kernels do, so Microsoft went to lightweight virtualisation in WSL2.
<https://en.wikipedia.org/wiki/Windows_Subsystem_for_Linux#History>
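As I understand it, WSL1 worked by trapping each Linux system call in
the NT kernel and reimplementing it on top of native services. A toy
dispatch sketch of the idea (the handler names are invented; the real
lxcore.sys is far more involved):

    #include <stdint.h>
    #include <stdio.h>

    typedef int64_t (*handler)(uint64_t, uint64_t, uint64_t);

    /* Hypothetical reimplementations on top of the host kernel. */
    static int64_t emu_read (uint64_t fd, uint64_t buf, uint64_t n)
    { (void)fd; (void)buf; (void)n; return 0; }
    static int64_t emu_write(uint64_t fd, uint64_t buf, uint64_t n)
    { (void)fd; (void)buf; return (int64_t)n; }
    static int64_t emu_open (uint64_t path, uint64_t flags, uint64_t mode)
    { (void)path; (void)flags; (void)mode; return 3; }

    /* Indexed by Linux x86-64 syscall number: 0=read, 1=write, 2=open. */
    static handler table[] = { emu_read, emu_write, emu_open };

    static int64_t dispatch(uint64_t nr, uint64_t a0, uint64_t a1, uint64_t a2)
    {
        if (nr >= sizeof table / sizeof table[0] || table[nr] == 0)
            return -38;                     /* -ENOSYS: not implemented */
        return table[nr](a0, a1, a2);
    }

    int main(void)
    {
        printf("write -> %lld\n", (long long)dispatch(1, 1, 0, 5));
        printf("nosys -> %lld\n", (long long)dispatch(99, 0, 0, 0));
        return 0;
    }

The point of the WSL2 change is that none of this per-call translation
is needed: a real Linux kernel runs under lightweight virtualisation,
so the semantics match by construction.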
The same problem seems to have messed up all the attempts to provide good Unix emulation on VMS. It's notable that MICA started out trying to
provide both VMS and Unix APIs, but this was dropped in favour of a
separate Unix OS before MICA was cancelled.
<https://en.wikipedia.org/wiki/DEC_MICA#Design_goals>
I think the whole _personality_ concept, along with the supposed
portability to non-x86 architectures, had just bit-rotted away by
that point.
Some combination of that, Microsoft confidence that "of course we can do something better now!" - they are very prone to overconfidence - and the terrible tendency of programmers to ignore the details of the old code.
John
In article <vdvvae$1k931$2@dont-email.me>, ldo@nz.invalid (Lawrence D'Oliveiro) wrote:
I think the whole _personality_ concept, along with the supposed
portability to non-x86 architectures, had just bit-rotted away by that
point.
Some combination of that, Microsoft confidence that "of course we can do something better now!" - they are very prone to overconfidence - and the terrible tendency of programmers to ignore the details of the old code.
The Posix interface support was there so *MS* could bid on US government
and military contracts which, in that time frame, were making noise
about it being standard for all their contracts.
The Posix DLLs didn't come with WinNT, you had to ask MS for them
specially.
Back then "object oriented" and "micro-kernel" buzzwords were all the
rage.
On Tue, 8 Oct 2024 22:28 +0100 (BST), John Dallman wrote:
The same problem seems to have messed up all the attempts to provide
good Unix emulation on VMS.
Was it the Perl build scripts that, at some point in their compatibility tests on a *nix system, would announce “Congratulations! You’re not running EUNICE!”?
In article <vdvvae$1k931$2@dont-email.me>, ldo@nz.invalid (Lawrence
D'Oliveiro) wrote:
I think the whole _personality_ concept, along with the supposed
portability to non-x86 architectures, had just bit-rotted away by that
point.
Some combination of that, Microsoft confidence that "of course we can do
something better now!" - they are very prone to overconfidence - and the
terrible tendency of programmers to ignore the details of the old code.
It was the Microsoft management that did it -- the culmination of a whole sequence of short-term, profit-oriented decisions over many years ... decades. What may have started out as an “elegant design” finally became unrecognizable as such.

Compare what was happening to Linux over the same time interval, where the programmers were (largely) not beholden to managers and bean counters.
On Wed, 09 Oct 2024 13:37:41 -0400, EricP wrote:
The Posix interface support was there so *MS* could bid on US
government and military contracts which, in that time frame, were
making noise about it being standard for all their contracts.
The Posix DLLs didn't come with WinNT, you had to ask MS for them specially.
And that whole POSIX subsystem was so sadistically, unusably awful,
it just had to be intended for show as a box-ticking exercise,
nothing more.
<https://www.youtube.com/watch?v=BOeku3hDzrM>
Back then "object oriented" and "micro-kernel" buzzwords were all
the rage.
OO still lives on in higher-level languages. Microsoft’s one attempt
to incorporate its OO architecture--Dotnet--into the lower layers of
the OS, in Windows Vista, was an abject, embarrassing failure which
hopefully nobody will try to repeat.
On the other hand, some stubborn holdouts are still fond of
microkernels -- you just have to say the whole idea is pointless, and
they come out of the woodwork in a futile attempt to disagree ...
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
OO still lives on in higher-level languages. Microsoft's one
attempt to incorporate its OO architecture--Dotnet--into the
lower layers of the OS, in Windows Vista, was an abject,
embarrassing failure which hopefully nobody will try to repeat.
AFAIK, .net is a hugely successful application development technology
that was never incorporated into the lower layers of the OS.
On the other hand, some stubborn holdouts are still fond of
microkernels -- you just have to say the whole idea is pointless,
and they come out of the woodwork in a futile attempt to disagree
In article <vekb2f$1co97$6@dont-email.me>, ldo@nz.invalid (Lawrence D'Oliveiro) wrote:
On the other hand, some stubborn holdouts are still fond of
microkernels -- you just have to say the whole idea is pointless,
and they come out of the woodwork in a futile attempt to disagree
The idea is impractical, not pointless. A hybrid kernel gives most of the advantages of a microkernel to its developers, and avoids the need for lots of context switches. It doesn't let you easily replace low-level OS components, but not many people actually want that.
<https://en.wikipedia.org/wiki/Hybrid_kernel>
Windows NT and Apple's XNU, used in all their operating systems, are both hybrid kernels, so the idea is somewhat practical.
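The context-switch argument is easy to see in miniature. A toy C
simulation (my own illustration, not any real kernel's code): in a
hybrid or monolithic kernel the file system is reached by a plain
function call, while in a microkernel it lives in a separate server
process and every request costs an IPC round trip -- two context
switches, in and back out:

    #include <stdio.h>
    #include <string.h>

    static unsigned long switches;   /* simulated context switches */

    /* Shared "file system" work, wherever it happens to live. */
    static long fs_read(char *buf, unsigned len)
    {
        memset(buf, 'x', len);
        return (long)len;
    }

    /* Hybrid/monolithic: same address space, direct call, no switch. */
    static long hybrid_read(char *buf, unsigned len)
    {
        return fs_read(buf, len);
    }

    /* Microkernel: the file server is another process; each request
       is a message, costing a switch there and a switch back. */
    struct msg { char *buf; unsigned len; long result; };

    static void ipc_round_trip(struct msg *m)
    {
        switches++;                          /* caller -> server */
        m->result = fs_read(m->buf, m->len); /* server does the work */
        switches++;                          /* server -> caller */
    }

    static long micro_read(char *buf, unsigned len)
    {
        struct msg m = { buf, len, 0 };
        ipc_round_trip(&m);
        return m.result;
    }

    int main(void)
    {
        char buf[64];
        int i;
        for (i = 0; i < 1000; i++) hybrid_read(buf, sizeof buf);
        printf("hybrid: %lu switches\n", switches);   /* 0    */
        for (i = 0; i < 1000; i++) micro_read(buf, sizeof buf);
        printf("micro:  %lu switches\n", switches);   /* 2000 */
        return 0;
    }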
John
On Tue, 15 Oct 2024 18:40 +0100 (BST), jgd@cix.co.uk (John Dallman)
wrote:
In article <vekb2f$1co97$6@dont-email.me>, ldo@nz.invalid (Lawrence
D'Oliveiro) wrote:
On the other hand, some stubborn holdouts are still fond of
microkernels -- you just have to say the whole idea is pointless,
and they come out of the woodwork in a futile attempt to disagree
The idea is impractical, not pointless. A hybrid kernel gives most of the
advantages of a microkernel to its developers, and avoids the need for
lots of context switches. It doesn't let you easily replace low-level OS
components, but not many people actually want that.
Actually, I think there are a whole lot of people who can't afford
non-stop server hardware but would greatly appreciate not having to
waste time with a shutdown/reboot every time some OS component gets
updated.
YMMV.
George Neuner wrote:
On Tue, 15 Oct 2024 18:40 +0100 (BST), jgd@cix.co.uk (John Dallman)
wrote:
In article <vekb2f$1co97$6@dont-email.me>, ldo@nz.invalid (Lawrence
D'Oliveiro) wrote:
On the other hand, some stubborn holdouts are still fond of
microkernels -- you just have to say the whole idea is pointless,
and they come out of the woodwork in a futile attempt to disagree
The idea is impractical, not pointless. A hybrid kernel gives most of
the advantages of a microkernel to its developers, and avoids the need
for lots of context switches. It doesn't let you easily replace
low-level OS components, but not many people actually want that.
Actually, I think there are a whole lot of people who can't afford
non-stop server hardware but would greatly appreciate not having to
waste time with a shutdown/reboot every time some OS component gets
updated.
YMMV.
This is _exactly_ (one of) the problem(s) cloud infrastructure solves:
As soon as you have more than a single instance of a particular server/service, then you replace them in groups so that the service sees zero downtime even though all the servers have been updated/replaced.
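In toy C, the rolling replacement described above looks something like
this (all the helper names are invented; real orchestrators differ):

    #include <stdio.h>

    #define INSTANCES 6
    #define GROUP     2   /* how many to take out of service at once */

    static void drain(int i)    { printf("drain    %d\n", i); }
    static void update(int i)   { printf("update   %d\n", i); }
    static int  healthy(int i)  { (void)i; return 1; }
    static void reenlist(int i) { printf("reenlist %d\n", i); }

    int main(void)
    {
        int base, i;
        for (base = 0; base < INSTANCES; base += GROUP) {
            /* The other INSTANCES - GROUP instances keep serving. */
            for (i = base; i < base + GROUP; i++) drain(i);
            for (i = base; i < base + GROUP; i++) update(i);
            for (i = base; i < base + GROUP; i++)
                if (!healthy(i)) { printf("abort, roll back %d\n", i); return 1; }
            for (i = base; i < base + GROUP; i++) reenlist(i);
        }
        puts("all instances updated, zero client-visible downtime");
        return 0;
    }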
On 16/10/2024 07:36, Terje Mathisen wrote:
George Neuner wrote:
On Tue, 15 Oct 2024 18:40 +0100 (BST), jgd@cix.co.uk (John Dallman)
wrote:
In article <vekb2f$1co97$6@dont-email.me>, ldo@nz.invalid (Lawrence
D'Oliveiro) wrote:
On the other hand, some stubborn holdouts are still fond of
microkernels -- you just have to say the whole idea is pointless,
and they come out of the woodwork in a futile attempt to disagree
The idea is impractical, not pointless. A hybrid kernel gives most of
the advantages of a microkernel to its developers, and avoids the need
for lots of context switches. It doesn't let you easily replace
low-level OS components, but not many people actually want that.
Actually, I think there are a whole lot of people who can't afford
non-stop server hardware but would greatly appreciate not having to
waste time with a shutdown/reboot every time some OS component gets
updated.
YMMV.
This is _exactly_ (one of) the problem(s) cloud infrastructure solves:
As soon as you have more than a single instance of a particular
server/service, then you replace them in groups so that the service sees
zero downtime even though all the servers have been updated/replaced.
That's fine - /if/ you have a service that can easily be spread across
multiple systems, and you can justify the cost of that. Setting up a
database server is simple enough.

Setting up a database server along with a couple of read-only
replications is harder. Adding a writeable failover secondary is harder
still. Making sure that everything works /perfectly/ when the primary
goes down for maintenance, and that everything is consistent afterwards,
is even harder. Being sure it still all works even while the different
parts have different versions during updates typically means you have to
duplicate the whole thing so you can do test runs. And if the database
server is not open source, your license costs will be absurd, compared
to what you actually need to provide the service - usually just one
server instance.

Clouds do nothing to help any of that.

But clouds /do/ mean that your virtual machine can be migrated (with
zero, or almost zero, downtime) to another physical server if there are
hardware problems or during hardware maintenance. And if you can do
easy snapshots with your cloud / VM infrastructure, then you can roll
back if things go badly wrong. So you have a single server instance,
you plan a short period of downtime, take a snapshot, stop the service,
upgrade, restart. That's what almost everyone does, other than the
/really/ big or /really/ critical service providers.
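For the single-server case, the whole procedure fits in a few lines. A
sketch of the snapshot-stop-upgrade-restart cycle just described
(invented helper names, no real VM API):

    #include <stdio.h>

    static int  take_snapshot(void) { puts("snapshot taken");    return 1; }
    static void stop_service(void)  { puts("service stopped");   }
    static int  do_upgrade(void)    { puts("upgrading...");      return 1; }
    static void start_service(void) { puts("service started");   }
    static void roll_back(void)     { puts("snapshot restored"); }

    int main(void)
    {
        if (!take_snapshot())
            return 1;              /* no fallback, don't even start */
        stop_service();            /* planned downtime begins here  */
        if (do_upgrade()) {
            start_service();       /* downtime ends                 */
            puts("upgrade ok");
        } else {
            roll_back();           /* the snapshot makes this cheap */
            start_service();
            puts("upgrade failed, rolled back");
        }
        return 0;
    }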
On Wed, 16 Oct 2024 09:17:03 +0200, David Brown
<david.brown@hesbynett.no> wrote:
On 16/10/2024 07:36, Terje Mathisen wrote:
George Neuner wrote:
On Tue, 15 Oct 2024 18:40 +0100 (BST), jgd@cix.co.uk (John Dallman)
wrote:
In article <vekb2f$1co97$6@dont-email.me>, ldo@nz.invalid (Lawrence
D'Oliveiro) wrote:
On the other hand, some stubborn holdouts are still fond of
microkernels -- you just have to say the whole idea is pointless,
and they come out of the woodwork in a futile attempt to disagree
The idea is impractical, not pointless. A hybrid kernel gives most of
the advantages of a microkernel to its developers, and avoids the need
for lots of context switches. It doesn't let you easily replace
low-level OS components, but not many people actually want that.
Actually, I think there are a whole lot of people who can't afford
non-stop server hardware but would greatly appreciate not having to
waste time with a shutdown/reboot every time some OS component gets
updated.
YMMV.
This is _exactly_ (one of) the problem(s) cloud infrastructure solves:
As soon as you have more than a single instance of a particular
server/service, then you replace them in groups so that the service sees
zero downtime even though all the servers have been updated/replaced.
That's fine - /if/ you have a service that can easily be spread across
multiple systems, and you can justify the cost of that. Setting up a
database server is simple enough.
Setting up a database server along with a couple of read-only
replications is harder. Adding a writeable failover secondary is harder
still. Making sure that everything works /perfectly/ when the primary
goes down for maintenance, and that everything is consistent afterwards,
is even harder. Being sure it still all works even while the different
parts have different versions during updates typically means you have to
duplicate the whole thing so you can do test runs. And if the database
server is not open source, your license costs will be absurd, compared
to what you actually need to provide the service - usually just one
server instance.
Clouds do nothing to help any of that.
But clouds /do/ mean that your virtual machine can be migrated (with
zero, or almost zero, downtime) to another physical server if there are
hardware problems or during hardware maintenance. And if you can do
easy snapshots with your cloud / VM infrastructure, then you can roll
back if things go badly wrong. So you have a single server instance,
you plan a short period of downtime, take a snapshot, stop the service,
upgrade, restart. That's what almost everyone does, other than the
/really/ big or /really/ critical service providers.
For various definitions of "short period of downtime". 8-)
Fortunately, Linux installs updates - or stages updates for restart -
much faster than Windoze. But rebooting to the point that all the
services are running still can take several minutes.
That can feel like an eternity when it's the only <whatever> server in
a small business.