• Re: Byte ordering

    From EricP@ThatWouldBeTelling@thevillage.com to comp.arch on Wed Oct 9 13:37:41 2024

    John Dallman wrote:
    In article <vdvvae$1k931$2@dont-email.me>, ldo@nz.invalid (Lawrence D'Oliveiro) wrote:

    Dave Cutler came from DEC (where he was one of the resident
    Unix-haters) to mastermind the Windows NT project in 1988. When did
    the OS/2-NT pivot take place?

    1990, after the release of Windows 3.0, which was an immediate commercial success. It was the first version that you could get serious work out of. It's been compared to a camel: a vicious brute at times, but capable of
    doing a lot of carrying.

    <https://en.wikipedia.org/wiki/OS/2#1990:_Breakup>

    Funny, you'd think they would use that same _personality_ system to
    implement WSL1, the Linux-emulation layer. But they didn't.

    They were called subsystems in Windows NT, and ran on top of the NT
    kernel. The POSIX one came first, and was very limited, followed by the Interix one that was called Windows Services for Unix. Programs for both
    of these were in PE-COFF format, not ELF. There was also the OS/2
    subsystem, but it only ran text-mode programs.

    The POSIX subsystem was there to meet US government purchasing
    requirements, not to be used for anything serious. I can't imagine Dave Cutler was keen on it.

    The Posix interface support was there so *MS* could bid on US government
    and military contracts which, in that time frame, were making noise about
    Posix being standard for all their contracts.
    The Posix DLLs didn't come with WinNT; you had to ask MS for them specially.

    The US government eventually stopped pushing for Posix and Windows
    support for it quietly disappeared.

    WinNT's OS2 subsystem also quietly disappeared.

    WSL1 seems to have been something odd: rather than a single subsystem, a
    bunch of mini-subsystems. However, VMS/NT-style kernels just have different
    assumptions about programs than Unix-style kernels do, so they went to
    lightweight virtualisation in WSL2.

    Yes. VMS and WinNT handle memory sections differently than *nix.
    That difference makes the fork() system call essentially impossible to
    implement on VMS/WinNT except by copying the address space.

    Note that back then Posix did not require that fork() be supported,
    just fork-exec (aka spawn), which does not require duplicating the memory
    space, just carrying file and socket handles over to the child process,
    which NT handles natively.
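
    A minimal sketch of that spawn-style path on a Posix system, using
    posix_spawn(); /bin/echo and its arguments are just placeholders:

        #include <spawn.h>
        #include <stdio.h>
        #include <sys/wait.h>

        extern char **environ;

        int main(void)
        {
            pid_t pid;
            /* argv for the child; /bin/echo is just a placeholder program */
            char *child_argv[] = { "echo", "hello from the child", NULL };

            /* spawn = create process + load new image in one step: nothing
               of the parent's address space is duplicated, only the file
               descriptors (and, here, the environment) carry over */
            int rc = posix_spawn(&pid, "/bin/echo", NULL, NULL,
                                 child_argv, environ);
            if (rc != 0) {
                fprintf(stderr, "posix_spawn failed: %d\n", rc);
                return 1;
            }
            waitpid(pid, NULL, 0);
            return 0;
        }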

    In the VMS/WinNT way, each memory section is defined as either shared
    or private when created and cannot be changed. This allows optimizations
    in page table and page file handling.
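
    For illustration, a minimal Win32 sketch: the section object's backing
    and its shareability are fixed when the section is created (the name and
    size here are arbitrary):

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            /* pagefile-backed section: its shared nature and size are
               decided here, at creation, and cannot change afterwards */
            HANDLE sec = CreateFileMappingW(
                INVALID_HANDLE_VALUE,      /* no file: pagefile-backed */
                NULL, PAGE_READWRITE,
                0, 65536,                  /* 64 KiB */
                L"Local\\demo_section");   /* named => openable by others */
            if (sec == NULL) {
                fprintf(stderr, "CreateFileMappingW: %lu\n", GetLastError());
                return 1;
            }
            char *view = MapViewOfFile(sec, FILE_MAP_ALL_ACCESS, 0, 0, 0);
            if (view != NULL) {
                view[0] = 'x';             /* touch the view */
                UnmapViewOfFile(view);
            }
            CloseHandle(sec);
            return 0;
        }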

    Whereas in *nix a process can map a file and there is just one user of
    that section; then it forks, and now there are multiple users. Then that
    child can change the address space and fork again. *nix needs to maintain
    various data structures to support forking memory just in case it happens.
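
    A small sketch of that *nix behaviour: the same anonymous mapping gains a
    second user at fork() time, with MAP_SHARED vs MAP_PRIVATE deciding
    whether the child keeps seeing the parent's later writes:

        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void)
        {
            /* one user of each mapping now ... */
            char *shr = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                             MAP_SHARED | MAP_ANONYMOUS, -1, 0);
            char *prv = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            strcpy(shr, "before fork");
            strcpy(prv, "before fork");

            /* ... two users after fork(); the kernel must now track both */
            if (fork() == 0) {
                sleep(1);                          /* crude: parent writes first */
                printf("shared:  \"%s\"\n", shr);  /* sees "after fork" */
                printf("private: \"%s\"\n", prv);  /* still "before fork" */
                _exit(0);
            }
            strcpy(shr, "after fork");
            strcpy(prv, "after fork");
            wait(NULL);
            return 0;
        }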

    WSL1 was an _emulation_ of Linux, essentially a subsystem like the OS2 and
    Posix ones. WSL1 apparently supported fork() but did so by copying the
    memory space, making it slow, whereas fork-exec/spawn would be fast.
    Trying to emulate Linux with a privileged subsystem of helper processes
    was likely (I never used it) a lot of work, slow, and flaky.

    WSL2 sounds like they tossed the whole WSL1 approach and built a Hyper-V
    virtual machine to run native Linux with WinNT as the host.

    <https://en.wikipedia.org/wiki/Windows_Subsystem_for_Linux#History>

    The same problem seems to have messed up all the attempts to provide good Unix emulation on VMS. It's notable that MICA started out trying to
    provide both VMS and Unix APIs, but this was dropped in favour of a
    separate Unix OS before MICA was cancelled.

    <https://en.wikipedia.org/wiki/DEC_MICA#Design_goals>

    I think the whole _personality_ concept, along with the supposed
    portability to non-x86 architectures, had just bit-rotted away by
    that point.

    Some combination of that, Microsoft confidence that "of course we can do something better now!" - they are very prone to overconfidence - and the terrible tendency of programmers to ignore the details of the old code.

    John

    Back then "object oriented" and "micro-kernel" buzzwords were all the rage.


  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.arch on Mon Oct 14 23:51:27 2024

    On Tue, 8 Oct 2024 22:28 +0100 (BST), John Dallman wrote:

    The same problem seems to have messed up all the attempts to provide
    good Unix emulation on VMS.

    Wasn't it the Perl build scripts that, at some point in their
    compatibility tests on a *nix system, would announce
    “Congratulations! You’re not running EUNICE!”?

    In article <vdvvae$1k931$2@dont-email.me>, ldo@nz.invalid (Lawrence D'Oliveiro) wrote:

    I think the whole _personality_ concept, along with the supposed
    portability to non-x86 architectures, had just bit-rotted away by that
    point.

    Some combination of that, Microsoft confidence that "of course we can do something better now!" - they are very prone to overconfidence - and the terrible tendency of programmers to ignore the details of the old code.

    It was the Microsoft management that did it -- the culmination of a whole sequence of short-term, profit-oriented decisions over many years ...
    decades. What may have started out as an “elegant design” finally became unrecognizable as such.

    Compare what was happening to Linux over the same time interval, where the programmers were (largely) not beholden to managers and bean counters.
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.arch on Mon Oct 14 23:55:59 2024

    On Wed, 09 Oct 2024 13:37:41 -0400, EricP wrote:

    The Posix interface support was there so *MS* could bid on US government
    and military contracts which, in that time frame, were making noise
    about Posix being standard for all their contracts.
    The Posix DLLs didn't come with WinNT; you had to ask MS for them
    specially.

    And that whole POSIX subsystem was so sadistically, unusably awful, it
    just had to be intended for show as a box-ticking exercise, nothing more.

    <https://www.youtube.com/watch?v=BOeku3hDzrM>

    Back then "object oriented" and "micro-kernel" buzzwords were all the
    rage.

    OO still lives on in higher-level languages. Microsoft’s one attempt to incorporate its OO architecture--Dotnet--into the lower layers of the OS,
    in Windows Vista, was an abject, embarrassing failure which hopefully
    nobody will try to repeat.

    On the other hand, some stubborn holdouts are still fond of microkernels
    -- you just have to say the whole idea is pointless, and they come out of
    the woodwork in a futile attempt to disagree ...
  • From mitchalsup@mitchalsup@aol.com (MitchAlsup1) to comp.arch on Tue Oct 15 00:17:04 2024

    On Mon, 14 Oct 2024 23:51:27 +0000, Lawrence D'Oliveiro wrote:

    On Tue, 8 Oct 2024 22:28 +0100 (BST), John Dallman wrote:

    The same problem seems to have messed up all the attempts to provide
    good Unix emulation on VMS.

    Wasn't it the Perl build scripts that, at some point in their
    compatibility tests on a *nix system, would announce
    “Congratulations! You’re not running EUNICE!”?

    In article <vdvvae$1k931$2@dont-email.me>, ldo@nz.invalid (Lawrence
    D'Oliveiro) wrote:

    I think the whole _personality_ concept, along with the supposed
    portability to non-x86 architectures, had just bit-rotted away by that
    point.

    Some combination of that, Microsoft confidence that "of course we can do
    something better now!" - they are very prone to overconfidence - and the
    terrible tendency of programmers to ignore the details of the old code.

    It was the Microsoft management that did it -- the culmination of a
    whole sequence of short-term, profit-oriented decisions over many
    years ... decades. What may have started out as an “elegant design”
    finally became unrecognizable as such.

    Compare what was happening to Linux over the same time interval,
    where the programmers were (largely) not beholden to managers and
    bean counters.

    Last 5 words are unnecessary.
  • From Michael S@already5chosen@yahoo.com to comp.arch on Tue Oct 15 11:16:55 2024

    On Mon, 14 Oct 2024 23:55:59 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Wed, 09 Oct 2024 13:37:41 -0400, EricP wrote:

    The Posix interface support was there so *MS* could bid on US
    government and military contracts which, in that time frame, were
    making noise about Posix being standard for all their contracts.
    The Posix DLLs didn't come with WinNT; you had to ask MS for them specially.

    And that whole POSIX subsystem was so sadistically, unusably awful,
    it just had to be intended for show as a box-ticking exercise,
    nothing more.

    <https://www.youtube.com/watch?v=BOeku3hDzrM>

    Back then "object oriented" and "micro-kernel" buzzwords were all
    the rage.

    OO still lives on in higher-level languages. Microsoft’s one attempt
    to incorporate its OO architecture--Dotnet--into the lower layers of
    the OS, in Windows Vista, was an abject, embarrassing failure which
    hopefully nobody will try to repeat.

    It sounds like you're confusing .net with something unrelated.
    Probably with Microsoft's failed WinFS filesystem.
    WinFS was *not* object-oriented.
    AFAIK, .net is a hugely successful application development technology
    that was never incorporated into the lower layers of the OS.
    If you are interested in failed attempts to incorporate .net into
    something it does not fit, then consider Silverlight.
    But then, the story of Silverlight is not dissimilar to the story of
    in-browser Java, with the main difference that the latter was more
    harmful to the industry.

    On the other hand, some stubborn holdouts are still fond of
    microkernels -- you just have to say the whole idea is pointless, and
    they come out of the woodwork in a futile attempt to disagree ...

    Seems you are not ashamed to admit your trolling tactics.
  • From jgd@jgd@cix.co.uk (John Dallman) to comp.arch on Tue Oct 15 18:41:40 2024

    In article <20241015111655.000064b3@yahoo.com>, already5chosen@yahoo.com (Michael S) wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    OO still lives on in higher-level languages. Microsoft's one
    attempt to incorporate its OO architecture--Dotnet--into the
    lower layers of the OS, in Windows Vista, was an abject,
    embarrassing failure which hopefully nobody will try to repeat.

    AFAIK, .net is hugely successful application development technology
    that was never incorporated into lower layers of the OS.

    You're correct. There was an experimental Microsoft OS that was almost
    entirely written in .NET but it was never commercialised.

    <https://en.wikipedia.org/wiki/Singularity_(operating_system)>

    John
  • From jgd@jgd@cix.co.uk (John Dallman) to comp.arch on Tue Oct 15 18:41:40 2024

    In article <vekb2f$1co97$6@dont-email.me>, ldo@nz.invalid (Lawrence
    D'Oliveiro) wrote:

    On the other hand, some stubborn holdouts are still fond of
    microkernels -- you just have to say the whole idea is pointless,
    and they come out of the woodwork in a futile attempt to disagree

    The idea is impractical, not pointless. A hybrid kernel gives most of the advantages of a microkernel to its developers, and avoids the need for
    lots of context switches. It doesn't let you easily replace low-level OS components, but not many people actually want that.

    <https://en.wikipedia.org/wiki/Hybrid_kernel>

    Windows NT and Apple's XNU, used in all their operating systems, are both hybrid kernels, so the idea is somewhat practical.

    John
  • From scott@scott@slp53.sl.home (Scott Lurndal) to comp.arch on Tue Oct 15 18:57:07 2024

    jgd@cix.co.uk (John Dallman) writes:
    In article <vekb2f$1co97$6@dont-email.me>, ldo@nz.invalid (Lawrence
    D'Oliveiro) wrote:

    On the other hand, some stubborn holdouts are still fond of
    microkernels -- you just have to say the whole idea is pointless,
    and they come out of the woodwork in a futile attempt to disagree

    The idea is impractical, not pointless. A hybrid kernel gives most of the
    advantages of a microkernel to its developers, and avoids the need for
    lots of context switches. It doesn't let you easily replace low-level OS
    components, but not many people actually want that.

    It's useful to note that the primary shortcoming of a
    microkernel (domain crossing latency) is mostly not a problem
    on RISC processors (like ARM64) where the ring change
    takes about the same amount of time as a function call.
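
    A rough way to see the domain-crossing cost on any given machine is to
    time a trivial function call against a trivial system call; a sketch
    (Linux, using syscall(SYS_getpid) to defeat any library-side caching):

        #define _GNU_SOURCE
        #include <stdio.h>
        #include <time.h>
        #include <unistd.h>
        #include <sys/syscall.h>

        #define N 10000000L

        __attribute__((noinline)) static long nop(long x) { return x + 1; }

        static double elapsed_ns(struct timespec a, struct timespec b)
        {
            return (b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec);
        }

        int main(void)
        {
            struct timespec t0, t1;
            volatile long acc = 0;

            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (long i = 0; i < N; i++)
                acc += nop(i);                    /* user-mode call */
            clock_gettime(CLOCK_MONOTONIC, &t1);
            double call_ns = elapsed_ns(t0, t1) / N;

            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (long i = 0; i < N; i++)
                acc += syscall(SYS_getpid);       /* kernel round trip */
            clock_gettime(CLOCK_MONOTONIC, &t1);
            double sys_ns = elapsed_ns(t0, t1) / N;

            printf("call %.1f ns, syscall %.1f ns\n", call_ns, sys_ns);
            return 0;
        }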

    One might also argue that in many aspects, a hypervisor is
    a 'microkernel' with some hardware support on most modern
    CPUs.

    Disclaimer: I spent most of the 90's working with the
    Chorus microkernel.
  • From George Neuner@gneuner2@comcast.net to comp.arch on Tue Oct 15 19:51:27 2024

    On Tue, 15 Oct 2024 18:40 +0100 (BST), jgd@cix.co.uk (John Dallman)
    wrote:

    In article <vekb2f$1co97$6@dont-email.me>, ldo@nz.invalid (Lawrence
    D'Oliveiro) wrote:

    On the other hand, some stubborn holdouts are still fond of
    microkernels -- you just have to say the whole idea is pointless,
    and they come out of the woodwork in a futile attempt to disagree

    The idea is impractical, not pointless. A hybrid kernel gives most of the
    advantages of a microkernel to its developers, and avoids the need for
    lots of context switches. It doesn't let you easily replace low-level OS
    components, but not many people actually want that.

    Actually, I think there are a whole lot of people who can't afford
    non-stop server hardware but would greatly appreciate not having to
    waste time with a shutdown/reboot every time some OS component gets
    updated.

    YMMV.


    <https://en.wikipedia.org/wiki/Hybrid_kernel>

    Windows NT and Apple's XNU, used in all their operating systems, are both
    hybrid kernels, so the idea is somewhat practical.

    John
  • From Terje Mathisen@terje.mathisen@tmsw.no to comp.arch on Wed Oct 16 07:36:29 2024

    George Neuner wrote:
    On Tue, 15 Oct 2024 18:40 +0100 (BST), jgd@cix.co.uk (John Dallman)
    wrote:

    In article <vekb2f$1co97$6@dont-email.me>, ldo@nz.invalid (Lawrence
    D'Oliveiro) wrote:

    On the other hand, some stubborn holdouts are still fond of
    microkernels -- you just have to say the whole idea is pointless,
    and they come out of the woodwork in a futile attempt to disagree

    The idea is impractical, not pointless. A hybrid kernel gives most of the
    advantages of a microkernel to its developers, and avoids the need for
    lots of context switches. It doesn't let you easily replace low-level OS
    components, but not many people actually want that.

    Actually, I think there are a whole lot of people who can't afford
    non-stop server hardware but would greatly appreciate not having to
    waste time with a shutdown/reboot every time some OS component gets
    updated.

    YMMV.

    This is _exactly_ (one of) the problem(s) cloud infrastructure solves:
    As soon as you have more than a single instance of a particular server/service, then you replace them in groups so that the service sees
    zero downtime even though all the servers have been updated/replaced.

    Terje
    --
    - <Terje.Mathisen at tmsw.no>
    "almost all programming can be viewed as an exercise in caching"
  • From David Brown@david.brown@hesbynett.no to comp.arch on Wed Oct 16 09:17:03 2024

    On 16/10/2024 07:36, Terje Mathisen wrote:
    George Neuner wrote:
    On Tue, 15 Oct 2024 18:40 +0100 (BST), jgd@cix.co.uk (John Dallman)
    wrote:

    In article <vekb2f$1co97$6@dont-email.me>, ldo@nz.invalid (Lawrence
    D'Oliveiro) wrote:

    On the other hand, some stubborn holdouts are still fond of
    microkernels -- you just have to say the whole idea is pointless,
    and they come out of the woodwork in a futile attempt to disagree

    The idea is impractical, not pointless. A hybrid kernel gives most of the
    advantages of a microkernel to its developers, and avoids the need for
    lots of context switches. It doesn't let you easily replace low-level OS
    components, but not many people actually want that.

    Actually, I think there are a whole lot of people who can't afford
    non-stop server hardware but would greatly appreciate not having to
    waste time with a shutdown/reboot every time some OS component gets
    updated.

    YMMV.

    This is _exactly_ (one of) the problem(s) cloud infrastructure solves:
    As soon as you have more than a single instance of a particular server/service, then you replace them in groups so that the service sees zero downtime even though all the servers have been updated/replaced.


    That's fine - /if/ you have a service that can easily be spread across multiple systems, and you can justify the cost of that. Setting up a
    database server is simple enough.

    Setting up a database server along with a couple of read-only
    replications is harder. Adding a writeable failover secondary is harder still. Making sure that everything works /perfectly/ when the primary
    goes down for maintenance, and that everything is consistent afterwards,
    is even harder. Being sure it still all works even while the different
    parts have different versions during updates typically means you have to duplicate the whole thing so you can do test runs. And if the database
    server is not open source, your license costs will be absurd, compared
    to what you actually need to provide the service - usually just one
    server instance.

    Clouds do nothing to help any of that.

    But clouds /do/ mean that your virtual machine can be migrated (with
    zero, or almost zero, downtime) to another physical server if there are hardware problems or during hardware maintenance. And if you can do
    easy snapshots with your cloud / VM infrastructure, then you can roll
    back if things go badly wrong. So you have a single server instance,
    you plan a short period of downtime, take a snapshot, stop the service, upgrade, restart. That's what almost everyone does, other than the
    /really/ big or /really/ critical service providers.

  • From Paul A. Clayton@paaronclayton@gmail.com to comp.arch on Wed Oct 16 11:34:31 2024

    On 10/14/24 7:55 PM, Lawrence D'Oliveiro wrote:
    [snip]
    On the other hand, some stubborn holdouts are still fond of microkernels
    -- you just have to say the whole idea is pointless, and they come out of
    the woodwork in a futile attempt to disagree ...

    While the argument that only microkernels can provide modularity
    with respect to software development seems highly flawed, modularity
    with respect to privilege seems more challenging (impossible?) for a
    monolithic kernel, and modularity with respect to fault isolation
    seems to require substantially more discipline/constraint than is
    typical for a monolithic design.

    Data isolation seems possible in a monolithic kernel such that a
    failure could be isolated to a specific subsystem and that
    subsystem could be restarted into a known good state.
    Microrebooting seems uncommon. I am guessing this comes from
    extremely high availability not being that important and/or other
    mechanisms being used for availability, especially at warehouse
    scale.

    Physical distribution of functionality may also be more foreign to
    a monolithic kernel design. E.g., pinning functionality to a
    particular core or kind of core may urge message passing. In
    theory, something like MWAIT could be used for a fast and targeted inter-processor interrupt, but the limit of one wait condition per
    active thread is a significant constraint.
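
    As a concrete (and hedged) example of that MWAIT-style wakeup: on x86
    parts with the WAITPKG extension, user mode gets _umonitor/_umwait, where
    a plain store to the monitored line acts as the targeted wakeup, with
    exactly one armed wait condition per thread, as noted above. A sketch
    (needs gcc -mwaitpkg and supporting hardware):

        #include <immintrin.h>    /* _umonitor/_umwait (WAITPKG) */
        #include <x86intrin.h>    /* __rdtsc */
        #include <pthread.h>
        #include <stdio.h>

        static volatile unsigned long mailbox;  /* the monitored line */

        static void *waiter(void *arg)
        {
            (void) arg;
            while (mailbox == 0) {
                _umonitor((void *) &mailbox);   /* arm the (single) monitor */
                if (mailbox != 0)               /* re-check before sleeping */
                    break;
                _umwait(0, __rdtsc() + 100000000ULL); /* wait, TSC deadline */
            }
            printf("woken, mailbox=%lu\n", mailbox);
            return NULL;
        }

        int main(void)
        {
            pthread_t t;
            pthread_create(&t, NULL, waiter, NULL);
            mailbox = 42;     /* a plain store acts as the targeted wakeup */
            pthread_join(&t, NULL);
            return 0;
        }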

    The primary argument against microkernels seems to be the poor
    performance due to changing permission and more abstracted
    communication. Most of the overhead for permission change is not
    physically fundamental; the overhead can be nearly equal to that
    of a function call. Since the overhead of indirect function calls
    seems to be considered acceptable in a monolithic kernel, the
    performance overhead argument seems limited to existing hardware
    rather than implementable hardware.

    (This also depends on permission metadata being present in a
    nearby cache. If the code/data and permission caches have similar
    persistence, this would mean the fast case would be nearly equal.
    With hierarchical page tables — especially if nested — the slow
    case for a permission change can be much worse.)

    Software like FUSE (Filesystem in Userspace) hints that some
    microkernel aspects are desirable even in a monolithic kernel
    system.
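
    For concreteness, a minimal single-file filesystem against the libfuse 3
    API, with all handlers running in an ordinary user process; the file name
    and contents are placeholders (build: gcc hello_fuse.c $(pkg-config fuse3
    --cflags --libs)):

        #define FUSE_USE_VERSION 31
        #include <fuse.h>
        #include <errno.h>
        #include <fcntl.h>
        #include <string.h>
        #include <sys/stat.h>

        static const char *msg = "hello from user space\n";
        static const char *hello = "/hello";

        static int fs_getattr(const char *path, struct stat *st,
                              struct fuse_file_info *fi)
        {
            (void) fi;
            memset(st, 0, sizeof *st);
            if (strcmp(path, "/") == 0) {
                st->st_mode = S_IFDIR | 0755;  st->st_nlink = 2;
            } else if (strcmp(path, hello) == 0) {
                st->st_mode = S_IFREG | 0444;  st->st_nlink = 1;
                st->st_size = (off_t) strlen(msg);
            } else
                return -ENOENT;
            return 0;
        }

        static int fs_readdir(const char *path, void *buf,
                              fuse_fill_dir_t fill, off_t off,
                              struct fuse_file_info *fi,
                              enum fuse_readdir_flags flags)
        {
            (void) off; (void) fi; (void) flags;
            if (strcmp(path, "/") != 0)
                return -ENOENT;
            fill(buf, ".", NULL, 0, 0);
            fill(buf, "..", NULL, 0, 0);
            fill(buf, hello + 1, NULL, 0, 0);
            return 0;
        }

        static int fs_open(const char *path, struct fuse_file_info *fi)
        {
            if (strcmp(path, hello) != 0)
                return -ENOENT;
            if ((fi->flags & O_ACCMODE) != O_RDONLY)
                return -EACCES;
            return 0;
        }

        static int fs_read(const char *path, char *buf, size_t size,
                           off_t off, struct fuse_file_info *fi)
        {
            (void) fi;
            size_t len = strlen(msg);
            if (strcmp(path, hello) != 0)
                return -ENOENT;
            if (off >= (off_t) len)
                return 0;
            if ((size_t) off + size > len)
                size = len - (size_t) off;
            memcpy(buf, msg + off, size);
            return (int) size;
        }

        static const struct fuse_operations ops = {
            .getattr = fs_getattr,
            .readdir = fs_readdir,
            .open    = fs_open,
            .read    = fs_read,
        };

        int main(int argc, char *argv[])
        {
            /* the whole "filesystem" runs unprivileged; the kernel's FUSE
               driver forwards VFS requests to this process */
            return fuse_main(argc, argv, &ops, NULL);
        }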

    PA-RISC and Itanium had page groups, which could allow fast
    permission removal (invalidating or removing some permissions from
    a page group key could be fast). Fast de-privileging might be
    useful. Scanning a binary for (not) re-enabling might be practical
    if the operation is not simply a store, and this would allow
    re-enabling permissions to be fast. However, actually removing
    the permission to grant permissions seems better.

    Itanium's Enter Privileged Code (EPC) instruction was intended to
    provide fast system calls, but it had some complications in
    interacting with other Itanium features (I vaguely recall).

    I know relatively little about OSes, but the arguments I have read
    on both sides seem to have been very biased.
  • From George Neuner@gneuner2@comcast.net to comp.arch on Wed Oct 16 21:19:34 2024

    On Wed, 16 Oct 2024 09:17:03 +0200, David Brown
    <david.brown@hesbynett.no> wrote:

    On 16/10/2024 07:36, Terje Mathisen wrote:
    George Neuner wrote:
    On Tue, 15 Oct 2024 18:40 +0100 (BST), jgd@cix.co.uk (John Dallman)
    wrote:

    In article <vekb2f$1co97$6@dont-email.me>, ldo@nz.invalid (Lawrence
    D'Oliveiro) wrote:

    On the other hand, some stubborn holdouts are still fond of
    microkernels -- you just have to say the whole idea is pointless,
    and they come out of the woodwork in a futile attempt to disagree

    The idea is impractical, not pointless. A hybrid kernel gives most of the
    advantages of a microkernel to its developers, and avoids the need for
    lots of context switches. It doesn't let you easily replace low-level OS
    components, but not many people actually want that.

    Actually, I think there are a whole lot of people who can't afford
    non-stop server hardware but would greatly appreciate not having to
    waste time with a shutdown/reboot every time some OS component gets
    updated.

    YMMV.

    This is _exactly_ (one of) the problem(s) cloud infrastructure solves:
    As soon as you have more than a single instance of a particular
    server/service, then you replace them in groups so that the service sees
    zero downtime even though all the servers have been updated/replaced.


    That's fine - /if/ you have a service that can easily be spread across
    multiple systems, and you can justify the cost of that. Setting up a
    database server is simple enough.

    Setting up a database server along with a couple of read-only
    replications is harder. Adding a writeable failover secondary is harder
    still. Making sure that everything works /perfectly/ when the primary
    goes down for maintenance, and that everything is consistent afterwards,
    is even harder. Being sure it still all works even while the different
    parts have different versions during updates typically means you have to
    duplicate the whole thing so you can do test runs. And if the database
    server is not open source, your license costs will be absurd, compared
    to what you actually need to provide the service - usually just one
    server instance.

    Clouds do nothing to help any of that.

    But clouds /do/ mean that your virtual machine can be migrated (with
    zero, or almost zero, downtime) to another physical server if there are
    hardware problems or during hardware maintenance. And if you can do
    easy snapshots with your cloud / VM infrastructure, then you can roll
    back if things go badly wrong. So you have a single server instance,
    you plan a short period of downtime, take a snapshot, stop the service,
    upgrade, restart. That's what almost everyone does, other than the
    /really/ big or /really/ critical service providers.

    For various definitions of "short period of downtime". 8-)

    Fortunately, Linux installs updates - or stages updates for restart -
    much faster than Windoze. But rebooting to the point that all the
    services are running still can take several minutes.

    That can feel like an eternity when it's the only <whatever> server in
    a small business.
  • From David Brown@david.brown@hesbynett.no to comp.arch on Thu Oct 17 14:39:45 2024

    On 17/10/2024 03:19, George Neuner wrote:
    On Wed, 16 Oct 2024 09:17:03 +0200, David Brown
    <david.brown@hesbynett.no> wrote:

    On 16/10/2024 07:36, Terje Mathisen wrote:
    George Neuner wrote:
    On Tue, 15 Oct 2024 18:40 +0100 (BST), jgd@cix.co.uk (John Dallman)
    wrote:

    In article <vekb2f$1co97$6@dont-email.me>, ldo@nz.invalid (Lawrence
    D'Oliveiro) wrote:

    On the other hand, some stubborn holdouts are still fond of
    microkernels -- you just have to say the whole idea is pointless,
    and they come out of the woodwork in a futile attempt to disagree

    The idea is impractical, not pointless. A hybrid kernel gives most of the
    advantages of a microkernel to its developers, and avoids the need for
    lots of context switches. It doesn't let you easily replace low-level OS
    components, but not many people actually want that.

    Actually, I think there are a whole lot of people who can't afford
    non-stop server hardware but would greatly appreciate not having to
    waste time with a shutdown/reboot every time some OS component gets
    updated.

    YMMV.

    This is _exactly_ (one of) the problem(s) cloud infrastructure solves:
    As soon as you have more than a single instance of a particular
    server/service, then you replace them in groups so that the service sees
    zero downtime even though all the servers have been updated/replaced.


    That's fine - /if/ you have a service that can easily be spread across
    multiple systems, and you can justify the cost of that. Setting up a
    database server is simple enough.

    Setting up a database server along with a couple of read-only
    replications is harder. Adding a writeable failover secondary is harder
    still. Making sure that everything works /perfectly/ when the primary
    goes down for maintenance, and that everything is consistent afterwards,
    is even harder. Being sure it still all works even while the different
    parts have different versions during updates typically means you have to
    duplicate the whole thing so you can do test runs. And if the database
    server is not open source, your license costs will be absurd, compared
    to what you actually need to provide the service - usually just one
    server instance.

    Clouds do nothing to help any of that.

    But clouds /do/ mean that your virtual machine can be migrated (with
    zero, or almost zero, downtime) to another physical server if there are
    hardware problems or during hardware maintenance. And if you can do
    easy snapshots with your cloud / VM infrastructure, then you can roll
    back if things go badly wrong. So you have a single server instance,
    you plan a short period of downtime, take a snapshot, stop the service,
    upgrade, restart. That's what almost everyone does, other than the
    /really/ big or /really/ critical service providers.

    For various definitions of "short period of downtime". 8-)

    Yes, indeed.


    Fortunately, Linux installs updates - or stages updates for restart -
    much faster than Windoze. But rebooting to the point that all the
    services are running still can take several minutes.


    My experience is that the updates on Linux servers are usually fast (for desktops they can be slow, but that is usually because you have far more
    and bigger programs). Updates for virtual machines are particularly
    fast because you generally have a minimum of programs in the VM.
    Restarts are also fast for virtual machines - physical servers are often
    slow to restart, sometimes taking many minutes before they get to the
    point of starting the OS boot.

    That can feel like an eternity when it's the only <whatever> server in
    a small business.

    Sure. But for most small businesses, it's not hard to find off-peak
    times when you can have hours of downtime without causing a problem.
