From Newsgroup: comp.arch
Michael S <already5chosen@yahoo.com> writes:
>Even if 99% is correct, there were still 6-7 figures worth of
>dual-processor x86 systems sold each year and starting from 1997 at
>least tens of thousands of quads.
>Absence of ordering definitions should have been a problem for a lot of
>people. But somehow, it was not.
I remember Andy Glew posting here about the strong ordering that Intel
had at the time, and that it leads to superior performance compared to
weak ordering.
mitchalsup@aol.com (MitchAlsup1) wrote:
>Also note: this was just after the execution pipeline went
>Great Big Out of Order, and thus made the lack of order
>problems much more visible to applications. {Pentium Pro}
Nonsense. Stores are published in architectural order, and loads have
to be architecturally ordered wrt. local stores already in a
single-core system. And once you have that, why should the ordering
wrt. remote stores be any worse than on an in-order machine?
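
Concretely, consider the classic message-passing litmus test.  The
sketch below uses C++ relaxed atomics only to get plain loads and
stores out of the compiler; the guarantee under discussion comes from
the hardware's store ordering, not from the C++ memory model (which
promises this only with acquire/release or stronger).  Function names
are mine, for illustration:

#include <atomic>
#include <cassert>
#include <thread>

// Message-passing litmus test.  On a strongly ordered machine such
// as x86, the two stores in writer() become visible in program
// order, so a reader that sees flag==1 also sees data==42 -- without
// any barrier.  On a weakly ordered machine the stores may become
// visible out of order and the assert can fire.
std::atomic<int> data{0};
std::atomic<int> flag{0};

void writer() {
    data.store(42, std::memory_order_relaxed); // the payload
    flag.store(1, std::memory_order_relaxed);  // "publish" it
}

void reader() {
    while (flag.load(std::memory_order_relaxed) == 0)
        ;                                      // spin until published
    assert(data.load(std::memory_order_relaxed) == 42);
}

int main() {
    std::thread t1(writer), t2(reader);
    t1.join();
    t2.join();
}

On x86 these relaxed operations compile to plain mov instructions,
and the strong store ordering makes the assert hold anyway.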
Note that the weak-ordering advocacy (such as [adve&gharachorloo95])
arose in companies that (at the time) built only in-order CPUs.

Actually, OoO technology offers a way to make the ordering strong
without having to pay for barriers and somesuch.  We may not yet have
enough buffers for implementing sequential consistency efficiently,
but maybe, if we ask for sequential consistency, hardware designers
will find a way to provide enough buffers for that.
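
To spell out what sequential consistency buys: in the store-buffering
(Dekker-style) litmus test below, SC forbids the outcome
r1 == 0 && r2 == 0, because some store must come first in the single
global order.  A minimal C++ sketch (the thread structure and names
are mine, for illustration):

#include <atomic>
#include <thread>

// Store-buffering (Dekker) litmus test.  Under sequential
// consistency, r1 == 0 && r2 == 0 is impossible.  Under weaker
// models (including x86 TSO, where a load may pass the local store
// buffer) both loads can see 0 unless a barrier follows each store.
std::atomic<int> x{0}, y{0};
int r1, r2;

int main() {
    std::thread t1([] {
        x.store(1, std::memory_order_seq_cst);
        r1 = y.load(std::memory_order_seq_cst);
    });
    std::thread t2([] {
        y.store(1, std::memory_order_seq_cst);
        r2 = x.load(std::memory_order_seq_cst);
    });
    t1.join();
    t2.join();
    // Here r1 == 0 && r2 == 0 cannot have happened.
}

On today's x86 the compiler implements the seq_cst stores with a
locked instruction or an mfence, i.e., exactly the barrier cost
mentioned above; hardware that provided sequential consistency
directly would make that explicit cost unnecessary.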
@TechReport{adve&gharachorloo95,
author = {Sarita V. Adve and Kourosh Gharachorloo},
title = {Shared Memory Consistency Models: A Tutorial},
institution = {Digital Western Research Lab},
year = {1995},
type = {WRL Research Report},
number = {95/7},
annote = {Gives an overview of architectural features of
shared-memory computers such as independent memory
banks and per-CPU caches, and how they make the (for
programmers) most natural consistency model hard to
implement, giving examples of programs that can fail
with weaker consistency models. It then discusses
several categories of weaker consistency models and
actual consistency models in these categories, and
which ``safety net'' (e.g., memory barrier
instructions) programmers need to use to work around
                 the deficiencies of these models. While the authors
                 recognize that programmers find it difficult to use
                 these safety nets correctly and efficiently, the
                 paper still advocates weaker consistency models,
                 claiming that sequential consistency is too
                 inefficient, and supports this claim by outlining an
                 inefficient implementation (which is of course no
                 proof that no efficient implementation exists).
                 Still, the paper is a good introduction to the
                 issues involved.}
}
- anton
--
'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
  Mitch Alsup, <c17fcd89-f024-40e7-a594-88a85ac10d20o@googlegroups.com>