On 4/5/25 3:40 PM, The Natural Philosopher wrote:
On 05/04/2025 20:22, c186282 wrote:
Analog ...
Massive arrays of non-linear analogue circuits for modelling things
like the Navier-Stokes equations would be possible: probably make a
better stab at climate modelling than the existing shit.
Again with analog, it's the sensitivity to conditions, especially
temperature, that adds errors in. Keep
carrying those errors through several stages and soon
all you have is error, pretending to be The Solution.
Again, perhaps some meta-material that's NOT sensitive
to what typically throws off analog electronics MIGHT
be made.
I'm trying to visualize what it would take to make
an all-analog version of, say, a payroll spreadsheet :-)
Now discrete use of analog, as you suggested, doing
multiplication/division/logs initiated and read by
digital ... ?
Oh well, we're out in sci-fi land with most of this ...
may as well talk about using giant evil brains in
jars as computers :-)
As some here have mentioned, we may be closer to the
limits of computer power than we'd like to think.
Today's big trick is parallelization, but only some
kinds of problems can be modeled that way.
Saw an article the other day about using some kind
of disulfide for de-facto transistors, but did not
get the impression that they'd be fast. I think
temperature resistance was the main thrust - industrial
apps, Venus landers and such.
On 4/5/25 18:27, c186282 wrote:
On 4/5/25 3:40 PM, The Natural Philosopher wrote:
On 05/04/2025 20:22, c186282 wrote:
Analog ...
Massive arrays of non-linear analogue circuits for modelling things
like the Navier-Stokes equations would be possible: probably make a
better stab at climate modelling than the existing shit.
Again with analog, it's the sensitivity to conditions, especially
temperature, that adds errors in. Keep
carrying those errors through several stages and soon
all you have is error, pretending to be The Solution.
Again, perhaps some meta-material that's NOT sensitive
to what typically throws off analog electronics MIGHT
be made.
I'm trying to visualize what it would take to make
an all-analog version of, say, a payroll spreadsheet :-)
Woogh! That makes my brain hurt.
Now discrete use of analog, as you suggested, doing
multiplication/division/logs initiated and read by
digital ... ?
Oh well, we're out in sci-fi land with most of this ...
may as well talk about using giant evil brains in
jars as computers :-)
As some here have mentioned, we may be closer to the
limits of computer power than we'd like to think.
Today's big trick is parallelization, but only some
kinds of problems can be modeled that way.
Saw an article the other day about using some kind
of disulfide for de-facto transistors, but did not
get the impression that they'd be fast. I think
temperature resistance was the main thrust - industrial
apps, Venus landers and such.
Actually, one of the things that analog's still good at is real-world control systems with feedback loops and the like.
I had one project 'way back in the 80s where we were troubleshooting a line that had a 1960s-era analog control system, and
one of the conversations that came up was whether to replace it with digital.
It got looked into, and it was determined that digital process controls
weren't fast enough for the line.
Fast-forward to ~2005. While back visiting that department, I found out that that old analog beast was still running the line and they were
trolling eBay for parts to keep it running.
On another visit ~2015, the update: they finally found a new digitally based control system that was fast enough to finally replace it & did.
Indeed ! However ... probably COULD be done, it's
a bunch of shifting values - input to some accts,
calx ops, shift to other accts ....... lots and
lots of rheostats ........
On Mon, 07 Apr 2025 17:59:45 -0400, c186282 wrote:
Indeed ! However ... probably COULD be done, it's
a bunch of shifting values - input to some accts,
calx ops, shift to other accts ....... lots and
lots of rheostats ........
How is conditional branching (e.g. an if-then-else statement)
to be implemented with analog circuits? It cannot be
done.
Analog computers are good for modelling systems that are
described by differential equations. Adders, differentiators,
and integrators can all be easily implemented with electronic
circuits. But beyond differential-equation systems, analog
computers are useless.
The Norden bombsight of WWII was an electro-mechanical
computer. Its job was to calculate the trajectory of
a bomb released by an aircraft, and the trajectory is described
by a differential equation.
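(For fun: that sort of trajectory calculation is trivial to redo digitally today. A minimal Python sketch, assuming a toy constant-gravity, no-drag model - the real Norden of course also accounted for airspeed, altitude and wind:

  import math

  def bomb_range(altitude_m, airspeed_ms, g=9.81):
      # Horizontal distance travelled before impact, no drag:
      # fall time from h = g*t^2/2, then x = v*t.
      fall_time = math.sqrt(2 * altitude_m / g)
      return airspeed_ms * fall_time

  print(bomb_range(6000, 120))   # roughly 4200 m ahead of the target

The analog machine effectively solved the same equations continuously, with gears and gyros instead of arithmetic.)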
One of my professors told a story about a common "analog"
practice among engineers of the past. To calculate an integral,
which can be described as the area under a curve, they would plot
the curve on well made paper and then cut out (with scissors)
the plotted area and weigh it (on a lab balance). The ratio
of the cut-out area with a unit area of paper would be the
value of the integral. (Multi-dimensional integrals would
require carving blocks of balsa wood or a similar material.)
Of course it worked, but today integration is easy to perform
to essentially arbitrary accuracy by digital means.
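(The digital counterpart of the cut-and-weigh trick is just ordinary numerical quadrature. A minimal Python sketch - the composite trapezoidal rule, with the function and interval picked arbitrarily for illustration:

  import math

  def trapezoid(f, a, b, n=10_000):
      # Approximate the area under f on [a, b]:
      # the "weigh the paper" estimate, done by summation.
      h = (b - a) / n
      area = 0.5 * (f(a) + f(b))
      for i in range(1, n):
          area += f(a + i * h)
      return area * h

  print(trapezoid(math.sin, 0.0, math.pi))   # ~2.0, the exact answer

More steps, or a smarter rule, buys more digits - which is the sense in which digital integration can be pushed to essentially arbitrary accuracy.)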
On 4/7/25 4:39 PM, -hh wrote:
...
I had one project some time 'way back in the 80s where we were
troubleshooting a line that had a 1960s-era analog control system, and
one of the conversations that came up was whether to replace it with
digital. It got looked into, and it was determined that digital process
controls weren't fast enough for the line.
Fast-forward to ~2005. While back visiting that department, I found
out that that old analog beast was still running the line and they
were trolling eBay for parts to keep it running.
Hey, so long as it works well !
On another visit ~2015, the update: they finally found a new
digitally based control system that was fast enough to finally replace
it & did.
What was the thing doing ?
Oh, on-theme, apparently Team Musk's nerd squad
managed to CRASH a fair segment of the SSA customer
web sites while trying to add some "anti-fraud"
feature :-)
PROBABLY no COBOL involved ... well, maybe ....
On 4/7/25 17:59, c186282 wrote:
On 4/7/25 4:39 PM, -hh wrote:
...
I had one project some time 'way back in the 80s where we were
troubleshooting a line that had a 1960s-era analog control system,
and one of the conversations that came up was whether to replace it with
digital. It got looked into, and it was determined that digital process
controls weren't fast enough for the line.
Fast-forward to ~2005. While back visiting that department, I found
out that that old analog beast was still running the line and they
were trolling eBay for parts to keep it running.
Hey, so long as it works well !
It did, so long as there were parts for it.
On another visit ~2015, the update: they finally found a new
digitally based control system that was fast enough to finally
replace it & did.
What was the thing doing ?
It was running a high-speed manufacturing line. If memory serves,
roughly 1200 ppm, so 20 parts per second.
For a digital system that's a budget of ~50 milliseconds of total
processing time per part, so one can see how early digital stuff
couldn't maintain that pace; but as PCs got faster, it wasn't really
clear why it remained "too hard".
That seemed to have come from the architecture. It's a series of linked tooling-station heads, each head having 22(?) sets of tools running basically in parallel; but because everything was indexed, a part that
went through Station 1 on Head A then went through Station 1 on
Head B, Station 1 on C, 1 on D, 1 on E, etc ...
The process had interactive feedback loops all over the place between multiple heads (& other stuff), such that if Head E started to report
its hydraulic psi was running high, that meant the anneal back between B & C had been insufficient, so turn up the voltage on that annealing station ... and if that was already running high, then turn up the voltage
on an earlier annealing station.
But that wasn't all: it would make similar on-the-fly adjustments for
each of the individual Stations too, so if Tool 18 on Head G was complaining, it could adjust the settings on Tool 18 on Heads A-F
upstream of G ... and H-K downstream too, if that was also a fix.
It must have been an incredible project back in the 1960s to get it all
figured out and so well balanced.
The modernization eventually came along because the base machines were expensive - probably a "lost art" IMO - but were known to be capable of running much faster, and it was finally the push to run faster that got digitization over the goal line. I think they ended
up just a shade over 2000 ppm; I'll ask the next time I stop by.
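(That cascaded correction logic is easy to caricature in a few lines. A hedged Python sketch - the names, thresholds and the "bump the nearest upstream anneal, else push further upstream" rule are mine, just to illustrate the flavour of what's described above:

  PSI_LIMIT = 1800   # made-up hydraulic pressure threshold
  V_MAX = 480        # made-up annealing-voltage ceiling

  def correct(psi_reading, anneal_volts):
      # anneal_volts: upstream annealing-station voltages, earliest first.
      # If downstream pressure runs high, raise the nearest upstream
      # station's voltage; if it's already maxed out, move further upstream.
      if psi_reading <= PSI_LIMIT:
          return anneal_volts
      for i in reversed(range(len(anneal_volts))):
          if anneal_volts[i] < V_MAX:
              anneal_volts[i] += 5
              break
      return anneal_volts

  print(correct(1900, [440, 475, 480]))   # -> [440, 480, 480]

The 1960s machine presumably did the equivalent continuously in analog hardware, with no 50 ms budget to worry about.)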
On 4/8/25 10:28, c186282 wrote:
Oh, on-theme, apparently Team Musk's nerd squad
managed to CRASH a fair segment of the SSA customer
web sites while trying to add some "anti-fraud"
feature :-)
PROBABLY no COBOL involved ... well, maybe ....
Oh, it's worse than that.
"The network crashes appear to be caused by an expansion initiated by
the Trump team of an existing contract with a credit-reporting agency
that tracks names, addresses and other personal information to verify customers’ identities. The enhanced fraud checks are now done earlier in the claims process and have resulted in a boost to the volume of
customers who must pass the checks."
<https://gizmodo.com/social-security-website-crashes-blamed-on-doge-software-update-2000586092>
Translation:
They *moved* where an existing credit agency check is done, but didn't
load test it before going live ... and golly, they broke it!
But the more important question here is:
**WHY** did they move where this check is done?
This check already existed, so moving where it's done isn't going
to catch more fraud.
Plus front-loading it before you've run your in-house checks means that
your operating expenses to this contractor service go UP not down. Yes, that's a deliberate waste of taxpayer dollars.
The only motivation I can see is propaganda: this change will find more 'fraud' at the contractor's check ... but not more fraud in total.
Expect them to use the before/after contractor numbers only to falsely
claim that they've found 'more' fraud. No, they're committing fraud.
On 2025-04-08, -hh <recscuba_google@huntzinger.com> wrote:
Plus front-loading it before you've run your in-house checks means that
your operating expenses to this contractor service go UP not down. Yes,
that's a deliberate waste of taxpayer dollars.
You'd think someone would want to try to reduce that waste.
Maybe set up a Department Of Government Efficiency or something...
There's a fundamental political rule, esp in 'democracies',
that goes "ALWAYS be seen as *DOING SOMETHING*"
I did mention one possible gain in doing the ID checks
earlier - giving Vlad and friends less access to the
deeper pages/system, places where more exploitable
flaws live.
In short, put up a big high city wall - then you
don't have to worry AS much about the inner layers
of the city.
On 2025-04-09, c186282 <c186282@nnada.net> wrote:
There's a fundamental political rule, esp in 'democracies',
that goes "ALWAYS be seen as *DOING SOMETHING*"
Something must be done. This is something.
Therefore, this must be done.
-- Yes, Prime Minister
On 4/8/25 7:18 PM, Charlie Gibbs wrote:
On 2025-04-08, -hh <recscuba_google@huntzinger.com> wrote:
Plus front-loading it before you've run your in-house checks means that
your operating expenses to this contractor service go UP not down. Yes, that's a deliberate waste of taxpayer dollars.
You'd think someone would want to try to reduce that waste.
Maybe set up a Department Of Government Efficiency or something...
Hey ... humans are only JUST so smart, AI is
even more stupid, and govt agencies .........
Likely the expense of the earlier checks does NOT add
up to much.
I did mention one possible gain in doing the ID checks
earlier - giving Vlad and friends less access to the
deeper pages/system, places where more exploitable
flaws live.
In short, put up a big high city wall - then you don't have to worry AS much about the inner layers
of the city.
On 4/8/25 22:29, c186282 wrote:
On 4/8/25 7:18 PM, Charlie Gibbs wrote:
On 2025-04-08, -hh <recscuba_google@huntzinger.com> wrote:
Plus front-loading it before you've run your in-house checks means that
your operating expenses to this contractor service go UP not down. Yes,
that's a deliberate waste of taxpayer dollars.
You'd think someone would want to try to reduce that waste.
Maybe set up a Department Of Government Efficiency or something...
Hey ... humans are only JUST so smart, AI is
even more stupid, and govt agencies .........
Likely the expense of the earlier checks does NOT add
up to much.
It might not be, but in this case, the benefit of the change is
literally zero ... and the expenses are not only more money to the contractor who gets paid by the check request, but also the cost of
higher bandwidth demands which is what caused the site to crash.
I did mention one possible gain in doing the ID checks
earlier - giving Vlad and friends less access to the
deeper pages/system, places where more exploitable
flaws live.
In short, put up a big high city wall - then you don't have to worry AS much about the inner layers
of the city.
I don't really buy that, because of symmetry: when the workflow is that
a request has to successfully pass three gates, it's functionally
equivalent to (A x B x C) and the sequence doesn't matter: one gets the same outcome for (C x B x A), (A x C x B), etc.
The primary motivation for order selection comes from optimization
factors, such as the 'costs' of each gate: one puts the cheap gates
that knock out the most early, and puts the slow/expensive gates late, after the dataset's size has already been minimized.
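(Easy to demonstrate: the set of requests that survives all the gates is the same in any order; only the total work differs. A small Python sketch, with gates and costs made up purely for illustration:

  def run(requests, gates):
      # gates: list of (predicate, unit_cost); returns (survivors, total cost).
      cost, survivors = 0, []
      for r in requests:
          for check, unit_cost in gates:
              cost += unit_cost
              if not check(r):
                  break
          else:
              survivors.append(r)
      return survivors, cost

  requests = list(range(1000))
  cheap  = (lambda r: r % 2 == 0, 1)     # rejects half, costs 1 per call
  pricey = (lambda r: r % 10 != 4, 50)   # rejects few, costs 50 per call

  a = run(requests, [cheap, pricey])
  b = run(requests, [pricey, cheap])
  print(a[0] == b[0])   # True  - same survivors either way
  print(a[1], b[1])     # 26000 vs 50900 - very different total cost

Same (A x B x C) outcome, but putting the cheap, high-rejection gate first keeps the expensive gate - think the paid contractor check - off most of the traffic.)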
On 4/9/25 2:18 PM, -hh wrote:
On 4/8/25 22:29, c186282 wrote:
On 4/8/25 7:18 PM, Charlie Gibbs wrote:
On 2025-04-08, -hh <recscuba_google@huntzinger.com> wrote:
Plus front-loading it before you've run your in-house checks means
that
your operating expenses to this contractor service go UP not down.
Yes,
that's a deliberate waste of taxpayer dollars.
You'd think someone would want to try to reduce that waste.
Maybe set up a Department Of Government Efficiency or something...
Hey ... humans are only JUST so smart, AI is
even more stupid, and govt agencies .........
Likely the expense of the earlier checks does NOT add
up to much.
It might not be, but in this case, the benefit of the change is
literally zero ... and the expenses are not only more money to the
contractor who gets paid by the check request, but also the cost of
higher bandwidth demands which is what caused the site to crash.
I did mention one possible gain in doing the ID checks
earlier - giving Vlad and friends less access to the
deeper pages/system, places where more exploitable
flaws live.
In short, put up a big high city wall - then you don't have to worry AS much about the inner layers
of the city.
I don't really buy that, because of symmetry: when the workflow is
that a request has to successfully pass three gates, it's functionally
equivalent to (A x B x C) and the sequence doesn't matter: one gets
the same outcome for (C x B x A), (A x C x B), etc.
The primary motivation for order selection comes from optimization
factors, such as the 'costs' of each gate: one puts the cheap gates
that knock out the most early, and puts the slow/expensive gates
late, after the dataset's size has already been minimized.
I understand your reasoning here.
The point I was trying to make is a bit different
however - less to do with people trying to
defraud the system than with those seeking to
corrupt/destroy it. I see every web page, every
bit of HTML/PHP/JS executed, every little database
opened, as a potential source of fatal FLAWS enemies
can find and exploit to do great damage.
In that context, the sooner you can lock out pretenders
the better - less of the system exposed to the state-
sponsored hacks to analyze and pound at relentlessly.
Now Musk's little group DID make a mistake in
not taking bandwidth into account (and we do
not know how ELSE they may have screwed up
jamming new code into something they didn't
write) but 'non-optimal' verification order
MIGHT be worth the extra $$$ in an expanded
'security' context.
On 4/9/25 16:51, c186282 wrote:
On 4/9/25 2:18 PM, -hh wrote:
On 4/8/25 22:29, c186282 wrote:
On 4/8/25 7:18 PM, Charlie Gibbs wrote:
On 2025-04-08, -hh <recscuba_google@huntzinger.com> wrote:
Plus front-loading it before you've run your in-house checks means that
your operating expenses to this contractor service go UP not down. Yes,
that's a deliberate waste of taxpayer dollars.
You'd think someone would want to try to reduce that waste.
Maybe set up a Department Of Government Efficiency or something...
Hey ... humans are only JUST so smart, AI is
even more stupid, and govt agencies .........
Likely the expense of the earlier checks does NOT add
up to much.
It might not be, but in this case, the benefit of the change is
literally zero ... and the expenses are not only more money to the
contractor who gets paid by the check request, but also the cost of
higher bandwidth demands which is what caused the site to crash.
I did mention one possible gain in doing the ID checks
earlier - giving Vlad and friends less access to the
deeper pages/system, places where more exploitable
flaws live.
In short, put up a big high city wall - then you don't have to worry AS much about the inner layers
of the city.
I don't really buy that, because of symmetry: when the workflow is
that a request has to successfully pass three gates, it's functionally
equivalent to (A x B x C) and the sequence doesn't matter: one gets
the same outcome for (C x B x A), (A x C x B), etc.
The primary motivation for order selection comes from optimization
factors, such as the 'costs' of each gate: one puts the cheap gates
that knock out the most early, and puts the slow/expensive gates
late, after the dataset's size has already been minimized.
I understand your reasoning here.
The point I was trying to make is a bit different
however - less to do with people trying to
defraud the system than with those seeking to
corrupt/destroy it. I see every web page, every
bit of HTML/PHP/JS executed, every little database
opened, as a potential source of fatal FLAWS enemies
can find and exploit to do great damage.
In that context, the sooner you can lock out pretenders
the better - less of the system exposed to the state-
sponsored hacks to analyze and pound at relentlessly.
Sure, but that's not relevant here, because from a threat-vulnerability perspective it's just one big 'black box' process. Anyone attempting to probe doesn't receive intermediate milestones/checkpoints to know whether
they successfully passed or failed a gate.
Now Musk's little group DID make a mistake in
not taking bandwidth into account (and we do
not know how ELSE they may have screwed up
jamming new code into something they didn't
write) but 'non-optimal' verification order
MIGHT be worth the extra $$$ in an expanded
'security' context.
Might be worth it if it actually enhanced security. It failed to do so, because their change was just a "shuffling of the existing deck chairs".