• Re: ZFC solution to incorrect questions: reject them

    From Ross Finlayson@ross.a.finlayson@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Tue Mar 12 15:50:53 2024
    From Newsgroup: comp.ai.philosophy

    On 03/12/2024 01:28 PM, olcott wrote:
    On 3/12/2024 2:59 PM, Ross Finlayson wrote:
    On 03/12/2024 12:32 PM, olcott wrote:
    On 3/12/2024 2:08 PM, Ross Finlayson wrote:
    On 03/12/2024 08:52 AM, olcott wrote:

    ∀ H ∈ Turing_Machine_Deciders
    ∃ TMD ∈ Turing_Machine_Descriptions |
    Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)

    There is some input TMD to every H such that
    Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)

    When we disallow decider/input pairs that are incorrect
    questions where both YES and NO are the wrong answer
    (the same way the ZFC disallowed self-referential sets) then
    pathological inputs are not allowed to come into existence.

    Does the barber that shaves everyone that does not shave
    themselves shave himself? is rejected as an incorrect question.
    https://en.wikipedia.org/wiki/Barber_paradox#
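The quantified claim above can be made concrete with a small sketch. This is not any decider proposed in the thread; `predicts_halt` is a hypothetical stand-in for an arbitrary H, and the point is only that the classic diagonal construction manufactures an input that this particular H gets wrong:

```python
# Sketch of the diagonal construction behind the quantified claim above.
# `predicts_halt` is a hypothetical stand-in for an arbitrary decider H;
# the construction builds one input that this particular H mispredicts.
def make_pathological(predicts_halt):
    def tmd():
        if predicts_halt(tmd):  # H predicts "halts"...
            while True:         # ...so do the opposite and loop forever
                pass
        # H predicts "does not halt", so halt immediately
    return tmd

# With a concrete (and deliberately naive) H that always answers "halts":
always_yes = lambda p: True
d = make_pathological(always_yes)
# always_yes(d) is True, yet d() would loop forever: the prediction fails.
# (d() is deliberately not called here, since it would never return.)
```

Each choice of H yields its own pathological input, which is why the statement reads "for every H there exists a TMD", not "one TMD defeats all H".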



    People learn about ZFC when their mathematical curiosity
    brings them to questions of foundations.

    Then it's understood that it's an axiomatic theory,
    and that axioms are rules, and in an axiomatic theory,
    there result theorems derived from axioms, with axioms
    themselves being considered theorems, and that no theorems
    break any axioms.

    This is that axioms are somehow "true", and in the theory
    they're defined to be true, meaning they're never contradicted.

    That is a great insight that you and Haskell Curry and
    very few others agree on.
    https://www.liarparadox.org/Haskell_Curry_45.pdf

    This is with the usual notion of contradiction and
    non-contradiction, about opposition and juxtaposition,
    where it's established usually that there is "true" or
    there is "false" and there is no middle ground, that
    a third case or tertium does not exist in the world,
    "tertium non datur", the laws of excluded middle, the
    principle of excluded middle, which in this axiomatic
    theory, is somehow always a theorem, as it results
    from the plain contemplation or consideration, that
    axioms are "true", in the theory, in what is almost
    always these days, a "meta-theory", that's undefined,
    except that axioms are true and none of their theorems
    contradict each other, saying "both true and false",
    which is tertium and non datur.


    So anyways ZFC is a theory where there's only one relation,
    it's "elt". There's only one kind of object, it's "set".

    I have no idea what "elt" means.

    For any given set P and any given set Q, either P elt Q
    or Q elt P, or neither, and, not both. Then you might
    wonder, "well why not both?", and it's because, one of
    the axioms of ZFC is "not both".

    The axioms of ZFC either _expand_ comprehension, meaning,
    "no reason why not, so, it's so", or _restrict_ comprehension,
    meaning, "not: because this axiom is true in this theory,
    and says no".

    This introduces the concept of "independence" of axioms,
    that axioms that are independent say nothing about the
    theorems of otherwise independent axioms, and that axioms
    that are not independent, contradict each other, and that
    restriction is defined to always win, in any case of otherwise
    contradiction, when axioms aren't independent, in ZFC,
    that axioms of _restriction_ of comprehension aren't
    necessarily independent of each other, or of the independent
    axioms of _expansion_ of comprehension.



    So, ZFC has various axioms of restriction of comprehension,
    Yes, no set can be defined that contains itself.

    what boil down to the "Axiom of Regularity" also known as
    the "Axiom of Well-Foundedness" or "Axiom of Foundation",
    Yes that one

    that for any two sets P elt Q or Q elt P, or neither,
    but not both. This is where otherwise the axioms of
    expansion of comprehension, would otherwise result,
    "no reason why not", except that eventually certain
    theorems _desired_, of the theory, would either not be
    evident or would see contradictions.
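The Axiom of Regularity being gestured at here has a standard textbook form; as a hedge, this is the usual statement, not a formulation specific to this thread:

```latex
% Axiom of Regularity (Foundation):
\forall x\,\bigl(x \neq \varnothing \;\rightarrow\;
    \exists y\,(y \in x \wedge y \cap x = \varnothing)\bigr)
% Consequence: \neg(x \in x). If x \in x held, then the set \{x\}
% would have no \in-minimal element, contradicting the axiom
% applied to \{x\}.
```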


    So, yeah, "ZFC solution to incorrect questions: reject them",
    is what's called "restriction of comprehension" and then
    Great !!!

    what you do is get into all the various combinations of
    otherwise the expansion of comprehension, then get into
    why the models of the universe of the objects so related,
    is a wider theory where ZFC, or ZF, set theory, is variously
    considerable as either a fragment or an extension,
    the universe of the objects of ZF and ZFC set theories,
    in all theory in all comprehension according to what's, "true".


    Or, you know, "not false".



    So of course there are names for all these things and
    studies of all these things and criteria for all these
    things, what basically results for "Set Theory" what's
    called "Class/Set Distinction", a sort of, meta-theory,
    NBG set theory
    https://www.britannica.com/science/proper-class

    about set theory, where "elt" has a sort of complement
    "members" reflects "elt's sets are contained in sets"
    while "members' classes contain classes", that also the
    Class/Set distinction reflects objects as of the,
    "Inconsistent Multiplicities", of set theory, that
    one can relate to the, "Indeterminate Forms", of
    mathematics, that variously kind of do or don't have
    structure, "models" in the "model theory", where a
    theory has a model and a model has a theory is the meta-theory,
    helping explain why the wider world of theory knows that
    ZFC, or ZF, set theory, is a fragment of the universe of
    the objects of ZF set theory, which is its model in
    the wider model theory,

    Theory of ZF, of course, doesn't actually acknowledge,
    "a universe of objects of ZF, the domain of discourse"
    not so much as it's axiomatic "there is no universe of
    objects in ZF set theory", but that it's a theorem that's
    a consequence of restriction of comprehension, "foundational
    well-foundedness", which follows from otherwise a very
    useful result called "uncountability".

    Now, "Regularity" means "the Rulial", it rules or defines
    a rule, so other theories otherwise about sets can have
    their own sorts of rulial definitions, just saying that the
    theory where Well-Foundedness is rulial, just indicates
    that this is moreso "ZF's axiom that establishes ZF's
    main restriction of comprehension as ruliality, AoR
    the Axiom of Regularity, is particular to ZF, and it's
    called Well-Foundedness or Foundation, to reflect that
    theories without it are called Non-Well-Founded or
    sometimes Anti-Well-Founded, with regards to the regular
    Yes I get that and have known about it for some years.

    or rulial the ruliality of what may be other theories,
    of sets, which are defined by one relation, elt".



    So anyways, there are others.

    *Your knowledge of these things seems truly superb*
    I had forgotten many of the details that you referenced.


    Well, yeah, my mathematical curiosity brought me
    to questions of foundations.

    I love foundations because I noticed errors in the understanding
    of the notion of true_on_the_basis_of_semantic_meaning(x) back
    in 2004.

    "Question" is a word, and it's kind of loaded. For
    example, the German language has two different words
    for "question-able", "fraglich", as, dubious, and,
    "question-providing", "fragwuerdig", as, profound.

    I.e., the profound, opens new questions, vis-a-vis
    the interrogable, which may or may not.

    I'm reading about this in Steiner on Heidegger as
    of from old Roger Bacon. (I've sort of got a
    trio of anti-Plato's in Wittgenstein, Nietzsche,
    and Heidegger as various rejections of logical
    positivism in the accommodation of logical positivism
    The coherent notion of true_on_the_basis_of_semantic_meaning(x)
    seems to affirm logical positivism over Gödel.

    after their compensation in search of teleology
    after the breakwater of ontology, that of course
    they're each strong Platonists in otherwise a
    world of mundane, subjective, inauthentic,
    Existentialists, the adrift logical positivists.
    Of course that's for a strong logical positivism
    overall, with re-attaching the silver thread, or cord.)



    The universe of logical and mathematical objects
    is its own, intuitively structured, thing. All

    Yes, hence true_on_the_basis_of_semantic_meaning(x) does
    not apply to reality only mental models of reality.

    matters of relation are assumed to exist in it,
    both the concrete as realizable mathematically,
    and of course all suchly matters of mathematical
    or logical consistency, and inconsistency, so effable,
    or ineffable.

    Perhaps with true_on_the_basis_of_semantic_meaning(x)
    it becomes effable.

    Then, humans or objectively thinkers, or course have
    limited or finite means, and, communication has its
    own finite yet open means.

    Yet with Montague semantics can be formalized to eliminate
    all ambiguity.



    I thank you for your compliments, affinity,
    then would suggest that you look to the fuller
    complements, complementarity, as what such
    notions of the fuller dialectic, arrive largely
    as fundamentally from "complementary duals",
    that not only is something filled, also filling.

    I can only understand those aspects of my ideas that you
    directly referenced that I acknowledged that I understood.
    This was much more than I expected to understand. I had
    forgotten some of the key details that you referenced.

    I.e., "is there an axiom?" "Inverse".

    Or, you know, yes and no.



    You can find some hundred hours or readings
    and lectures on my youtube channel as of
    like https youtube /@rossfinlayson .

    Or, you know, the 10,000's posts to sci.math.



    Well what you get is that there are two theories,
    there's a "true theory", teleological, and a "theory
    of truth", ontological, where really, our ontological
    theory of truth, or each's, happens to be one of
    the contingent theories of contemplation in otherwise
    the true theory, so what this is, is a metaphysics.

    Then, logical positivism eschews metaphysics,
    that "it's only ontological", which doesn't seem
    very scientific itself, being it's unfalsifiable to
    validate or invalidate the super-scientific.

    Yet that's what logical positivism does help give,
    that things can be abstract, for "freedom of will",
    but it's un-necessary, for someone in their freedom
    of will, to abandon their voluntary submission, as
    it were, to the existence of some "true theory".

    It's like, post-modern deconstructivists, aren't all
    historiographic disfigurists, and some are modern
    constructivists again.

    So, that's where, Wittgenstein the hot-head and
    Nietzsche the mad-man and Heidegger the
    angsty-anxious, each sort of get impassioned
    about Platonism, which is the standard sort of
    idea that "there is a true theory of mathematical
    objects they exist outside us", not including "all
    the notions" of platonism, but basically ideal forms,
    that mathematics and logic exists for itself and
    is only discovered, not invented.

    Each of the anti-Plato's first to be credible demonstrates
    a very agreeable platonist sort of foundation, then gets
    into the woes caused by the void of what was otherwise
    the "theory of truth's" authority, which again has a sort
    of monist tradition, also especially a monotheist tradition,
    since Augustine and the Scholastics, basically made for
    that if platonism is ideals and the monistic deity is ideal
    then anything good of platonism is courtesy the super-scientific
    monistic ideal of a deity, which is its own thing.

    Then, they each sort of get into free will, and apologia,
    about why _philosophy has a metaphysics_ and
    _theory has a metaphysics_ and _if it doesn't it
    does again_ and logicism and positivism must be
    a stronger logicism and stronger positivism as of
    a strong platonism.

    Anyways, that's the technical-philosophy side, or,
    the logico-philosophical side, just sort of explaining
    where our canon comes from, including antiquity
    and pre-history and all since then up to now, then
    in terms of modern foundations today, what gets
    into our _formalisms_ for the goal of the explanatory
    or explicatory, what can be related unambiguously
    in language, to complement the each agent or monad's
    own ontology, why it's so, what things are, what it is.


    Then, for ZF set theory and ZFC set theory, where ZFC
    has another axiom the Well-Ordering principle which
    has that even sets that can't be "enumerated" per se
    still have a mapping to an ordinal that can be, is about
    really that it has to do with objects of the universe of
    the theories of logic, and mathematics, then insofar
    as it's the "strongly" platonic with regards to that also
    being realized as the concrete and the real universe.


    So anyways then this object, the domain of discourse,
    "the numbers", might be void, might be a universe,
    might be sets, might be parts, starting with the unary
    in relation, could be anything. Then, if Hegel is at the
    end of before anti-Platonism with Wittgenstein as
    "platonism ends" then Nietzsche as "deism ends"
    and Heidegger as "history ends", the reason they're
    still held up in the canon is because Wittgenstein
    says "you're your own platonism" and Nietzsche with
    the "you're your own deism" and Heidegger with
    "you're your own historicism". So this way they're
    re-Plato-nizing, in a sense.

    So anyways Hegel has "teleology and ontology both"
    and "things exist, they change, it's a dialectic". Then
    Kant has "there's a thing-in-itself the Ding-an-Sich,
    there's a thing greater itself The Sublime". Then for
    example with Leibniz and DesCartes it's like "well
    between us we sort of have a monadology, and we think it".

    So, this way, "Being and Thought" and "Being and Time",
    stay together. And, the canonical trios there, basically
    stand it up on both ends teleology and ontology, apologia.



    Anyways then about the continuous and discrete and
    the objects of mathematics, so, this theory is a theory
    of the universe of logical objects, it's basically either
    numbers or words, sort of like how the Bible affirms
    when it starts "In the beginning there was nothing"
    the space and origin, as a number then "In the beginning
    there was the word", book of John's deity, or as, words.
    (This isn't necessarily a Christian thing, it's just the
    most printed book in the world.)

    Then mathematics is just theories of relation of
    this primordial soup of relation, numbers, and words,
    they exist, says Plato, then that all sorts theories about
    arithmetic and algebra and geometry then basically
    result after theories of relation that there are function
    theory and topology and that's all about that it is.


    Then about the continuous and discrete, you know,
    points and lines and geometry and these kinds of things,
    we have our modern theory of infinite numbers the
    cardinals and Cantor space of all the 0's and 1's,
    each definable and effable while altogether uncountable,
    and again a post-modern theory of "Square Cantor Space",
    making again for the ancient through classical and
    modern theories of real analysis and the infinitesimal
    and integral calculus, the milieu of dynamical modeling,
    in the course-of-passage of the common continuous
    parameter, "time", "t".



    So if you care to study some of these posts about
    Cantor space and Square Cantor space as I've been
    writing to comp.theory, then you'd have a greater
    idea about models of computation and theories of
    computation when the discrete computation arrives
    at the continuous computation, about why this
    "theory of a true theory", results, at all, "A Theory".


    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Ross Finlayson@ross.a.finlayson@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Wed Mar 13 11:16:45 2024
    From Newsgroup: comp.ai.philosophy

    On 03/12/2024 09:00 PM, olcott wrote:
    On 3/12/2024 10:49 PM, Ross Finlayson wrote:
    On 03/12/2024 08:23 PM, Ross Finlayson wrote:
    On 03/12/2024 07:52 PM, olcott wrote:
    On 3/12/2024 9:28 PM, Richard Damon wrote:
    On 3/12/24 4:31 PM, olcott wrote:
    On 3/12/2024 6:11 PM, Richard Damon wrote:
    On 3/12/24 3:53 PM, olcott wrote:
    On 3/12/2024 5:30 PM, Richard Damon wrote:
    On 3/12/24 2:34 PM, olcott wrote:
    On 3/12/2024 4:23 PM, Richard Damon wrote:
    On 3/12/24 1:11 PM, olcott wrote:
    On 3/12/2024 2:40 PM, Richard Damon wrote:
    On 3/12/24 12:02 PM, olcott wrote:
    On 3/12/2024 1:31 PM, immibis wrote:
    On 12/03/24 19:12, olcott wrote:
    ∀ H ∈ Turing_Machine_Deciders
    ∃ TMD ∈ Turing_Machine_Descriptions |
    Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)
    There is some input TMD to every H such that
    Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)
    And it can be a different TMD to each H.

    When we disallow decider/input pairs that are incorrect
    questions where both YES and NO are the wrong answer

    Once we understand that either YES or NO is the right
    answer, the whole rebuttal is tossed out as invalid and
    incorrect.


    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩ does not halt
    BOTH YES AND NO ARE THE WRONG ANSWER FOR EVERY Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩

    No, because a given H will only go to one of the answers. THAT
    will be wrong, and the other one right.


    ∀ H ∈ Turing_Machine_Deciders
    ∃ TMD ∈ Turing_Machine_Descriptions |
    Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)

    Not exactly. A pair of otherwise identical machines
    (that are contained within the above specified set)
    only differ by return value will both be wrong on the
    same pathological input.

    You mean a pair of DIFFERENT machines. Any difference is
    different.

    Every decider/input pair (referenced in the above set) has a
    corresponding decider/input pair that only differs by the return
    value of its decider.

    Nope.

    ∀ H ∈ Turing_Machines_Returning_Boolean
    ∃ TMD ∈ Turing_Machine_Descriptions |
    Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)

    Every H/TMD pair (referenced in the above set) has a
    corresponding H/TMD pair that only differs by the return
    value of its Boolean_TM.

    That isn't in the set above.


    That both of these H/TMD pairs get the wrong answer proves that
    their question was incorrect because the opposite answer to the
    same question is also proven to be incorrect.


    Nope, since both aren't in the set selected.


    When they are deciders that must get the correct answer both
    of them are not in the set.

    *IF* they are correct deciders.

    WHen we select from all Turing Machine Deciders, there is no
    requirement that any of them get any particular answer right.

    So, ALL deciders are in the set that we cycle through and apply the
    following logic to ALL of them.

    Each of them is paired with an input that it will get wrong, and the
    existence of that input is what was just proven, via the Ĥ template


    When they are Turing_Machines_Returning_Boolean then this
    set inherently includes identical pairs that only differ
    by return value.

    But in the step of selecting an input that they will get wrong, they
    will be given DIFFERENT inputs.


    You just don't understand what that statement is saying.

    I've explained it, but it seems over your head.

    No the problem is that you are not paying attention.

    No, you keep on making STUPID mistakes, like thinking that selecting
    an input that the machine will get wrong needs to be the same for two
    different machines.




    For Every H, we show we can find at least one input (chosen just for
    that machine) that it will get wrong.

    When we use machine templates then we can see instances of
    the same machine that only differs by return value where both
    get the wrong answer on the same input. By same input I mean
    the same finite string of numerical values.


    But if they returned different values, they will have different
    descriptions.

    Otherwise, how could a UTM get the right answer, since it only gets
    the description.

    We can get around all of this stuff by simply using this criteria:
    Date 10/13/2022 11:29:23 AM
    *MIT Professor Michael Sipser agreed this verbatim paragraph is
    correct*
    (He has neither reviewed nor agreed to anything else in this paper)
    (a) If simulating halt decider H correctly simulates its input D
    until H
    correctly determines that its simulated D would never stop running
    unless aborted then
    (b) H can abort its simulation of D and correctly report that D
    specifies a non-halting sequence of configurations.

    *When we apply this criteria* (elaborated above)
    Will you halt if you never abort your simulation?
    *Then the halting problem is conquered*

    When two different machines implementing this criteria
    get different results from identical inputs then we
    know that Pathological Self-Reference has been detected.

    We don't even need to know that for:
    *denial-of-service-attack detection*
    *NO always means reject as unsafe*
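One way to make the "simulate, then abort" idea concrete is a step-budgeted simulator. The step budget is an assumption introduced here for illustration (the quoted criterion does not supply one), and the second call shows why a bound alone cannot settle halting: a slow-but-halting computation is misreported.

```python
# Sketch of "simulate until a bound, then abort and report non-halting".
# The step budget is an assumption added here; it makes the fallibility
# concrete: a slow-but-halting computation gets misreported, so a bound
# by itself cannot settle halting.
def simulating_decider(step, state, budget):
    for _ in range(budget):
        if state is None:            # convention: None means "halted"
            return True              # correctly reports halting
        state = step(state)
    return False                     # aborted: reports "does not halt"

countdown = lambda s: s - 1 if s > 0 else None
print(simulating_decider(countdown, 3, budget=10))    # halts within budget
print(simulating_decider(countdown, 100, budget=10))  # wrongly "non-halting"
```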


    But, Halting theorem never said "there's an input that halts
    all machines", it just says "for any machine, there's an input
    that halts it".

    Where "halt the machine" means "put it in an infinite loop".

    So, rather, Halting theorem never said "there's an input that
    exhausts all machines", it just says, "for any machine, there's
    an input that exhausts it".

    I still don't see how that would be with infinite tapes though,
    without a means of checking all the way right along the tape in one
    step, i.e. that it's 1's or 0's or any pattern at all, any
    input that unbounded with respect to the machine basically
    exhausts it or where the machine would run in the unbounded.


    Of course any finite tape, has a static analysis that is
    not infinite, that decides whether or not it halts
    (or, loops, or grows, the state space of the decider).

    Static analysis has to either enumerate or _infer_ the
    state-space, where equal values in what's determined
    the idempotent can detect loops, while inequalities
    or proven divergence, ..., can detect unbounded growth.
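The "equal values detect loops" point can be sketched directly: record each complete machine state and flag a repeat. The step-function encoding and the resource bound are illustrative choices, not a claim about any particular machine model in the thread:

```python
# Sketch of "equal values detect loops": run a deterministic step function
# and flag a loop the moment a complete machine state repeats.
def analyze(step, state, max_steps=10_000):
    seen = set()
    for _ in range(max_steps):
        if state is None:       # convention: None means "halted"
            return "halts"
        if state in seen:       # a repeated full state in a deterministic
            return "loops"      # machine can only repeat forever
        seen.add(state)
        state = step(state)
    return "unknown"            # budget exhausted: e.g. unbounded growth

print(analyze(lambda s: s - 1 if s > 0 else None, 5))  # counts down, halts
print(analyze(lambda s: (s + 1) % 3, 0))               # cycles 0,1,2,0,...
print(analyze(lambda s: s + 1, 0))                     # grows without bound
```

The third case is exactly the "inequalities or proven divergence" situation: the state never repeats, so only inference (here crudely replaced by a budget) can classify it.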

    Now, proving convergence or divergence is its own kind
    of thing. For example, there are series that converge
    very slowly, and series that diverge very slowly. These
    are subtly intractable to analysis.

    Then the usual idea of progressions that rapidly grow
    yet return to what's detectable, are pathological to
    analysis, only in resources not correctness or completion,
    vis-a-vis the subtle intractability of the convergence or
    divergence what either halts, or loops, or grows unboundedly.

    Separately "not-halts" into "loops or grows unboundedly",
    has that really for most matters of interest of understanding
    the state-space of programs, is "halts or enters loops"
    and not "grows unboundedly".

    I.e. if the Halting problem is basically subject the
    subtle intractability of slow convergence, that otherwise
    it can just infer divergence and decide, practically
    it's sort of more relevant what the machine would be
    doing on the input on the tape, then with respect to
    beyond the Turing theory, of the state of the read-head,
    what happens when somebody modifies the tape, or events,
    the write-head.

    Anyways though for bounded inputs, besides slow divergence,
    it's to be made clear that _most all_ and _almost all_
    programs _are_ decided their behavior by static analysis.

    Though, "most all" and "almost all" might be a bit strong,
    but pretty much all that don't involve "the subtle intractability
    of slow divergence".
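The harmonic series is the standard concrete case of slow divergence: it diverges, yet its partial sums grow only like log n, so no bounded prefix of the computation distinguishes it numerically from a convergent series. A small illustration (the asymptotic comparison uses an approximation of the Euler-Mascheroni constant):

```python
import math

# Partial sums of the harmonic series, which diverges, but only like
# log(n): H_n ~ ln(n) + gamma, with gamma ~ 0.5772 (Euler-Mascheroni).
def harmonic_partial(n):
    return sum(1.0 / k for k in range(1, n + 1))

for n in (10, 10_000, 1_000_000):
    print(n, round(harmonic_partial(n), 4), round(math.log(n) + 0.5772, 4))
# Each extra factor of 100 in work adds only about 4.6 to the sum; a
# machine observing any finite prefix sees what looks like convergence.
```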



    Giving the idea that an existence result
    is in any way the expected result here
    seems sort of the root of this dilemma.






    (Though that the real numbers in ZFC have a well-ordering
    and if they had a normal ordering that was a well-ordering,
    that would be a thing, because ZFC has a well-ordering of
    [0,1], but can't give one.)





    What I'm saying is that you'd need to solve all
    cases of slow convergence and slow divergence
    to have a correct and complete halt decider,
    for the unbounded, and finite, if not the infinite.

    Most though setups of convergence though
    would effectively exhaust according to the
    existence of criteria of convergence as modeling
    the computation, though, so, if you exclude
    slow convergence and slow divergence,
    then a subset of very usual machines is of
    course quite all decidable. (Or decide-able,
    or as I put it once, "not deciddable".)



    Well-ordering the reals in ZFC, that's its own
    sort of issue - that a well-ordering implies the
    existence of a bijection from ordinals to a
    set, then as that as tuples, subsets of those
    are well-ordered by their ordinal part, then
    that if ever uncountably many of those are
    in normal order their real part and irrationals,
    then between those are rationals. I.e.,
    ZFC doesn't give an example of well-ordering
    the reals, but there is one.


    So, Peter, I think you're kind of misreading that
    quote you authoritated. It just says "if static
    analysis detects a loop or unboundedly growing
    then it's not-halts".


    Of course, for any bounded resource, there
    are arbitrarily larger bounded resources
    required by what's called pathological,
    that an arbitrarily larger resource can though solve.



    So anyways "slow convergence" and "slow divergence",
    have that in mathematics, it's unknown for some series
    whether they converge or diverge, though it's usually
    accepted that the inverse powers of 2 converge and
    that the sum of integers diverges, and that periodic
    functions do neither.

    I.e. the criteria of convergence and existence of a limit,
    and the special case "diverges as going to infinity",
    are not yet closed in today's mathematics.

    It's sort of a conjecture or independent whether
    they ever could be, closed, and complete, the criteria.



    I have spent 20 years on The conventional halting problem proofs,
    Gödel 1931 Incompleteness and the Liar Paradox. I have only spent
    about seven years on Tarski Undefinability.

    The Halting problem is the only one of these that has a formal system
    that cannot have hidden gaps its its reasoning. If a machine can do it
    then it can be done.


    "Algorithmics" is a study of algorithms. The most usual
    consideration is "asymptotics", "asymptotics of complexity",
    with the idea that computing does work, that there are
    closures and completions, and there are approximations and
    attenuations, as it were, in the full and the final and
    the initial and the partial.

    The most usual resources are time, and space, that actions
    occur in time, and according to a structure, in space.

    Of course any CS grad pretty much knows that.


    The objects of mathematics, have it so, that the finite
    and bounded resources of any collection of digital and
    binary machines, are finite and bounded, while, the unbounded
    models of mathematics that represent them, are unbounded,
    it's bad that anybody ever said a Turing tape was "infinite",
    except that it's unbounded as an object of mathematics, and
    the model of any finite, digital, binary machine.

    One might aver "digital" has a complement in computing,
    "analog", which represents the continuous instead of
    the discrete, and one might aver "binary" has a complement,
    either "real-valued" or "fuzzy" or variously about the
    "multi-valent", in logic, the models, according to the
    mechanisms, the models, their mechanics, machines.

    The Turing machine though is a model of unbounded machine
    as what any thusly simply mechanical model can implement,
    up to its bounds.


    So, the objects of mathematics, then involve analytical
    results. I can't draw a circle, though a compass may
    portray a diagram arbitrarily close, and no different.
    It's the analytical results, that make so for why
    mathematics is: "sensible, fungible, and tractable",
    and for "the ubiquitous success of mathematics" in
    all such matters of physics, which is a continuum mechanics,
    and as well discrete mechanics.

    Of course, results exist in mathematical objects,
    after the unbounded and complete,
    which is why closures and completions are so
    relevant to all such matters, vis-a-vis approximations
    and attenuations, which in bounded terms result
    indistinguishably non-different results,
    up to precision, say, or tolerances.

    So, "machine" might be discrete, binary, and digital,
    or it might be continuous, multivalent, and analog.

    The Turing machine in a concrete form, is bounded.

    Similar models, here mostly for the stack models,
    stack machines, among finite-state-machines then
    for example concepts like "unbounded nondeterminism",
    here is pretty much about determinism, about the
    defined behavior of a machine according to a
    mathematical model that given defined behavior
    of the mechanisms results a mechanical model of
    a mathematical model, such a theory as this.

    So, the theory of computing, with mechanical or
    computational models, models of computing, is
    pretty much exactly like the theory of physics,
    with the attachment of physical models to the
    mathematical models, and rather, "interpreted as",
    the models, the models, with respect to each other.

    I.e. the extensionality that so follows "they're
    equal, or same, or not different, they represent
    each other, they're equi-interpretable, they model
    each other", the models, of logical and mathematical
    and computational and mechanical and physical models,
    help represent that over all the entire thing there
    is a usual theory for the entire thing, it's a theory
    where model theory models extensionality, and in identity
    intensionality, about equality, tautology, identity,
    and sameness/difference, and nearness/farness, and all
    these usual aspects of the mathematical models'
    arithmetic, algebra, and geometry. (And function
    theory and topology, with respect to categories and
    types, sets and parts, relations and predicates,
    then all the model-building among that which is
    all the propositional or the stipulated or axiomatic.)

    The idea is that this sort of platonistic universe of
    mathematical objects has always existed, and it's just
    discovered, then that axiomatics for example, just
    sort of results a model theory of it, with regards
    to models, the modular, modulation, modality, and the mode.


    So, a machine, with respect to computation, establishing
    the validity of interpretations of models, is subject to
    filling out all the above, vis-a-vis what it can do,
    and it can-do.


    Then, "slowly converging" and "slowly diverging"
are examples that get into the fact that there are more
    law(s) of large numbers, than the usual classical
    law of large numbers.

    Some people variously do or don't have a mathematical
    model of larger numbers, or the infinite, at all.
    It's a totally ancient and dogmatic tradition that
    no, we finite peoples, persons, or agents of limited
    and finite means, no we do not have discrete, digital,
    binary mechanical models of the infinite.

    Yet, it's also about the first thing that deduction
    arrives at just from the plain contemplation of the
    beyond the unbounded, like the theories of Democritus
    and Atomism or the theories of Zeno and Continuity,
    that make it so we at least have something to arrive
    at for models of the unbounded, as "methods of exhaustion",
the approximation and attenuation that result in
    the asymptotics of algorithmics.
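The "method of exhaustion" mentioned above can be illustrated by Archimedes' classical construction: inscribe a polygon in a unit circle and keep doubling its sides, so the half-perimeter approaches pi. A minimal sketch (the function name and doubling counts are illustrative, not from the thread):

```python
import math

def exhaust_pi(doublings):
    """Approximate pi by the method of exhaustion: inscribe a
    regular polygon in a unit circle and repeatedly double its
    number of sides; the half-perimeter approaches pi from below."""
    n, s = 6, 1.0  # start with a regular hexagon, side length 1
    for _ in range(doublings):
        s = math.sqrt(2 - math.sqrt(4 - s * s))  # side of the 2n-gon
        n *= 2
    return n * s / 2  # half-perimeter of the n-gon

for k in (0, 5, 10):
    print(k, exhaust_pi(k))
```

Each doubling roughly quadruples the accuracy, which is exactly the "approximation and attenuation" flavor of asymptotics the paragraph gestures at.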

    So, we do have mental models of the continuous and infinite.
    And, where physics is a continuum mechanics,
    there are physical models of it as well, then
    with regards to the philosophy of science and
    the scientific method and the Big Science and
    from the Primary Science, lots to establish
    that the continuous and analog has mathematical
results, as to whether it is "sensible, fungible, and tractable",
    to machines at all.

    Numerical methods then, and, approximation and
    attenuation, result from analysis, why they exist,
    here for the criteria of convergence and existence
    of limits, in the closed, in the complete.


    So, a metonymy, unless it's "the Strong Metonymy",
    which is utter intensionality of "the ground model",
    is yet a metaphor and a metaphor yet a weaker metonymy
joint among weaker metonymies, that, ontology, has
    that there are or may be a completion, of ontology,
    as a "strong ontology" and "strong metonymy",
    but otherwise it's just a bag of facts.



    Here then there's certainly a perceived analytical
development of Cantor space, and, Square Cantor space,
    and, Signal Cantor space as it were, about that among
    the law(s) of large numbers, there are definitions of
    continuity, at least including field-reals, line-reals,
and signal-reals, three sorts of continuous domains.


    Mathematics owes physics this to better begin to
    explain, to disclose, to find truths of, continuum mechanics.


    Then that analog, fuzzy, multi-value, "quantum" computing
    models (parts) of that, that discrete, binary, digital
    computing does not, is a thing.



    The convergence of terms in series is an example
    of a model of things, that, sometimes has clear
    and direct and perfectly accurate representations,
models, in a stack machine, and sometimes doesn't.


    Of course, for any bounded input, there is a sufficiently
    larger bounded analyzer, that does solve it. So, why not
    just put those all together? It would be infinite,
    yet most things are not. (Or, you know, are.)


    It's called "foundations" the study of all these things
    as a fundamental coherency, a thing together, while of
    its parts.

    So, students should well have the concepts of "abacus"
    and "slide rule" and other "tabulators and computers"
    of the usual manual mechanical variety down, according
    to the principles of the mathematical models behind them,
    then introducing the law(s) of large numbers, then
    introducing the idea of a standard model of an infinite,
    about rates, related rates, asymptotics, and bounds.
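The classical law of large numbers referred to above can be shown with a tiny simulation: the empirical mean of fair coin flips settles toward the expected value 1/2 as the sample grows. A sketch (the helper name and the fixed seed are illustrative assumptions):

```python
import random

def running_mean(n, seed=0):
    """Empirical mean of n fair coin flips (1 = heads, 0 = tails),
    seeded for reproducibility."""
    rng = random.Random(seed)
    heads = sum(rng.randint(0, 1) for _ in range(n))
    return heads / n

# The sample mean converges to the expectation 1/2 as n grows,
# which is the classical law of large numbers.
for n in (10, 1_000, 100_000):
    print(n, running_mean(n))
```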


    "Foundations" then the idea is that it's been around
    forever, it's our dogma and from our canon and it develops
    over time, and the current edition is called "modern".









    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From olcott@polcott2@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Thu Mar 14 00:20:26 2024
    From Newsgroup: comp.ai.philosophy

    On 3/13/2024 1:16 PM, Ross Finlayson wrote:
    On 03/12/2024 09:00 PM, olcott wrote:
    On 3/12/2024 10:49 PM, Ross Finlayson wrote:
    On 03/12/2024 08:23 PM, Ross Finlayson wrote:
    On 03/12/2024 07:52 PM, olcott wrote:
    On 3/12/2024 9:28 PM, Richard Damon wrote:
    On 3/12/24 4:31 PM, olcott wrote:
    On 3/12/2024 6:11 PM, Richard Damon wrote:
    On 3/12/24 3:53 PM, olcott wrote:
    On 3/12/2024 5:30 PM, Richard Damon wrote:
    On 3/12/24 2:34 PM, olcott wrote:
    On 3/12/2024 4:23 PM, Richard Damon wrote:
    On 3/12/24 1:11 PM, olcott wrote:
    On 3/12/2024 2:40 PM, Richard Damon wrote:
    On 3/12/24 12:02 PM, olcott wrote:
    On 3/12/2024 1:31 PM, immibis wrote:
    On 12/03/24 19:12, olcott wrote:
    ∀ H ∈ Turing_Machine_Deciders
    ∃ TMD ∈ Turing_Machine_Descriptions  |
Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)

There is some input TMD to every H such that
Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)
    And it can be a different TMD to each H.

When we disallow decider/input pairs that are incorrect
questions where both YES and NO are the wrong answer

Once we understand that either YES or NO is the right
answer, the whole rebuttal is tossed out as invalid and
incorrect.


    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩
    halts
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩
    does
    not halt
    BOTH YES AND NO ARE THE WRONG ANSWER FOR EVERY Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩

No, because a given H will only go to one of the answers. THAT
will be wrong, and the other one right.


    ∀ H ∈ Turing_Machine_Deciders
    ∃ TMD ∈ Turing_Machine_Descriptions  |
    Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)

Not exactly. A pair of otherwise identical machines (that
are contained within the above specified set) that
only differ by return value will both be wrong on the
same pathological input.

You mean a pair of DIFFERENT machines. Any difference is
different.

Every decider/input pair (referenced in the above set) has a
corresponding decider/input pair that only differs by the return
value of its decider.

    Nope.

    ∀ H ∈ Turing_Machines_Returning_Boolean
    ∃ TMD ∈ Turing_Machine_Descriptions  |
    Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)
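The quantified claim above is the usual diagonal construction: for any candidate predictor, build the one input it must get wrong. A minimal Python sketch, treating "machines" as plain callables; `H` and `make_pathological` are hypothetical names for illustration, not anything defined in the thread:

```python
def make_pathological(H):
    """Given any candidate halt predictor H(machine, input),
    build the input D on which H's prediction must be wrong."""
    def D(x):
        if H(D, D):         # H predicts "halts"...
            while True:     # ...so D loops forever
                pass
        return True         # H predicts "loops", so D halts
    return D

# Example candidate: a predictor that always answers "halts".
H = lambda machine, arg: True
D = make_pathological(H)
# H(D, D) is True ("halts"), but running D(D) would loop forever:
# the prediction disagrees with the actual behavior.
print(H(D, D))
```

Whatever Boolean H returns on (D, D), D is built to do the opposite, so Predicted_Behavior and Actual_Behavior disagree on that one input.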

    Every H/TMD pair (referenced in the above set) has a
    corresponding H/TMD pair that only differs by the return
    value of its Boolean_TM.

    That isn't in the set above.


That both of these H/TMD pairs get the wrong answer proves that
their question was incorrect because the opposite answer to the
same question is also proven to be incorrect.


    Nope, since both aren't in the set selected.


    When they are deciders that must get the correct answer both
    of them are not in the set.

    *IF* they are correct decider.

    WHen we select from all Turing Machine Deciders, there is no
    requirement that any of them get any particular answer right.

So, ALL deciders are in the set that we cycle through and apply the
following logic to ALL of them.

Each is then paired with an input that it will get wrong, and the
existence of the input was what was just proven, via the Ĥ template.


When they are Turing_Machines_Returning_Boolean then this
set inherently includes identical pairs that only differ
by return value.

But in the step of selecting an input that they will get wrong, they
will be given DIFFERENT inputs.


    You just don't understand what that statement is saying.

I've explained it, but it seems over your head.

    No the problem is that you are not paying attention.

No, you keep on making STUPID mistakes, like thinking that the
input selected for a machine to get wrong needs to be the same
for two different machines.




For Every H, we show we can find at least one input (chosen just for
that machine) that it will get wrong.

    When we use machine templates then we can see instances of
the same machine that differ only by return value where both
    get the wrong answer on the same input. By same input I mean
    the same finite string of numerical values.


But if they returned different values, they will have different
    descriptions.

Otherwise, how could a UTM get the right answer, since it only gets
the description.

    We can get around all of this stuff by simply using this criteria:
    Date 10/13/2022 11:29:23 AM
    *MIT Professor Michael Sipser agreed this verbatim paragraph is
    correct*
    (He has neither reviewed nor agreed to anything else in this paper)
    (a) If simulating halt decider H correctly simulates its input D
    until H
    correctly determines that its simulated D would never stop running
    unless aborted then
    (b) H can abort its simulation of D and correctly report that D
    specifies a non-halting sequence of configurations.
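One very weak approximation of clause (a) is a step-limited simulation. The sketch below is NOT the criterion itself: exceeding a step budget proves nothing about non-halting, and the budget `max_steps`, the generator-based "machines", and all names here are illustrative assumptions:

```python
import itertools

def step_limited_verdict(machine, arg, max_steps=10_000):
    """Simulate machine(arg) as a step sequence; report 'halts' if
    it finishes within max_steps, else 'aborted'. Aborting is NOT
    a proof of non-halting: a slow machine may simply need more
    steps than the arbitrary budget allows."""
    gen = machine(arg)
    for _ in itertools.islice(gen, max_steps):
        pass
    try:
        next(gen)
        return "aborted"       # still running after the budget
    except StopIteration:
        return "halts"

def counts_to(n):
    """A toy 'machine': a generator that takes n steps, then halts."""
    def m(_):
        for i in range(n):
            yield i
    return m

print(step_limited_verdict(counts_to(5), None))      # halts
print(step_limited_verdict(counts_to(10**6), None))  # aborted
</imports>```

The gap between "aborted" and "provably non-halting" is exactly where the disputes in this thread live.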

    *When we apply this criteria* (elaborated above)
    Will you halt if you never abort your simulation?
    *Then the halting problem is conquered*

    When two different machines implementing this criteria
    get different results from identical inputs then we
    know that Pathological Self-Reference has been detected.

    We don't even need to know that for:
    *denial-of-service-attack detection*
    *NO always means reject as unsafe*


    But, Halting theorem never said "there's an input that halts
    all machines", it just says "for any machine, there's an input
    that halts it".

    Where "halt the machine" means "put it in an infinite loop".

    So, rather, Halting theorem never said "there's an input that
    exhausts all machines", it just says, "for any machine, there's
    an input that exhausts it".

I still don't see how that would be with infinite tapes though,
without a means of checking all the way right along the tape in one
step, i.e. that it's 1's or 0's or any pattern at all; any
input that is unbounded with respect to the machine basically
exhausts it, or the machine would run in the unbounded.


    Of course any finite tape, has a static analysis that is
    not infinite, that decides whether or not it halts
    (or, loops, or grows, the state space of the decider).

    Static analysis has to either enumerate or _infer_ the
    state-space, where equal values in what's determined
    the idempotent can detect loops, while inequalities
    or proven divergence, ..., can detect unbounded growth.
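The point about equal values detecting loops is ordinary cycle detection over an enumerated state space. A minimal sketch for machines with a single integer state (the transition function, the `None`-as-halt convention, and the state budget are all illustrative assumptions):

```python
def classify(step, state, max_states=10_000):
    """Iterate a transition function and classify the run:
    'halts' (reaches the halt state None), 'loops' (a state value
    repeats, i.e. equal values detect a cycle), or 'grows'
    (exceeds the state budget -- suspected, not proven, unbounded)."""
    seen = set()
    while state is not None:
        if state in seen:
            return "loops"      # equal state seen before => cycle
        seen.add(state)
        if len(seen) > max_states:
            return "grows"
        state = step(state)
    return "halts"

def step(n):
    """A toy machine over integers."""
    if n == 0:
        return None   # halt state
    if n == -1:
        return -1     # self-loop
    if n > 0:
        return n - 1  # counts down to 0: halts
    return n - 1      # walks left forever: unbounded growth

print(classify(step, 3))   # halts
print(classify(step, -1))  # loops
```

Note the asymmetry: "loops" is a proof, while "grows" is only an inference from exhausting the budget, matching the inequality/divergence caveat above.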

    Now, proving convergence or divergence is its own kind
    of thing. For example, there are series that converge
    very slowly, and series that diverge very slowly. These
    are subtly intractable to analysis.
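That subtle intractability is easy to exhibit numerically: the harmonic series diverges while the p-series with p = 1.001 converges, yet their partial sums are nearly indistinguishable at any feasible n. A sketch (the particular exponent and cutoffs are illustrative):

```python
def partial_sum(term, n):
    """Sum term(k) for k = 1..n."""
    return sum(term(k) for k in range(1, n + 1))

harmonic = lambda k: 1 / k          # diverges, but very slowly
p_series = lambda k: 1 / k**1.001   # converges, but very slowly

for n in (10, 10_000):
    print(n, partial_sum(harmonic, n), partial_sum(p_series, n))
# At every feasible n the two partial-sum sequences track each
# other closely, though one diverges and the other converges:
# finite inspection alone cannot tell them apart.
```

This is the sense in which "slow convergence" and "slow divergence" resist any bounded analyzer.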

    Then the usual idea of progressions that rapidly grow
    yet return to what's detectable, are pathological to
    analysis, only in resources not correctness or completion,
    vis-a-vis the subtle intractability of the convergence or
    divergence what either halts, or loops, or grows unboundedly.

Separating "not-halts" into "loops or grows unboundedly"
has it that, really, for most matters of interest in understanding
the state-space of programs, the question is "halts or enters loops"
and not "grows unboundedly".

I.e. if the Halting problem is basically subject to the
subtle intractability of slow convergence, that otherwise
    it can just infer divergence and decide, practically
    it's sort of more relevant what the machine would be
    doing on the input on the tape, then with respect to
    beyond the Turing theory, of the state of the read-head,
    what happens when somebody modifies the tape, or events,
    the write-head.

    Anyways though for bounded inputs, besides slow divergence,
it's to be made clear that _most all_ and _almost all_
programs _have_ their behavior decided by static analysis.

    Though, "most all" and "almost all" might be a bit strong,
    but pretty much all that don't involve "the subtle intractability
    of slow divergence".



    Giving the idea that an existence result
    is in any way the expected result here
seems sort of the root of this dilemma.






    (Though that the real numbers in ZFC have a well-ordering
    and if they had a normal ordering that was a well-ordering,
    that would be a thing, because ZFC has a well-ordering of
    [0,1], but can't give one.)





    What I'm saying is that you'd need to solve all
    cases of slow convergence and slow divergence
    to have a correct and complete halt decider,
    for the unbounded, and finite, if not the infinite.

Most setups of convergence, though,
would effectively exhaust according to the
existence of criteria of convergence as modeling
the computation, so, if you exclude
    slow convergence and slow divergence,
    then a subset of very usual machines is of
    course quite all decidable.  (Or decide-able,
    or as I put it once, "not deciddable".)



Well-ordering the reals in ZFC, that's its own
sort of issue - that a well-ordering implies the
    existence of a bijection from ordinals to a
    set, then as that as tuples, subsets of those
    are well-ordered by their ordinal part, then
    that if ever uncountably many of those are
    in normal order their real part and irrationals,
    then between those are rationals.  I.e.,
    ZFC doesn't give an example of well-ordering
    the reals, but there is one.


    So, Peter, I think you're kind of misreading that
    quote you authoritated.  It just says "if static
    analysis detects a loop or unboundedly growing
    then it's not-halts".


    Of course, for any bounded resource, there
    are arbitrarily larger bounded resources
    required by what's called pathological,
    that an arbitrarily larger resource can though solve.



    So anyways "slow convergence" and "slow divergence",
    have that in mathematics, it's unknown for some series
    whether they converge or diverge, though it's usually
    accepted that the inverse powers of 2 converge and
    that the sum of integers diverges, and that periodic
    functions do neither.

    I.e. the criteria of convergence and existence of a limit,
    and the special case "diverges as going to infinity",
    are not yet closed in today's mathematics.

    It's sort of a conjecture or independent whether
    they ever could be, closed, and complete, the criteria.



I have spent 20 years on the conventional halting problem proofs,
    Gödel 1931 Incompleteness and the Liar Paradox. I have only spent
    about seven years on Tarski Undefinability.

The Halting problem is the only one of these that has a formal system
that cannot have hidden gaps in its reasoning. If a machine can do it
    then it can be done.


    "Algorithmics" is a study of algorithms. The most usual
    consideration is "asymptotics", "asymptotics of complexity",
    with the idea that computing does work, that there are
    closures and completions, and there are approximations and
    attenuations, as it were, in the full and the final and
    the initial and the partial.

    The most usual resources are time, and space, that actions
    occur in time, and according to a structure, in space.
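The time resource of asymptotic analysis is just a step count, and the difference between growth orders shows up immediately once the counting is explicit. A sketch (the instrumentation scheme and names are illustrative):

```python
def count_ops(algorithm, n):
    """Run an instrumented algorithm and return its step count,
    the usual 'time' resource of asymptotic analysis."""
    counter = {"steps": 0}
    algorithm(n, counter)
    return counter["steps"]

def linear_scan(n, c):
    # O(n): one counted step per element
    for _ in range(n):
        c["steps"] += 1

def all_pairs(n, c):
    # O(n^2): one counted step per ordered pair
    for _ in range(n):
        for _ in range(n):
            c["steps"] += 1

for n in (10, 100):
    print(n, count_ops(linear_scan, n), count_ops(all_pairs, n))
```

Growing n by 10x grows the linear count by 10x but the quadratic count by 100x, which is the whole content of the asymptotic comparison.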

    Of course any CS grad pretty much knows that.


    The objects of mathematics, have it so, that the finite
    and bounded resources of any collection of digital and
    binary machines, are finite and bounded, while, the unbounded
    models of mathematics that represent them, are unbounded,
    it's bad that anybody ever said a Turing tape was "infinite",
    except that it's unbounded as an object of mathematics, and
    the model of any finite, digital, binary machine.

    One might aver "digital" has a complement in computing,
    "analog", which represents the continuous instead of
    the discrete, and one might aver "binary" has a complement,
    either "real-valued" or "fuzzy" or variously about the
    "multi-valent", in logic, the models, according to the
    mechanisms, the models, their mechanics, machines.

    The Turing machine though is a model of unbounded machine
    as what any thusly simply mechanical model can implement,
    up to its bounds.


    So, the objects of mathematics, then involve analytical
    results. I can't draw a circle, though a compass may
    portray a diagram arbitrarily close, and no different.
    It's the analytical results, that make so for why
    mathematics is: "sensible, fungible, and tractable",
    and for "the ubiquitous success of mathematics" in
    all such matters of physics, which is a continuum mechanics,
    and as well discrete mechanics.

    Of course, results exist in mathematical objects,
after the unbounded and complete,
    which is why closures and completions are so
    relevant to all such matters, vis-a-vis approximations
    and attenuations, which in bounded terms result
    indistinguishably non-different results,
    up to precision, say, or tolerances.

    So, "machine" might be discrete, binary, and digital,
    or it might be continuous, multivalent, and analog.

    The Turing machine in a concrete form, is bounded.

    Similar models, here mostly for the stack models,
    stack machines, among finite-state-machines then
    for example concepts like "unbounded nondeterminism",
    here is pretty much about determinism, about the
    defined behavior of a machine according to a
    mathematical model that given defined behavior
    of the mechanisms results a mechanical model of
    a mathematical model, such a theory as this.

    So, the theory of computing, with mechanical or
    computational models, models of computing, is
    pretty much exactly like the theory of physics,
    with the attachment of physical models to the
    mathematical models, and rather, "interpreted as",
    the models, the models, with respect to each other.

    I.e. the extensionality that so follows "they're
    equal, or same, or not different, they represent
    each other, they're equi-interpretable, they model
    each other", the models, of logical and mathematical
    and computational and mechanical and physical models,
    help represent that over all the entire thing there
    is a usual theory for the entire thing, it's a theory
    where model theory models extensionality, and in identity
    intensionality, about equality, tautology, identity,
    and sameness/difference, and nearness/farness, and all
    these usual aspects of the mathematical models'
    arithmetic, algebra, and geometry. (And function
    theory and topology, with respect to categories and
    types, sets and parts, relations and predicates,
    then all the model-building among that which is
    all the propositional or the stipulated or axiomatic.)

    The idea is that this sort of platonistic universe of
    mathematical objects has always existed, and it's just
    discovered, then that axiomatics for example, just
    sort of results a model theory of it, with regards
    to models, the modular, modulation, modality, and the mode.


    So, a machine, with respect to computation, establishing
    the validity of interpretations of models, is subject to
    filling out all the above, vis-a-vis what it can do,
    and it can-do.


    Then, "slowly converging" and "slowly diverging"
are examples that get into the fact that there are more
    law(s) of large numbers, than the usual classical
    law of large numbers.

    Some people variously do or don't have a mathematical
    model of larger numbers, or the infinite, at all.
    It's a totally ancient and dogmatic tradition that
    no, we finite peoples, persons, or agents of limited
    and finite means, no we do not have discrete, digital,
    binary mechanical models of the infinite.

    Yet, it's also about the first thing that deduction
    arrives at just from the plain contemplation of the
    beyond the unbounded, like the theories of Democritus
    and Atomism or the theories of Zeno and Continuity,
    that make it so we at least have something to arrive
    at for models of the unbounded, as "methods of exhaustion",
the approximation and attenuation that result in
    the asymptotics of algorithmics.

    So, we do have mental models of the continuous and infinite.
    And, where physics is a continuum mechanics,
    there are physical models of it as well, then
    with regards to the philosophy of science and
    the scientific method and the Big Science and
    from the Primary Science, lots to establish
    that the continuous and analog has mathematical
results, as to whether it is "sensible, fungible, and tractable",
    to machines at all.

    Numerical methods then, and, approximation and
    attenuation, result from analysis, why they exist,
    here for the criteria of convergence and existence
    of limits, in the closed, in the complete.


    So, a metonymy, unless it's "the Strong Metonymy",
    which is utter intensionality of "the ground model",
    is yet a metaphor and a metaphor yet a weaker metonymy
joint among weaker metonymies, that, ontology, has
    that there are or may be a completion, of ontology,
    as a "strong ontology" and "strong metonymy",
    but otherwise it's just a bag of facts.



    Here then there's certainly a perceived analytical
development of Cantor space, and, Square Cantor space,
    and, Signal Cantor space as it were, about that among
    the law(s) of large numbers, there are definitions of
    continuity, at least including field-reals, line-reals,
and signal-reals, three sorts of continuous domains.


    Mathematics owes physics this to better begin to
    explain, to disclose, to find truths of, continuum mechanics.


    Then that analog, fuzzy, multi-value, "quantum" computing
    models (parts) of that, that discrete, binary, digital
    computing does not, is a thing.



    The convergence of terms in series is an example
    of a model of things, that, sometimes has clear
    and direct and perfectly accurate representations,
models, in a stack machine, and sometimes doesn't.


    Of course, for any bounded input, there is a sufficiently
    larger bounded analyzer, that does solve it. So, why not
    just put those all together? It would be infinite,
    yet most things are not. (Or, you know, are.)


    It's called "foundations" the study of all these things
    as a fundamental coherency, a thing together, while of
    its parts.

    So, students should well have the concepts of "abacus"
    and "slide rule" and other "tabulators and computers"
    of the usual manual mechanical variety down, according
    to the principles of the mathematical models behind them,
    then introducing the law(s) of large numbers, then
    introducing the idea of a standard model of an infinite,
    about rates, related rates, asymptotics, and bounds.


    "Foundations" then the idea is that it's been around
    forever, it's our dogma and from our canon and it develops
    over time, and the current edition is called "modern".

    Did I make any mistakes?

Tarski anchors his proof in the Liar Paradox: https://liarparadox.org/Tarski_247_248.pdf

    How Tarski encodes the Liar Paradox
    x ∉ True if and only if p
    where the symbol 'p' represents the whole sentence x

    This is transformed into line 01 of his proof
    by replacing True with Provable

    *Here is the Tarski Undefinability Theorem proof*
    (1) x ∉ Provable if and only if p // assumption
    (2) x ∈ True if and only if p // assumption
    (3) x ∉ Provable if and only if x ∈ True.
    (4) either x ∉ True or x̄ ∉ True; // axiom: ~True(x) ∨ ~True(~x)
    (5) if x ∈ Provable, then x ∈ True; // axiom: Provable(x) → True(x)
    (6) if x̄ ∈ Provable, then x̄ ∈ True; // axiom: Provable(~x) → True(~x)
    (7) x ∈ True
    (8) x ∉ Provable
    (9) x̄ ∉ Provable
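The propositional skeleton of steps (1)-(9) can be checked mechanically: brute-force every truth assignment to the five atoms and confirm that any model of the assumptions satisfies the conclusions. This is only a sketch of the Boolean skeleton, not of Tarski's full formal apparatus; the variable names are illustrative:

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# Atoms: P = "x in Provable", T = "x in True",
# Pn = "neg-x in Provable", Tn = "neg-x in True", p = the sentence.
for P, T, Pn, Tn, p in product([False, True], repeat=5):
    assumptions = (
        (not P) == p,          # (1) x not in Provable iff p
        T == p,                # (2) x in True iff p
        (not T) or (not Tn),   # (4) not both x and neg-x true
        implies(P, T),         # (5) provable implies true
        implies(Pn, Tn),       # (6) same for the negation
    )
    if all(assumptions):
        # (7) x in True, (8) x not in Provable, (9) neg-x not in Provable
        assert T and (not P) and (not Pn)
print("conclusions (7)-(9) hold in every model of (1),(2),(4)-(6)")
```

In fact exactly one assignment survives the assumptions, and it makes x true and both x and its negation unprovable, matching lines (7)-(9).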

    Tarski's Full proof
    https://liarparadox.org/Tarski_275_276.pdf
    --
    Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Richard Damon@richard@damon-family.org to comp.theory,sci.logic,comp.ai.philosophy on Wed Mar 13 22:30:15 2024
    From Newsgroup: comp.ai.philosophy

    On 3/13/24 10:20 PM, olcott wrote:
    On 3/13/2024 1:16 PM, Ross Finlayson wrote:
    On 03/12/2024 09:00 PM, olcott wrote:
    On 3/12/2024 10:49 PM, Ross Finlayson wrote:
    On 03/12/2024 08:23 PM, Ross Finlayson wrote:
    On 03/12/2024 07:52 PM, olcott wrote:
    On 3/12/2024 9:28 PM, Richard Damon wrote:
    On 3/12/24 4:31 PM, olcott wrote:
    On 3/12/2024 6:11 PM, Richard Damon wrote:
    On 3/12/24 3:53 PM, olcott wrote:
    On 3/12/2024 5:30 PM, Richard Damon wrote:
    On 3/12/24 2:34 PM, olcott wrote:
    On 3/12/2024 4:23 PM, Richard Damon wrote:
    On 3/12/24 1:11 PM, olcott wrote:
    On 3/12/2024 2:40 PM, Richard Damon wrote:
    On 3/12/24 12:02 PM, olcott wrote:
    On 3/12/2024 1:31 PM, immibis wrote:
    On 12/03/24 19:12, olcott wrote:
    ∀ H ∈ Turing_Machine_Deciders
∃ TMD ∈ Turing_Machine_Descriptions |
Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)

There is some input TMD to every H such that
Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)
    And it can be a different TMD to each H.

When we disallow decider/input pairs that are incorrect
questions where both YES and NO are the wrong answer

Once we understand that either YES or NO is the right
answer, the whole rebuttal is tossed out as invalid and
incorrect.


    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩
    halts
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩
    does
    not halt
    BOTH YES AND NO ARE THE WRONG ANSWER FOR EVERY Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩

No, because a given H will only go to one of the answers. THAT
will be wrong, and the other one right.


    ∀ H ∈ Turing_Machine_Deciders
    ∃ TMD ∈ Turing_Machine_Descriptions  |
Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)

Not exactly. A pair of otherwise identical machines (that
are contained within the above specified set) that
only differ by return value will both be wrong on the
same pathological input.

You mean a pair of DIFFERENT machines. Any difference is
different.

Every decider/input pair (referenced in the above set) has a
corresponding decider/input pair that only differs by the return
value of its decider.

    Nope.

    ∀ H ∈ Turing_Machines_Returning_Boolean
    ∃ TMD ∈ Turing_Machine_Descriptions  |
    Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)

    Every H/TMD pair (referenced in the above set) has a
    corresponding H/TMD pair that only differs by the return
    value of its Boolean_TM.

    That isn't in the set above.


That both of these H/TMD pairs get the wrong answer proves that
their question was incorrect because the opposite answer to the
same question is also proven to be incorrect.


    Nope, since both aren't in the set selected.


    When they are deciders that must get the correct answer both
    of them are not in the set.

    *IF* they are correct decider.

    WHen we select from all Turing Machine Deciders, there is no
    requirement that any of them get any particular answer right.

So, ALL deciders are in the set that we cycle through and apply the
following logic to ALL of them.

Each is then paired with an input that it will get wrong, and the
existence of the input was what was just proven, via the Ĥ template.


When they are Turing_Machines_Returning_Boolean then this
set inherently includes identical pairs that only differ
by return value.

But in the step of selecting an input that they will get wrong, they
will be given DIFFERENT inputs.


    You just don't understand what that statement is saying.

I've explained it, but it seems over your head.

    No the problem is that you are not paying attention.

No, you keep on making STUPID mistakes, like thinking that the
input selected for a machine to get wrong needs to be the same
for two different machines.




For Every H, we show we can find at least one input (chosen just for
that machine) that it will get wrong.

    When we use machine templates then we can see instances of
the same machine that differ only by return value where both
    get the wrong answer on the same input. By same input I mean
    the same finite string of numerical values.


But if they returned different values, they will have different
    descriptions.

Otherwise, how could a UTM get the right answer, since it only gets
the description.

We can get around all of this stuff by simply using this criteria:
Date 10/13/2022 11:29:23 AM
    *MIT Professor Michael Sipser agreed this verbatim paragraph is
    correct*
(He has neither reviewed nor agreed to anything else in this paper)
(a) If simulating halt decider H correctly simulates its input D
until H
correctly determines that its simulated D would never stop running
unless aborted then
    (b) H can abort its simulation of D and correctly report that D
    specifies a non-halting sequence of configurations.

    *When we apply this criteria* (elaborated above)
    Will you halt if you never abort your simulation?
    *Then the halting problem is conquered*

    When two different machines implementing this criteria
    get different results from identical inputs then we
    know that Pathological Self-Reference has been detected.

    We don't even need to know that for:
    *denial-of-service-attack detection*
    *NO always means reject as unsafe*


    But, Halting theorem never said "there's an input that halts
    all machines", it just says "for any machine, there's an input
    that halts it".

    Where "halt the machine" means "put it in an infinite loop".

    So, rather, Halting theorem never said "there's an input that
    exhausts all machines", it just says, "for any machine, there's
    an input that exhausts it".

    I still don't see how that would be with infinite tapes though,
    without a means of checking all the way right the tape in one
    step, i.e. that it's 1's or 0's or any pattern at all, any
    input that unbounded with respect to the machine basically
    exhausts it or where the machine would run in the unbounded.


    Of course any finite tape, has a static analysis that is
    not infinite, that decides whether or not it halts
    (or, loops, or grows, the state space of the decider).

    Static analysis has to either enumerate or _infer_ the
    state-space, where equal values in what's determined
    the idempotent can detect loops, while inequalities
    or proven divergence, ..., can detect unbounded growth.

    Now, proving convergence or divergence is its own kind
    of thing. For example, there are series that converge
    very slowly, and series that diverge very slowly. These
    are subtly intractable to analysis.

    Then the usual idea of progressions that rapidly grow
    yet return to what's detectable, are pathological to
    analysis, only in resources not correctness or completion,
    vis-a-vis the subtle intractability of the convergence or
    divergence what either halts, or loops, or grows unboundedly.

Separating "not-halts" into "loops or grows unboundedly"
has it that, really, for most matters of interest in understanding
the state-space of programs, the question is "halts or enters loops"
and not "grows unboundedly".

    I.e. if the Halting problem is basically subject to the
    subtle intractability of slow convergence, that otherwise
    it can just infer divergence and decide, practically
    it's sort of more relevant what the machine would be
    doing on the input on the tape, then with respect to
    beyond the Turing theory, of the state of the read-head,
    what happens when somebody modifies the tape, or events,
    the write-head.

    Anyways though for bounded inputs, besides slow divergence,
    it's to be made clear that _most all_ and _almost all_
    programs _have_ their behavior decided by static analysis.

    Though, "most all" and "almost all" might be a bit strong,
    but pretty much all that don't involve "the subtle intractability
    of slow divergence".



    Giving the idea that an existence result
    is in any way the expected result here
    seems sort of the root of this dilemma.






    (Though that the real numbers in ZFC have a well-ordering
    and if they had a normal ordering that was a well-ordering,
    that would be a thing, because ZFC has a well-ordering of
    [0,1], but can't give one.)





    What I'm saying is that you'd need to solve all
    cases of slow convergence and slow divergence
    to have a correct and complete halt decider,
    for the unbounded, and finite, if not the infinite.

    Most setups of convergence, though,
    would effectively exhaust according to the
    existence of criteria of convergence as modeling
    the computation, so, if you exclude
    slow convergence and slow divergence,
    then a subset of very usual machines is of
    course quite all decidable.  (Or decide-able,
    or as I put it once, "not deciddable".)



    Well-ordering the reals in ZFC, that's its own
    sort of issue - that a well-ordering implies the
    existence of a bijection from ordinals to a
    set, then as that as tuples, subsets of those
    are well-ordered by their ordinal part, then
    that if ever uncountably many of those are
    in normal order their real part and irrationals,
    then between those are rationals.  I.e.,
    ZFC doesn't give an example of well-ordering
    the reals, but there is one.


    So, Peter, I think you're kind of misreading that
    quote you cited as authoritative.  It just says "if static
    analysis detects a loop or unboundedly growing
    then it's not-halts".


    Of course, for any bounded resource, there
    are arbitrarily larger bounded resources
    required by what's called pathological,
    that an arbitrarily larger resource can though solve.



    So anyways "slow convergence" and "slow divergence",
    have that in mathematics, it's unknown for some series
    whether they converge or diverge, though it's usually
    accepted that the inverse powers of 2 converge and
    that the sum of integers diverges, and that periodic
    functions do neither.

    I.e. the criteria of convergence and existence of a limit,
    and the special case "diverges as going to infinity",
    are not yet closed in today's mathematics.

    It's sort of a conjecture or independent whether
    they ever could be, closed, and complete, the criteria.



    I have spent 20 years on the conventional halting problem proofs,
    Gödel 1931 Incompleteness and the Liar Paradox. I have only spent
    about seven years on Tarski Undefinability.

    The Halting problem is the only one of these that has a formal system
    that cannot have hidden gaps in its reasoning. If a machine can do it
    then it can be done.


    "Algorithmics" is a study of algorithms. The most usual
    consideration is "asymptotics", "asymptotics of complexity",
    with the idea that computing does work, that there are
    closures and completions, and there are approximations and
    attenuations, as it were, in the full and the final and
    the initial and the partial.

    The most usual resources are time, and space, that actions
    occur in time, and according to a structure, in space.

    Of course any CS grad pretty much knows that.


    The objects of mathematics, have it so, that the finite
    and bounded resources of any collection of digital and
    binary machines, are finite and bounded, while, the unbounded
    models of mathematics that represent them, are unbounded,
    it's bad that anybody ever said a Turing tape was "infinite",
    except that it's unbounded as an object of mathematics, and
    the model of any finite, digital, binary machine.

    One might aver "digital" has a complement in computing,
    "analog", which represents the continuous instead of
    the discrete, and one might aver "binary" has a complement,
    either "real-valued" or "fuzzy" or variously about the
    "multi-valent", in logic, the models, according to the
    mechanisms, the models, their mechanics, machines.

    The Turing machine though is a model of unbounded machine
    as what any thusly simply mechanical model can implement,
    up to its bounds.


    So, the objects of mathematics, then involve analytical
    results. I can't draw a circle, though a compass may
    portray a diagram arbitrarily close, and no different.
    It's the analytical results, that make so for why
    mathematics is: "sensible, fungible, and tractable",
    and for "the ubiquitous success of mathematics" in
    all such matters of physics, which is a continuum mechanics,
    and as well discrete mechanics.

    Of course, results exist in mathematical objects,
    after the unbounded and complete,
    which is why closures and completions are so
    relevant to all such matters, vis-a-vis approximations
    and attenuations, which in bounded terms result
    indistinguishably non-different results,
    up to precision, say, or tolerances.

    So, "machine" might be discrete, binary, and digital,
    or it might be continuous, multivalent, and analog.

    The Turing machine in a concrete form, is bounded.

    Similar models, here mostly for the stack models,
    stack machines, among finite-state-machines then
    for example concepts like "unbounded nondeterminism",
    here is pretty much about determinism, about the
    defined behavior of a machine according to a
    mathematical model that given defined behavior
    of the mechanisms results a mechanical model of
    a mathematical model, such a theory as this.

    So, the theory of computing, with mechanical or
    computational models, models of computing, is
    pretty much exactly like the theory of physics,
    with the attachment of physical models to the
    mathematical models, and rather, "interpreted as",
    the models, the models, with respect to each other.

    I.e. the extensionality that so follows "they're
    equal, or same, or not different, they represent
    each other, they're equi-interpretable, they model
    each other", the models, of logical and mathematical
    and computational and mechanical and physical models,
    help represent that over all the entire thing there
    is a usual theory for the entire thing, it's a theory
    where model theory models extensionality, and in identity
    intensionality, about equality, tautology, identity,
    and sameness/difference, and nearness/farness, and all
    these usual aspects of the mathematical models'
    arithmetic, algebra, and geometry. (And function
    theory and topology, with respect to categories and
    types, sets and parts, relations and predicates,
    then all the model-building among that which is
    all the propositional or the stipulated or axiomatic.)

    The idea is that this sort of platonistic universe of
    mathematical objects has always existed, and it's just
    discovered, then that axiomatics for example, just
    sort of results a model theory of it, with regards
    to models, the modular, modulation, modality, and the mode.


    So, a machine, with respect to computation, establishing
    the validity of interpretations of models, is subject to
    filling out all the above, vis-a-vis what it can do,
    and it can-do.


    Then, "slowly converging" and "slowly diverging"
    are examples that get into that there are more
    law(s) of large numbers, than the usual classical
    law of large numbers.

    Some people variously do or don't have a mathematical
    model of larger numbers, or the infinite, at all.
    It's a totally ancient and dogmatic tradition that
    no, we finite peoples, persons, or agents of limited
    and finite means, no we do not have discrete, digital,
    binary mechanical models of the infinite.

    Yet, it's also about the first thing that deduction
    arrives at just from the plain contemplation of the
    beyond the unbounded, like the theories of Democritus
    and Atomism or the theories of Zeno and Continuity,
    that make it so we at least have something to arrive
    at for models of the unbounded, as "methods of exhaustion",
    the approximation and attenuative what results
    the asymptotics of algorithmics.

    So, we do have mental models of the continuous and infinite.
    And, where physics is a continuum mechanics,
    there are physical models of it as well, then
    with regards to the philosophy of science and
    the scientific method and the Big Science and
    from the Primary Science, lots to establish
    that the continuous and analog has mathematical
    results, as to whether "sensible, fungible, and tractable",
    to machines at all.

    Numerical methods then, and, approximation and
    attenuation, result from analysis, why they exist,
    here for the criteria of convergence and existence
    of limits, in the closed, in the complete.


    So, a metonymy, unless it's "the Strong Metonymy",
    which is utter intensionality of "the ground model",
    is yet a metaphor and a metaphor yet a weaker metonymy
    joint among weaker metonymys, that, ontology, has
    that there are or may be a completion, of ontology,
    as a "strong ontology" and "strong metonymy",
    but otherwise it's just a bag of facts.



    Here then there's certainly a perceived analytical
    development Cantor space, and, Square Cantor space,
    and, Signal Cantor space as it were, about that among
    the law(s) of large numbers, there are definitions of
    continuity, at least including field-reals, line-reals,
    and signal-reals, three sorts continuous domains.


    Mathematics owes physics this to better begin to
    explain, to disclose, to find truths of, continuum mechanics.


    Then that analog, fuzzy, multi-value, "quantum" computing
    models (parts) of that, that discrete, binary, digital
    computing does not, is a thing.



    The convergence of terms in series is an example
    of a model of things, that, sometimes has clear
    and direct and perfectly accurate representations,
    models, in stack machine, and sometimes doesn't.


    Of course, for any bounded input, there is a sufficiently
    larger bounded analyzer, that does solve it. So, why not
    just put those all together? It would be infinite,
    yet most things are not. (Or, you know, are.)


    It's called "foundations" the study of all these things
    as a fundamental coherency, a thing together, while of
    its parts.

    So, students should well have the concepts of "abacus"
    and "slide rule" and other "tabulators and computers"
    of the usual manual mechanical variety down, according
    to the principles of the mathematical models behind them,
    then introducing the law(s) of large numbers, then
    introducing the idea of a standard model of an infinite,
    about rates, related rates, asymptotics, and bounds.


    "Foundations" then the idea is that it's been around
    forever, it's our dogma and from our canon and it develops
    over time, and the current edition is called "modern".

    Did I make any mistakes?

    Tarski anchors his proof in the Liar Paradox:
    https://liarparadox.org/Tarski_247_248.pdf

    How Tarski encodes the Liar Paradox
    x ∉ True if and only if p
    where the symbol 'p' represents the whole sentence x

    This is transformed into line 01 of his proof
    by replacing True with Provable

    *Here is the Tarski Undefinability Theorem proof*
    (1) x ∉ Provable if and only if p   // assumption
    (2) x ∈ True if and only if p       // assumption
    (3) x ∉ Provable if and only if x ∈ True.
    (4) x ∉ True or x̄ ∉ True;          // axiom: ~True(x) ∨ ~True(~x)
    (5) if x ∈ Provable, then x ∈ True; // axiom: Provable(x) → True(x)
    (6) if x̄ ∈ Provable, then x̄ ∈ True;  // axiom: Provable(~x) → True(~x)
    (7) x ∈ True
    (8) x ∉ Provable
    (9) x̄ ∉ Provable

    Tarski's Full proof
    https://liarparadox.org/Tarski_275_276.pdf
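How lines (3) and (7)-(9) follow from the listed assumptions and axioms can be written out step by step. This is a reconstruction of the standard reading, not a quotation from Tarski:

```latex
% (3) from (1) and (2): two biconditionals sharing the same p compose.
(x \notin \mathrm{Provable} \leftrightarrow p)
  \land (x \in \mathrm{True} \leftrightarrow p)
  \;\Rightarrow\;
  (x \notin \mathrm{Provable} \leftrightarrow x \in \mathrm{True})

% (7): suppose x \notin \mathrm{True}.  By (3), x \in \mathrm{Provable};
%      by (5), x \in \mathrm{True} -- contradiction.  Hence x \in \mathrm{True}.
% (8): from (7) and (3), x \notin \mathrm{Provable}.
% (9): from (7) and (4), \bar{x} \notin \mathrm{True}; if
%      \bar{x} \in \mathrm{Provable} then by (6) \bar{x} \in \mathrm{True},
%      a contradiction.  Hence \bar{x} \notin \mathrm{Provable}.
```

So x is a true sentence such that neither it nor its negation is provable, which is the shape of the undefinability/incompleteness conclusion.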




    Except that (1) and (2) are not "assumptions", but are sentences that he
    shows can be constructed in the meta-theory, not "assumed".

    You don't seem to understand the difference.
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Ross Finlayson@ross.a.finlayson@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Thu Mar 14 09:58:33 2024
    From Newsgroup: comp.ai.philosophy

    On 03/13/2024 10:20 PM, olcott wrote:
    On 3/13/2024 1:16 PM, Ross Finlayson wrote:
    On 03/12/2024 09:00 PM, olcott wrote:
    On 3/12/2024 10:49 PM, Ross Finlayson wrote:
    On 03/12/2024 08:23 PM, Ross Finlayson wrote:
    On 03/12/2024 07:52 PM, olcott wrote:
    On 3/12/2024 9:28 PM, Richard Damon wrote:
    On 3/12/24 4:31 PM, olcott wrote:
    On 3/12/2024 6:11 PM, Richard Damon wrote:
    On 3/12/24 3:53 PM, olcott wrote:
    On 3/12/2024 5:30 PM, Richard Damon wrote:
    On 3/12/24 2:34 PM, olcott wrote:
    On 3/12/2024 4:23 PM, Richard Damon wrote:
    On 3/12/24 1:11 PM, olcott wrote:
    On 3/12/2024 2:40 PM, Richard Damon wrote:
    On 3/12/24 12:02 PM, olcott wrote:
    On 3/12/2024 1:31 PM, immibis wrote:
    On 12/03/24 19:12, olcott wrote:
    ∀ H ∈ Turing_Machine_Deciders
    ∃ TMD ∈ Turing_Machine_Descriptions |
    Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)

    There is some input TMD to every H such that
    Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)
    And it can be a different TMD to each H.

    When we disallow decider/input pairs that are incorrect
    questions where both YES and NO are the wrong answer

    Once we understand that either YES or NO is the right
    answer, the whole rebuttal is tossed out as invalid and
    incorrect.


    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn    // Ĥ applied to ⟨Ĥ⟩ does not halt
    BOTH YES AND NO ARE THE WRONG ANSWER FOR EVERY Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩

    No, because a given H will only go to one of the answers. THAT
    will be wrong, and the other one right.


    ∀ H ∈ Turing_Machine_Deciders
    ∃ TMD ∈ Turing_Machine_Descriptions |
    Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)

    Not exactly. A pair of otherwise identical machines
    (that are contained within the above specified set)
    that only differ by return value will both be wrong on the
    same pathological input.

    You mean a pair of DIFFERENT machines. Any difference is
    different.

    Every decider/input pair (referenced in the above set) has a
    corresponding decider/input pair that only differs by the
    return value of its decider.

    Nope.

    ∀ H ∈ Turing_Machines_Returning_Boolean
    ∃ TMD ∈ Turing_Machine_Descriptions |
    Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)

    Every H/TMD pair (referenced in the above set) has a
    corresponding H/TMD pair that only differs by the return
    value of its Boolean_TM.

    That isn't in the set above.


    That both of these H/TMD pairs get the wrong answer proves that
    their question was incorrect because the opposite answer to the
    same question is also proven to be incorrect.


    Nope, since both aren't in the set selected.


    When they are deciders that must get the correct answer, both
    of them are not in the set.

    *IF* they are correct decider.

    When we select from all Turing Machine Deciders, there is no
    requirement that any of them get any particular answer right.

    So, ALL deciders are in the set that we cycle through and apply the
    following logic to ALL of them.

    Each is then paired with an input that it will get wrong, and the
    existence of the input was what was just proven, via the template.


    When they are Turing_Machines_Returning_Boolean then this
    set inherently includes identical pairs that only differ
    by return value.

    But in the step of selecting an input that they will get wrong, they
    will be given DIFFERENT inputs.


    You just don't understand what that statement is saying.

    I've explained it, but it seems over your head.

    No the problem is that you are not paying attention.

    No, you keep on making STUPID mistakes, like thinking that selecting
    an input that the machine will get wrong needs to be the same for two
    different machines.




    For every H, we show we can find at least one input (chosen
    just for that machine) that it will get wrong.

    When we use machine templates then we can see instances of
    the same machine that only differs by return value where both
    get the wrong answer on the same input. By same input I mean
    the same finite string of numerical values.


    But if they returned different values, they will have different
    descriptions.

    Otherwise, how could a UTM get the right answer, since it only gets
    the description?

    We can get around all of this stuff by simply using this criteria:
    Date 10/13/2022 11:29:23 AM
    *MIT Professor Michael Sipser agreed this verbatim paragraph is correct*
    (He has neither reviewed nor agreed to anything else in this paper)
    (a) If simulating halt decider H correctly simulates its input D until H
    correctly determines that its simulated D would never stop running
    unless aborted then
    (b) H can abort its simulation of D and correctly report that D
    specifies a non-halting sequence of configurations.

    *When we apply this criteria* (elaborated above)
    Will you halt if you never abort your simulation?
    *Then the halting problem is conquered*
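The quoted criterion can be modeled, very loosely, as a simulator with an abort rule. A toy sketch under a strong assumption: the only "would never stop running" pattern it detects is a repeated configuration, which is far weaker than what the criterion demands in general:

```python
def simulating_halt_decider(step, state, budget=100_000):
    """Toy model of the quoted criterion: simulate the input; if the
    simulation provably would never stop running unless aborted
    (here: a configuration repeats), abort and report non-halting."""
    seen = set()
    for _ in range(budget):
        if state is None:
            return "halts"
        if state in seen:            # repetition => would run forever
            return "non-halting"     # (b): abort and report
        seen.add(state)
        state = step(state)          # (a): correct simulation, one step
    return "undecided within budget"

print(simulating_halt_decider(lambda s: s - 1 if s > 0 else None, 3))  # halts
print(simulating_halt_decider(lambda s: s, 0))                         # non-halting
```

Whether such a simulator can "correctly determine" non-halting for the pathological self-referential inputs is exactly the point in dispute in this thread; the sketch only models the abort-and-report shape of the criterion.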

    When two different machines implementing this criteria
    get different results from identical inputs then we
    know that Pathological Self-Reference has been detected.

    We don't even need to know that for:
    *denial-of-service-attack detection*
    *NO always means reject as unsafe*


    But, Halting theorem never said "there's an input that halts
    all machines", it just says "for any machine, there's an input
    that halts it".

    Where "halt the machine" means "put it in an infinite loop".

    So, rather, Halting theorem never said "there's an input that
    exhausts all machines", it just says, "for any machine, there's
    an input that exhausts it".

    I still don't see how that would be with infinite tapes though,
    without a means of checking all the way right the tape in one
    step, i.e. that it's 1's or 0's or any pattern at all, any
    input that unbounded with respect to the machine basically
    exhausts it or where the machine would run in the unbounded.


    Of course any finite tape, has a static analysis that is
    not infinite, that decides whether or not it halts
    (or, loops, or grows, the state space of the decider).

    Static analysis has to either enumerate or _infer_ the
    state-space, where equal values in what's determined
    the idempotent can detect loops, while inequalities
    or proven divergence, ..., can detect unbounded growth.

    Now, proving convergence or divergence is its own kind
    of thing. For example, there are series that converge
    very slowly, and series that diverge very slowly. These
    are subtly intractable to analysis.

    Then the usual idea of progressions that rapidly grow
    yet return to what's detectable, are pathological to
    analysis, only in resources not correctness or completion,
    vis-a-vis the subtle intractability of the convergence or
    divergence what either halts, or loops, or grows unboundedly.

    Separately "not-halts" into "loops or grows unboundedly",
    has that really for most matters of interest of understanding
    the state-space of programs, is "halts or enters loops"
    and not "grows unboundedly".

    I.e. if the Halting problem is basically subject the
    subtle intractability of slow convergence, that otherwise
    it can just infer divergence and decide, practically
    it's sort of more relevant what the machine would be
    doing on the input on the tape, then with respect to
    beyond the Turing theory, of the state of the read-head,
    what happens when somebody modifies the tape, or events,
    the write-head.

    Anyways though for bounded inputs, besides slow divergence,
    it's to be made clear that _most all_ and _almost all_
    programs _are_ decided their behavior by static analysis.

    Though, "most all" and "almost all" might be a bit strong,
    but pretty much all that don't involve "the subtle intractability
    of slow divergence".



    Giving the idea that an existence result
    is in any way the expected result here
    seems sort of the root of this dilem-na.






    (Though that the real numbers in ZFC have a well-ordering
    and if they had a normal ordering that was a well-ordering,
    that would be a thing, because ZFC has a well-ordering of
    [0,1], but can't give one.)





    What I'm saying is that you'd need to solve all
    cases of slow convergence and slow divergence
    to have a correct and complete halt decider,
    for the unbounded, and finite, if not the infinite.

    Most though setups of convergence though
    would effectively exhaust according to the
    existence of criteria of convergence as modeling
    the computation, though, so, if you exclude
    slow convergence and slow divergence,
    then a subset of very usual machines is of
    course quite all decidable. (Or decide-able,
    or as I put it once, "not deciddable".)



    Well-ordering the reals in ZFC, that's it own
    sort issue - that a well-ordering implies the
    existence of a bijection from ordinals to a
    set, then as that as tuples, subsets of those
    are well-ordered by their ordinal part, then
    that if ever uncountably many of those are
    in normal order their real part and irrationals,
    then between those are rationals. I.e.,
    ZFC doesn't give an example of well-ordering
    the reals, but there is one.


    So, Peter, I think you're kind of misreading that
    quote you authoritated. It just says "if static
    analysis detects a loop or unboundedly growing
    then it's not-halts".


    Of course, for any bounded resource, there
    are arbitrarily larger bounded resources
    required by what's called pathological,
    that an arbitrarily larger resource can though solve.



    So anyways "slow convergence" and "slow divergence",
    have that in mathematics, it's unknown for some series
    whether they converge or diverge, though it's usually
    accepted that the inverse powers of 2 converge and
    that the sum of integers diverges, and that periodic
    functions do neither.

    I.e. the criteria of convergence and existence of a limit,
    and the special case "diverges as going to infinity",
    are not yet closed in today's mathematics.

    It's sort of a conjecture or independent whether
    they ever could be, closed, and complete, the criteria.



    I have spent 20 years on The conventional halting problem proofs,
    Gödel 1931 Incompleteness and the Liar Paradox. I have only spent
    about seven years on Tarski Undefinability.

    The Halting problem is the only one of these that has a formal system
    that cannot have hidden gaps its its reasoning. If a machine can do it
    then it can be done.


    "Algorithmics" is a study of algorithms. The most usual
    consideration is "asymptotics", "asymptotics of complexity",
    with the idea that computing does work, that there are
    closures and completions, and there are approximations and
    attenuations, as it were, in the full and the final and
    the initial and the partial.

    The most usual resources are time, and space, that actions
    occur in time, and according to a structure, in space.

    Of course any CS grad pretty much knows that.


    The objects of mathematics, have it so, that the finite
    and bounded resources of any collection of digital and
    binary machines, are finite and bounded, while, the unbounded
    models of mathematics that represent them, are unbounded,
    it's bad that anybody ever said a Turing tape was "infinite",
    except that it's unbounded as an object of mathematics, and
    the model of any finite, digital, binary machine.

    One might aver "digital" has a complement in computing,
    "analog", which represents the continuous instead of
    the discrete, and one might aver "binary" has a complement,
    either "real-valued" or "fuzzy" or variously about the
    "multi-valent", in logic, the models, according to the
    mechanisms, the models, their mechanics, machines.

    The Turing machine though is a model of unbounded machine
    as what any thusly simply mechanical model can implement,
    up to its bounds.


    So, the objects of mathematics, then involve analytical
    results. I can't draw a circle, though a compass may
    portray a diagram arbitrarily close, and no different.
    It's the analytical results, that make so for why
    mathematics is: "sensible, fungible, and tractable",
    and for "the ubiquitous success of mathematics" in
    all such matters of physics, which is a continuum mechanics,
    and as well discrete mechanics.

    Of course, results exist in mathematical objects,
    after the unbounded the unbounded and complete,
    which is why closures and completions are so
    relevant to all such matters, vis-a-vis approximations
    and attenuations, which in bounded terms result
    indistinguishably non-different results,
    up to precision, say, or tolerances.

    So, "machine" might be discrete, binary, and digital,
    or it might be continuous, multivalent, and analog.

    The Turing machine in a concrete form, is bounded.

    Similar models, here mostly for the stack models,
    stack machines, among finite-state-machines then
    for example concepts like "unbounded nondeterminism",
    here is pretty much about determinism, about the
    defined behavior of a machine according to a
    mathematical model that given defined behavior
    of the mechanisms results a mechanical model of
    a mathematical model, such a theory as this.

    So, the theory of computing, with mechanical or
    computational models, models of computing, is
    pretty much exactly like the theory of physics,
    with the attachment of physical models to the
    mathematical models, and rather, "interpreted as",
    the models, the models, with respect to each other.

    I.e. the extensionality that so follows "they're
    equal, or same, or not different, they represent
    each other, they're equi-interpretable, they model
    each other", the models, of logical and mathematical
    and computational and mechanical and physical models,
    help represent that over all the entire thing there
    is a usual theory for the entire thing, it's a theory
    where model theory models extensionality, and in identity
    intensionality, about equality, tautology, identity,
    and sameness/difference, and nearness/farness, and all
    these usual aspects of the mathematical models'
    arithmetic, algebra, and geometry. (And function
    theory and topology, with respect to categories and
    types, sets and parts, relations and predicates,
    then all the model-building among that which is
    all the propositional or the stipulated or axiomatic.)

    The idea is that this sort of platonistic universe of
    mathematical objects has always existed, and it's just
    discovered, then that axiomatics for example, just
    sort of results a model theory of it, with regards
    to models, the modular, modulation, modality, and the mode.


    So, a machine, with respect to computation, establishing
    the validity of interpretations of models, is subject to
    filling out all the above, vis-a-vis what it can do,
    and it can-do.


    Then, "slowly converging" and "slowly diverging"
    are examples that get get into that there are more
    law(s) of large numbers, than the usual classical
    law of large numbers.

    Some people variously do or don't have a mathematical
    model of larger numbers, or the infinite, at all.
    It's a totally ancient and dogmatic tradition that
    no, we finite peoples, persons, or agents of limited
    and finite means, no we do not have discrete, digital,
    binary mechanical models of the infinite.

    Yet, it's also about the first thing that deduction
    arrives at just from the plain contemplation of the
    beyond the unbounded, like the theories of Democritus
    and Atomism or the theories of Zeno and Continuity,
    that make it so we at least have something to arrive
    at for models of the unbounded, as "methods of exhaustion",
    the approximation and attenuative what results
    the asymptotics of algorithmics.

    So, we do have mental models of the continuous and infinite.
    And, where physics is a continuum mechanics,
    there are physical models of it as well, then
    with regards to the philosophy of science and
    the scientific method and the Big Science and
    from the Primary Science, lots to establish
    that the continuous and analog has mathematical
    results, as whethe, "sensible, fungible, and tractable",
    to machines at all.

    Numerical methods then, and, approximation and
    attenuation, result from analysis, why they exist,
    here for the criteria of convergence and existence
    of limits, in the closed, in the complete.


    So, a metonymy, unless it's "the Strong Metonymy",
    which is utter intensionality of "the ground model",
    is yet a metaphor and a metaphor yet a weaker metonymy
    joint among weaker metonymys, that, ontology, has
    that there are or may be a completion, of ontology,
    as a "strong ontology" and "strong metonymy",
    but otherwise it's just a bag of facts.



    Here then there's certainly a perceived analytical
    development Cantor space, and, Square Cantor space,
    and, Signal Cantor space as it were, about that among
    the law(s) of large numbers, there are definitions of
    continuity, at least including field-reals, line-reals,
    and signal-reals, three sorts continuous domains.


    Mathematics owes physics this to better begin to
    explain, to disclose, to find truths of, continuum mechanics.


    Then that analog, fuzzy, multi-value, "quantum" computing
    models (parts) of that, that discrete, binary, digital
    computing does not, is a thing.



    The convergence of terms in series is an example
    of a model of things, that, sometimes has clear
    and direct and perfectly accurate representations,
    models, in stack machine, and sometimes doesn't.


    Of course, for any bounded input, there is a sufficiently
    larger bounded analyzer, that does solve it. So, why not
    just put those all together? It would be infinite,
    yet most things are not. (Or, you know, are.)
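    The "sufficiently larger bounded analyzer" above can be sketched
    directly: for a machine whose configurations are drawn from a
    finite set, simulation plus a seen-set decides halting outright.
    A minimal sketch, assuming a hypothetical step function over
    hashable configurations (names are illustrative, not from the
    thread):

```python
# Sketch: a bounded machine has finitely many configurations, so a
# sufficiently larger analyzer decides it by simulation alone -- it
# either reaches the halt state or revisits a configuration, which
# forces a loop.

def decides_halting_bounded(step, config):
    """Return True iff the bounded machine halts; step returns the
    next configuration, or None for the halt state."""
    seen = set()
    while config is not None:
        if config in seen:          # revisited configuration => loop
            return False
        seen.add(config)
        config = step(config)
    return True

# A two-configuration machine that loops forever: 0 -> 1 -> 0 -> ...
looper = lambda c: (c + 1) % 2
# A machine that counts down to the halt state.
halter = lambda c: c - 1 if c > 0 else None
```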


    It's called "foundations", the study of all these things
    as a fundamental coherency, a thing together, while of
    its parts.

    So, students should well have the concepts of "abacus"
    and "slide rule" and other "tabulators and computers"
    of the usual manual mechanical variety down, according
    to the principles of the mathematical models behind them,
    then introducing the law(s) of large numbers, then
    introducing the idea of a standard model of an infinite,
    about rates, related rates, asymptotics, and bounds.


    "Foundations" then the idea is that it's been around
    forever, it's our dogma and from our canon and it develops
    over time, and the current edition is called "modern".

    Did I make any mistakes?

    Tarski anchors his proof in the Liar Paradox
    https://liarparadox.org/Tarski_247_248.pdf

    How Tarski encodes the Liar Paradox
    x ∉ True if and only if p
    where the symbol 'p' represents the whole sentence x

    This is transformed into line 01 of his proof
    by replacing True with Provable

    *Here is the Tarski Undefinability Theorem proof*
    (1) x ∉ Provable if and only if p // assumption
    (2) x ∈ True if and only if p // assumption
    (3) x ∉ Provable if and only if x ∈ True.
    (4) either x ∉ True or x̄ ∉ True;    // axiom: ~True(x) ∨ ~True(~x)
    (5) if x ∈ Provable, then x ∈ True; // axiom: Provable(x) → True(x)
    (6) if x̄ ∈ Provable, then x̄ ∈ True; // axiom: Provable(~x) → True(~x)
    (7) x ∈ True
    (8) x ∉ Provable
    (9) x̄ ∉ Provable

    Tarski's Full proof
    https://liarparadox.org/Tarski_275_276.pdf
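    For the record, lines (7)-(9) do follow from (1)-(6) by ordinary
    propositional reasoning; a reconstruction of the steps the excerpt
    leaves implicit:

```latex
% Reconstruction of how (7)-(9) follow from assumptions (1)-(3) and
% axioms (4)-(6); fills in steps the excerpt leaves implicit.
\begin{align*}
&\text{Assume } x \in Provable.                        &&\\
&x \in True                                            && \text{by (5)}\\
&x \notin Provable                                     && \text{by (3)}\\
&\text{Contradiction, hence (8): } x \notin Provable.  &&\\
&\text{(7): } x \in True                               && \text{by (3) and (8)}\\
&\bar{x} \notin True                                   && \text{by (4) and (7)}\\
&\text{(9): } \bar{x} \notin Provable                  && \text{by (6), contrapositive}
\end{align*}
```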





    Well I have some collected works of Tarski so
    I'll be looking to them, the excerpt there you
    reference basically talks about the model under
    consideration, as a fragment, of "the meta-theory",
    i.e., "Tarski's meta-theory" is "rather underdefined",
    that what he does is that he says that in the meta-theory,
    that there's something about the model under consideration,
    that isn't true in the model but is true, and he invokes
    Goedel to basically say that Goedel's incompleteness
    applies to this model, which of course must thus be
    "strong enough to support arithmetic", where otherwise
    of course, Goedel's incompleteness does _not_ apply
    to theories that may be entirely closed categories.

    It seems the issue is that you think that you have
    entirely closed categories, in your meta-theory,
    then are applying this meta-theory to another model
    under consideration, which isn't, so it seems like
    you have implicits on your model, which are underdefined,
    rather as the "properly logical" or "non-logical" are
    with respect to being modeled by the logical, that
    you have some sort of "un-logical" objects that are the
    result of that the axioms are un-logical, with regards
    to that your "meta-theory" is logical and your model
    under consideration, its _logical_ objects, are actually
    "un-logical" objects with respect to your, "meta-theory".


    That's a problem overall with "meta-theory", here the
    goal is to arrive at a theory that is its own meta-theory,
    that's also all one language, with regards to otherwise
    these sorts of "fragments", of theories, interpreting each
    other, which is only ordinary, and when the objects
    are structurally self-defining or under-defined, as it
    were, when you're talking about quantifier ambiguity,
    in a fragment with decision or the non-logical about
    quantifier ambiguity in the meta-theory, then you've
    basically stipulations which aren't so much "true axioms"
    as "preconceived notions". They're stipulations.

    It seems you're sort of invoking a wash among
    meta-theory, theory, fragment, theory, meta-theory,
    theory, fragment, theory - and not modeling it,
    instead being sort of memoryless and unconscious
    about it, it's sort of considered unconscientious
    about it.


    I've been talking about these kinds of things in
    my podcasts recently, or I read a biography of Tarski
    and sort of address modern metaphysics and with regards
    to formal foundations.

    https://www.youtube.com/watch?v=lNYTU97XCMw&list=PLb7rLSBiE7F4eHy5vT61UYFR7_BIhwcOY&index=28

    For example the ontology and teleology have a
    sort of dualism, here like "fragment" and "meta-theory",
    the goal is a sort of "monist dualism" that's a
    sort of "dualist monism", here about the "logical".


    Good luck though it's a usual thing, about what
    there's basically _expansion_ of comprehension first,
    then that _restriction_ of comprehension is always
    fragments, it really does get into ideas like, for
    computability theory "the model of the integers,
    is only either a fragment or extension, more than
    standard".

    So, book-keeping your meta-theories or abstractions
    and fragments or grasps, it's like, "get a grip",
    right, but, "left-hand/right-hand".

    It's sort of like "left-hand/right-hand:
    stop hitting yourself, I'm not doing it".






    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From olcott@polcott2@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Thu Mar 14 14:01:23 2024
    From Newsgroup: comp.ai.philosophy

    On 3/14/2024 11:58 AM, Ross Finlayson wrote:
    On 03/13/2024 10:20 PM, olcott wrote:
    On 3/13/2024 1:16 PM, Ross Finlayson wrote:
    On 03/12/2024 09:00 PM, olcott wrote:
    On 3/12/2024 10:49 PM, Ross Finlayson wrote:
    On 03/12/2024 08:23 PM, Ross Finlayson wrote:
    On 03/12/2024 07:52 PM, olcott wrote:
    On 3/12/2024 9:28 PM, Richard Damon wrote:
    On 3/12/24 4:31 PM, olcott wrote:
    On 3/12/2024 6:11 PM, Richard Damon wrote:
    On 3/12/24 3:53 PM, olcott wrote:
    On 3/12/2024 5:30 PM, Richard Damon wrote:
    On 3/12/24 2:34 PM, olcott wrote:
    On 3/12/2024 4:23 PM, Richard Damon wrote:
    On 3/12/24 1:11 PM, olcott wrote:
    On 3/12/2024 2:40 PM, Richard Damon wrote:
    On 3/12/24 12:02 PM, olcott wrote:
    On 3/12/2024 1:31 PM, immibis wrote:
    On 12/03/24 19:12, olcott wrote:
    ∀ H ∈ Turing_Machine_Deciders
    ∃ TMD ∈ Turing_Machine_Descriptions |
    Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)

    There is some input TMD to every H such that
    Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)

    And it can be a different TMD to each H.

    When we disallow decider/input pairs that are incorrect
    questions where both YES and NO are the wrong answer

    Once we understand that either YES or NO is the right
    answer, the whole rebuttal is tossed out as invalid and
    incorrect.


    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩ does not halt

    BOTH YES AND NO ARE THE WRONG ANSWER FOR EVERY Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩

    No, because a given H will only go to one of the answers.
    THAT will be wrong, and the other one right.


    ∀ H ∈ Turing_Machine_Deciders
    ∃ TMD ∈ Turing_Machine_Descriptions  |
    Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)

    Not exactly. A pair of otherwise identical machines
    (that are contained within the above specified set)
    that only differ by return value will both be wrong
    on the same pathological input.

    You mean a pair of DIFFERENT machines. Any difference
    is different.

    Every decider/input pair (referenced in the above set) has a
    corresponding decider/input pair that only differs by the return
    value of its decider.

    Nope.

    ∀ H ∈ Turing_Machines_Returning_Boolean
    ∃ TMD ∈ Turing_Machine_Descriptions  |
    Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)

    Every H/TMD pair (referenced in the above set) has a
    corresponding H/TMD pair that only differs by the return
    value of its Boolean_TM.

    That isn't in the set above.

    That both of these H/TMD pairs get the wrong answer proves that
    their question was incorrect because the opposite answer to the
    same question is also proven to be incorrect.


    Nope, since both aren't in the set selected.


    When they are deciders that must get the correct answer both
    of them are not in the set.

    *IF* they are correct deciders.

    When we select from all Turing Machine Deciders, there is no
    requirement that any of them get any particular answer right.

    So, ALL deciders are in the set that we cycle through and apply
    the following logic to ALL of them.

    Each is then paired with an input that it will get wrong, and the
    existence of the input was what was just proven, by the Ĥ template.

    When they are Turing_Machines_Returning_Boolean then this
    set inherently includes identical pairs that only differ
    by return value.

    But in the step of selecting an input that they will get wrong,
    they will be given DIFFERENT inputs.


    You just don't understand what that statement is saying.

    I've explained it, but it seems over your head.

    No the problem is that you are not paying attention.

    No, you keep on making STUPID mistakes, like thinking that
    selecting an input that the machine will get wrong needs to be
    the same for two different machines.




    For Every H, we show we can find at least one input
    (chosen just for that machine) that it will get wrong.

    When we use machine templates then we can see instances of
    the same machine that only differ by return value where both
    get the wrong answer on the same input. By same input I mean
    the same finite string of numerical values.


    But if they returned different values, they will have
    different descriptions.

    Otherwise, how could a UTM get the right answer, since it
    only gets the description.

    We can get around all of this stuff by simply using this criteria:
    Date 10/13/2022 11:29:23 AM
    *MIT Professor Michael Sipser agreed this verbatim paragraph is correct*
    (He has neither reviewed nor agreed to anything else in this paper)
    (a) If simulating halt decider H correctly simulates its input D
    until H correctly determines that its simulated D would never stop
    running unless aborted then
    (b) H can abort its simulation of D and correctly report that D
    specifies a non-halting sequence of configurations.

    *When we apply this criteria* (elaborated above)
    Will you halt if you never abort your simulation?
    *Then the halting problem is conquered*

    When two different machines implementing this criteria
    get different results from identical inputs then we
    know that Pathological Self-Reference has been detected.

    We don't even need to know that for:
    *denial-of-service-attack detection*
    *NO always means reject as unsafe*
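    As a practical note, the "NO always means reject as unsafe" policy
    can be implemented as a resource-bounded watchdog rather than a
    true halt decider: simulate under a step budget and abort on
    overrun. A minimal Python sketch, with an illustrative budget and
    helper names (this approximates the criteria above; it is not the
    H of the thread):

```python
# Sketch of "NO always means reject as unsafe": run a function under a
# step budget and abort if it runs too long. A rejected run may have
# halted later, or never -- this is a watchdog, not a halt decider.
import sys

def watchdog(fn, arg, budget=10_000):
    """Return 'halts' if fn(arg) finishes within budget traced steps,
    else 'reject as unsafe'."""
    steps = 0
    def tracer(frame, event, _):
        nonlocal steps
        steps += 1
        if steps > budget:          # budget exhausted: abort simulation
            raise TimeoutError
        return tracer
    sys.settrace(tracer)
    try:
        fn(arg)
        return "halts"
    except TimeoutError:
        return "reject as unsafe"
    finally:
        sys.settrace(None)          # always stop tracing

def spin(_):        # never halts
    while True:
        pass

def quick(n):       # halts immediately
    return n + 1
```

    The design choice mirrors the denial-of-service point: false
    rejections are tolerated, false acceptances are not.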


    But, Halting theorem never said "there's an input that halts
    all machines", it just says "for any machine, there's an input
    that halts it".

    Where "halt the machine" means "put it in an infinite loop".

    So, rather, Halting theorem never said "there's an input that
    exhausts all machines", it just says, "for any machine, there's
    an input that exhausts it".
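    The "for any machine, there's an input that exhausts it" step is
    the usual diagonal construction, which can be sketched with Python
    functions standing in for machine descriptions (make_diagonal and
    always_yes are illustrative names, not from the thread):

```python
# Sketch of the diagonal step: given any claimed halt decider
# H(prog, inp) -> bool, build a program D that does the opposite of
# H's prediction about D itself, so H must be wrong on input D.

def make_diagonal(H):
    """Return a program D such that H(D, D) mispredicts D's behavior."""
    def D(inp):
        if H(D, inp):        # H predicts "halts" ...
            while True:      # ... so D loops forever
                pass
        return None          # H predicts "loops", so D halts
    return D

# A (necessarily wrong somewhere) decider that always answers "halts":
always_yes = lambda prog, inp: True
D = make_diagonal(always_yes)
# always_yes(D, D) says D halts, yet D(D) would loop forever.
```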

    I still don't see how that would be with infinite tapes though,
    without a means of checking all the way right the tape in one
    step, i.e. that it's 1's or 0's or any pattern at all, any
    input that unbounded with respect to the machine basically
    exhausts it or where the machine would run in the unbounded.


    Of course any finite tape, has a static analysis that is
    not infinite, that decides whether or not it halts
    (or, loops, or grows, the state space of the decider).

    Static analysis has to either enumerate or _infer_ the
    state-space, where equal values in what's determined
    the idempotent can detect loops, while inequalities
    or proven divergence, ..., can detect unbounded growth.
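    The two signals named above, equal values detecting loops and
    inequalities suggesting unbounded growth, can be sketched as one
    bounded classifier. This is a heuristic sketch under illustrative
    assumptions (an integer-state step function, a finite probe
    window); a strict increase over a window suggests, but does not
    prove, divergence:

```python
# Sketch of the two static-analysis signals: a repeated value proves a
# cycle, while strict monotone growth over the probe window is taken as
# evidence of unbounded growth (heuristic, not proof).

def classify(step, state, window=1000):
    """Classify a run as 'loops', 'grows', or 'undecided (budget)'."""
    seen = {state}
    prev = state
    increasing = True
    for _ in range(window):
        nxt = step(prev)
        if nxt in seen:
            return "loops"           # equal values: a cycle exists
        if nxt <= prev:
            increasing = False       # growth argument fails here
        seen.add(nxt)
        prev = nxt
    return "grows" if increasing else "undecided (budget)"
```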

    Now, proving convergence or divergence is its own kind
    of thing. For example, there are series that converge
    very slowly, and series that diverge very slowly. These
    are subtly intractable to analysis.

    Then the usual idea of progressions that rapidly grow
    yet return to what's detectable, are pathological to
    analysis, only in resources not correctness or completion,
    vis-a-vis the subtle intractability of the convergence or
    divergence what either halts, or loops, or grows unboundedly.

    Separately "not-halts" into "loops or grows unboundedly",
    has that really for most matters of interest of understanding
    the state-space of programs, is "halts or enters loops"
    and not "grows unboundedly".

    I.e. if the Halting problem is basically subject the
    subtle intractability of slow convergence, that otherwise
    it can just infer divergence and decide, practically
    it's sort of more relevant what the machine would be
    doing on the input on the tape, then with respect to
    beyond the Turing theory, of the state of the read-head,
    what happens when somebody modifies the tape, or events,
    the write-head.

    Anyways though for bounded inputs, besides slow divergence,
    it's to be made clear that _most all_ and _almost all_
    programs _are_ decided their behavior by static analysis.

    Though, "most all" and "almost all" might be a bit strong,
    but pretty much all that don't involve "the subtle intractability
    of slow divergence".
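    The "subtle intractability of slow divergence" is easy to exhibit
    numerically: the harmonic series diverges, but only like ln n, so
    a bounded probe cannot distinguish it from convergence. A small
    demonstration (function names are illustrative):

```python
# Slow divergence in numbers: the geometric series settles within
# machine precision almost immediately, while the harmonic series
# diverges so slowly (~ln n) that a bounded probe sees it flatten.
import math

def partial_sum(term, n):
    return sum(term(k) for k in range(1, n + 1))

geom = lambda k: 2.0 ** -k      # converges to 1
harm = lambda k: 1.0 / k        # diverges like ln(k)

g  = partial_sum(geom, 60)       # indistinguishable from the limit 1
h1 = partial_sum(harm, 10_000)
h2 = partial_sum(harm, 100_000)  # ten times the work ...
growth = h2 - h1                 # ... buys only about ln(10) ~ 2.3
```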



    Giving the idea that an existence result
    is in any way the expected result here
    seems sort of the root of this dilemma.






    (Though that the real numbers in ZFC have a well-ordering
    and if they had a normal ordering that was a well-ordering,
    that would be a thing, because ZFC has a well-ordering of
    [0,1], but can't give one.)





    What I'm saying is that you'd need to solve all
    cases of slow convergence and slow divergence
    to have a correct and complete halt decider,
    for the unbounded, and finite, if not the infinite.

    Most though setups of convergence though
    would effectively exhaust according to the
    existence of criteria of convergence as modeling
    the computation, though, so, if you exclude
    slow convergence and slow divergence,
    then a subset of very usual machines is of
    course quite all decidable.  (Or decide-able,
    or as I put it once, "not deciddable".)



    Well-ordering the reals in ZFC, that's its own
    sort of issue - that a well-ordering implies the
    existence of a bijection from ordinals to a
    set, then as that as tuples, subsets of those
    are well-ordered by their ordinal part, then
    that if ever uncountably many of those are
    in normal order their real part and irrationals,
    then between those are rationals.  I.e.,
    ZFC doesn't give an example of well-ordering
    the reals, but there is one.


    So, Peter, I think you're kind of misreading that
    quote you authoritated.  It just says "if static
    analysis detects a loop or unboundedly growing
    then it's not-halts".


    Of course, for any bounded resource, there
    are arbitrarily larger bounded resources
    required by what's called pathological,
    that an arbitrarily larger resource can though solve.



    So anyways "slow convergence" and "slow divergence",
    have that in mathematics, it's unknown for some series
    whether they converge or diverge, though it's usually
    accepted that the inverse powers of 2 converge and
    that the sum of integers diverges, and that periodic
    functions do neither.

    I.e. the criteria of convergence and existence of a limit,
    and the special case "diverges as going to infinity",
    are not yet closed in today's mathematics.

    It's sort of a conjecture or independent whether
    they ever could be, closed, and complete, the criteria.



    I have spent 20 years on The conventional halting problem proofs,
    Gödel 1931 Incompleteness and the Liar Paradox. I have only spent
    about seven years on Tarski Undefinability.

    The Halting problem is the only one of these that has a formal
    system that cannot have hidden gaps in its reasoning. If a machine
    can do it then it can be done.


    "Algorithmics" is a study of algorithms. The most usual
    consideration is "asymptotics", "asymptotics of complexity",
    with the idea that computing does work, that there are
    closures and completions, and there are approximations and
    attenuations, as it were, in the full and the final and
    the initial and the partial.

    The most usual resources are time, and space, that actions
    occur in time, and according to a structure, in space.

    Of course any CS grad pretty much knows that.


    The objects of mathematics, have it so, that the finite
    and bounded resources of any collection of digital and
    binary machines, are finite and bounded, while, the unbounded
    models of mathematics that represent them, are unbounded,
    it's bad that anybody ever said a Turing tape was "infinite",
    except that it's unbounded as an object of mathematics, and
    the model of any finite, digital, binary machine.

    One might aver "digital" has a complement in computing,
    "analog", which represents the continuous instead of
    the discrete, and one might aver "binary" has a complement,
    either "real-valued" or "fuzzy" or variously about the
    "multi-valent", in logic, the models, according to the
    mechanisms, the models, their mechanics, machines.

    The Turing machine though is a model of unbounded machine
    as what any thusly simply mechanical model can implement,
    up to its bounds.


    So, the objects of mathematics, then involve analytical
    results. I can't draw a circle, though a compass may
    portray a diagram arbitrarily close, and no different.
    It's the analytical results, that make so for why
    mathematics is: "sensible, fungible, and tractable",
    and for "the ubiquitous success of mathematics" in
    all such matters of physics, which is a continuum mechanics,
    and as well discrete mechanics.

    Of course, results exist in mathematical objects,
    after the unbounded and complete,
    which is why closures and completions are so
    relevant to all such matters, vis-a-vis approximations
    and attenuations, which in bounded terms result
    indistinguishably non-different results,
    up to precision, say, or tolerances.

    So, "machine" might be discrete, binary, and digital,
    or it might be continuous, multivalent, and analog.

    The Turing machine in a concrete form, is bounded.

    Similar models, here mostly for the stack models,
    stack machines, among finite-state-machines then
    for example concepts like "unbounded nondeterminism",
    here is pretty much about determinism, about the
    defined behavior of a machine according to a
    mathematical model that given defined behavior
    of the mechanisms results a mechanical model of
    a mathematical model, such a theory as this.

    So, the theory of computing, with mechanical or
    computational models, models of computing, is
    pretty much exactly like the theory of physics,
    with the attachment of physical models to the
    mathematical models, and rather, "interpreted as",
    the models, the models, with respect to each other.

    I.e. the extensionality that so follows "they're
    equal, or same, or not different, they represent
    each other, they're equi-interpretable, they model
    each other", the models, of logical and mathematical
    and computational and mechanical and physical models,
    help represent that over all the entire thing there
    is a usual theory for the entire thing, it's a theory
    where model theory models extensionality, and in identity
    intensionality, about equality, tautology, identity,
    and sameness/difference, and nearness/farness, and all
    these usual aspects of the mathematical models'
    arithmetic, algebra, and geometry. (And function
    theory and topology, with respect to categories and
    types, sets and parts, relations and predicates,
    then all the model-building among that which is
    all the propositional or the stipulated or axiomatic.)

    The idea is that this sort of platonistic universe of
    mathematical objects has always existed, and it's just
    discovered, then that axiomatics for example, just
    sort of results a model theory of it, with regards
    to models, the modular, modulation, modality, and the mode.


    So, a machine, with respect to computation, establishing
    the validity of interpretations of models, is subject to
    filling out all the above, vis-a-vis what it can do,
    and it can-do.


    Then, "slowly converging" and "slowly diverging"
    are examples that get into that there are more
    law(s) of large numbers, than the usual classical
    law of large numbers.

    Some people variously do or don't have a mathematical
    model of larger numbers, or the infinite, at all.
    It's a totally ancient and dogmatic tradition that
    no, we finite peoples, persons, or agents of limited
    and finite means, no we do not have discrete, digital,
    binary mechanical models of the infinite.

    Yet, it's also about the first thing that deduction
    arrives at just from the plain contemplation of the
    beyond the unbounded, like the theories of Democritus
    and Atomism or the theories of Zeno and Continuity,
    that make it so we at least have something to arrive
    at for models of the unbounded, as "methods of exhaustion",
    the approximation and attenuative what results
    the asymptotics of algorithmics.

    So, we do have mental models of the continuous and infinite.
    And, where physics is a continuum mechanics,
    there are physical models of it as well, then
    with regards to the philosophy of science and
    the scientific method and the Big Science and
    from the Primary Science, lots to establish
    that the continuous and analog have mathematical
    results, as to whether they are "sensible, fungible,
    and tractable" to machines at all.

    Numerical methods then, and, approximation and
    attenuation, result from analysis, why they exist,
    here for the criteria of convergence and existence
    of limits, in the closed, in the complete.


    So, a metonymy, unless it's "the Strong Metonymy",
    which is utter intensionality of "the ground model",
    is yet a metaphor and a metaphor yet a weaker metonymy
    joint among weaker metonymies, that, ontology, has
    that there are or may be a completion, of ontology,
    as a "strong ontology" and "strong metonymy",
    but otherwise it's just a bag of facts.



    Here then there's certainly a perceived analytical
    development Cantor space, and, Square Cantor space,
    and, Signal Cantor space as it were, about that among
    the law(s) of large numbers, there are definitions of
    continuity, at least including field-reals, line-reals,
    and signal-reals, three sorts of continuous domains.


    Mathematics owes physics this to better begin to
    explain, to disclose, to find truths of, continuum mechanics.


    Then that analog, fuzzy, multi-value, "quantum" computing
    models (parts) of that, that discrete, binary, digital
    computing does not, is a thing.



    The convergence of terms in series is an example
    of a model of things, that, sometimes has clear
    and direct and perfectly accurate representations,
    models, in stack machine, and sometimes doesn't.


    Of course, for any bounded input, there is a sufficiently
    larger bounded analyzer, that does solve it. So, why not
    just put those all together? It would be infinite,
    yet most things are not. (Or, you know, are.)


    It's called "foundations", the study of all these things
    as a fundamental coherency, a thing together, while of
    its parts.

    So, students should well have the concepts of "abacus"
    and "slide rule" and other "tabulators and computers"
    of the usual manual mechanical variety down, according
    to the principles of the mathematical models behind them,
    then introducing the law(s) of large numbers, then
    introducing the idea of a standard model of an infinite,
    about rates, related rates, asymptotics, and bounds.


    "Foundations" then the idea is that it's been around
    forever, it's our dogma and from our canon and it develops
    over time, and the current edition is called "modern".

    Did I make any mistakes?

    Tarski anchors his proof in the Liar Paradox
    https://liarparadox.org/Tarski_247_248.pdf

    How Tarski encodes the Liar Paradox
    x ∉ True if and only if p
    where the symbol 'p' represents the whole sentence x

    This is transformed into line 01 of his proof
    by replacing True with Provable

    *Here is the Tarski Undefinability Theorem proof*
    (1) x ∉ Provable if and only if p   // assumption
    (2) x ∈ True if and only if p       // assumption
    (3) x ∉ Provable if and only if x ∈ True.
    (4) either x ∉ True or x̄ ∉ True;    // axiom: ~True(x) ∨ ~True(~x)
    (5) if x ∈ Provable, then x ∈ True; // axiom: Provable(x) → True(x)
    (6) if x̄ ∈ Provable, then x̄ ∈ True;  // axiom: Provable(~x) → True(~x)
    (7) x ∈ True
    (8) x ∉ Provable
    (9) x̄ ∉ Provable

    Tarski's Full proof
    https://liarparadox.org/Tarski_275_276.pdf





    Well I have some collected works of Tarski so
    I'll be looking to them, the excerpt there you
    reference basically talks about the model under
    consideration, as a fragment, of "the meta-theory",
    i.e., "Tarski's meta-theory" is "rather underdefined",
    that what he does is that he says that in the meta-theory,
    that there's something about the model under consideration,
    that isn't true in the model but is true, and he invokes
    Goedel to basically say that Goedel's incompleteness
    applies to this model, which of course must thus be
    "strong enough to support arithmetic", where otherwise
    of course, Goedel's incompleteness does _not_ apply
    to theories that may be entirely closed categories.

    It seems the issue is that you think that you have
    entirely closed categories, in your meta-theory,
    then are applying this meta-theory to another model
    under consideration, which isn't, so it seems like
    you have implicits on your model, which are underdefined,
    rather as the "properly logical" or "non-logical" are
    with respect to being modeled by the logical, that
    you have some sort of "un-logical" objects that are the
    result of that the axioms are un-logical, with regards
    to that your "meta-theory" is logical and your model
    under consideration, its _logical_ objects, are actually
    "un-logical" objects with respect to your, "meta-theory".


    That's a problem overall with "meta-theory", here the
    goal is to arrive at a theory that is its own meta-theory,
    that's also all one language, with regards to otherwise

    Yes that is exactly correct.

    *True(L,x) returns TRUE when x is True otherwise returns FALSE*

    *Easier to understand in English*
    LP = "This sentence is not true."
    Boolean True(English, LP) returns FALSE
    Boolean True(English, ~LP) returns FALSE

    I formalize this in my own minimal type theory that has actual
    self-reference using the := {is defined as} operator.
    https://en.wikipedia.org/wiki/List_of_logic_symbols

    LP := ~True(LP)

    *Minimal Type Theory* (YACC BNF)
    https://www.researchgate.net/publication/331859461_Minimal_Type_Theory_YACC_BNF

    Prolog rejects this
    ?- LP = not(true(LP)).
    LP = not(true(LP)).

    ?- unify_with_occurs_check(LP, not(true(LP))).
    false.
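    The occurs check that unify_with_occurs_check applies can be
    sketched over simple terms: a variable may not be bound to a term
    containing that same variable, which is exactly why
    LP = not(true(LP)) is rejected. An illustrative Python model (not
    Prolog's actual implementation):

```python
# Sketch of Prolog's occurs check: binding a variable to a term that
# contains that variable would create a cyclic (infinite) term, so the
# unification is refused. Terms here are strings (variables) or
# ('functor', arg1, arg2, ...) tuples; an illustrative model only.

def occurs(var, term):
    """True if variable var occurs anywhere inside term."""
    if term == var:
        return True
    if isinstance(term, tuple):              # ('functor', *args)
        return any(occurs(var, arg) for arg in term[1:])
    return False

def unify_var(var, term):
    """Bind var to term, refusing cyclic (self-containing) bindings."""
    if occurs(var, term):
        return None                          # occurs check fails
    return {var: term}

# LP = not(true(LP)) is rejected, like unify_with_occurs_check:
lp_binding = unify_var("LP", ("not", ("true", "LP")))   # None
ok_binding = unify_var("X", ("not", ("true", "LP")))    # succeeds
```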

    It seems that Tarski's theory get stuck on the Liar Paradox
    whereas mine does not because it is anchored in the Prolog
    model where True means proven by axioms and false means
    (negation as failure) unprovable from axioms.

    We can still get back to the conventional notion of false
    when the negation of the sentence is unprovable from axioms.

    these sorts of "fragments", of theories, interpreting each
    other, which is only ordinary, and when the objects
    are structurally self-defining or under-defined, as it
    were, when you're talking about quantifier ambiguity,
    in a fragment with decision or the non-logical about
    quantifier ambiguity in the meta-theory, then you've
    basically stipulations which aren't so much "true axioms"
    as "preconceived notions". They're stipulations.

    It seems you're sort of invoking a wash among
    meta-theory, theory, fragment, theory, meta-theory,
    theory, fragment, theory - and not modeling it,
    instead being sort of memoryless and unconscious
    about it, it's sort of considered unconscientious
    about it.


    I've been talking about these kinds of things in
    my podcasts recently, or I read a biography of Tarski
    and sort of address modern metaphysics and with regards
    to formal foundations.

    https://www.youtube.com/watch?v=lNYTU97XCMw&list=PLb7rLSBiE7F4eHy5vT61UYFR7_BIhwcOY&index=28

    For example the ontology and teleology have a
    sort of dualism, here like "fragment" and "meta-theory",
    the goal is a sort of "monist dualism" that's a
    sort of "dualist monism", here about the "logical".


    Good luck though it's a usual thing, about what
    there's basically _expansion_ of comprehension first,
    then that _restriction_ of comprehension is always
    fragments, it really does get into ideas like, for
    computability theory "the model of the integers,
    is only either a fragment or extension, more than
    standard".

    So, book-keeping your meta-theories or abstractions
    and fragments or grasps, it's like, "get a grip",
    right, but, "left-hand/right-hand".

    It's sort of like "left-hand/right-hand:
    stop hitting yourself, I'm not doing it".

    --
    Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Richard Damon@richard@damon-family.org to comp.theory,sci.logic,comp.ai.philosophy on Thu Mar 14 15:01:28 2024
    From Newsgroup: comp.ai.philosophy

    On 3/14/24 12:01 PM, olcott wrote:
    On 3/14/2024 11:58 AM, Ross Finlayson wrote:
    On 03/13/2024 10:20 PM, olcott wrote:
    On 3/13/2024 1:16 PM, Ross Finlayson wrote:
    On 03/12/2024 09:00 PM, olcott wrote:
    On 3/12/2024 10:49 PM, Ross Finlayson wrote:
    On 03/12/2024 08:23 PM, Ross Finlayson wrote:
    On 03/12/2024 07:52 PM, olcott wrote:
    On 3/12/2024 9:28 PM, Richard Damon wrote:
    On 3/12/24 4:31 PM, olcott wrote:
    On 3/12/2024 6:11 PM, Richard Damon wrote:
    On 3/12/24 3:53 PM, olcott wrote:
    On 3/12/2024 5:30 PM, Richard Damon wrote:
    On 3/12/24 2:34 PM, olcott wrote:
    On 3/12/2024 4:23 PM, Richard Damon wrote:
    On 3/12/24 1:11 PM, olcott wrote:
    On 3/12/2024 2:40 PM, Richard Damon wrote:
    On 3/12/24 12:02 PM, olcott wrote:
    On 3/12/2024 1:31 PM, immibis wrote:
    On 12/03/24 19:12, olcott wrote:
    ∀ H ∈ Turing_Machine_Deciders
    ∃ TMD ∈ Turing_Machine_Descriptions |
    Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)

    There is some input TMD to every H such that
    Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)

    And it can be a different TMD to each H.

    When we disallow decider/input pairs that are incorrect
    questions where both YES and NO are the wrong answer

    Once we understand that either YES or NO is the right
    answer, the whole rebuttal is tossed out as invalid and
    incorrect.


    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩ does not halt
    BOTH YES AND NO ARE THE WRONG ANSWER FOR EVERY Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩
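    The Ĥ template above is ordinary diagonalization, and can be sketched in
    plain code rather than Turing-machine notation. This is an illustrative
    Python sketch, with two stub "deciders" standing in for the Hqy and Hqn
    branches (the names are invented for the sketch): whichever Boolean a
    decider returns for the pathological program built from it, that answer
    is wrong.

```python
def make_pathological(halts):
    """Build a program that does the opposite of what `halts` predicts."""
    def pathological():
        if halts(pathological):
            while True:      # predicted to halt -> loop forever
                pass
        # predicted not to halt -> fall through and return (i.e., halt)
    return pathological

# Stub deciders that differ only by return value.
always_yes = lambda prog: True
always_no  = lambda prog: False

p_yes = make_pathological(always_yes)  # would loop forever, so YES is wrong
p_no  = make_pathological(always_no)   # halts immediately, so NO is wrong
p_no()                                 # safe to call: it returns at once
```

    Note that p_yes and p_no are different programs (they close over
    different deciders), which is the point of contention in the exchange
    above: each decider gets its own counterexample input.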

    No, because a given H will only go to one of the answers. THAT
    will be wrong, and the other one right.


    ∀ H ∈ Turing_Machine_Deciders
    ∃ TMD ∈ Turing_Machine_Descriptions  |
    Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)

    Not exactly. A pair of otherwise identical machines
    (that are contained within the above specified set) that
    only differ by return value will both be wrong on the
    same pathological input.

    You mean a pair of DIFFERENT machines. Any difference is
    different.

    Every decider/input pair (referenced in the above set) has a
    corresponding decider/input pair that only differs by the return
    value of its decider.

    Nope.

    ∀ H ∈ Turing_Machines_Returning_Boolean
    ∃ TMD ∈ Turing_Machine_Descriptions  |
    Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)

    Every H/TMD pair (referenced in the above set) has a
    corresponding H/TMD pair that only differs by the return
    value of its Boolean_TM.

    That isn't in the set above.


    That both of these H/TMD pairs get the wrong answer proves that
    their question was incorrect because the opposite answer to the
    same question is also proven to be incorrect.


    Nope, since both aren't in the set selected.


    When they are deciders that must get the correct answer, both
    of them are not in the set.

    *IF* they are correct deciders.

    When we select from all Turing Machine Deciders, there is no
    requirement that any of them get any particular answer right.

    So, ALL deciders are in the set that we cycle through, and we apply
    the following logic to ALL of them.

    Each of them is paired with an input that it will get wrong, and the
    existence of that input is what was just proven by the Ĥ template.

    When they are Turing_Machines_Returning_Boolean then this
    set inherently includes identical pairs that only differ
    by return value.

    But in the step of selecting an input that they will get wrong, they
    will be given DIFFERENT inputs.


    You just don't understand what that statement is saying.

    I've explained it, but it seems over your head.

    No, the problem is that you are not paying attention.

    No, you keep on making STUPID mistakes, like thinking that
    selecting an input that the machine will get wrong needs to be
    the same for two different machines.




    For every H, we show we can find at least one input (chosen just for
    that machine) that it will get wrong.

    When we use machine templates then we can see instances of
    the same machine that only differ by return value where both
    get the wrong answer on the same input. By same input I mean
    the same finite string of numerical values.


    But if they returned different values, they will have different
    descriptions.

    Otherwise, how could a UTM get the right answer, since it only gets
    the description?

    We can get around all of this stuff by simply using this criterion:
    Date 10/13/2022 11:29:23 AM
    *MIT Professor Michael Sipser agreed this verbatim paragraph is correct*
    (He has neither reviewed nor agreed to anything else in this paper)
    (a) If simulating halt decider H correctly simulates its input D until H
    correctly determines that its simulated D would never stop running
    unless aborted then
    (b) H can abort its simulation of D and correctly report that D
    specifies a non-halting sequence of configurations.

    *When we apply this criterion* (elaborated above)
    Will you halt if you never abort your simulation?
    *Then the halting problem is conquered*

    When two different machines implementing this criterion
    get different results from identical inputs then we
    know that Pathological Self-Reference has been detected.

    We don't even need to know that for:
    *denial-of-service-attack detection*
    *NO always means reject as unsafe*
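    For simple, fully observable programs, the abort-on-detected-repetition
    idea can be sketched directly. This is a hedged Python illustration of
    the general shape only (a step-limited simulator over pure step
    functions, with invented names), not the decider H discussed above and
    not a general halt decider: an exact repeat of a configuration in a
    deterministic run proves non-halting, and hitting the resource bound
    falls back to "NO means reject as unsafe".

```python
def simulating_decider(step, start, max_steps=10_000):
    """Simulate `step` from configuration `start`.

    Returns True if the run reaches the halting configuration None.
    Returns False (reject as unsafe) if a configuration repeats --
    a deterministic run that revisits a state can never halt --
    or if the step budget is exhausted without a verdict."""
    seen = set()
    config = start
    for _ in range(max_steps):
        if config is None:        # reached a halting configuration
            return True
        if config in seen:        # exact repeat: provably loops forever
            return False
        seen.add(config)
        config = step(config)
    return False                  # budget exhausted: NO means unsafe

count_down = lambda n: None if n == 0 else n - 1   # halts
spin       = lambda n: (n + 1) % 3                 # cycles 0,1,2,0,...
```

    A usage sketch: simulating_decider(count_down, 5) answers True, while
    simulating_decider(spin, 0) detects the cycle and answers False.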


    But, Halting theorem never said "there's an input that halts
    all machines", it just says "for any machine, there's an input
    that halts it".

    Where "halt the machine" means "put it in an infinite loop".

    So, rather, Halting theorem never said "there's an input that
    exhausts all machines", it just says, "for any machine, there's
    an input that exhausts it".

    I still don't see how that would be with infinite tapes though,
    without a means of checking all the way to the right of the tape in
    one step, i.e. that it's 1's or 0's or any pattern at all; any
    input that is unbounded with respect to the machine basically
    exhausts it, or the machine would run in the unbounded.


    Of course any finite tape has a static analysis that is
    not infinite, that decides whether or not it halts
    (or, loops, or grows, the state space of the decider).

    Static analysis has to either enumerate or _infer_ the
    state-space, where equal values in what's determined
    the idempotent can detect loops, while inequalities
    or proven divergence, ..., can detect unbounded growth.
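    The "equal values detect loops, proven growth detects divergence" idea
    can be sketched for the simplest case, a deterministic step function
    over single integer states. This is only an illustrative sketch under
    that assumption (the names and the "grows?" heuristic are invented
    here); real static analysis infers the state-space rather than
    enumerating it.

```python
def classify(step, start, budget=1000):
    """Heuristically classify a run as 'halts', 'loops', or 'grows?'.

    A repeated state proves a loop (equal values detect cycles).
    A strictly increasing trace past the budget only *suggests*
    unbounded growth, hence the hedged 'grows?' answer."""
    seen, trace, state = set(), [], start
    for _ in range(budget):
        if state is None:
            return "halts"
        if state in seen:
            return "loops"
        seen.add(state)
        trace.append(state)
        state = step(state)
    increasing = all(a < b for a, b in zip(trace, trace[1:]))
    return "grows?" if increasing else "unknown"
```

    For example, classify(lambda n: None if n <= 0 else n - 2, 9) reports
    "halts", classify(lambda n: (n * 3) % 7, 1) reports "loops", and
    classify(lambda n: n + 1, 0) reports "grows?".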

    Now, proving convergence or divergence is its own kind
    of thing. For example, there are series that converge
    very slowly, and series that diverge very slowly. These
    are subtly intractable to analysis.
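    The harmonic series is the classic slowly diverging example: its partial
    sums exceed any bound, but only logarithmically, while Σ 1/n² converges
    to π²/6. A quick numerical sketch of why finite observation cannot
    settle the question:

```python
import math

def partial_sum(term, n):
    """Sum term(k) for k = 1..n."""
    return sum(term(k) for k in range(1, n + 1))

# Divergent, but after a million terms still only about 14.4,
# tracking ln(n) + Euler-Mascheroni gamma.
harmonic = partial_sum(lambda k: 1.0 / k, 10**6)

# Convergent, approaching pi^2/6 with error about 1/n.
basel = partial_sum(lambda k: 1.0 / k**2, 10**6)
```

    Both runs look similar from the outside, which is the "subtle
    intractability" being described: no bounded prefix of the partial sums
    distinguishes slow divergence from slow convergence by inspection.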

    Then the usual idea of progressions that rapidly grow
    yet return to what's detectable, are pathological to
    analysis, only in resources not correctness or completion,
    vis-a-vis the subtle intractability of the convergence or
    divergence what either halts, or loops, or grows unboundedly.

    Separately "not-halts" into "loops or grows unboundedly",
    has that really for most matters of interest of understanding
    the state-space of programs, is "halts or enters loops"
    and not "grows unboundedly".

    I.e. if the Halting problem is basically subject to the
    subtle intractability of slow convergence, that otherwise
    it can just infer divergence and decide, practically
    it's sort of more relevant what the machine would be
    doing on the input on the tape, then with respect to
    beyond the Turing theory, of the state of the read-head,
    what happens when somebody modifies the tape, or events,
    the write-head.

    Anyways though for bounded inputs, besides slow divergence,
    it's to be made clear that _most all_ and _almost all_
    programs _are_ decided their behavior by static analysis.

    Though, "most all" and "almost all" might be a bit strong,
    but pretty much all that don't involve "the subtle intractability
    of slow divergence".



    Giving the idea that an existence result
    is in any way the expected result here
    seems sort of the root of this dilemma.






    (Though that the real numbers in ZFC have a well-ordering
    and if they had a normal ordering that was a well-ordering,
    that would be a thing, because ZFC has a well-ordering of
    [0,1], but can't give one.)





    What I'm saying is that you'd need to solve all
    cases of slow convergence and slow divergence
    to have a correct and complete halt decider,
    for the unbounded, and finite, if not the infinite.

    Most setups of convergence, though,
    would effectively exhaust according to the
    existence of criteria of convergence as modeling
    the computation, though, so, if you exclude
    slow convergence and slow divergence,
    then a subset of very usual machines is of
    course quite all decidable.  (Or decide-able,
    or as I put it once, "not deciddable".)



    Well-ordering the reals in ZFC, that's its own
    sort of issue - that a well-ordering implies the
    existence of a bijection from ordinals to a
    set, then as that as tuples, subsets of those
    are well-ordered by their ordinal part, then
    that if ever uncountably many of those are
    in normal order their real part and irrationals,
    then between those are rationals.  I.e.,
    ZFC doesn't give an example of well-ordering
    the reals, but there is one.


    So, Peter, I think you're kind of misreading that
    quote you authoritated.  It just says "if static
    analysis detects a loop or unboundedly growing
    then it's not-halts".


    Of course, for any bounded resource, there
    are arbitrarily larger bounded resources
    required by what's called pathological,
    that an arbitrarily larger resource can though solve.



    So anyways "slow convergence" and "slow divergence",
    have that in mathematics, it's unknown for some series
    whether they converge or diverge, though it's usually
    accepted that the inverse powers of 2 converge and
    that the sum of integers diverges, and that periodic
    functions do neither.

    I.e. the criteria of convergence and existence of a limit,
    and the special case "diverges as going to infinity",
    are not yet closed in today's mathematics.

    It's sort of a conjecture or independent whether
    they ever could be, closed, and complete, the criteria.



    I have spent 20 years on The conventional halting problem proofs,
    Gödel 1931 Incompleteness and the Liar Paradox. I have only spent
    about seven years on Tarski Undefinability.

    The Halting problem is the only one of these that has a formal system
    that cannot have hidden gaps in its reasoning. If a machine can do it
    then it can be done.


    "Algorithmics" is a study of algorithms. The most usual
    consideration is "asymptotics", "asymptotics of complexity",
    with the idea that computing does work, that there are
    closures and completions, and there are approximations and
    attenuations, as it were, in the full and the final and
    the initial and the partial.

    The most usual resources are time, and space, that actions
    occur in time, and according to a structure, in space.

    Of course any CS grad pretty much knows that.


    The objects of mathematics, have it so, that the finite
    and bounded resources of any collection of digital and
    binary machines, are finite and bounded, while, the unbounded
    models of mathematics that represent them, are unbounded,
    it's bad that anybody ever said a Turing tape was "infinite",
    except that it's unbounded as an object of mathematics, and
    the model of any finite, digital, binary machine.

    One might aver "digital" has a complement in computing,
    "analog", which represents the continuous instead of
    the discrete, and one might aver "binary" has a complement,
    either "real-valued" or "fuzzy" or variously about the
    "multi-valent", in logic, the models, according to the
    mechanisms, the models, their mechanics, machines.

    The Turing machine though is a model of unbounded machine
    as what any thusly simply mechanical model can implement,
    up to its bounds.


    So, the objects of mathematics, then involve analytical
    results. I can't draw a circle, though a compass may
    portray a diagram arbitrarily close, and no different.
    It's the analytical results, that make so for why
    mathematics is: "sensible, fungible, and tractable",
    and for "the ubiquitous success of mathematics" in
    all such matters of physics, which is a continuum mechanics,
    and as well discrete mechanics.

    Of course, results exist in mathematical objects,
    after the unbounded and complete,
    which is why closures and completions are so
    relevant to all such matters, vis-a-vis approximations
    and attenuations, which in bounded terms result
    indistinguishably non-different results,
    up to precision, say, or tolerances.

    So, "machine" might be discrete, binary, and digital,
    or it might be continuous, multivalent, and analog.

    The Turing machine in a concrete form, is bounded.

    Similar models, here mostly for the stack models,
    stack machines, among finite-state-machines then
    for example concepts like "unbounded nondeterminism",
    here is pretty much about determinism, about the
    defined behavior of a machine according to a
    mathematical model that given defined behavior
    of the mechanisms results a mechanical model of
    a mathematical model, such a theory as this.

    So, the theory of computing, with mechanical or
    computational models, models of computing, is
    pretty much exactly like the theory of physics,
    with the attachment of physical models to the
    mathematical models, and rather, "interpreted as",
    the models, the models, with respect to each other.

    I.e. the extensionality that so follows "they're
    equal, or same, or not different, they represent
    each other, they're equi-interpretable, they model
    each other", the models, of logical and mathematical
    and computational and mechanical and physical models,
    help represent that over all the entire thing there
    is a usual theory for the entire thing, it's a theory
    where model theory models extensionality, and in identity
    intensionality, about equality, tautology, identity,
    and sameness/difference, and nearness/farness, and all
    these usual aspects of the mathematical models'
    arithmetic, algebra, and geometry. (And function
    theory and topology, with respect to categories and
    types, sets and parts, relations and predicates,
    then all the model-building among that which is
    all the propositional or the stipulated or axiomatic.)

    The idea is that this sort of platonistic universe of
    mathematical objects has always existed, and it's just
    discovered, then that axiomatics for example, just
    sort of results a model theory of it, with regards
    to models, the modular, modulation, modality, and the mode.


    So, a machine, with respect to computation, establishing
    the validity of interpretations of models, is subject to
    filling out all the above, vis-a-vis what it can do,
    and it can-do.


    Then, "slowly converging" and "slowly diverging"
    are examples that get into that there are more
    law(s) of large numbers, than the usual classical
    law of large numbers.

    Some people variously do or don't have a mathematical
    model of larger numbers, or the infinite, at all.
    It's a totally ancient and dogmatic tradition that
    no, we finite peoples, persons, or agents of limited
    and finite means, no we do not have discrete, digital,
    binary mechanical models of the infinite.

    Yet, it's also about the first thing that deduction
    arrives at just from the plain contemplation of the
    beyond the unbounded, like the theories of Democritus
    and Atomism or the theories of Zeno and Continuity,
    that make it so we at least have something to arrive
    at for models of the unbounded, as "methods of exhaustion",
    the approximation and attenuative what results
    the asymptotics of algorithmics.

    So, we do have mental models of the continuous and infinite.
    And, where physics is a continuum mechanics,
    there are physical models of it as well, then
    with regards to the philosophy of science and
    the scientific method and the Big Science and
    from the Primary Science, lots to establish
    that the continuous and analog has mathematical
    results, as whether "sensible, fungible, and tractable",
    to machines at all.

    Numerical methods then, and, approximation and
    attenuation, result from analysis, why they exist,
    here for the criteria of convergence and existence
    of limits, in the closed, in the complete.


    So, a metonymy, unless it's "the Strong Metonymy",
    which is utter intensionality of "the ground model",
    is yet a metaphor and a metaphor yet a weaker metonymy
    joint among weaker metonymies, that, ontology, has
    that there are or may be a completion, of ontology,
    as a "strong ontology" and "strong metonymy",
    but otherwise it's just a bag of facts.



    Here then there's certainly a perceived analytical
    development of Cantor space, and, Square Cantor space,
    and, Signal Cantor space as it were, about that among
    the law(s) of large numbers, there are definitions of
    continuity, at least including field-reals, line-reals,
    and signal-reals, three sorts continuous domains.


    Mathematics owes physics this to better begin to
    explain, to disclose, to find truths of, continuum mechanics.


    Then that analog, fuzzy, multi-value, "quantum" computing
    models (parts) of that, that discrete, binary, digital
    computing does not, is a thing.



    The convergence of terms in series is an example
    of a model of things, that, sometimes has clear
    and direct and perfectly accurate representations,
    models, in stack machine, and sometimes doesn't.


    Of course, for any bounded input, there is a sufficiently
    larger bounded analyzer, that does solve it. So, why not
    just put those all together? It would be infinite,
    yet most things are not. (Or, you know, are.)


    It's called "foundations" the study of all these things
    as a fundamental coherency, a thing together, while of
    its parts.

    So, students should well have the concepts of "abacus"
    and "slide rule" and other "tabulators and computers"
    of the usual manual mechanical variety down, according
    to the principles of the mathematical models behind them,
    then introducing the law(s) of large numbers, then
    introducing the idea of a standard model of an infinite,
    about rates, related rates, asymptotics, and bounds.


    "Foundations" then the idea is that it's been around
    forever, it's our dogma and from our canon and it develops
    over time, and the current edition is called "modern".

    Did I make any mistakes?

    Tarski anchors his proof in the Liar Paradox
    https://liarparadox.org/Tarski_247_248.pdf

    How Tarski encodes the Liar Paradox
    x ∉ True if and only if p
    where the symbol 'p' represents the whole sentence x

    This is transformed into line 01 of his proof
    by replacing True with Provable

    *Here is the Tarski Undefinability Theorem proof*
    (1) x ∉ Provable if and only if p   // assumption
    (2) x ∈ True if and only if p       // assumption
    (3) x ∉ Provable if and only if x ∈ True.
    (4) either x ∉ True or x̄ ∉ True;    // axiom: ~True(x) ∨ ~True(~x)
    (5) if x ∈ Provable, then x ∈ True;  // axiom: Provable(x) → True(x)
    (6) if x̄ ∈ Provable, then x̄ ∈ True;  // axiom: Provable(~x) → True(~x)
    (7) x ∈ True
    (8) x ∉ Provable
    (9) x̄ ∉ Provable
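    Steps (3) and (7)-(9) follow propositionally from (1), (2) and (4)-(6),
    and that much can be machine-checked by brute force over all valuations
    of the atoms. A small Python sketch, where p, tx (x ∈ True),
    tnx (x̄ ∈ True), px (x ∈ Provable), and pnx (x̄ ∈ Provable) are Booleans
    (the variable names are invented for the sketch):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# Collect every valuation that satisfies assumptions (1),(2),(4),(5),(6).
valid = []
for p, tx, tnx, px, pnx in product([False, True], repeat=5):
    assumptions = [
        (not px) == p,          # (1) x not in Provable iff p
        tx == p,                # (2) x in True iff p
        (not tx) or (not tnx),  # (4) ~True(x) or ~True(~x)
        implies(px, tx),        # (5) Provable(x) -> True(x)
        implies(pnx, tnx),      # (6) Provable(~x) -> True(~x)
    ]
    if all(assumptions):
        valid.append((tx, px, pnx))

# In every model of the assumptions, conclusions (7)-(9) hold.
assert valid                                  # jointly satisfiable
assert all(tx for tx, px, pnx in valid)       # (7) x in True
assert all(not px for tx, px, pnx in valid)   # (8) x not in Provable
assert all(not pnx for tx, px, pnx in valid)  # (9) ~x not in Provable
```

    This checks only the propositional skeleton of the listed steps; it
    says nothing about Tarski's actual object-language encoding.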

    Tarski's Full proof
    https://liarparadox.org/Tarski_275_276.pdf





    Well I have some collected works of Tarski so
    I'll be looking to them, the excerpt there you
    reference basically talks about the model under
    consideration, as a fragment, of "the meta-theory",
    i.e., "Tarski's meta-theory" is "rather underdefined",
    that what he does is that he says that in the meta-theory,
    that there's something about the model under consideration,
    that isn't true in the model but is true, and he invokes
    Goedel to basically say that Goedel's incompleteness
    applies to this model, which of course must thusly by
    "strong enough to support arithmetic", where otherwise
    of course, Goedel's incompleteness does _not_ apply
    to theories that may be entirely closed categories.

    It seems the issue is that you think that you have
    entirely closed categories, in your meta-theory,
    then are applying this meta-theory to another model
    under consideration, which isn't, so it seems like
    you have implicits on your model, which are underdefined,
    rather as the "properly logical" or "non-logical" are
    with respect to being modeled by the logical, that
    you have the some sort of "un-logical" that are the
    result of that the axioms are un-logical, with regards
    to that your "meta-theory" is logical and your model
    under consideration, its _logical_ objects, are actually
    "un-logical" objects with respect to your, "meta-theory".


    That's a problem overall with "meta-theory", here the
    goal is to arrive at a theory that is its own meta-theory,
    that's also all one language, with regards to otherwise

    Yes that is exactly correct.

    *True(L,x) returns TRUE when x is True otherwise returns FALSE*

    So what does True(L, S) return when S is defined as S := ~True(L,S)


    *Easier to understand in English*
    LP = "This sentence is not true."
    Boolean True(English, LP)  returns FALSE
    Boolean True(English, ~LP) returns FALSE

    I formalize this in my own minimal type theory that has actual
    self-reference using the := {is defined as} operator.
    https://en.wikipedia.org/wiki/List_of_logic_symbols

    LP := ~True(LP)

    *Minimal Type Theory* (YACC BNF) https://www.researchgate.net/publication/331859461_Minimal_Type_Theory_YACC_BNF

    Prolog rejects this
    ?- LP = not(true(LP)).
    LP = not(true(LP)).

    ?- unify_with_occurs_check(LP, not(true(LP))).
    false.
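    Prolog's unify_with_occurs_check/2 fails here because the variable LP
    occurs inside the very term it would be bound to. A minimal Python
    sketch of that occurs check, with compound terms represented as tuples
    (functor first) and variables as strings; the helper names are invented
    for the sketch:

```python
def occurs(var, term):
    """True if variable `var` appears anywhere inside `term`."""
    if term == var:
        return True
    if isinstance(term, tuple):              # compound term: (functor, args...)
        return any(occurs(var, arg) for arg in term[1:])
    return False

def bind_with_occurs_check(var, term):
    """Mimic unify_with_occurs_check for the binding Var = Term.
    Returns a substitution dict, or None on failure."""
    return None if occurs(var, term) else {var: term}

# LP = not(true(LP)) -- the self-referential sentence above.
LP = "LP"
liar = ("not", ("true", LP))
```

    Here bind_with_occurs_check(LP, liar) yields None, mirroring Prolog's
    false.: the binding would create an infinite term, which is exactly the
    structural self-reference being rejected.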

    It seems that Tarski's theory gets stuck on the Liar Paradox
    whereas mine does not because it is anchored in the Prolog
    model where True means proven from axioms and False means
    (negation as failure) unprovable from axioms.

    We can still get back to the conventional notion of false
    when the negation of the sentence is unprovable from axioms.

    So, what does True(L, S) return?

    If S is untrue, doesn't that make True(L,S) false, and thus S true?


    these sorts "fragments", of theories, interpreting each
    other, which is only ordinary, and when the objects
    are structurally self-defining or under-defined, as it
    were, when you're talking about quantifier ambiguity,
    in a fragment with decision or the non-logical about
    quantifier ambiguity in the meta-theory, then you've
    basically stipulations which aren't so much "true axioms"
    as "preconceived notions". They're stipulations.

    It seems you're sort of invoking a wash among
    meta-theory, theory, fragment, theory, meta-theory,
    theory, fragment, theory - and not modeling it,
    instead being sort of memoryless and unconscious
    about it, it's sort of considered unconscientious
    about it.


    I've been talking about these kinds of things in
    my podcasts recently, or I read a biography of Tarski
    and sort of address modern metaphysics and with regards
    to formal foundations.

    https://www.youtube.com/watch?v=lNYTU97XCMw&list=PLb7rLSBiE7F4eHy5vT61UYFR7_BIhwcOY&index=28

    For example the ontology and teleology have a
    sort of dualism, here like "fragment" and "meta-theory",
    the goal is a sort of "monist dualism" that's a
    sort of "dualist monism", here about the "logical".


    Good luck though it's a usual thing, about what
    there's basically _expansion_ of comprehension first,
    then that _restriction_ of comprehension is always
    fragments, it really does get into idea like for
    computability theory "the model of the integers,
    is only either a fragment or extension, more than
    standard".

    So, book-keeping your meta-theories or abstractions
    and fragments or grasps, it's like, "get a grip",
    right, but, "left-hand/right-hand".

    It's sort of like "left-hand/right-hand:
    stop hitting yourself, I'm not doing it".



    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From olcott@polcott2@gmail.com to comp.theory,sci.logic,comp.ai.philosophy on Thu Mar 14 17:54:24 2024
    From Newsgroup: comp.ai.philosophy

    On 3/14/2024 5:01 PM, Richard Damon wrote:
    On 3/14/24 12:01 PM, olcott wrote:
    On 3/14/2024 11:58 AM, Ross Finlayson wrote:
    On 03/13/2024 10:20 PM, olcott wrote:
    On 3/13/2024 1:16 PM, Ross Finlayson wrote:
    On 03/12/2024 09:00 PM, olcott wrote:
    On 3/12/2024 10:49 PM, Ross Finlayson wrote:
    On 03/12/2024 08:23 PM, Ross Finlayson wrote:
    On 03/12/2024 07:52 PM, olcott wrote:
    On 3/12/2024 9:28 PM, Richard Damon wrote:
    On 3/12/24 4:31 PM, olcott wrote:
    On 3/12/2024 6:11 PM, Richard Damon wrote:
    On 3/12/24 3:53 PM, olcott wrote:
    On 3/12/2024 5:30 PM, Richard Damon wrote:
    On 3/12/24 2:34 PM, olcott wrote:
    On 3/12/2024 4:23 PM, Richard Damon wrote:
    On 3/12/24 1:11 PM, olcott wrote:
    On 3/12/2024 2:40 PM, Richard Damon wrote:
    On 3/12/24 12:02 PM, olcott wrote:
    On 3/12/2024 1:31 PM, immibis wrote:
    On 12/03/24 19:12, olcott wrote:
    ∀ H ∈ Turing_Machine_Deciders
    ∃ TMD ∈ Turing_Machine_Descriptions  | >>>>>>>>>>>>>>>>>>>>> Predicted_Behavior(H, TMD) != Actual_Behavior(TMD) >>>>>>>>>>>>>>>>>>>>>
    There is some input TMD to every H such that >>>>>>>>>>>>>>>>>>>>> Predicted_Behavior(H, TMD) != Actual_Behavior(TMD) >>>>>>>>>>>>>>>>>>>>
    And it can be a different TMD to each H. >>>>>>>>>>>>>>>>>>>>
    When we disallow decider/input pairs that are >>>>>>>>>>>>>>>>>>>>> incorrect
    questions where both YES and NO are the wrong answer >>>>>>>>>>>>>>>>>>>>
    Once we understand that either YES or NO is the right >>>>>>>>>>>>>>>>>>>> answer, the whole rebuttal is tossed out as invalid and >>>>>>>>>>>>>>>>>>>> incorrect.


    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩
    halts
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩
    does
    not halt
    BOTH YES AND NO ARE THE WRONG ANSWER FOR EVERY Ĥ.H >>>>>>>>>>>>>>>>>>> ⟨Ĥ⟩ ⟨Ĥ⟩

    No, because a given H will only go to one of the answers. >>>>>>>>>>>>>>>>>> THAT
    will be wrong, and the other one right.


    ∀ H ∈ Turing_Machine_Deciders
    ∃ TMD ∈ Turing_Machine_Descriptions  |
    Predicted_Behavior(H, TMD) != Actual_Behavior(TMD) >>>>>>>>>>>>>>>>>
    Not exactly. A pair of otherwise identical machines that >>>>>>>>>>>>>>>>> (that are contained within the above specified set) >>>>>>>>>>>>>>>>> only differ by return value will both be wrong on the >>>>>>>>>>>>>>>>> same pathological input.

    You mean a pair of DIFFERENT machines. Any difference is >>>>>>>>>>>>>>>> different.

    Every decider/input pair (referenced in the above set) has a >>>>>>>>>>>>>>> corresponding decider/input pair that only differs by the >>>>>>>>>>>>>>> return
    value of its decider.

    Nope.

    ∀ H ∈ Turing_Machines_Returning_Boolean
    ∃ TMD ∈ Turing_Machine_Descriptions  |
    Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)

    Every H/TMD pair (referenced in the above set) has a >>>>>>>>>>>>> corresponding H/TMD pair that only differs by the return >>>>>>>>>>>>> value of its Boolean_TM.

    That isn't in the set above.


    That both of these H/TMD pairs get the wrong answer proves >>>>>>>>>>>>> that
    their question was incorrect because the opposite answer to >>>>>>>>>>>>> the
    same question is also proven to be incorrect.


    Nope, since both aren't in the set selected.


    When they are deciders that must get the correct answer both >>>>>>>>>>> of them are not in the set.

    *IF* they are correct decider.

    WHen we select from all Turing Machine Deciders, there is no >>>>>>>>>> requirement that any of them get any particular answer right. >>>>>>>>>>
    So, ALL deciders are in the set that we cycle through and >>>>>>>>>> apply the
    following logic to ALL of them.

    Each is them paired with an input that it will get wrong, and the >>>>>>>>>> existance of the input was what as just proven, the ^ template >>>>>>>>>>

    When they are Turing_Machines_Returning_Boolean the this >>>>>>>>>>> set inherently includes identical pairs that only differ >>>>>>>>>>> by return value.

    But in the step of select and input that they will get wrong, >>>>>>>>>> they
    will be givne DIFFERENT inputs.


    You just don't understand what that statement is saying. >>>>>>>>>>>>
    I've expalined it, but it seems over you head.

    No the problem is that you are not paying attention.

    No, you keep on making STUPID mistakes, like thinking that >>>>>>>>>> select a
    input that the machine will get wrong needs to be the same for >>>>>>>>>> two
    differnt machines.




    For Every H, we show we can find at least one input (chosen >>>>>>>>>>>> just for
    that machine) that it will get wrong.

    When we use machine templates then we can see instances of >>>>>>>>>>> the same machine that only differs by return value where both >>>>>>>>>>> get the wrong answer on the same input. By same input I mean >>>>>>>>>>> the same finite string of numerical values.


    But if they returned differnt values, they will have different >>>>>>>>>> descriptions.

    Otherwise, how could a UTM get the right answer, since it only >>>>>>>>>> gets
    the description.

    We can get around all of this stuff by simply using this criteria: >>>>>>>>> Date 10/13/2022 11:29:23 AM
    *MIT Professor Michael Sipser agreed this verbatim paragraph is >>>>>>>>> correct*
    (He has neither reviewed nor agreed to anything else in this >>>>>>>>> paper)
    (a) If simulating halt decider H correctly simulates its input D >>>>>>>>> until H
    correctly determines that its simulated D would never stop running >>>>>>>>> unless aborted then
    (b) H can abort its simulation of D and correctly report that D >>>>>>>>> specifies a non-halting sequence of configurations.

    *When we apply this criteria* (elaborated above)
    Will you halt if you never abort your simulation?
    *Then the halting problem is conquered*

    When two different machines implementing this criteria
    get different results from identical inputs then we
    know that Pathological Self-Reference has been detected.

    We don't even need to know that for:
    *denial-of-service-attack detection*
    *NO always means reject as unsafe*


    But, Halting theorem never said "there's an input that halts
    all machines", it just says "for any machine, there's an input >>>>>>>> that halts it".

    Where "halt the machine" means "put it in an infinite loop".

    So, rather, Halting theorem never said "there's an input that
    exhausts all machines", it just says, "for any machine, there's >>>>>>>> an input that exhausts it".

    I still don't see how that would be with infinite tapes though, >>>>>>>> without a means of checking all the way right the tape in one
    step, i.e. that it's 1's or 0's or any pattern at all, any
    input that unbounded with respect to the machine basically
    exhausts it or where the machine would run in the unbounded.


    Of course any finite tape, has a static analysis that is
    not infinite, that decides whether or not it halts
    (or, loops, or grows, the state space of the decider).

    Static analysis has to either enumerate or _infer_ the
    state-space, where equal values in what's determined
    the idempotent can detect loops, while inequalities
    or proven divergence, ..., can detect unbounded growth.

    Now, proving convergence or divergence is its own kind
    of thing. For example, there are series that converge
    very slowly, and series that diverge very slowly. These
    are subtly intractable to analysis.

    Then the usual idea of progressions that rapidly grow
    yet return to what's detectable, are pathological to
    analysis, only in resources not correctness or completion,
    vis-a-vis the subtle intractability of the convergence or
    divergence what either halts, or loops, or grows unboundedly.

    Separately "not-halts" into "loops or grows unboundedly",
    has that really for most matters of interest of understanding
    the state-space of programs, is "halts or enters loops"
    and not "grows unboundedly".

    I.e. if the Halting problem is basically subject the
    subtle intractability of slow convergence, that otherwise
    it can just infer divergence and decide, practically
    it's sort of more relevant what the machine would be
    doing on the input on the tape, then with respect to
    beyond the Turing theory, of the state of the read-head,
    what happens when somebody modifies the tape, or events,
    the write-head.

    Anyways though for bounded inputs, besides slow divergence,
    it's to be made clear that _most all_ and _almost all_
    programs _are_ decided their behavior by static analysis.

    Though, "most all" and "almost all" might be a bit strong,
    but pretty much all that don't involve "the subtle intractability
    of slow divergence".



    Giving the idea that an existence result
    is in any way the expected result here
    seems sort of the root of this dilemma.






    (Though that the real numbers in ZFC have a well-ordering
    and if they had a normal ordering that was a well-ordering,
    that would be a thing, because ZFC has a well-ordering of
    [0,1], but can't give one.)





    What I'm saying is that you'd need to solve all
    cases of slow convergence and slow divergence
    to have a correct and complete halt decider,
    for the unbounded, and finite, if not the infinite.

    Most setups of convergence, though,
    would effectively exhaust according to the
    existence of criteria of convergence as modeling
    the computation, though, so, if you exclude
    slow convergence and slow divergence,
    then a subset of very usual machines is of
    course quite all decidable.  (Or decide-able,
    or as I put it once, "not deciddable".)



    Well-ordering the reals in ZFC, that's its own
    sort of issue - that a well-ordering implies the
    existence of a bijection from ordinals to a
    set, then as that as tuples, subsets of those
    are well-ordered by their ordinal part, then
    that if ever uncountably many of those are
    in normal order their real part and irrationals,
    then between those are rationals.  I.e.,
    ZFC doesn't give an example of well-ordering
    the reals, but there is one.


    So, Peter, I think you're kind of misreading that
    quote you authoritated.  It just says "if static
    analysis detects a loop or unboundedly growing
    then it's not-halts".


    Of course, for any bounded resource, there
    are arbitrarily larger bounded resources
    required by what's called pathological,
    that an arbitrarily larger resource can though solve.



    So anyways "slow convergence" and "slow divergence",
    have that in mathematics, it's unknown for some series
    whether they converge or diverge, though it's usually
    accepted that the inverse powers of 2 converge and
    that the sum of integers diverges, and that periodic
    functions do neither.

    I.e. the criteria of convergence and existence of a limit,
    and the special case "diverges as going to infinity",
    are not yet closed in today's mathematics.

    It's sort of a conjecture or independent whether
    they ever could be, closed, and complete, the criteria.



    I have spent 20 years on the conventional halting problem proofs,
    Gödel 1931 Incompleteness and the Liar Paradox. I have only spent
    about seven years on Tarski Undefinability.

    The Halting problem is the only one of these that has a formal system
    that cannot have hidden gaps in its reasoning. If a machine can do it
    then it can be done.


    "Algorithmics" is a study of algorithms. The most usual
    consideration is "asymptotics", "asymptotics of complexity",
    with the idea that computing does work, that there are
    closures and completions, and there are approximations and
    attenuations, as it were, in the full and the final and
    the initial and the partial.

    The most usual resources are time, and space, that actions
    occur in time, and according to a structure, in space.

    Of course any CS grad pretty much knows that.


    The objects of mathematics, have it so, that the finite
    and bounded resources of any collection of digital and
    binary machines, are finite and bounded, while, the unbounded
    models of mathematics that represent them, are unbounded,
    it's bad that anybody ever said a Turing tape was "infinite",
    except that it's unbounded as an object of mathematics, and
    the model of any finite, digital, binary machine.

    One might aver "digital" has a complement in computing,
    "analog", which represents the continuous instead of
    the discrete, and one might aver "binary" has a complement,
    either "real-valued" or "fuzzy" or variously about the
    "multi-valent", in logic, the models, according to the
    mechanisms, the models, their mechanics, machines.

    The Turing machine though is a model of unbounded machine
    as what any thusly simply mechanical model can implement,
    up to its bounds.


    So, the objects of mathematics, then involve analytical
    results. I can't draw a circle, though a compass may
    portray a diagram arbitrarily close, and no different.
    It's the analytical results, that make so for why
    mathematics is: "sensible, fungible, and tractable",
    and for "the ubiquitous success of mathematics" in
    all such matters of physics, which is a continuum mechanics,
    and as well discrete mechanics.

    Of course, results exist in mathematical objects,
    after the unbounded and complete,
    which is why closures and completions are so
    relevant to all such matters, vis-a-vis approximations
    and attenuations, which in bounded terms result
    indistinguishably non-different results,
    up to precision, say, or tolerances.

    So, "machine" might be discrete, binary, and digital,
    or it might be continuous, multivalent, and analog.

    The Turing machine in a concrete form, is bounded.

    Similar models, here mostly for the stack models,
    stack machines, among finite-state-machines then
    for example concepts like "unbounded nondeterminism",
    here is pretty much about determinism, about the
    defined behavior of a machine according to a
    mathematical model that given defined behavior
    of the mechanisms results a mechanical model of
    a mathematical model, such a theory as this.

    So, the theory of computing, with mechanical or
    computational models, models of computing, is
    pretty much exactly like the theory of physics,
    with the attachment of physical models to the
    mathematical models, and rather, "interpreted as",
    the models, the models, with respect to each other.

    I.e. the extensionality that so follows "they're
    equal, or same, or not different, they represent
    each other, they're equi-interpretable, they model
    each other", the models, of logical and mathematical
    and computational and mechanical and physical models,
    help represent that over all the entire thing there
    is a usual theory for the entire thing, it's a theory
    where model theory models extensionality, and in identity
    intensionality, about equality, tautology, identity,
    and sameness/difference, and nearness/farness, and all
    these usual aspects of the mathematical models'
    arithmetic, algebra, and geometry. (And function
    theory and topology, with respect to categories and
    types, sets and parts, relations and predicates,
    then all the model-building among that which is
    all the propositional or the stipulated or axiomatic.)

    The idea is that this sort of platonistic universe of
    mathematical objects has always existed, and it's just
    discovered, then that axiomatics for example, just
    sort of results a model theory of it, with regards
    to models, the modular, modulation, modality, and the mode.


    So, a machine, with respect to computation, establishing
    the validity of interpretations of models, is subject to
    filling out all the above, vis-a-vis what it can do,
    and it can-do.


    Then, "slowly converging" and "slowly diverging"
    are examples that get into that there are more
    law(s) of large numbers, than the usual classical
    law of large numbers.

    Some people variously do or don't have a mathematical
    model of larger numbers, or the infinite, at all.
    It's a totally ancient and dogmatic tradition that
    no, we finite peoples, persons, or agents of limited
    and finite means, no we do not have discrete, digital,
    binary mechanical models of the infinite.

    Yet, it's also about the first thing that deduction
    arrives at just from the plain contemplation of the
    beyond the unbounded, like the theories of Democritus
    and Atomism or the theories of Zeno and Continuity,
    that make it so we at least have something to arrive
    at for models of the unbounded, as "methods of exhaustion",
    the approximation and attenuative what results
    the asymptotics of algorithmics.

    So, we do have mental models of the continuous and infinite.
    And, where physics is a continuum mechanics,
    there are physical models of it as well, then
    with regards to the philosophy of science and
    the scientific method and the Big Science and
    from the Primary Science, lots to establish
    that the continuous and analog has mathematical
    results, as to whether they are "sensible, fungible, and tractable",
    to machines at all.

    Numerical methods then, and, approximation and
    attenuation, result from analysis, why they exist,
    here for the criteria of convergence and existence
    of limits, in the closed, in the complete.


    So, a metonymy, unless it's "the Strong Metonymy",
    which is utter intensionality of "the ground model",
    is yet a metaphor and a metaphor yet a weaker metonymy
    joint among weaker metonymys, that, ontology, has
    that there are or may be a completion, of ontology,
    as a "strong ontology" and "strong metonymy",
    but otherwise it's just a bag of facts.



    Here then there's certainly a perceived analytical
    development Cantor space, and, Square Cantor space,
    and, Signal Cantor space as it were, about that among
    the law(s) of large numbers, there are definitions of
    continuity, at least including field-reals, line-reals,
    and signal-reals, three sorts of continuous domains.


    Mathematics owes physics this to better begin to
    explain, to disclose, to find truths of, continuum mechanics.


    Then that analog, fuzzy, multi-value, "quantum" computing
    models (parts) of that, that discrete, binary, digital
    computing does not, is a thing.



    The convergence of terms in series is an example
    of a model of things, that, sometimes has clear
    and direct and perfectly accurate representations,
    models, in stack machine, and sometimes doesn't.


    Of course, for any bounded input, there is a sufficiently
    larger bounded analyzer, that does solve it. So, why not
    just put those all together? It would be infinite,
    yet most things are not. (Or, you know, are.)
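
    The point above, that any single bounded input is decided by some
    sufficiently large bounded analyzer while no one bound covers them
    all, can be sketched like this. The `decide_k` name, the `countdown`
    machine, and the toy step model are hypothetical illustrations:

    ```python
    # decide_k simulates at most k steps. For every halting or looping
    # machine some k suffices; but no single finite k works for all
    # machines, so "putting all the bounded analyzers together" is an
    # infinite union, not itself a bounded analyzer.

    def decide_k(step, state, k):
        """Bounded analyzer: 'halts'/'loops' within k steps, else None."""
        seen = set()
        for _ in range(k):
            if state is None:
                return "halts"
            if state in seen:
                return "loops"
            seen.add(state)
            state = step(state)
        return None  # this particular bound was not enough

    countdown = lambda n: n - 1 if n > 0 else None
    print(decide_k(countdown, 50, 10))   # None: bound too small
    print(decide_k(countdown, 50, 100))  # halts: a larger analyzer decides it
    ```
    
    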


    It's called "foundations" the study of all these things
    as a fundamental coherency, a thing together, while of
    its parts.

    So, students should well have the concepts of "abacus"
    and "slide rule" and other "tabulators and computers"
    of the usual manual mechanical variety down, according
    to the principles of the mathematical models behind them,
    then introducing the law(s) of large numbers, then
    introducing the idea of a standard model of an infinite,
    about rates, related rates, asymptotics, and bounds.


    "Foundations" then the idea is that it's been around
    forever, it's our dogma and from our canon and it develops
    over time, and the current edition is called "modern".

    Did I make any mistakes?

    Tarski anchors his proof in the Liar Paradox
    https://liarparadox.org/Tarski_247_248.pdf

    How Tarski encodes the Liar Paradox
    x ∉ True if and only if p
    where the symbol 'p' represents the whole sentence x

    This is transformed into line 01 of his proof
    by replacing True with Provable

    *Here is the Tarski Undefinability Theorem proof*
    (1) x ∉ Provable if and only if p   // assumption
    (2) x ∈ True if and only if p       // assumption
    (3) x ∉ Provable if and only if x ∈ True.
    (4) either x ∉ True or x̄ ∉ True;    // axiom: ~True(x) ∨ ~True(~x)
    (5) if x ∈ Provable, then x ∈ True; // axiom: Provable(x) → True(x)
    (6) if x̄ ∈ Provable, then x̄ ∈ True; // axiom: Provable(~x) → True(~x)
    (7) x ∈ True
    (8) x ∉ Provable
    (9) x̄ ∉ Provable
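
    One way to spell out how lines (3) and (7)-(9) follow from the
    earlier lines; this reconstruction is my own reading, not Tarski's
    wording:

    ```latex
    % (1),(2) give (3): both sides are equivalent to p.
    \begin{align*}
    &\text{(1),(2)} \Rightarrow \text{(3):} & x \notin Provable &\iff x \in True \\
    &\text{Suppose } x \notin True\text{: by (3), } x \in Provable\text{;} \\
    &\text{by (5), } x \in True\text{, a contradiction. Hence (7):} & x &\in True \\
    &\text{(7) and (3) give (8):} & x &\notin Provable \\
    &\text{(7) and (4) give } \bar{x} \notin True\text{;} \\
    &\text{the contrapositive of (6) gives (9):} & \bar{x} &\notin Provable
    \end{align*}
    ```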

    Tarski's Full proof
    https://liarparadox.org/Tarski_275_276.pdf





    Well I have some collected works of Tarski so
    I'll be looking to them, the excerpt there you
    reference basically talks about the model under
    consideration, as a fragment, of "the meta-theory",
    i.e., "Tarski's meta-theory" is "rather underdefined",
    that what he does is that he says that in the meta-theory,
    that there's something about the model under consideration,
    that isn't true in the model but is true, and he invokes
    Goedel to basically say that Goedel's incompleteness
    applies to this model, which of course must thusly be
    "strong enough to support arithmetic", where otherwise
    of course, Goedel's incompleteness does _not_ apply
    to theories that may be entirely closed categories.

    It seems the issue is that you think that you have
    entirely closed categories, in your meta-theory,
    then are applying this meta-theory to another model
    under consideration, which isn't, so it seems like
    you have implicits on your model, which are underdefined,
    rather as the "properly logical" or "non-logical" are
    with respect to being modeled by the logical, that
    you have some sort of "un-logical" that are the
    result of that the axioms are un-logical, with regards
    to that your "meta-theory" is logical and your model
    under consideration, its _logical_ objects, are actually
    "un-logical" objects with respect to your, "meta-theory".


    That's a problem overall with "meta-theory", here the
    goal is to arrive at a theory that is its own meta-theory,
    that's also all one language, with regards to otherwise

    Yes that is exactly correct.

    *True(L,x) returns TRUE when x is True otherwise returns FALSE*

    So what does True(L, S) return when S is defined as S := ~True(L,S)


    True(L, S) returns Prolog's negation as failure false
    that only means not true.

    True(L, ~S) also returns Prolog's negation as failure false
    that only means not true.
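
    A minimal sketch of that negation-as-failure reading, where False
    means only "no proof of truth was found", not "proven false". The
    `true_pred`/`provable` names and the fact-base model are my own
    illustrative assumptions, not part of the thread:

    ```python
    # Toy negation-as-failure predicate: True(L, x) answers True only
    # when a proof of x is found; otherwise it answers False, which is
    # weaker than asserting that x is false.

    def provable(facts, x):
        """Toy 'provability': membership in a finite fact base."""
        return x in facts

    def true_pred(facts, x):
        # Negation as failure: False here means "no proof found".
        return provable(facts, x)

    facts = set()   # S := ~True(L, S) grounds no proof either way
    print(true_pred(facts, "S"))    # False: S is not provably true
    print(true_pred(facts, "~S"))   # False: ~S is not provably true either
    ```

    Both queries coming back False is consistent here precisely because
    False is "failure to prove", so it does not force `~S` to be true.
    
    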
    --
    Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Richard Damon@richard@damon-family.org to comp.theory,sci.logic,comp.ai.philosophy on Thu Mar 14 18:27:13 2024
    From Newsgroup: comp.ai.philosophy

    On 3/14/24 3:54 PM, olcott wrote:
    On 3/14/2024 5:01 PM, Richard Damon wrote:
    On 3/14/24 12:01 PM, olcott wrote:
    On 3/14/2024 11:58 AM, Ross Finlayson wrote:
    On 03/13/2024 10:20 PM, olcott wrote:
    On 3/13/2024 1:16 PM, Ross Finlayson wrote:
    On 03/12/2024 09:00 PM, olcott wrote:
    On 3/12/2024 10:49 PM, Ross Finlayson wrote:
    On 03/12/2024 08:23 PM, Ross Finlayson wrote:
    On 03/12/2024 07:52 PM, olcott wrote:
    On 3/12/2024 9:28 PM, Richard Damon wrote:
    On 3/12/24 4:31 PM, olcott wrote:
    On 3/12/2024 6:11 PM, Richard Damon wrote:
    On 3/12/24 3:53 PM, olcott wrote:
    On 3/12/2024 5:30 PM, Richard Damon wrote:
    On 3/12/24 2:34 PM, olcott wrote:
    On 3/12/2024 4:23 PM, Richard Damon wrote:
    On 3/12/24 1:11 PM, olcott wrote:
    On 3/12/2024 2:40 PM, Richard Damon wrote:
    On 3/12/24 12:02 PM, olcott wrote:
    On 3/12/2024 1:31 PM, immibis wrote:
    On 12/03/24 19:12, olcott wrote:
    ∀ H ∈ Turing_Machine_Deciders
    ∃ TMD ∈ Turing_Machine_Descriptions |
    Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)

    There is some input TMD to every H such that
    Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)

    And it can be a different TMD to each H.

    When we disallow decider/input pairs that are incorrect
    questions where both YES and NO are the wrong answer

    Once we understand that either YES or NO is the right
    answer, the whole rebuttal is tossed out as invalid and
    incorrect.


    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩ does not halt
    BOTH YES AND NO ARE THE WRONG ANSWER FOR EVERY Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩

    No, because a given H will only go to one of the answers. THAT
    will be wrong, and the other one right.


    ∀ H ∈ Turing_Machine_Deciders
    ∃ TMD ∈ Turing_Machine_Descriptions |
    Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)

    Not exactly. A pair of otherwise identical machines
    (that are contained within the above specified set)
    that only differ by return value will both be wrong on the
    same pathological input.

    You mean a pair of DIFFERENT machines. Any difference is
    different.

    Every decider/input pair (referenced in the above set) has a
    corresponding decider/input pair that only differs by the return
    value of its decider.

    Nope.

    ∀ H ∈ Turing_Machines_Returning_Boolean
    ∃ TMD ∈ Turing_Machine_Descriptions |
    Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)

    Every H/TMD pair (referenced in the above set) has a
    corresponding H/TMD pair that only differs by the return
    value of its Boolean_TM.

    That isn't in the set above.

    That both of these H/TMD pairs get the wrong answer proves that
    their question was incorrect because the opposite answer to the
    same question is also proven to be incorrect.

    Nope, since both aren't in the set selected.

    When they are deciders that must get the correct answer both
    of them are not in the set.

    *IF* they are correct deciders.

    When we select from all Turing Machine Deciders, there is no
    requirement that any of them get any particular answer right.

    So, ALL deciders are in the set that we cycle through and apply the
    following logic to ALL of them.

    Each is then paired with an input that it will get wrong, and the
    existence of the input was what was just proven, the Ĥ template.

    When they are Turing_Machines_Returning_Boolean then this
    set inherently includes identical pairs that only differ
    by return value.

    But in the step of selecting an input that they will get wrong, they
    will be given DIFFERENT inputs.


    You just don't understand what that statement is saying.

    I've explained it, but it seems over your head.

    No the problem is that you are not paying attention.

    No, you keep on making STUPID mistakes, like thinking that
    selecting an input that the machine will get wrong needs to be the
    same for two different machines.




    For Every H, we show we can find at least one input (chosen just for
    that machine) that it will get wrong.

    When we use machine templates then we can see instances of
    the same machine that only differ by return value where both
    get the wrong answer on the same input. By same input I mean
    the same finite string of numerical values.

    But if they returned different values, they will have different
    descriptions.

    Otherwise, how could a UTM get the right answer, since it only gets
    the description.

    We can get around all of this stuff by simply using this criteria:
    Date 10/13/2022 11:29:23 AM
    *MIT Professor Michael Sipser agreed this verbatim paragraph is correct*
    (He has neither reviewed nor agreed to anything else in this paper)
    (a) If simulating halt decider H correctly simulates its input D until H
    correctly determines that its simulated D would never stop running
    unless aborted then
    (b) H can abort its simulation of D and correctly report that D
    specifies a non-halting sequence of configurations.

    *When we apply this criteria* (elaborated above)
    Will you halt if you never abort your simulation?
    *Then the halting problem is conquered*

    When two different machines implementing this criteria
    get different results from identical inputs then we
    know that Pathological Self-Reference has been detected.

    We don't even need to know that for:
    *denial-of-service-attack detection*
    *NO always means reject as unsafe*


    But, Halting theorem never said "there's an input that halts >>>>>>>>> all machines", it just says "for any machine, there's an input >>>>>>>>> that halts it".

    Where "halt the machine" means "put it in an infinite loop". >>>>>>>>>
    So, rather, Halting theorem never said "there's an input that >>>>>>>>> exhausts all machines", it just says, "for any machine, there's >>>>>>>>> an input that exhausts it".

    I still don't see how that would be with infinite tapes though, >>>>>>>>> without a means of checking all the way right the tape in one >>>>>>>>> step, i.e. that it's 1's or 0's or any pattern at all, any
    input that unbounded with respect to the machine basically
    exhausts it or where the machine would run in the unbounded. >>>>>>>>>

    Of course any finite tape, has a static analysis that is
    not infinite, that decides whether or not it halts
    (or, loops, or grows, the state space of the decider).

    Static analysis has to either enumerate or _infer_ the
    state-space, where equal values in what's determined
    the idempotent can detect loops, while inequalities
    or proven divergence, ..., can detect unbounded growth.

    Now, proving convergence or divergence is its own kind
    of thing. For example, there are series that converge
    very slowly, and series that diverge very slowly. These
    are subtly intractable to analysis.

    Then the usual idea of progressions that rapidly grow
    yet return to what's detectable, are pathological to
    analysis, only in resources not correctness or completion,
    vis-a-vis the subtle intractability of the convergence or
    divergence what either halts, or loops, or grows unboundedly. >>>>>>>>>
    Separately "not-halts" into "loops or grows unboundedly",
    has that really for most matters of interest of understanding >>>>>>>>> the state-space of programs, is "halts or enters loops"
    and not "grows unboundedly".

    I.e. if the Halting problem is basically subject the
    subtle intractability of slow convergence, that otherwise
    it can just infer divergence and decide, practically
    it's sort of more relevant what the machine would be
    doing on the input on the tape, then with respect to
    beyond the Turing theory, of the state of the read-head,
    what happens when somebody modifies the tape, or events,
    the write-head.

    Anyways though for bounded inputs, besides slow divergence,
    it's to be made clear that _most all_ and _almost all_
    programs _are_ decided their behavior by static analysis.

    Though, "most all" and "almost all" might be a bit strong,
    but pretty much all that don't involve "the subtle intractability >>>>>>>>> of slow divergence".



    Giving the idea that an existence result
    is in any way the expected result here
    seems sort of the root of this dilem-na.






    (Though that the real numbers in ZFC have a well-ordering
    and if they had a normal ordering that was a well-ordering,
    that would be a thing, because ZFC has a well-ordering of
    [0,1], but can't give one.)





    What I'm saying is that you'd need to solve all
    cases of slow convergence and slow divergence
    to have a correct and complete halt decider,
    for the unbounded, and finite, if not the infinite.

    Most though setups of convergence though
    would effectively exhaust according to the
    existence of criteria of convergence as modeling
    the computation, though, so, if you exclude
    slow convergence and slow divergence,
    then a subset of very usual machines is of
    course quite all decidable.  (Or decide-able,
    or as I put it once, "not deciddable".)



    Well-ordering the reals in ZFC, that's it own
    sort issue - that a well-ordering implies the
    existence of a bijection from ordinals to a
    set, then as that as tuples, subsets of those
    are well-ordered by their ordinal part, then
    that if ever uncountably many of those are
    in normal order their real part and irrationals,
    then between those are rationals.  I.e.,
    ZFC doesn't give an example of well-ordering
    the reals, but there is one.


    So, Peter, I think you're kind of misreading that
    quote you authoritated.  It just says "if static
    analysis detects a loop or unboundedly growing
    then it's not-halts".


    Of course, for any bounded resource, there
    are arbitrarily larger bounded resources
    required by what's called pathological,
    that an arbitrarily larger resource can though solve.



    So anyways "slow convergence" and "slow divergence",
    have that in mathematics, it's unknown for some series
    whether they converge or diverge, though it's usually
    accepted that the inverse powers of 2 converge and
    that the sum of integers diverges, and that periodic
    functions do neither.

    I.e. the criteria of convergence and existence of a limit,
    and the special case "diverges as going to infinity",
    are not yet closed in today's mathematics.

    It's sort of a conjecture or independent whether
    they ever could be, closed, and complete, the criteria.



    I have spent 20 years on The conventional halting problem proofs, >>>>>>> Gödel 1931 Incompleteness and the Liar Paradox. I have only spent >>>>>>> about seven years on Tarski Undefinability.

    The Halting problem is the only one of these that has a formal
    system
    that cannot have hidden gaps its its reasoning. If a machine can >>>>>>> do it
    then it can be done.


    "Algorithmics" is a study of algorithms. The most usual
    consideration is "asymptotics", "asymptotics of complexity",
    with the idea that computing does work, that there are
    closures and completions, and there are approximations and
    attenuations, as it were, in the full and the final and
    the initial and the partial.

    The most usual resources are time, and space, that actions
    occur in time, and according to a structure, in space.

    Of course any CS grad pretty much knows that.


    The objects of mathematics, have it so, that the finite
    and bounded resources of any collection of digital and
    binary machines, are finite and bounded, while, the unbounded
    models of mathematics that represent them, are unbounded,
    it's bad that anybody ever said a Turing tape was "infinite",
    except that it's unbounded as an object of mathematics, and
    the model of any finite, digital, binary machine.

    One might aver "digital" has a complement in computing,
    "analog", which represents the continuous instead of
    the discrete, and one might aver "binary" has a complement,
    either "real-valued" or "fuzzy" or variously about the
    "multi-valent", in logic, the models, according to the
    mechanisms, the models, their mechanics, machines.

    The Turing machine though is a model of unbounded machine
    as what any thusly simply mechanical model can implement,
    up to its bounds.


    So, the objects of mathematics, then involve analytical
    results. I can't draw a circle, though a compass may
    portray a diagram arbitrarily close, and no different.
    It's the analytical results, that make so for why
    mathematics is: "sensible, fungible, and tractable",
    and for "the ubiquitous success of mathematics" in
    all such matters of physics, which is a continuum mechanics,
    and as well discrete mechanics.

    Of course, results exist in mathematical objects,
    after the unbounded the unbounded and complete,
    which is why closures and completions are so
    relevant to all such matters, vis-a-vis approximations
    and attenuations, which in bounded terms result
    indistinguishably non-different results,
    up to precision, say, or tolerances.

    So, "machine" might be discrete, binary, and digital,
    or it might be continuous, multivalent, and analog.

    The Turing machine in a concrete form, is bounded.

    Similar models, here mostly for the stack models,
    stack machines, among finite-state-machines then
    for example concepts like "unbounded nondeterminism",
    here is pretty much about determinism, about the
    defined behavior of a machine according to a
    mathematical model that given defined behavior
    of the mechanisms results a mechanical model of
    a mathematical model, such a theory as this.

    So, the theory of computing, with mechanical or
    computational models, models of computing, is
    pretty much exactly like the theory of physics,
    with the attachment of physical models to the
    mathematical models, and rather, "interpreted as",
    the models, the models, with respect to each other.

    I.e. the extensionality that so follows "they're
    equal, or same, or not different, they represent
    each other, they're equi-interpretable, they model
    each other", the models, of logical and mathematical
    and computational and mechanical and physical models,
    help represent that over all the entire thing there
    is a usual theory for the entire thing, it's a theory
    where model theory models extensionality, and in identity
    intensionality, about equality, tautology, identity,
    and sameness/difference, and nearness/farness, and all
    these usual aspects of the mathematical models'
    arithmetic, algebra, and geometry. (And function
    theory and topology, with respect to categories and
    types, sets and parts, relations and predicates,
    then all the model-building among that which is
    all the propositional or the stipulated or axiomatic.)

    The idea is that this sort of platonistic universe of
    mathematical objects has always existed, and it's just
    discovered, then that axiomatics for example, just
    sort of results a model theory of it, with regards
    to models, the modular, modulation, modality, and the mode.


    So, a machine, with respect to computation, establishing
    the validity of interpretations of models, is subject to
    filling out all the above, vis-a-vis what it can do,
    and it can-do.


    Then, "slowly converging" and "slowly diverging"
    are examples that get get into that there are more
    law(s) of large numbers, than the usual classical
    law of large numbers.

    Some people variously do or don't have a mathematical
    model of larger numbers, or the infinite, at all.
    It's a totally ancient and dogmatic tradition that
    no, we finite peoples, persons, or agents of limited
    and finite means, no we do not have discrete, digital,
    binary mechanical models of the infinite.

    Yet, it's also about the first thing that deduction
    arrives at just from the plain contemplation of the
    beyond the unbounded, like the theories of Democritus
    and Atomism or the theories of Zeno and Continuity,
    that make it so we at least have something to arrive
    at for models of the unbounded, as "methods of exhaustion",
    the approximation and attenuation that result in
    the asymptotics of algorithmics.
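    A minimal sketch of a method of exhaustion, with illustrative
    names: inscribing regular polygons in the unit circle and doubling
    the sides as Archimedes did, so the half-perimeters squeeze in on
    pi from below. (Using sin and pi here is a shortcut for
    illustration; Archimedes used half-angle recurrences instead.)

```python
from math import sin, pi

# Half-perimeter of a regular n-gon inscribed in a unit circle;
# as n grows without bound, this exhausts the circle from inside.
def inscribed_half_perimeter(sides):
    return sides * sin(pi / sides)

for n in [6, 12, 24, 48, 96]:   # Archimedes' doublings
    print(n, inscribed_half_perimeter(n))
# 96 sides already gives about 3.1410, approaching pi from below.
```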

    So, we do have mental models of the continuous and infinite.
    And, where physics is a continuum mechanics,
    there are physical models of it as well, then
    with regards to the philosophy of science and
    the scientific method and the Big Science and
    from the Primary Science, lots to establish
    that the continuous and analog has mathematical
    results, as to whether it is "sensible, fungible, and tractable",
    to machines at all.

    Numerical methods then, and, approximation and
    attenuation, result from analysis, why they exist,
    here for the criteria of convergence and existence
    of limits, in the closed, in the complete.
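    As a sketch of how the criteria of convergence become a numerical
    stopping rule (the function names below are illustrative): in the
    complete reals, once successive iterates are Cauchy-close, a limit
    is guaranteed to exist, so the iteration may stop there.

```python
# A minimal sketch: a fixed-point iteration with a Cauchy-style
# stopping rule. Completeness of the reals is what licenses reading
# "successive iterates are within tol" as "a limit exists nearby".
def fixed_point(f, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:   # Cauchy gap small: stop
            return x_next
        x = x_next
    raise RuntimeError("no convergence within max_iter")

# Newton's iteration for sqrt(2), a contraction near the root.
root = fixed_point(lambda x: 0.5 * (x + 2.0 / x), 1.0)
print(root)
```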


    So, a metonymy, unless it's "the Strong Metonymy",
    which is utter intensionality of "the ground model",
    is yet a metaphor, and a metaphor yet a weaker metonymy
    joint among weaker metonymies, that, ontology, has
    that there is or may be a completion, of ontology,
    as a "strong ontology" and "strong metonymy",
    but otherwise it's just a bag of facts.



    Here then there's certainly a perceived analytical
    development of Cantor space, and, Square Cantor space,
    and, Signal Cantor space as it were, about that among
    the law(s) of large numbers, there are definitions of
    continuity, at least including field-reals, line-reals,
    and signal-reals, three sorts of continuous domains.


    Mathematics owes physics this: to better begin to
    explain, to disclose, to find truths of, continuum mechanics.


    Then that analog, fuzzy, multi-value, "quantum" computing
    models (parts) of that, that discrete, binary, digital
    computing does not, is a thing.



    The convergence of terms in series is an example
    of a model of things, that, sometimes has clear
    and direct and perfectly accurate representations,
    models, in a stack machine, and sometimes doesn't.
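    A minimal sketch of that contrast (the stack encoding is an
    assumption for illustration): the same series terms summed by a
    tiny stack evaluator are perfectly accurate with exact rationals,
    and only approximate in binary floating point.

```python
from fractions import Fraction

# A tiny stack-machine-style evaluator: push all terms, then
# pop-and-add until the stack is empty.
def stack_sum(terms):
    stack = list(terms)
    total = stack.pop()
    while stack:
        total = total + stack.pop()
    return total

# 1/10 has no exact binary representation, so the float sum drifts;
# the rational sum is exact.
print(stack_sum([0.1] * 10))              # 0.9999999999999999, not 1.0
print(stack_sum([Fraction(1, 10)] * 10))  # exactly 1
```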


    Of course, for any bounded input, there is a sufficiently
    larger bounded analyzer, that does solve it. So, why not
    just put those all together? It would be infinite,
    yet most things are not. (Or, you know, are.)
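    A hypothetical sketch of that observation: if an input's
    computation is bounded by n steps, any analyzer with a budget
    above n decides it by plain simulation. The step-function
    encoding below is an illustrative assumption, not a general
    Turing-machine model.

```python
# Run `step` until it returns None (halt) or the budget runs out.
# For inputs bounded by fewer steps than the budget, this decides
# halting outright; the unbounded case is where the trouble lives.
def bounded_halts(step, state, budget):
    for _ in range(budget):
        state = step(state)
        if state is None:
            return True    # halted within the budget
    return False           # not seen to halt within the budget

# A countdown machine that halts after six transitions:
countdown = lambda k: None if k == 0 else k - 1
print(bounded_halts(countdown, 5, 10))  # True: budget exceeds the bound
print(bounded_halts(countdown, 5, 3))   # False: budget too small
```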


    It's called "foundations" the study of all these things
    as a fundamental coherency, a thing together, while of
    its parts.

    So, students should well have the concepts of "abacus"
    and "slide rule" and other "tabulators and computers"
    of the usual manual mechanical variety down, according
    to the principles of the mathematical models behind them,
    then introducing the law(s) of large numbers, then
    introducing the idea of a standard model of an infinite,
    about rates, related rates, asymptotics, and bounds.
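    A minimal sketch of the classical law of large numbers as it might
    be introduced there (the seed and sample size are illustrative
    choices): the running mean of fair coin flips settles toward 1/2,
    with fluctuations shrinking on the order of 1/sqrt(n).

```python
import random

# Reproducible run: with a fixed seed the sample is deterministic.
random.seed(0)
n = 100_000
flips = [random.randint(0, 1) for _ in range(n)]
mean = sum(flips) / n
print(mean)  # close to 0.5, within roughly a few times 1/sqrt(n)
```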


    "Foundations" then the idea is that it's been around
    forever, it's our dogma and from our canon and it develops
    over time, and the current edition is called "modern".

    Did I make any mistakes?

    Tarski anchors his proof in the Liar Paradox
    https://liarparadox.org/Tarski_247_248.pdf

    How Tarski encodes the Liar Paradox
    x ∉ True if and only if p
    where the symbol 'p' represents the whole sentence x

    This is transformed into line 01 of his proof
    by replacing True with Provable

    *Here is the Tarski Undefinability Theorem proof*
    (1) x ∉ Provable if and only if p   // assumption
    (2) x ∈ True if and only if p       // assumption
    (3) x ∉ Provable if and only if x ∈ True.
    (4) either x ∉ True or x̄ ∉ True;    // axiom: ~True(x) ∨ ~True(~x)
    (5) if x ∈ Provable, then x ∈ True; // axiom: Provable(x) → True(x)
    (6) if x̄ ∈ Provable, then x̄ ∈ True; // axiom: Provable(~x) → True(~x)
    (7) x ∈ True
    (8) x ∉ Provable
    (9) x̄ ∉ Provable

    Tarski's Full proof
    https://liarparadox.org/Tarski_275_276.pdf
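    As a sketch, the finite propositional core of the derivation above
    can be checked by brute force (the variable names are
    illustrative): every truth-value assignment satisfying assumptions
    (1), (2), (4), (5), (6) is forced to satisfy conclusions (7), (8),
    (9). Step (3) follows from (1) and (2), so it needs no separate
    constraint.

```python
from itertools import product

# Variables: xT = "x in True", xP = "x in Provable",
# nxT/nxP the same for the negated sentence, p the abbreviated sentence.
def satisfies_assumptions(xT, xP, nxT, nxP, p):
    return ((not xP) == p           # (1) x not provable iff p
            and xT == p             # (2) x true iff p
            and not (xT and nxT)    # (4) not both x and its negation true
            and (not xP or xT)      # (5) provable implies true
            and (not nxP or nxT))   # (6) same for the negated sentence

models = [v for v in product([True, False], repeat=5)
          if satisfies_assumptions(*v)]

# Every surviving model satisfies Tarski's conclusions (7)-(9).
for xT, xP, nxT, nxP, p in models:
    assert xT         # (7) x is true
    assert not xP     # (8) x is not provable
    assert not nxP    # (9) the negation of x is not provable
print(len(models), "model(s), all satisfying (7)-(9)")
```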





    Well I have some collected works of Tarski so
    I'll be looking to them, the excerpt there you
    reference basically talks about the model under
    consideration, as a fragment, of "the meta-theory",
    i.e., "Tarski's meta-theory" is "rather underdefined",
    that what he does is that he says that in the meta-theory,
    that there's something about the model under consideration,
    that isn't true in the model but is true, and he invokes
    Goedel to basically say that Goedel's incompleteness
    applies to this model, which of course must thus be
    "strong enough to support arithmetic", where otherwise,
    of course, Goedel's incompleteness does _not_ apply
    to theories that may be entirely closed categories.

    It seems the issue is that you think that you have
    entirely closed categories, in your meta-theory,
    then are applying this meta-theory to another model
    under consideration, which isn't, so it seems like
    you have implicits on your model, which are underdefined,
    rather as the "properly logical" or "non-logical" are
    with respect to being modeled by the logical, that
    you have some sort of "un-logical" that are the
    result of that the axioms are un-logical, with regards
    to that your "meta-theory" is logical and your model
    under consideration, its _logical_ objects, are actually
    "un-logical" objects with respect to your, "meta-theory".


    That's a problem overall with "meta-theory", here the
    goal is to arrive at a theory that is its own meta-theory,
    that's also all one language, with regards to otherwise

    Yes that is exactly correct.

    *True(L,x) returns TRUE when x is True otherwise returns FALSE*

    So what does True(L, S) return when S is defined as S := ~True(L,S)


    True(L, S) returns Prolog's negation-as-failure false,
    which only means not true.

    So True(L, S) Isn't a Predicate, BY DEFINITION, as predicates only
    return the values True and False.

    So, you are AGREEING with Tarski, and Lying in your claim he is wrong.


    True(L, ~S) also returns Prolog's negation-as-failure false,
    which only means not true.
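    A hypothetical sketch of that claim (the evaluator and its
    encoding are assumptions for illustration, not Prolog itself): a
    three-valued evaluator in the spirit of negation as failure flags
    the self-referential query as a cycle, negation does not repair a
    cycle, and the top-level answer succeeds only for what is
    established as true. Both S and ~S then come back false, meaning
    only "not shown true".

```python
TRUE, FALSE, CYCLE = "true", "false", "cycle"

# Goals are atoms or "~"-prefixed atoms; defs maps an atom to its body.
def evaluate(goal, defs, pending=frozenset()):
    if goal.startswith("~"):
        inner = evaluate(goal[1:], defs, pending)
        if inner == CYCLE:
            return CYCLE              # negation cannot repair a cycle
        return TRUE if inner == FALSE else FALSE
    if goal in pending:
        return CYCLE                  # self-reference detected
    if goal in defs:
        return evaluate(defs[goal], defs, pending | {goal})
    return FALSE                      # no definition: the proof fails

def True_pred(goal, defs):
    """Negation-as-failure answer: True only if established as true."""
    return evaluate(goal, defs, frozenset()) == TRUE

defs = {"S": "~S"}              # S := ~True(L, S)
print(True_pred("S", defs))     # False, meaning only "not shown true"
print(True_pred("~S", defs))    # False as well
```

    The ordinary, non-pathological case still behaves classically:
    with {"P": "~Q"} and Q undefined, True_pred("P", ...) succeeds.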


    --- Synchronet 3.20a-Linux NewsLink 1.114