• Hypothetical possibilities V2

    From olcott@polcott333@gmail.com to comp.theory,comp.ai.philosophy on Mon Jul 22 11:08:31 2024
    From Newsgroup: comp.ai.philosophy

    void DDD()
    {
      HHH(DDD);
      return;
    }

    int main()
    {
      HHH(DDD);
    }

    Of the two hypothetical possible ways that HHH can be encoded:
    (a) HHH(DDD) is encoded to abort its simulation.
    (b) HHH(DDD) is encoded to never abort its simulation.

    We can know that (b) is wrong because this fails to meet the design requirement that HHH must itself halt.

    We also know that any simulation that must be aborted to prevent the
    infinite execution of the simulator is necessarily a non-halting input.
    --
    Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer
    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Fred. Zwarts@F.Zwarts@HetNet.nl to comp.theory,comp.ai.philosophy on Mon Jul 22 19:59:48 2024
    From Newsgroup: comp.ai.philosophy

    On 22 Jul 2024 at 18:08, olcott wrote:
    void DDD()
    {
      HHH(DDD);
      return;
    }

    int main()
    {
      HHH(DDD);
    }

    Of the two hypothetical possible ways that HHH can be encoded:
    (a) HHH(DDD) is encoded to abort its simulation.
    (b) HHH(DDD) is encoded to never abort its simulation.

    We can know that (b) is wrong because this fails to meet the design requirement that HHH must itself halt.

    We also know that any simulation that must be aborted to prevent the infinite execution of the simulator is necessarily a non-halting input.



    We also know that (a) is wrong, because HHH, when simulated by itself,
    runs one cycle behind the HHH that simulates it. When the simulating HHH
    aborts, the simulated HHH has one cycle to go, after which it would halt
    on its own. Therefore, the simulation is incomplete and incorrect.
    It is clear that HHH cannot possibly simulate itself correctly.
    The conclusion is that both hypothetical ways to encode HHH result in an incorrect HHH.

    Again, olcott introduces DDD in order to hide the fact that the problem
    is in HHH itself.

    int main()
    {
      return HHH(main);
    }

    has the same problem, proving that the problem is not in DDD, but in HHH.

  • From Richard Damon@richard@damon-family.org to comp.theory,comp.ai.philosophy on Mon Jul 22 20:01:04 2024
    From Newsgroup: comp.ai.philosophy

    On 7/22/24 12:08 PM, olcott wrote:
    void DDD()
    {
      HHH(DDD);
      return;
    }

    int main()
    {
      HHH(DDD);
    }

    Of the two hypothetical possible ways that HHH can be encoded:
    (a) HHH(DDD) is encoded to abort its simulation.
    (b) HHH(DDD) is encoded to never abort its simulation.

    We can know that (b) is wrong because this fails to meet the design requirement that HHH must itself halt.

    We also know that any simulation that must be aborted to prevent the infinite execution of the simulator is necessarily a non-halting input.



    Remember, every HHH creates a DIFFERENT "PROGRAM" DDD to decide on, and
    thus you don't have a case where you can apply the law of the excluded
    middle.

    In case (a), HHH(DDD) aborts and returns, so control returns to DDD and
    DDD halts; if HHH returned 0, it was wrong.

    In case (b) HHH(DDD) never aborts, and as you admit, fails to be a decider.

    We can show that HHH was wrong in case (a) by giving that same DDD
    (which still calls that HHH) to another emulator (like the (b) case),
    placed in an unused location of memory; it WILL emulate that input to
    its final state, thus proving you wrong.

    This is what your HHH1 shows.