• Re: It is stunning when you see how badly Windows operates: indexing

    From Daniel70@daniel47@nomail.afraid.org to comp.sys.mac.advocacy,alt.comp.os.windows-10,alt.comp.os.windows-11 on Wed Oct 8 20:12:07 2025
    From Newsgroup: comp.sys.mac.advocacy

    On 7/10/2025 1:59 am, Brian Gregory wrote:
    On 06/10/2025 10:54, Daniel70 wrote:
    On 6/10/2025 7:56 pm, MikeS wrote:
    On 06/10/2025 09:23, Daniel70 wrote:
    On 6/10/2025 3:16 pm, Alan wrote:
    On 2025-10-05 19:46, Lars Poulsen wrote:
    Alan <nuh-uh@nope.com> wrote:
    Alan>>> If I do a similar thing on macOS, it's done almost before the files
    Alan>>> have finished the move

    On 2025-10-01 13:19, Carlos E.R. wrote:
    Carlos>> Not if it does content indexing.

    Alan> If a file has already BEEN indexed for its content...
    Alan> ...why would simply moving it to a new location require it
    to be re-read?
    Alan> Did its content magically change just because you moved it?
    Alan> No.

    This is the difference between indexing on demand versus CONTINUOUSLY MAINTAINING an index.

    I'm sorry, but you're wrong.

    You index a file so that you can find it based on metadata and
    content.

    When a search happens, the INDEX gets searched and the location of the
    file is returned.

    The location of the file is ONE single piece of information in the
    indexing system about the file.

    Move the file and all you need to do is update that one piece of
    information.

    Nothing else has changed: not its name, not its date modified or
    last opened, not its content.

    So there is no need to re-read the file.

    How is your system supposed to know that the contents of the "new"
    "moved" file is *EXACTLY* the same as the original file (never had
    an individual Byte/Bit of RAM die on you??) .... unless it still HAS
    the "original" file to compare the new file too??

    As it happens I have never *noticed* an individual Byte/Bit of RAM
    die on me. If it did that would have zero effect on the index data
    needed to search for the file.

    Sure .... but it would mean that the 'moved' file *IS* different to
    the original File.

    You're being silly.

    It's not the job of software to continuously expend lots of effort just
    so that it can carry on as if nothing has happened if and when at some
    point in the future the hardware starts to fail.

    Correct. So the OP COULD have checked/compared the ORIGINAL file with
    the RELOCATED one to ensure they are IDENTICAL ..... but, somehow, I don't
    think the SOFTWARE/OS does that check.
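    For what it's worth, such a check is easy to do by hand in userland.
    A minimal Python sketch (hypothetical paths; both copies must still exist):

        import filecmp

        # shallow=False forces a byte-for-byte comparison rather than
        # just comparing os.stat() metadata.
        same = filecmp.cmp("original/report.txt", "moved/report.txt", shallow=False)
        print("identical" if same else "DIFFERENT")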
    --
    Daniel70
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Daniel70@daniel47@nomail.afraid.org to comp.sys.mac.advocacy,alt.comp.os.windows-10,alt.comp.os.windows-11 on Wed Oct 8 20:18:17 2025
    From Newsgroup: comp.sys.mac.advocacy

    On 7/10/2025 2:59 am, Alan wrote:
    On 2025-10-06 02:54, Daniel70 wrote:
    On 6/10/2025 7:56 pm, MikeS wrote:
    On 06/10/2025 09:23, Daniel70 wrote:
    On 6/10/2025 3:16 pm, Alan wrote:
    On 2025-10-05 19:46, Lars Poulsen wrote:
    Alan <nuh-uh@nope.com> wrote:
    Alan>>> If I do a similar thing on macOS, it's done almost before the files
    Alan>>> have finished the move

    On 2025-10-01 13:19, Carlos E.R. wrote:
    Carlos>> Not if it does content indexing.

    Alan> If a file has already BEEN indexed for its content...
    Alan> ...why would simply moving it to a new location require it
    to be re-read?
    Alan> Did its content magically change just because you moved it?
    Alan> No.

    This is the difference between indexing on demand versus CONTINUOUSLY MAINTAINING an index.

    I'm sorry, but you're wrong.

    You index a file so that you can find it based on metadata and
    content.

    When a search happens, the INDEX gets searched and the location of the
    file is returned.

    The location of the file is ONE single piece of information in the
    indexing system about the file.

    Move the file and all you need to do is update that one piece of
    information.

    Nothing else has changed: not its name, not its date modified or
    last opened, not its content.

    So there is no need to re-read the file.

    How is your system supposed to know that the contents of the "new"
    "moved" file is *EXACTLY* the same as the original file (never had
    an individual Byte/Bit of RAM die on you??) .... unless it still HAS
    the "original" file to compare the new file too??

    As it happens I have never *noticed* an individual Byte/Bit of RAM
    die on me. If it did that would have zero effect on the index data
    needed to search for the file.

    Sure .... but it would mean that the 'moved' file *IS* different to
    the original File.

    And it would mean the same for a file you DIDN'T move.

    Sorry! WHAT?? A file THAT I'VE DONE NOTHING TO might change!

    Really??

    WHEN??

    WHY??

    At some indeterminate time in the future??

    SURE!!
    --
    Daniel70
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Daniel70@daniel47@nomail.afraid.org to comp.sys.mac.advocacy,alt.comp.os.windows-10,alt.comp.os.windows-11 on Wed Oct 8 20:23:10 2025
    From Newsgroup: comp.sys.mac.advocacy

    On 7/10/2025 3:33 am, Alan wrote:
    On 2025-10-06 09:28, MikeS wrote:
    On 06/10/2025 10:54, Daniel70 wrote:
    On 6/10/2025 7:56 pm, MikeS wrote:
    On 06/10/2025 09:23, Daniel70 wrote:
    On 6/10/2025 3:16 pm, Alan wrote:
    On 2025-10-05 19:46, Lars Poulsen wrote:
    Alan <nuh-uh@nope.com> wrote:
    Alan>>> If I do a similar thing on macOS, it's done almost before the files
    Alan>>> have finished the move

    On 2025-10-01 13:19, Carlos E.R. wrote:
    Carlos>> Not if it does content indexing.

    Alan> If a file has already BEEN indexed for its content...
    Alan> ...why would simply moving it to a new location require it to be re-read?
    Alan> Did its content magically change just because you moved it?
    Alan> No.

    This is the difference between indexing on demand versus
    CONTINUOUSLY MAINTAINING an index.

    I'm sorry, but you're wrong.

    You index a file so that you can find it based on metadata
    and content.

    When a search happens, the INDEX gets searched and the location
    of the file is returned.

    The location of the file is ONE single piece of information
    in the indexing system about the file.

    Move the file and all you need to do is update that one
    piece of information.

    Nothing else has changed: not its name, not its date
    modified or last opened, not its content.

    So there is no need to re-read the file.

    How is your system supposed to know that the contents of the
    "new" "moved" file is *EXACTLY* the same as the original file
    (never had an individual Byte/Bit of RAM die on you??) ....
    unless it still HAS the "original" file to compare the new
    file too??

    As it happens I have never *noticed* an individual Byte/Bit of
    RAM die on me. If it did that would have zero effect on the
    index data needed to search for the file.

    Sure .... but it would mean that the 'moved' file *IS* different
    to the original File.

    No .... it would mean that the 'moved' file *IS* corrupted.

    AH!! So MikeS doesn't consider that a moved but corrupted file is different
    to the original file. Good to know!!

    File indexing systems CANNOT record and search for data
    corruption.

    THANK YOU!

    For what??
    --
    Daniel70
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Daniel70@daniel47@nomail.afraid.org to comp.sys.mac.advocacy,alt.comp.os.windows-10,alt.comp.os.windows-11 on Wed Oct 8 20:30:23 2025
    From Newsgroup: comp.sys.mac.advocacy

    On 6/10/2025 11:22 pm, Carlos E.R. wrote:
    On 2025-10-06 10:23, Daniel70 wrote:
    On 6/10/2025 3:16 pm, Alan wrote:
    On 2025-10-05 19:46, Lars Poulsen wrote:
    Alan <nuh-uh@nope.com> wrote:
    Alan>>> If I do a similar thing on macOS, it's done almost before the files
    Alan>>> have finished the move

    On 2025-10-01 13:19, Carlos E.R. wrote:
    Carlos>> Not if it does content indexing.

    Alan> If a file has already BEEN indexed for its content...
    Alan> ...why would simply moving it to a new location require it to
    be re-read?
    Alan> Did its content magically change just because you moved it?
    Alan> No.

    This is the difference between indexing on demand versus CONTINUOUSLY
    MAINTAINING an index.

    I'm sorry, but you're wrong.

    You index a file so that you can find it based on metadata and content.

    When a search happens, the INDEX gets searched and the location of the
    file is returned.

    The location of the file is ONE single piece of information in the
    indexing system about the file.

    Move the file and all you need to do is update that one piece of
    information.

    Nothing else has changed: not its name, not its date modified or
    last opened, not its content.

    So there is no need to re-read the file.

    How is your system supposed to know that the contents of the "new"
    "moved" file is *EXACTLY* the same as the original file (never had an
    individual Byte/Bit of RAM die on you??) .... unless it still HAS the
    "original" file to compare the new file too??

    Two possibilities.

      a) There is a service, or daemon, that is tracking all move
    operations (it must be connected to the system libraries that do the
    moves). Ie, the kernel is designed to track moves and inform some higher level layer about this. Thus the indexer is told that a file moved
    location.

      b) The indexer does some fast checking of the file (name, attributes, etc) and if it is the same, it assumes the file has moved. Maybe verification is delayed.

    So are you, Carlos, suggesting that, at the time of the move/copy, the
    OS does not check that what ends up at the new/final location is the
    same as what was in the old/original location??

    If that *IS* the case, I hope no-one EVER does a Defrag!! EVER!!
    --
    Daniel70
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Daniel70@daniel47@nomail.afraid.org to comp.sys.mac.advocacy,alt.comp.os.windows-10,alt.comp.os.windows-11 on Wed Oct 8 20:35:43 2025
    From Newsgroup: comp.sys.mac.advocacy

    On 7/10/2025 2:58 am, Alan wrote:
    On 2025-10-06 01:23, Daniel70 wrote:
    On 6/10/2025 3:16 pm, Alan wrote:
    On 2025-10-05 19:46, Lars Poulsen wrote:
    Alan <nuh-uh@nope.com> wrote:
    Alan>>> If I do a similar thing on macOS, it's done almost before the files
    Alan>>> have finished the move

    On 2025-10-01 13:19, Carlos E.R. wrote:
    Carlos>> Not if it does content indexing.

    Alan> If a file has already BEEN indexed for its content...
    Alan> ...why would simply moving it to a new location require it to
    be re-read?
    Alan> Did its content magically change just because you moved it?
    Alan> No.

    This is the difference between indexing on demand versus CONTINUOUSLY
    MAINTAINING an index.

    I'm sorry, but you're wrong.

    You index a file so that you can find it based on metadata and content.

    When a search happens, the INDEX gets searched and the location of the
    file is returned.

    The location of the file is ONE single piece of information in the
    indexing system about the file.

    Move the file and all you need to do is update that one piece of
    information.

    Nothing else has changed: not its name, not its date modified or
    last opened, not its content.

    So there is no need to re-read the file.

    How is your system supposed to know that the contents of the "new"
    "moved" file is *EXACTLY* the same as the original file (never had an
    individual Byte/Bit of RAM die on you??) .... unless it still HAS the
    "original" file to compare the new file too??

    Oh dear GOD!

    You cannot be this ignorant.

    When you "move" a file from one directory to another on a partition, the actual data of the file IS NOT TOUCHED.

    The individual bits of the file stay in exactly the same place.

    All that's changed is the file system's record of which directory the
    file should be displayed as being in.

    Do you know what a "hard link" is?

    Alan, would you be satisfied if I had said/asked what happens when a
    File is 'COPIED' rather than 'MOVED'??

    Would I end up with TWO copies of the File on my HD??

    If I then deleted the ORIGINAL file, how do I *KNOW* that the remaining
    file is the same as the ORIGINAL file??
    --
    Daniel70
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Carlos E.R.@robin_listas@es.invalid to comp.sys.mac.advocacy,alt.comp.os.windows-10,alt.comp.os.windows-11 on Wed Oct 8 11:48:19 2025
    From Newsgroup: comp.sys.mac.advocacy

    On 2025-10-08 11:30, Daniel70 wrote:
    On 6/10/2025 11:22 pm, Carlos E.R. wrote:
    On 2025-10-06 10:23, Daniel70 wrote:
    On 6/10/2025 3:16 pm, Alan wrote:
    On 2025-10-05 19:46, Lars Poulsen wrote:
    Alan <nuh-uh@nope.com> wrote:
    Alan>>> If I do a similar thing on macOS, it's done almost before the files
    Alan>>> have finished the move

    On 2025-10-01 13:19, Carlos E.R. wrote:
    Carlos>> Not if it does content indexing.

    Alan> If a file has already BEEN indexed for its content...
    Alan> ...why would simply moving it to a new location require it to be re-read?
    Alan> Did its content magically change just because you moved it?
    Alan> No.

    This is the difference between indexing on demand versus CONTINUOUSLY MAINTAINING an index.

    I'm sorry, but you're wrong.

    You index a file so that you can find it based on metadata and content.
    When a search happens, the INDEX gets searched and the location of the
    file is returned.

    The location of the file is ONE single piece of information in the
    indexing system about the file.

    Move the file and all you need to do is update that one piece of
    information.

    Nothing else has changed: not its name, not its date modified or
    last opened, not its content.

    So there is no need to re-read the file.

    How is your system supposed to know that the contents of the "new"
    "moved" file is *EXACTLY* the same as the original file (never had an
    individual Byte/Bit of RAM die on you??) .... unless it still HAS the
    "original" file to compare the new file too??

    Two possibilities.

       a) There is a service, or daemon, that is tracking all move
    operations (it must be connected to the system libraries that do the
    moves). Ie, the kernel is designed to track moves and inform some
    higher level layer about this. Thus the indexer is told that a file
    moved location.

       b) The indexer does some fast checking of the file (name,
    attributes, etc) and if it is the same, it assumes the file has moved.
    Maybe verification is delayed.

    So are you, Carlos, suggesting that, at the time of the move/copy, the
    OS does not check that what ends up at the new/final location is the
    same as what was in the old/original location??

    If it is a move via a hardlink operation, then, as the data sectors are
    not touched (only the metadata is), the data is guaranteed to be the same.

    If it is a copy-and-delete operation, the system does not do an automated
    check that what is actually written is the same data.

    However, the userland program doing the operation can do a verify before delete.
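    To make the two cases concrete, here is a minimal userland sketch in
    Python (hypothetical paths and helper names; no OS does this
    automatically on a plain move):

        import hashlib, os, shutil

        def sha256(path, bufsize=1 << 20):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                while chunk := f.read(bufsize):
                    h.update(chunk)
            return h.hexdigest()

        def careful_move(src, dst):
            # Same filesystem: a rename only rewrites directory metadata;
            # the data sectors are never touched.
            if os.stat(src).st_dev == os.stat(os.path.dirname(dst) or ".").st_dev:
                os.rename(src, dst)
                return
            # Different filesystem: copy, verify, and only then delete the source.
            shutil.copy2(src, dst)
            if sha256(src) != sha256(dst):
                os.remove(dst)
                raise OSError("verification failed; source kept")
            os.remove(src)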


    If that *IS* the case, I hope no-one EVER does a Defrag!! EVER!!

    Not related, either. The application doing the defrag can do a verify.


    In Linux, there are filesystems that store a checksum of files in the metadata, so that integrity can be verified. It is not the default.
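    A userland approximation of that idea, for illustration only (a
    Linux-only sketch using an extended attribute; the attribute name is
    arbitrary, and filesystems like Btrfs keep per-block checksums in
    their own metadata, not like this):

        import hashlib, os

        def seal(path):
            # Store a whole-file checksum next to the data, in an xattr.
            digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
            os.setxattr(path, "user.sha256", digest.encode())

        def verify(path):
            stored = os.getxattr(path, "user.sha256").decode()
            return stored == hashlib.sha256(open(path, "rb").read()).hexdigest()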
    --
    Cheers, Carlos.
    ES🇪🇸, EU🇪🇺;
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From J. P. Gilliver@G6JPG@255soft.uk to comp.sys.mac.advocacy,alt.comp.os.windows-10,alt.comp.os.windows-11 on Wed Oct 8 11:16:17 2025
    From Newsgroup: comp.sys.mac.advocacy

    On 2025/10/8 10:18:17, Daniel70 wrote:
    []
    Sorry! WHAT?? A file THAT I'VE DONE NOTHING TO might change!

    Really??
    Yes; it's called bit-rot, among other things.
    WHEN??
    Depends on many factors; generally thought of as being possibly 5-10
    years for hard drives and SSDs, though the latter haven't _really_ been
    around for long enough for other than accelerated testing. Optical discs depend greatly on the quality of the disc, and storage conditions; my
    experience (mostly with budget ones) is not hopeful for more than a few
    (single digit number of) years.
    In practice, these are the times after which the error correction fails; individual bits probably rot sooner, but are correctable - until they
    aren't.

    WHY??
    Hard (and floppy) discs - magnetised areas lose or gain their difference from adjacent ones. SSDs - charge leaks away (or in). Optical discs - the dye
    fades, and similar effects (especially if not kept away from light).

    At some indeterminate time in the future??
    Yes.

    SURE!!
    Indeed!
    About the only reliable medium (for computer data, anyway - printed
    paper isn't bad, either) is punched plastic tape, but the information
    density is horrendous. (And not proof against e.g. fire.)
    --
    J. P. Gilliver. UMRA: 1960/<1985 MB++G()ALIS-Ch++(p)Ar++T+H+Sh0!:`)DNAf
  • From Daniel70@daniel47@nomail.afraid.org to comp.sys.mac.advocacy,alt.comp.os.windows-10,alt.comp.os.windows-11 on Wed Oct 8 21:21:07 2025
    From Newsgroup: comp.sys.mac.advocacy

    On 7/10/2025 10:55 am, Alan wrote:
    On 2025-10-06 14:25, Carlos E.R. wrote:
    On 2025-10-06 23:12, Alan wrote:
    On 2025-10-06 13:47, Carlos E.R. wrote:
    On 2025-10-06 18:12, Alan wrote:
    On 2025-10-06 05:17, Carlos E.R. wrote:
    On 2025-10-06 06:16, Alan wrote:
    Move the file and all you need to do is update that one piece of information.

    For that you need a system service that tracks moves and tells the indexer. It doesn't happen automatically.

    And why would that be at all difficult?

    You have a journaled file system, so all you need is a process that checks for events in the journal that actually DO mean that there
    is a new file that's been created or that a file has been changed.

    Those events require that a file get re-indexed. Moving a file
    within the same volume does NOT.

    That process on macOS is called "fseventsd"

    Look it up.

    Not going to look it up, I don't do macs. I'll accept your word.

    Does Windows do it?

    Apparently not.

    Which is why I wrote the subject I did:

    "It is stunning when you see how badly Windows operates: indexing"

    Well, I know that in the past I have moved huge amounts of content in
    Linux and seen several content indexers re-index it. I would have to
    test again, noting whether it was the same partition.
    A move to another volume is never really a move, now is it?

    The OS can make the end result appear to be the SAME as a move, but if
    you want the files on a different volume, then the data in those files
    will have to be WRITTEN to that new volume and of course the indexing
    for that volume will have no entries for those files...

    ...because from the context of that new volume…

    ...THEY WILL BE NEW FILES.

    So when DO those files that YOU have WRITTEN to that new volume actually
    get listed in the content index of that new volume??

    And when do they get deleted from the original volume .... and
    de-Indexed, of course??
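    For reference, that is roughly what userland tools do anyway. A sketch
    of the logic (close to what Python's shutil.move documents; the paths
    are hypothetical):

        import os, shutil

        src = "/mnt/old_volume/big_video.mkv"   # hypothetical paths
        dst = "/mnt/new_volume/big_video.mkv"

        if os.stat(src).st_dev == os.stat(os.path.dirname(dst)).st_dev:
            os.rename(src, dst)     # same volume: pure metadata update
        else:
            shutil.copy2(src, dst)  # different volume: the data is rewritten...
            os.unlink(src)          # ...and only then is the source deleted.
            # From the destination volume's point of view this is a brand-new
            # file, so a content indexer there has to read it; the source
            # volume's indexer sees a delete and can drop its entry.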
    --
    Daniel70
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From J. P. Gilliver@G6JPG@255soft.uk to comp.sys.mac.advocacy,alt.comp.os.windows-10,alt.comp.os.windows-11 on Wed Oct 8 11:25:47 2025
    From Newsgroup: comp.sys.mac.advocacy

    On 2025/10/8 10:48:19, Carlos E.R. wrote:
    On 2025-10-08 11:30, Daniel70 wrote:
    []
    So are you, Carlos, suggesting that, at the time of the move/copy, the
    OS does not check that what ends up at the new/final location is the
    same as what was in the old/original location??

    If it is a move via a hardlink operation, then, as the data sectors are
    not touched (only the metadata is), the data is guaranteed to be the same.
    (Hmm, arguably that's not a move, though it looks like one to the user -
    a "move" to another place on the same partition usually only involves
    changing pointers.)
    If it is a copy-and-delete operation, the system does not do an automated check that what is actually written is the same data.
    Indeed. The _checksums_ (or more complex equivalents) may well be
    recalculated (possibly at the hardware level).
    However, the userland program doing the operation can do a verify before delete.
    Indeed. A lot of copy/move commands have a verify _option_; I don't know
    if any have it on by default.

    If that *IS* the case, I hope no-one EVER does a Defrag!! EVER!!

    Not related, either. The application doing the defrag can do a verify.

    Well, it _is_ related: a defrag _is_ a copy-and-delete (usually many of
    them). How many defrags do verify by default, I do not know.
    In Linux, there are filesystems that store a checksum of files in the metadata, so that integrity can be verified. It is not the default.

    A lot of (all modern, even floppy?) systems - hard disc drives, probably
    SSDs - do that at a low level anyway, as their error-correction mechanism.
    --
    J. P. Gilliver. UMRA: 1960/<1985 MB++G()ALIS-Ch++(p)Ar++T+H+Sh0!:`)DNAf
  • From J. P. Gilliver@G6JPG@255soft.uk to comp.sys.mac.advocacy,alt.comp.os.windows-10,alt.comp.os.windows-11 on Wed Oct 8 11:38:13 2025
    From Newsgroup: comp.sys.mac.advocacy

    On 2025/10/8 10:35:43, Daniel70 wrote:
    []
    Alan, would you be satisfied if I had said/asked what happens when a
    File is 'COPIED' rather than 'MOVED'??

    Would I end up with TWO copies of the File on my HD??
    Yes; for a COPY, you will definitely have to have two copies - because
    if you subsequently edited one of them, the first one would have to
    remain unchanged. The only alternative to that would be for the OS to
    keep a record of all copy operations you do, and then if you alter one
    of them, copy the original first anyway so that it remains unchanged; I
    know of no OS that does this (the tracking requirements would be
    horrendous!).
    You can easily check that: look at the reported space used/remaining
    before and after you do a copy. It will go up/down. (And you will
    eventually run out of space and it won't let you copy.)
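    That check is scriptable too; a tiny Python sketch (hypothetical mount
    point):

        import shutil

        before = shutil.disk_usage("/home").free
        # ... perform the copy here ...
        after = shutil.disk_usage("/home").free
        # For a true copy, free space must drop by roughly the file's size,
        # which proves a second, independent copy of the data now exists.
        print("free space changed by", before - after, "bytes")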

    If I then deleted the ORIGINAL file, how do I *KNOW* that the remaining
    file is the same as the ORIGINAL file??
    You don't, unless you invoked the verify option (or it is on by default)
    when you did the copy.
    --
    J. P. Gilliver. UMRA: 1960/<1985 MB++G()ALIS-Ch++(p)Ar++T+H+Sh0!:`)DNAf
  • From Daniel70@daniel47@nomail.afraid.org to comp.sys.mac.advocacy,alt.comp.os.windows-10,alt.comp.os.windows-11 on Wed Oct 8 22:01:22 2025
    From Newsgroup: comp.sys.mac.advocacy

    On 7/10/2025 12:47 pm, Paul wrote:
    On Mon, 10/6/2025 8:27 PM, Alan wrote:


    Paul, you're the idiot that claimed moving a file resulted in a "deletefile" in the USN Journal...

    ...and you were spectacularly wrong.

    Here is some CoPilot feedback.

    [Picture]

    https://i.postimg.cc/J4qxv3Wn/Co-Pilot-Comment-On-USN.gif

    Paul, in what you quote there from Co-pilot, it seems you end up with a
    NEW file with a totally different File Name.

    Surely this can't be so .... or has Copilot just not completed the process??

    [Picture]

    https://i.postimg.cc/6pV8Yzs0/USN-move-testcase.gif

    Paul

    --
    Daniel70
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Paul@nospam@needed.invalid to comp.sys.mac.advocacy,alt.comp.os.windows-10,alt.comp.os.windows-11 on Wed Oct 8 10:45:08 2025
    From Newsgroup: comp.sys.mac.advocacy

    On Wed, 10/8/2025 7:01 AM, Daniel70 wrote:


    Surely this can't be so .... or has Copilot just not completed the process??

    You can make up cases and ask the AI for a result.

    The idea of making up my stupid cases is just to get it
    to dump the names of a representative set of possibilities.

    Moving a file can:

    1) Require deleting it with a change journal event,
    then creating it with a second change journal event.
    The AI tells us there is no single-event Move type for the job.
    There is a tendency in computer system designs to use
    atomic events rather than "do fancy stuff". There is a lot of
    fancy stuff we could do, which in the end we don't do.

    2) Moves can be orchestrated by the Name Change set of change journal events.

    Now, you have to prepare your set of file names, and then
    figure out which of those cases apply. Take the actual move
    command that demonstrated a problem and get the AI description
    of how that would be coded.

    In the journal dumps I've tried, at some point the design must
    have changed. I didn't make a fetish out of this. For example, at
    one time you could see a write-file event dumped for
    every 256KB or so of data. I don't think that scheme was
    present in the latest dump I tried.
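    As an illustration of why the distinction matters to an indexer, here
    is a toy model (Python, not the Windows API; the reason names follow
    the documented USN flags, everything else is made up) of how rename
    events and data events would be handled differently:

        # index maps a file reference number to what the indexer knows about it.
        index = {}

        def handle_journal_record(file_ref, reason, path):
            if reason == "USN_REASON_RENAME_NEW_NAME":
                # A move/rename: only the stored path needs updating.
                if file_ref in index:
                    index[file_ref]["path"] = path
            elif reason in ("USN_REASON_FILE_CREATE", "USN_REASON_DATA_OVERWRITE"):
                # New or changed data: the content really must be (re)read.
                with open(path, errors="ignore") as f:
                    index[file_ref] = {"path": path, "words": set(f.read().split())}
            elif reason == "USN_REASON_FILE_DELETE":
                index.pop(file_ref, None)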

    ******************

    CoPilot Question

    In the NTFS change journal, has the design of the journal events been a
    constant with time or did the change journal design or representation change over the years ?

    CoPilot Answer

    The NTFS Change Journal - also known as the USN Journal - has evolved over time,
    though its core purpose has remained consistent: to efficiently track
    changes to files and directories on an NTFS volume.

    ### 🕰️ Evolution of the Change Journal Design

    Here’s how its design and representation have changed over the years:

    #### 1. **Initial Introduction (Windows 2000 / NTFS 3.0)**
    - The Change Journal was introduced with NTFS 3.0.
    - It stored records in a metafile called `$Extend\$UsnJrnl`.
    - Each record included:
      - A 64-bit Update Sequence Number (USN)
      - File reference number
      - Reason flags (bitmask indicating the type of change)
      - Timestamp
      - File name

    #### 2. **USN_RECORD Structure Versions**
    - Over time, Microsoft introduced new versions of the `USN_RECORD` structure:
    - **V2**: Used in early implementations (Windows 2000, XP, etc.)
    - **V3 and V4**: Introduced in later versions of Windows to support new features and metadata
    - These versions added fields like extended reason flags, source info, and more precise timestamps.

    #### 3. **Bit Flags Expansion**
    - The `Reason` field uses bit flags like `USN_REASON_DATA_OVERWRITE`, `USN_REASON_FILE_CREATE`, etc.
    - New flags were added over time to reflect more granular changes (e.g., file renames, ACL changes).

    #### 4. **Performance and Scalability Enhancements**
    - As file systems grew to support millions of files, the journal’s internal mechanisms were optimized for speed and reduced overhead.
    - Journals became more configurable (e.g., size limits, retention policies).

    #### 5. **Integration with New Features**
    - Windows features like **File History**, **Windows Search**, and **Backup utilities** began leveraging the Change Journal.
    - This led to more robust APIs and better documentation for developers.
    ---
    ### 🔍 Summary

    While the fundamental concept of logging file system changes has stayed the same,
    the **structure**, **metadata**, and **integration points** of the NTFS Change Journal
    have evolved to meet the demands of modern systems. It’s a great example of a foundational technology quietly adapting behind the scenes.

    Would you like to dive into the specifics of the USN_RECORD structure versions or how backup tools use the journal today?
    **********************************

    Paul
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Alan@nuh-uh@nope.com to comp.sys.mac.advocacy,alt.comp.os.windows-10,alt.comp.os.windows-11 on Wed Oct 8 08:18:01 2025
    From Newsgroup: comp.sys.mac.advocacy

    On 2025-10-08 02:12, Daniel70 wrote:
    On 7/10/2025 1:59 am, Brian Gregory wrote:
    On 06/10/2025 10:54, Daniel70 wrote:
    On 6/10/2025 7:56 pm, MikeS wrote:
    On 06/10/2025 09:23, Daniel70 wrote:
    On 6/10/2025 3:16 pm, Alan wrote:
    On 2025-10-05 19:46, Lars Poulsen wrote:
    Alan <nuh-uh@nope.com> wrote:
    Alan>>> If I do a similar thing on macOS, it's done almost before the files
    Alan>>> have finished the move

    On 2025-10-01 13:19, Carlos E.R. wrote:
    Carlos>> Not if it does content indexing.

    Alan> If a file has already BEEN indexed for its content...
    Alan> ...why would simply moving it to a new location require it to be re-read?
    Alan> Did its content magically change just because you moved it?
    Alan> No.

    This is the difference between indexing on demand versus CONTINUOUSLY
    MAINTAINING an index.

    I'm sorry, but you're wrong.

    You index a file so that you can find it based on metadata and
    content.

    When a search happens, the INDEX gets searched and the location of the file is returned.

    The location of the file is ONE single piece of information in the indexing system about the file.

    Move the file and all you need to do is update that one piece of
    information.

    Nothing else has changed: not its name, not its date modified or last opened, not its content.

    So there is no need to re-read the file.

    How is your system supposed to know that the contents of the "new"
    "moved" file is *EXACTLY* the same as the original file (never had
    an individual Byte/Bit of RAM die on you??) .... unless it still
    HAS the "original" file to compare the new file too??

    As it happens I have never *noticed* an individual Byte/Bit of RAM
    die on me. If it did that would have zero effect on the index data
    needed to search for the file.

    Sure .... but it would mean that the 'moved' file *IS* different to
    the original File.

    You're being silly.

    It's not the job of software to continuously expend lots of effort
    just so that it can carry on as if nothing has happened if and when at
    some point in the future the hardware starts to fail.

    Correct. So the OP COULD have checked/compared the ORIGINAL file with
    the RELOCATED one to ensure they are IDENTICAL ..... but, somehow, I don't
    think the SOFTWARE/OS does that check.

    I will say this over and over until the ignorant get the message:

    When you "move" a file from one directory to another directory on the
    same volume there is no second file ("ORIGINAL" vs "RELOCATED")

    The file's data NEVER MOVES.

    A "move" as described is literally NO DIFFERENT than just renaming the
    file, so there is no need to check that something has been changed.
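    That is easy to demonstrate: on a POSIX filesystem the inode number
    (the handle to the actual data) is unchanged by such a move. A minimal
    sketch:

        import os

        os.makedirs("dir_a", exist_ok=True)
        os.makedirs("dir_b", exist_ok=True)
        with open("dir_a/note.txt", "w") as f:
            f.write("hello")

        before = os.stat("dir_a/note.txt").st_ino
        os.rename("dir_a/note.txt", "dir_b/note.txt")   # the "move"
        after = os.stat("dir_b/note.txt").st_ino

        # True: same underlying file object; only the directory entry moved,
        # no data blocks were rewritten.
        print(before == after)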
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Alan@nuh-uh@nope.com to comp.sys.mac.advocacy,alt.comp.os.windows-10,alt.comp.os.windows-11 on Wed Oct 8 08:20:11 2025
    From Newsgroup: comp.sys.mac.advocacy

    On 2025-10-08 02:23, Daniel70 wrote:
    On 7/10/2025 3:33 am, Alan wrote:
    On 2025-10-06 09:28, MikeS wrote:
    On 06/10/2025 10:54, Daniel70 wrote:
    On 6/10/2025 7:56 pm, MikeS wrote:
    On 06/10/2025 09:23, Daniel70 wrote:
    On 6/10/2025 3:16 pm, Alan wrote:
    On 2025-10-05 19:46, Lars Poulsen wrote:
    Alan <nuh-uh@nope.com> wrote:
    Alan>>> If I do a similar thing on macOS, it's done almost before the files
    Alan>>> have finished the move

    On 2025-10-01 13:19, Carlos E.R. wrote:
    Carlos>> Not if it does content indexing.

    Alan> If a file has already BEEN indexed for its content...
    Alan> ...why would simply moving it to a new location require it to be re-read?
    Alan> Did its content magically change just because you moved it?
    Alan> No.

    This is the difference between indexing on demand versus
     CONTINUOUSLY MAINTAINING an index.

    I'm sorry, but you're wrong.

    You index a file so that you can find it based on metadata
    and content.

    When a search happens, the INDEX gets searched and the location
    of the file is returned.

    The location of the file is ONE single piece of information
    in the indexing system about the file.

    Move the file and all you need to do is update that one
    piece of information.

    Nothing else has changed: not its name, not its date
    modified or last opened, not its content.

    So there is no need to re-read the file.

    How is your system supposed to know that the contents of the
    "new" "moved" file is *EXACTLY* the same as the original file
    (never had an individual Byte/Bit of RAM die on you??) ....
    unless it still HAS the "original" file to compare the new
    file too??

    As it happens I have never *noticed* an individual Byte/Bit of
    RAM die on me. If it did that would have zero effect on the
    index data needed to search for the file.

    Sure .... but it would mean that the 'moved' file *IS* different
    to the original File.

    No .... it would mean that the 'moved' file *IS* corrupted.

    AH!! So MikeS doesn't consider that a moved but corrupted file is different
    to the original file. Good to know!!

    A file IS NEVER CORRUPTED by a "move" from one place to another on a
    single volume.

    NEVER!!!

    Because the data is NOT BEING REWRITTEN.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Alan@nuh-uh@nope.com to comp.sys.mac.advocacy,alt.comp.os.windows-10,alt.comp.os.windows-11 on Wed Oct 8 08:20:48 2025
    From Newsgroup: comp.sys.mac.advocacy

    On 2025-10-08 02:30, Daniel70 wrote:
    On 6/10/2025 11:22 pm, Carlos E.R. wrote:
    On 2025-10-06 10:23, Daniel70 wrote:
    On 6/10/2025 3:16 pm, Alan wrote:
    On 2025-10-05 19:46, Lars Poulsen wrote:
    Alan <nuh-uh@nope.com> wrote:
    Alan>>> If I do a similar thing on macOS, it's done almost before the files
    Alan>>> have finished the move

    On 2025-10-01 13:19, Carlos E.R. wrote:
    Carlos>> Not if it does content indexing.

    Alan> If a file has already BEEN indexed for its content...
    Alan> ...why would simply moving it to a new location require it to be re-read?
    Alan> Did its content magically change just because you moved it?
    Alan> No.

    This is the difference between indexing on demand versus CONTINUOUSLY MAINTAINING an index.

    I'm sorry, but you're wrong.

    You index a file so that you can find it based on metadata and content.
    When a search happens, the INDEX gets searched and the location of the
    file is returned.

    The location of the file is ONE single piece of information in the
    indexing system about the file.

    Move the file and all you need to do is update that one piece of
    information.

    Nothing else has changed: not its name, not its date modified or
    last opened, not its content.

    So there is no need to re-read the file.

    How is your system supposed to know that the contents of the "new"
    "moved" file is *EXACTLY* the same as the original file (never had an
    individual Byte/Bit of RAM die on you??) .... unless it still HAS the
    "original" file to compare the new file too??

    Two possibilities.

       a) There is a service, or daemon, that is tracking all move
    operations (it must be connected to the system libraries that do the
    moves). Ie, the kernel is designed to track moves and inform some
    higher level layer about this. Thus the indexer is told that a file
    moved location.

       b) The indexer does some fast checking of the file (name,
    attributes, etc) and if it is the same, it assumes the file has moved.
    Maybe verification is delayed.

    So are you, Carlos, suggesting that, at the time of the move/copy, the
    OS does not check that what ends up at the new/final location is the
    same as what was in the old/original location??

    If that *IS* the case, I hope no-one EVER does a Defrag!! EVER!!

    I will say this over and over until the ignorant get the message:

    When you "move" a file from one directory to another directory on the
    same volume there is no second file ("ORIGINAL" vs "RELOCATED")

    The file's data NEVER MOVES.

    A "move" as described is literally NO DIFFERENT than just renaming the
    file, so there is no need to check that something has been changed.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Alan@nuh-uh@nope.com to comp.sys.mac.advocacy,alt.comp.os.windows-10,alt.comp.os.windows-11 on Wed Oct 8 08:21:41 2025
    From Newsgroup: comp.sys.mac.advocacy

    On 2025-10-08 02:48, Carlos E.R. wrote:
    On 2025-10-08 11:30, Daniel70 wrote:
    On 6/10/2025 11:22 pm, Carlos E.R. wrote:
    On 2025-10-06 10:23, Daniel70 wrote:
    On 6/10/2025 3:16 pm, Alan wrote:
    On 2025-10-05 19:46, Lars Poulsen wrote:
    Alan <nuh-uh@nope.com> wrote:
    Alan>>> If I do a similar thing on macOS, it's done almost before the files
    Alan>>> have finished the move

    On 2025-10-01 13:19, Carlos E.R. wrote:
    Carlos>> Not if it does content indexing.

    Alan> If a file has already BEEN indexed for its content...
    Alan> ...why would simply moving it to a new location require it
    to be re-read?
    Alan> Did its content magically change just because you moved it?
    Alan> No.

    This is the difference between indexing on demand versus CONTINUOUSLY MAINTAINING an index.

    I'm sorry, but you're wrong.

    You index a file so that you can find it based on metadata and
    content.

    When a search happens, the INDEX gets searched and the location of the
    file is returned.

    The location of the file is ONE single piece of information in the
    indexing system about the file.

    Move the file and all you need to do is update that one piece of
    information.

    Nothing else has changed: not its name, not its date modified or
    last opened, not its content.

    So there is no need to re-read the file.

    How is your system supposed to know that the contents of the "new"
    "moved" file is *EXACTLY* the same as the original file (never had
    an individual Byte/Bit of RAM die on you??) .... unless it still HAS
    the "original" file to compare the new file too??

    Two possibilities.

       a) There is a service, or daemon, that is tracking all move
    operations (it must be connected to the system libraries that do the
    moves). Ie, the kernel is designed to track moves and inform some
    higher level layer about this. Thus the indexer is told that a file
    moved location.

       b) The indexer does some fast checking of the file (name,
    attributes, etc) and if it is the same, it assumes the file has
    moved. Maybe verification is delayed.

    So are you, Carlos, suggesting that, at the time of the move/copy, the
    OS does not check that what ends up at the new/final location is the
    same as what was in the old/original location??

    If it is a move via a hardlink operation, then, as the data sectors are
    not touched (only the metadata is), the data is guaranteed to be the same.

    If it is a copy-and-delete operation, the system does not do an automated check that what is actually written is the same data.

    However, the userland program doing the operation can do a verify before delete.

    No OS does a "copy-and-delete" for a move within a single volume.



    If that *IS* the case, I hope no-one EVER does a Defrag!! EVER!!

    Not related, either. The application doing the defrag can do a verify.


    In Linux, there are filesystems that store a checksum of files in the metadata, so that integrity can be verified. It is not the default.



    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Alan@nuh-uh@nope.com to comp.sys.mac.advocacy,alt.comp.os.windows-10,alt.comp.os.windows-11 on Wed Oct 8 08:22:16 2025
    From Newsgroup: comp.sys.mac.advocacy

    On 2025-10-08 03:25, J. P. Gilliver wrote:
    On 2025/10/8 10:48:19, Carlos E.R. wrote:
    On 2025-10-08 11:30, Daniel70 wrote:

    []


    So are you, Carlos, suggesting that, at the time of the move/copy, the
    OS does not check that what ends up at the new/final location is the
    same as what was in the old/original location??

    If it is a move via a hardlink operation, then, as the data sectors are
    not touched (only the metadata is), the data is guaranteed to be the same.

    (Hmm, arguably that's not a move, though it looks like one to the user -
    a "move" to another place on the same partition usually only involves changing pointers.)>

    Exactly.

    If it is a copy-and-delete operation, the system does not do an automated
    check that what is actually written is the same data.

    Indeed. The _checksums_ (or more complex equivalents) may well be recalculated (possibly at the hardware level).
    However, the userland program doing the operation can do a verify before
    delete.

    Indeed. A lot of copy/move commands have a verify _option_; I don't know
    if any have it on by default.

    If that *IS* the case, I hope no-one EVER does a Defrag!! EVER!!

    Not related, either. The application doing the defrag can do a verify.

    Well, it _is_ related: a defrag _is_ a copy-and-delete (usually many of them). How many defrags do verify by default, I do not know.
    In Linux, there are filesystems that store a checksum of files in the
    metadata, so that integrity can be verified. It is not the default.

    A lot of (all modern, even floppy?) systems - hard disc drives, probably
    SSDs - do that at a low level anyway, as their error-correction mechanism.




    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Alan@nuh-uh@nope.com to comp.sys.mac.advocacy,alt.comp.os.windows-10,alt.comp.os.windows-11 on Wed Oct 8 08:23:38 2025
    From Newsgroup: comp.sys.mac.advocacy

    On 2025-10-08 02:35, Daniel70 wrote:
    On 7/10/2025 2:58 am, Alan wrote:
    On 2025-10-06 01:23, Daniel70 wrote:
    On 6/10/2025 3:16 pm, Alan wrote:
    On 2025-10-05 19:46, Lars Poulsen wrote:
    Alan <nuh-uh@nope.com> wrote:
    Alan>>> If I do a similar thing on macOS, it's done almost before the files
    Alan>>> have finished the move

    On 2025-10-01 13:19, Carlos E.R. wrote:
    Carlos>> Not if it does content indexing.

    Alan> If a file has already BEEN indexed for its content...
    Alan> ...why would simply moving it to a new location require it to be re-read?
    Alan> Did its content magically change just because you moved it?
    Alan> No.

    This is the difference between indexing on demand versus CONTINUOUSLY MAINTAINING an index.

    I'm sorry, but you're wrong.

    You index a file so that you can find it based on metadata and content.
    When a search happens, the INDEX gets searched and the location of the
    file is returned.

    The location of the file is ONE single piece of information in the
    indexing system about the file.

    Move the file and all you need to do is update that one piece of
    information.

    Nothing else has changed: not its name, not its date modified or
    last opened, not its content.

    So there is no need to re-read the file.

    How is your system supposed to know that the contents of the "new"
    "moved" file is *EXACTLY* the same as the original file (never had an
    individual Byte/Bit of RAM die on you??) .... unless it still HAS the
    "original" file to compare the new file too??

    Oh dear GOD!

    You cannot be this ignorant.

    When you "move" a file from one directory to another on a partition,
    the actual data of the file IS NOT TOUCHED.

    The individual bits of the file stay in exactly the same place.

    All that's changed is the file system's record of which directory the
    file should be displayed as being in.

    Do you know what a "hard link" is?

    Alan, would you be satisfied if I had said/asked what happens when a
    File is 'COPIED' rather than 'MOVED'??

    Would I end up with TWO copies of the File on my HD??

    If I then deleted the ORIGINAL file, how do I *KNOW* that the remaining
    file is the same as the ORIGINAL file??

    Why would you do that?

    This thread is about why Windows needs to reindex the entire content of
    a file when the operation is a simple move.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Alan@nuh-uh@nope.com to comp.sys.mac.advocacy,alt.comp.os.windows-10,alt.comp.os.windows-11 on Wed Oct 8 08:24:26 2025
    From Newsgroup: comp.sys.mac.advocacy

    On 2025-10-08 03:21, Daniel70 wrote:
    On 7/10/2025 10:55 am, Alan wrote:
    On 2025-10-06 14:25, Carlos E.R. wrote:
    On 2025-10-06 23:12, Alan wrote:
    On 2025-10-06 13:47, Carlos E.R. wrote:
    On 2025-10-06 18:12, Alan wrote:
    On 2025-10-06 05:17, Carlos E.R. wrote:
    On 2025-10-06 06:16, Alan wrote:
    Move the file and all you need to do is update that one piece of information.

    For that you need a system service that tracks moves and tells
    the indexer. It doesn't happen automatically.

    And why would that be at all difficult?

    You have a journaled file system, so all you need is a process
    that checks for events in the journal that actually DO mean that
    there is a new file that's been created or that a file has been
    changed.

    Those events require that a file get re-indexed. Moving a file
    within the same volume does NOT.

    That process on macOS is called "fseventsd"

    Look it up.

    Not going to look it up, I don't do macs. I'll accept your word.

    Does Windows do it?

    Apparently not.

    Which is why I wrote subject I did:

    "It is stunning when you see how badly Windows operates: indexing"

    Well, I know that in the past I have moved huge amounts of content in
    Linux and seen several content indexers re-index it. I would have to
    test again, noting whether it was the same partition.
    A move to another volume is never really a move, now is it?

    The OS can make the end result appear to be the SAME as a move, but if
    you want the files on a different volume, then the data in those files
    will have to be WRITTEN to that new volume and of course the indexing
    for that volume will have no entries for those files...

    ...because from the context of that new volume…

    ...THEY WILL BE NEW FILES.

    So when DO those files that YOU have WRITTEN to that new volume actually
    get listed in the content index of that new volume??

    And when do they get deleted from the original volume .... and de-
    Indexed, of course??

    You're too ignorant to be in this conversation.
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Alan@nuh-uh@nope.com to comp.sys.mac.advocacy,alt.comp.os.windows-10,alt.comp.os.windows-11 on Wed Oct 8 08:25:01 2025
    From Newsgroup: comp.sys.mac.advocacy

    On 2025-10-08 07:45, Paul wrote:
    On Wed, 10/8/2025 7:01 AM, Daniel70 wrote:


    Surely this can't be so .... or has Copilot just not completed the process??

    You can make up cases and ask the AI for a result.

    And you can get garbage out.


    The idea of making up my stupid cases is just to get it
    to dump the names of a representative set of possibilities.

    Moving a file can:

    1) Require deleting it with a change journal event,
    then creating it with a second change journal event.
    The AI tells us there is no single-event Move type for the job.
    There is a tendency in computer system designs to use
    atomic events rather than "do fancy stuff". There is a lot of
    fancy stuff we could do, which in the end we don't do.

    2) Moves can be orchestrated by the Name Change set of change journal events.

    Now, you have to prepare your set of file names, and then
    figure out which of those cases apply. Take the actual move
    command that demonstrated a problem and get the AI description
    of how that would be coded.

    In the journal dumps I've tried, at some point the design must
    have changed. I didn't make a fetish out of this. For example, at
    one time you could see a write-file event dumped for
    every 256KB or so of data. I don't think that scheme was
    present in the latest dump I tried.

    ******************

    CoPilot Question

    In the NTFS change journal, has the design of the journal events been a
    constant with time or did the change journal design or representation change over the years ?

    CoPilot Answer

    The NTFS Change Journal - also known as the USN Journal - has evolved over time,
    though its core purpose has remained consistent: to efficiently track
    changes to files and directories on an NTFS volume.

    ### 🕰️ Evolution of the Change Journal Design

    Here’s how its design and representation have changed over the years:

    #### 1. **Initial Introduction (Windows 2000 / NTFS 3.0)**
    - The Change Journal was introduced with NTFS 3.0.
    - It stored records in a metafile called `$Extend\$UsnJrnl`.
    - Each record included:
      - A 64-bit Update Sequence Number (USN)
      - File reference number
      - Reason flags (bitmask indicating the type of change)
      - Timestamp
      - File name

    #### 2. **USN_RECORD Structure Versions**
    - Over time, Microsoft introduced new versions of the `USN_RECORD` structure:
    - **V2**: Used in early implementations (Windows 2000, XP, etc.)
    - **V3 and V4**: Introduced in later versions of Windows to support new features and metadata
    - These versions added fields like extended reason flags, source info, and more precise timestamps.

    #### 3. **Bit Flags Expansion**
    - The `Reason` field uses bit flags like `USN_REASON_DATA_OVERWRITE`, `USN_REASON_FILE_CREATE`, etc.
    - New flags were added over time to reflect more granular changes (e.g., file renames, ACL changes).

    #### 4. **Performance and Scalability Enhancements**
    - As file systems grew to support millions of files, the journal’s internal mechanisms were optimized for speed and reduced overhead.
    - Journals became more configurable (e.g., size limits, retention policies).

    #### 5. **Integration with New Features**
    - Windows features like **File History**, **Windows Search**, and **Backup utilities** began leveraging the Change Journal.
    - This led to more robust APIs and better documentation for developers.
    ---
    ### 🔍 Summary

    While the fundamental concept of logging file system changes has stayed the same,
    the **structure**, **metadata**, and **integration points** of the NTFS Change Journal
    have evolved to meet the demands of modern systems. It’s a great example of a
    foundational technology quietly adapting behind the scenes.

    Would you like to dive into the specifics of the USN_RECORD structure versions
    or how backup tools use the journal today?
    **********************************

    Paul

    --- Synchronet 3.21a-Linux NewsLink 1.2