• “Backblaze Drive Stats for Q2 2025”

    From Lynn McGuire@lynnmcguire5@gmail.com to comp.sys.ibm.pc.hardware.storage,alt.comp.hardware.pc-homebuilt on Tue Aug 5 22:21:32 2025
    From Newsgroup: comp.sys.ibm.pc.hardware.storage

    “Backblaze Drive Stats for Q2 2025”
    https://www.backblaze.com/blog/backblaze-drive-stats-for-q2-2025/

    Buy WD drives for now.

    QLC drives are coming. Less heat, less electricity, less space, way
    longer life, and sizes in the hundreds of TB real soon now for the same
    cost as spinning rust.

    Lynn

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Mr. Man-wai Chang@toylet.toylet@gmail.com to comp.sys.ibm.pc.hardware.storage,alt.comp.hardware.pc-homebuilt on Wed Aug 6 12:08:27 2025
    From Newsgroup: comp.sys.ibm.pc.hardware.storage

    On 6/8/2025 11:21 am, Lynn McGuire wrote:
    > “Backblaze Drive Stats for Q2 2025”
    > https://www.backblaze.com/blog/backblaze-drive-stats-for-q2-2025/
    >
    > Buy WD drives for now.
    >
    > QLC drives are coming. Less heat, less electricity, less space, way
    > longer life, and sizes in the hundreds of TB real soon now for the same
    > cost as spinning rust.

    You are mixing HDD and SSD in a single message. Better to split it into two threads. :)

    According to some, QLC has a shorter lifespan than TLC, and much shorter
    than SLC, though SLC drives are hard to find now.
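    The lifespan difference comes down to program/erase (P/E) cycle ratings per
    cell type. A minimal back-of-envelope sketch, using order-of-magnitude P/E
    figures and an assumed capacity and write amplification (real drives vary
    widely with controller and over-provisioning):

```python
# Rough NAND endurance comparison. The P/E cycle counts below are
# typical order-of-magnitude assumptions, not figures for any real drive.
pe_cycles = {"SLC": 50_000, "MLC": 10_000, "TLC": 3_000, "QLC": 1_000}
capacity_tb = 4            # assumed 4 TB drive
write_amplification = 2.0  # assumed; depends heavily on workload

for nand, cycles in pe_cycles.items():
    # Total terabytes written (TBW) before the flash wears out.
    tbw = capacity_tb * cycles / write_amplification
    print(f"{nand}: ~{tbw:,.0f} TB written before wear-out")
```

    Under these assumptions a 4 TB QLC drive would tolerate roughly 50x fewer
    total writes than an SLC drive of the same size, which is why QLC gets
    pitched at "read mostly" workloads.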

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Paul@nospam@needed.invalid to comp.sys.ibm.pc.hardware.storage,alt.comp.hardware.pc-homebuilt on Wed Aug 6 05:38:16 2025
    From Newsgroup: comp.sys.ibm.pc.hardware.storage

    On Tue, 8/5/2025 11:21 PM, Lynn McGuire wrote:
    > “Backblaze Drive Stats for Q2 2025”
    > https://www.backblaze.com/blog/backblaze-drive-stats-for-q2-2025/
    >
    > Buy WD drives for now.
    >
    > QLC drives are coming. Less heat, less electricity, less space, way
    > longer life, and sizes in the hundreds of TB real soon now for the same
    > cost as spinning rust.
    >
    > Lynn


    The QLC drives could be for specific purposes, such as "read mostly" applications.
    I don't know that I would conclude they were a "good deal". Maybe only AI TechBros
    buy those. Saying "Enterprise" and "QLC" in the same breath seems like sacrilege.

    Apparently an AI training set is 400TB+.

    To be comparable to spinning rust, I'd want something where I don't
    have to worry about wear, quite so much.

    There was a device at Computex that was PCIe Rev5 x16: 60 GB/sec read, 60 GB/sec
    write (the manufacturer calls this 120 GB/sec), with 28 NVMe sticks behind a
    144-lane PCIe Rev5 switch chip. It could use whatever NAND type your NVMe comes
    in (TLC maybe, or MLC-like TLC). That might be a bit closer to a "some read,
    some write" device, although the transactions per second is pretty low
    considering the number of NVMe sticks; there must be an assumption of
    RAID0-style operation. It draws at least 300 watts of electricity, and
    presumably each NVMe stick has heatsink fins.
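    The numbers quoted above imply the box is link-limited rather than
    drive-limited. A quick sketch, assuming the 60 GB/sec and 28-stick figures
    from the post and a ~64 GB/sec payload rate for a PCIe 5.0 x16 host link:

```python
# Back-of-envelope check on the Computex device (figures assumed from the post).
host_link_gbps = 64     # approx. usable payload of a PCIe 5.0 x16 link, GB/s
aggregate_read_gbps = 60
sticks = 28

per_stick_gbps = aggregate_read_gbps / sticks
print(f"Each NVMe stick only needs ~{per_stick_gbps:.2f} GB/s to hit the target")

# The x16 host link, not the drives, caps the aggregate throughput --
# consistent with striping reads across all 28 sticks, RAID0-style.
print(f"Host link headroom: {host_link_gbps - aggregate_read_gbps} GB/s")
```

    At roughly 2.1 GB/sec per stick, even modest NVMe drives saturate the design,
    which is why striping across all of them is the natural mode of operation.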

    So while you can see "crazy shit" being designed, it is not for devices you
    will be getting close to in your lifetime.

    There is also a lot of HBM3 memory being designed, and you would presume the
    prices on that will be eye-watering as well. In a gold rush, be in the
    pack-mule and shovel business. Compared to consumer electronics, the volume
    of HBM3 will not be all that high, but the price will be. For some time, there
    has been an SRAM company that makes RAM chips at $500 per chip, as an example
    of a "good business to be in" (a DRAM chip might be $3 each). The interface on
    that chip runs at something like 2 GHz, and like a good SRAM, the latency is
    very low.

    HBM uses massive parallelism and a lot of I/O channels, but the previous
    versions of HBM were not all that good for GPU use. These are the chips that
    might be on your $10,000 video card and GPU, as AI is all about RAM bandwidth
    at all levels of the machine. One reason a general-purpose computer isn't very
    good at this is that it has poor RAM bandwidth; it's not that the cores aren't
    ambitious enough.

    https://www.phoronix.com/forums/forum/hardware/processors-memory/1507066-8-vs-12-channel-ddr5-6000-memory-performance-with-amd-5th-gen-epyc

    "What BIOS NUMA settings did you use? The STREAM benchmark results (355858.4 MB/s for ADD,
    348347.3 MB/s for TRIAD) don't look very good considering that "per socket" Epyc Turin bandwidth
    advertised by AMD is 576 GB/s."
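    The gap that the quoted comment complains about is easy to quantify. A small
    sketch using the TRIAD figure from the quote and AMD's advertised per-socket
    number (decimal MB/GB assumed, as STREAM reports):

```python
# Compare the quoted STREAM TRIAD result against AMD's advertised bandwidth.
# Both figures come from the quoted Phoronix comment; decimal units assumed.
stream_triad_mbps = 348_347.3  # measured TRIAD, MB/s
advertised_gbps = 576          # AMD's "per socket" Epyc Turin figure, GB/s

measured_gbps = stream_triad_mbps / 1000
efficiency = measured_gbps / advertised_gbps
print(f"TRIAD reaches ~{measured_gbps:.0f} GB/s, about {efficiency:.0%} of advertised")
```

    Roughly 60% of the advertised figure, which is why the commenter suspects a
    NUMA/BIOS configuration issue rather than a hardware limit.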

    Some of the storage then, is aimed at this crazy stuff. It's not for legacy datacenter
    usage particularly.

    Paul
    --- Synchronet 3.21a-Linux NewsLink 1.2