vlovich123 12 hours ago

Is this an actually good explanation? The introduction immediately made me pause:

> In classical computers, error-resistant memory is achieved by duplicating bits to detect and correct errors. A method called majority voting is often used, where multiple copies of a bit are compared, and the majority value is taken as the correct bit

No. In classical computers, memory is protected using error-correcting codes, not by duplicating bits and majority voting. Duplicating bits would be a very wasteful strategy when you can add significantly fewer bits and achieve the same result, which is what error correction techniques like ECC give you. Maybe they got it confused with logic circuits, where there isn't a more efficient strategy?
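
To make the "fewer bits" point concrete, here's a rough Python sketch of a Hamming(7,4) codec (the G/H matrices are one common systematic convention, not pulled from any particular ECC chip): 4 data bits are protected by just 3 parity bits and any single flipped bit can be located and corrected, whereas triple duplication would need 8 extra bits for the same 4 data bits.

  import numpy as np

  # Hamming(7,4): 4 data bits + 3 parity bits, corrects any single bit flip.
  # G = [I | P], H = [P^T | I]; one common (systematic) convention.
  G = np.array([[1,0,0,0, 1,1,0],
                [0,1,0,0, 1,0,1],
                [0,0,1,0, 0,1,1],
                [0,0,0,1, 1,1,1]])
  H = np.array([[1,1,0,1, 1,0,0],
                [1,0,1,1, 0,1,0],
                [0,1,1,1, 0,0,1]])

  def encode(data4):
      return (data4 @ G) % 2

  def decode(word7):
      syndrome = (H @ word7) % 2
      if syndrome.any():
          # A nonzero syndrome equals exactly one column of H: that's the flipped bit.
          bad = int(np.argmax((H.T == syndrome).all(axis=1)))
          word7 = word7.copy()
          word7[bad] ^= 1
      return word7[:4]  # systematic code: the data bits come first

  data = np.array([1, 0, 1, 1])
  sent = encode(data)
  sent[5] ^= 1                        # corrupt any single bit
  assert (decode(sent) == data).all() # corrected with only 3 extra bits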

  • ziofill 8 hours ago

    Physicist here. Classical error correction may not always be a straight up repetition code, but the concept of redundancy of information still applies (like parity checks).

    In a nutshell, in quantum error correction you cannot use redundancy because of the no-cloning theorem, so instead you embed the qubit subspace in a larger space (using more qubits) such that when correctable errors happen the embedded subspace moves to a different "location" in the larger space. When this happens it can be detected and the subspace can be brought back without affecting the states within the subspace, so the quantum information is preserved.
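
    A toy numpy sketch of the smallest case, the 3-qubit bit-flip code (the amplitudes are just example numbers): the unknown amplitudes a and b are never copied, yet the two parity checks reveal which qubit flipped without revealing anything about a or b.

      import numpy as np

      # Single-qubit Pauli matrices
      I = np.eye(2)
      X = np.array([[0., 1.], [1., 0.]])
      Z = np.array([[1., 0.], [0., -1.]])

      def kron(*ops):
          out = np.array([[1.0]])
          for op in ops:
              out = np.kron(out, op)
          return out

      # Encode an arbitrary qubit a|0> + b|1> as a|000> + b|111>.
      # Note: the unknown amplitudes are never duplicated (no cloning).
      a, b = 0.6, 0.8
      psi = np.zeros(8)
      psi[0b000], psi[0b111] = a, b

      # Stabilizers (parity checks) Z1Z2 and Z2Z3
      S1 = kron(Z, Z, I)
      S2 = kron(I, Z, Z)

      # A correctable error: bit flip on the middle qubit
      err = kron(I, X, I)
      psi_err = err @ psi

      # The syndrome locates the error without measuring a or b
      syndrome = (float(psi_err @ S1 @ psi_err), float(psi_err @ S2 @ psi_err))
      print(syndrome)                      # (-1.0, -1.0) -> middle qubit flipped

      psi_fixed = err @ psi_err            # undo the flip
      print(np.allclose(psi_fixed, psi))   # True: encoded state fully recovered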

    • adastra22 2 hours ago

      You are correct in the details, but not the distinction. This is exactly how classical error correction works as well.

    • immibis 3 hours ago

      This happens to be the same way that classical error correction works, but quantum.

  • abdullahkhalids 6 hours ago

    While you are correct, here is a fun side fact.

    The electric signals inside a (classical) processor or digital logic chip are made up of many electrons. Electrons are not fully well behaved and there are often deviations from ideal behavior. Whether a signal gets interpreted as 0 or 1 depends on which way the majority of the electrons are going. The lower the power you operate at, the fewer electrons there are per signal, and the more errors you will see.

    So in a way, there is a repetition code in a classical computer (or other similar devices such as an optical fiber). It's just in the hardware substrate, not in software.
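
    A quick Monte Carlo sketch of that idea (the 30% per-electron error probability is made up purely for illustration): the more electrons carry the signal, the less likely the analog "majority vote" comes out wrong.

      import numpy as np

      rng = np.random.default_rng(0)

      def bit_error_rate(n_electrons, p_wrong=0.3, trials=100_000):
          # Each electron goes the "wrong way" independently with probability p_wrong;
          # the bit is misread when more than half of them do.
          wrong = rng.binomial(n_electrons, p_wrong, size=trials)
          return (wrong > n_electrons / 2).mean()

      for n in (1, 11, 101, 1001):
          print(n, bit_error_rate(n))
      # The misread probability falls roughly exponentially with electron count,
      # which is why lower power (fewer electrons per signal) means more errors.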

  • abtinf 10 hours ago

    This seems like the kind of error an LLM would make.

    It is essentially impossible for a human to confuse error correction and “majority voting”/consensus.

    • GuB-42 9 hours ago

      I don't believe it is the result of an LLM, more likely an oversimplification, or maybe a minor fuckup on the part of the author, as simple majority voting is often used in redundant systems, just not for memories, where there are better ways.

      And as for an LLM result, this is what ChatGPT says when asked "How does memory error correction differ from quantum error correction?", among other things.

      > Relies on redundancy by encoding extra bits into the data using techniques like parity bits, Hamming codes, or Reed-Solomon codes.

      And when asked for a simplified answer

      > Classical memory error correction fixes mistakes in regular computer data (0s and 1s) by adding extra bits to check for and fix any errors, like a safety net catching flipped bits. Quantum error correction, on the other hand, protects delicate quantum bits (qubits), which can hold more complex information (like being 0 and 1 at the same time), from errors caused by noise or interference. Because qubits are fragile and can’t be directly measured without breaking their state, quantum error correction uses clever techniques involving multiple qubits and special rules of quantum physics to detect and fix errors without ruining the quantum information.

      Absolutely no mention of majority voting here.

      EDIT: GPT-4o mini does mention majority voting as an example of a memory error correction scheme, but not as the way to do it. The explanation is overall clumsier but generally correct; I don't know enough about quantum error correction to fact-check it.

    • mmooss 9 hours ago

      People have always made bad assumptions or had misunderstandings. Maybe the author just doesn't understand ECC and always assumed it was consensus-based. I do things like that (I try not to write about them without verifying); I'm confident you and everyone reading this do too.

      • Suppafly 8 hours ago

        >Maybe the author just doesn't understand ECC and always assumed it was consensus-based.

        That's likely, or it was LLM output and the author didn't know enough to know it was wrong. We've seen that in a lot of tech articles lately, where authors assume that something that is true-ish in one area is also true in another, and it's obvious they just don't understand the other area they're writing about.

        • fnordpiglet 7 hours ago

          Frankly, no state-of-the-art LLM would make this error. Perhaps GPT-3.5 would have, but the space of errors they tend to make now is in areas of ambiguity or things that require deductive reasoning, math, etc. In areas that are well described in the literature, they tend not to make mistakes.

  • Karliss 5 hours ago

    By a somewhat generous interpretation, classical computer memory depends on implicit duplication/majority voting in the form of an increased cell size for each bit instead of discrete duplication, the same way repeating a signal sent over a wire can mean using a lower baud rate and holding the signal level for a longer time. A bit isn't stored in a single atom or electron. A cell storing a single bit can be considered a group of smaller cells connected in parallel storing duplicate values, and the majority vote happens automatically in analog form as you read the total sum of the charge within the memory cell.

    Depending on how abstractly you talk about computers (which can be the case when contrasting quantum computing with classical computing), memory can refer not just to RAM but to anything holding state, and "classical computer" can refer to any computing device, including simple logic circuits, not just your desktop computer. Fundamentally, desktop computers are one giant logic circuit.

    Also RAID-1 is a thing.

    At higher level backups are a thing.

    So I would say there are enough examples of practically used duplication for the purpose of error resistance in classical computers.

    • mathgenius 4 hours ago

      Yes, and it's worth pointing out these examples because they don't work as quantum memories. Two more: magnetic memory, based on magnets which are magnetic because they are built from many tiny (atomic) magnets, all (mostly) in agreement. Optical storage is similar, much like the parent's example of a signal being slowly sent over a wire.

      So the next question is why doesn't this work for quantum information? And this is a really great question which gets at the heart of quantum versus classical. Classical information is just so fantastically easy to duplicate that normally we don't even notice this, it's just too obvious a fact... until we get to quantum.

  • outworlder 11 hours ago

    That threw me off as well. Majority voting works for industries like aviation, but that's still about checking results of computations, not all memory addresses.

  • weinzierl 9 hours ago

    Maybe they were thinking of control systems where duplicating memory, lockstep cores and majority voting are used. You don't even have to go to space to encounter such a system, you likely have one in your car.

  • bramathon 9 hours ago

    The explanation of Google's error correction experiment is basic but fine. People should keep in mind that Quantum Machines sells control electronics for quantum computers which is why they focus on the control and timing aspects of the experiment. I think a more general introduction to quantum error correction would be more relevant to the Hackernews audience.

  • refulgentis 10 hours ago

    I think it's fundamentally misleading, even on the central quantum stuff:

    I missed what you saw; that's certainly a massive oof. It's not even wrong, in the Pauli sense, i.e. it's not just a simplistic rendering of ECC.

    It also strongly tripped my internal GPT detector.

    Also, it goes on and on about realtime decoding. The foundation of the article is that Google's breakthrough is real time, and the Google article was quite clear that it isn't real time.*

    I'm a bit confused, because it seems completely wrong, yet they published it, and there's enough phrasing that definitely doesn't trip my GPT detector. My instinct is someone who doesn't have years of background knowledge / formal comp sci & physics education made a valiant effort.

    I'm reminded that my thoroughly /r/WSB-ified MD friend brings up "quantum computing is gonna be big, what stonks should I buy" every 6 months, and a couple days ago he sent me a screenshot of my AI app showing a few conversations he'd had with it hunting for opportunities.

    * "While AlphaQubit is great at accurately identifying errors, it’s still too slow to correct errors in a superconducting processor in real time"

    • bramathon 9 hours ago

      This is not about AlphaQubit. It's about a different paper, https://arxiv.org/abs/2408.13687 and they do demonstrate real-time decoding.

      > we show that we can maintain below-threshold operation on the 72-qubit processor even when decoding in real time, meeting the strict timing requirements imposed by the processor’s fast 1.1 μs cycle duration

      • refulgentis 7 hours ago

        Oh my, I really jumped to a conclusion. And what fantastic news to hear. Thank you!

    • vlovich123 10 hours ago

      Yeah, I didn't want to just accuse the article of being AI generated, since quantum isn't my specialty, but this kind of error instantly tripped my "it doesn't sound like this person knows what they're talking about" alarm, which likely indicates a bad LLM helped summarize the quantum paper for the author.

  • UniverseHacker 11 hours ago

    ECC is not easy to explain, and "error correction is done with error correction" sounds like a tautology rather than an explanation, unless you give a full technical explanation of exactly what ECC is doing.

    • marcellus23 11 hours ago

      Regardless of whether the parent's sentence is a tautology, the explanation in the article is categorically wrong.

      • bawolff 10 hours ago

        Categorically might be a bit much. Duplicating bits with majority voting is an error correction code, it's just not a very efficient one.

        Like it's wrong, but it's not totally out of this world wrong. Or more specifically, it's in the correct category.

        • vlovich123 9 hours ago

          It's categorically wrong to say that that's how memory is error corrected in classical computers because it is not and never has been how it was done. Even for systems like S3 that replicate, there's no error correction happening in the replicas and the replicas are eventually converted to erasure codes.

          • bawolff 8 hours ago

            I'm being a bit pedantic here, but it is not categorically wrong. "Categorically wrong" doesn't just mean "very wrong"; it is a specific type of being wrong, a type that this isn't.

            Repetition codes are a type of error correction code. It is thus in the category of error correction codes. Even if it is not the right error correction codes, it is in the correct category, so it is not a categorical error.

            • cycomanic 6 hours ago

              Well, it's about as categorically wrong as saying quantum computers use similar error correction algorithms as classical computers. Categorically, both are error correction algorithms.

      • cortesoft 10 hours ago

        Eh, I don’t think it is categorically wrong… ECCs are based on the idea of sacrificing some capacity by adding redundant bits that can be used to correct for some number of errors. The simplest ECC would be just duplicating the data, and it isn’t categorically different than real ECCs used.

        • vlovich123 9 hours ago

          Then you're replicating, not error correcting. I've not seen any replication systems that use the replicas to detect errors. Even RAID 1, which is a pure mirroring solution, only fetches one of the copies when reading and will ignore corruption on one of the disks unless you initiate a manual verification. There are technical reasons for that, related to read amplification as well as what it does to your storage cost.

          • cortesoft 8 hours ago

            I guess that is true: pure replication would not allow you to correct errors, only detect them.

            However, I think explaining the concept as duplicating some data isn't horribly wrong for non-technical people. It is close enough to allow the person to understand the concept.

            • vlovich123 8 hours ago

              To be clear. A hypothetical replication system with 3 copies could be used to correct errors using majority voting.

              However, there's no replication system I've ever seen (memory, local storage, or distributed storage) that detects or corrects errors using replication, because of the read amplification problem.

      • vlovich123 10 hours ago

        Yeah, I couldn't quite remember whether ECC is just Hamming codes or uses something more modern like fountain codes, although those are technically FEC. So rather than state something incorrectly, I went with the tautology.

cwillu 8 hours ago

Wow, they managed to make a website that scales everything except the main text when adjusting the browser's zoom setting.

  • essentia0 4 hours ago

    They set the root font size relative to the total width of the screen (1.04vw), with the rest of the styling using rem units.

    I've never seen anyone do that before. It may well be the only way to circumvent browser zoom.

    • rendaw 3 hours ago

      Why don't browsers reduce the screen width when you zoom in, as they adjust every other unit (cm, px)?

      • zamadatix 3 hours ago

        They effectively do. All CSS absolute units are effectively defined as ratios of each other, and zoom * DPI * physicalPixels sets how many physical pixels each absolute unit ends up turning into. Increase zoom and the screen seems to have shrunk to some smaller number of 'cm', and so on.

        For things like 'vh' and 'vw' it just doesn't matter "how many cm" the screen is, since 20% of the viewing space always comes out to 20% of the viewing space, regardless of how many 'cm' that is said to be equivalent to.
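
        Toy arithmetic of the same point (the screen width and devicePixelRatio are just example numbers): zooming shrinks the viewport measured in CSS px, so absolute units render larger, but 1vw is always 1% of the physical viewport and never changes size on screen.

          # Example values, purely illustrative
          screen_physical_px = 3000
          dpr = 2.0
          for zoom in (1.0, 1.5, 2.0):
              viewport_css_px = screen_physical_px / (dpr * zoom)   # viewport "shrinks" in CSS px
              one_vw_physical_px = (viewport_css_px / 100) * dpr * zoom
              print(f"zoom={zoom}: 16px text -> {16 * dpr * zoom:.0f} device px, "
                    f"1vw -> {one_vw_physical_px:.0f} device px (constant)")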

  • hiisukun 2 hours ago

    It's interesting how this (and other CSS?) means the website is readable on a phone in portrait, but the text is tiny in landscape!

  • rezonant 5 hours ago

    There should be a law for this. Who in their right mind wants this?

xscott 12 hours ago

While I'm still eager to see where Quantum Computing leads, I've got a new threshold for "breakthrough": Until a quantum computer can factor products of primes larger than a few bits, I'll consider it a work in progress at best.

  • Strilanc an hour ago

    If qubit count increased by 2x per year, largest-number-factored would show no progress for ~8 years. Then the largest number factored would double in size each year, with RSA2048 broken after a total of ~15 years. The initial lull is because the cost of error correction is so front loaded.

    Depending on your interests, the initial insensitivity of largest-number-factored as a metric is either great (it reduces distractions) or terrible (it fails to accurately report progress). For example, if the actual improvement rate were 10x per year instead of 2x per year, it'd be 3 years until you realized RSA2048 was going to break after 2 more years instead of 12 more years.
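
    For a feel of the shape of that curve, here's a toy model in Python (all constants are made-up round numbers for illustration, not anyone's roadmap): ~1,000 physical qubits today growing 2x per year, ~5,000 physical qubits per error-corrected logical qubit, and ~3 logical qubits per bit of the number being factored.

      # Largest factorable number under a crude 2x/year qubit-growth model.
      for year in range(16):
          physical = 1_000 * 2 ** year          # assumed 2x growth per year
          logical = physical // 5_000           # assumed fixed QEC overhead
          factorable_bits = logical // 3        # rough Shor-style bookkeeping
          print(year, physical, factorable_bits)
      # The metric sits near zero for the first several years, then doubles
      # annually, crossing ~2048 bits only in the last year or two.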

  • mrandish 8 hours ago

    > While I'm still eager to see where Quantum Computing leads

    Agreed. Although I'm no expert in this domain, I've been watching it a long time as a hopeful fan. Recently I've been increasing my (currently small) estimated probability that quantum computing may not ever (or at least not in my lifetime) become a commercially viable replacement for SOTA classical computing to solve valuable real-world problems.

    I wish I knew enough to have a detailed argument but I don't. It's more of a concern triggered by reading media reports that seem to just assume "sure it's hard, but there's no doubt we'll get there eventually."

    While I agree quantum algorithms can solve valuable real-world problems in theory, it's pretty clear there are still a lot of unknown unknowns in getting all the way to "commercially viable replacement solving valuable real-world problems." It seems at least possible we may still discover some fundamental limit(s) preventing us from engineering a solution that's reliable enough and cost-effective enough to reach commercial viability at scale. I'd actually be interested in hearing counter-arguments that we now know enough to be reasonably confident it's mostly just "really hard engineering" left to solve.

  • UberFly 10 hours ago

    I guess like most of these kinds of projects, it'll be smaller, less flashy breakthroughs or milestones along the way.

    • Terr_ 7 hours ago

      People dramatically underestimate how important incremental unsung progress is, perhaps because it just doesn't make for a nice memorable story compared to Suddenly Great Person Has Amazing Idea Nobody Had Before.

  • ashleyn 5 hours ago

    My first question any time I see another quantum computing breakthrough: is my cryptography still safe? Answer seems like yes for now.

    • xscott 2 hours ago

      I have a pseudo-theory that the universe will never allow quantum physics to provide an answer to a problem where you didn't already know the result from some deterministic means. This will be some bizarre consequence of information theory colliding with the measurement problem.

      :-)

  • kridsdale1 10 hours ago

    There will be a thousand breakthroughs before that point.

    • xscott 9 hours ago

      That just means that the word "breakthrough" has lost its meaning. I would suggest the word "advancement", but I know this is a losing battle.

      • Suppafly 8 hours ago

        >That just means that the word "breakthrough" has lost its meaning.

        This. Small, incremental and predictable advances aren't breakthroughs.

  • dekhn 10 hours ago

    quantum computers can (should be able to; do not currently) solve many useful problems without ever being able to factor products of large primes.

    • xscott 9 hours ago

      What are some good examples?

      The one a few years ago where Google declared "quantum supremacy" sounded a lot like simulating a noisy circuit by implementing a noisy circuit. And that seems a lot like simulating the falling particles and their collisions in an hourglass by using a physical hourglass.

      • dekhn 8 hours ago

        The only one I can think of is simulating physical systems, especially quantum ones.

        Google's supremacy claim didn't impress me; besides being a computationally uninteresting problem, it really just motivated the supercomputer people to improve their algorithms.

        To really establish this field as a viable going concern probably needs somebody to do "something" with quantum that is experimentally verifiable but not computable classically, and is a useful computation.

    • Eji1700 9 hours ago

      Yeah I think that's the issue that makes it hard to assess quantum computing.

      My very layman understanding is that there are certain things it will be several orders of magnitude better at, but at "simple" things for a normal machine, quantum will be just as bad if not massively worse.

      It really should be treated as a different tool for right now. Maybe someday in the very far future, if it becomes easier to make quantum computers, an abstraction layer will arrive that makes the end user think it's just like a normal computer, but from a "looking at a series of 1s and 0s" versus "looking at a series of superposed particles" standpoint it's extremely different in function.

dangerlibrary 12 hours ago

I'm someone not really aware of the consequences of each quantum of progress in quantum computing. But, I know that I'm exposed to QC risks in that at some point I'll need to change every security key I've ever generated and every crypto algorithm every piece of software uses.

How much closer does this work bring us to the Quantum Crypto Apocalypse? How much time do I have left before I need to start budgeting it into my quarterly engineering plan?

  • bawolff 10 hours ago

    > But, I know that I'm exposed to QC risks in that at some point I'll need to change every security key I've ever generated and every crypto algorithm every piece of software uses.

    Probably not. Unless a real sudden unexpected breakthrough happens, best practice will be to use quantum-resistant algorithms long before this becomes a relevant issue.

    And practically speaking it's only public-key crypto that is an issue; your symmetric keys are fine (oversimplifying slightly, but practically speaking this is true).

  • griomnib 12 hours ago

    The primary threat model is data collected today via mass surveillance that is currently unbreakable will become breakable.

    There are already new “quantum-proof” security mechanisms being developed for that reason.

    • bawolff 10 hours ago

      Perhaps, but you've got to ask yourself how valuable your data will be 20-30 years in the future. For some people that is a big deal, maybe. For most people that is a very low-risk threat. Most private data has a shelf life after which it is no longer valuable.

    • sroussey 12 hours ago

      Yes, and people are recording encrypted communications now for this reason.

  • er4hn 12 hours ago

    You'll need to focus on asym and DH stuff. If your symmetric keys are 256 bits you should be fine there.

    The hope is that most of this should just be: Update to the latest version of openssl / openssh / golang-crypto / what have you and make sure you have the handshake settings use the latest crypto algorithms. This is all kind of far flung because there is very little consensus around how to change protocols for various human reasons.

    At some point you'll need to generate new asym keys as well, which is where I think things will get interesting. HW-based solutions just don't exist today and will probably take a long time due to the inevitable cycle: companies want to meet US federal government standards due to regulations or selling to the fedgov; the fedgov is taking its sweet time to standardize protocols and seems interested in adding more certified algorithms as well; actually getting something approved for FIPS 140 (the relevant standard) takes over a year at this point just to get your paperwork processed; and everyone wants to move faster. Software can move quicker in terms of development, but you have the normal tradeoffs there, with keys being easier to exfiltrate and the same issue with formal certification.

    • dylan604 10 hours ago

      Maybe my tinfoil hat is a bit too tight, but every time fedgov wants a new algo certified I question how strong it is and if they've already figured out a weakness. Once bitten twice shy or something????

      • jiggawatts 9 hours ago

        The NSA has definitely weakened or back-doored crypto. It’s not a conspiracy or even a secret! It was a matter of (public) law in the 90s, such as “export grade” crypto.

        Most recently Dual_EC_DRBG was forced on American vendors by the NSA, but the backdoor private key was replaced by Chinese hackers in some Juniper devices and used by them to spy on westerners.

        Look up phrases like "nobody but us" (NOBUS), which is the aspirational goal of these approaches, but it often fails, leaving everyone, including Americans and their allies, exposed.

        • dylan604 9 hours ago

          You should look up the phrase "once bitten, twice shy", as I think you missed the gist of my comment. We've already been bitten at least once by incidents like the ones you describe. From then on, friendly little suggestions on crypto algos from the fedgov will always be received with suspicion, at least in the back of my mind. Accepting that, most people who are unaware will assume someone is wearing a tinfoil hat.

  • bdamm 12 hours ago

    I'm not sure anyone really knows this although there is no shortage of wild speculation.

    If you have keys that need to be robust for 20 years, you should probably be looking into trying out some of the newly NIST-approved standard algorithms.

computerdork 12 hours ago

Does anyone on HN have an understanding of how close this achievement brings us to useful quantum computers?

  • kittikitti 12 hours ago

    This is another hype piece from Google's research and development arm. This is a theoretical application to increase the number of logical qubits in a system by decreasing the error caused by quantum circuits. They just didn't do the last part yet, so the application is yet to be seen.

    https://arxiv.org/abs/2408.13687

    "Our results present device performance that, if scaled, could realize the operational requirements of large scale fault-tolerant quantum algorithms."

    Google forgot to test if it scales I guess?

    • Strilanc 2 hours ago

      The experiment is literally all about scaling. It tests scaling from distance 3 to 5 to 7. It shows the logical qubit lifetime doubles each time the distance is increased. The sentence you quoted is describing an expectation that this doubling will continue to larger distances, when larger chips are built.

      This is the first quantum error correction experiment showing actual improvement as size is increased (without any cheating such as postselection or only running for a single step). It was always believed in theory that bigger codes should have more protection, but there have been various skeptics over the years saying you'd never actually see these improvements in practice, due to the engineering difficulty or due to quantum mechanics breaking down or something.

      Make no mistake; much remains to be done. But this experiment is a clear indication of progress. It demonstrates that error correction actually works. It says that quantum computers should be able to solve qubit quality with qubit quantity.
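
      As a rough sketch of what that doubling buys (the starting error rate and the suppression factor here are illustrative round numbers, not the paper's exact fit):

        # If each distance-2 increase suppresses the logical error per cycle by
        # a factor Lambda, then p_L(d) ~ p_L(3) / Lambda**((d - 3) / 2).
        p3 = 3e-3        # assumed logical error per cycle at distance 3
        Lambda = 2.0     # assumed suppression factor per distance-2 step

        for d in (3, 5, 7, 11, 15, 21, 27):
            print(d, f"{p3 / Lambda ** ((d - 3) / 2):.1e}")
        # Below threshold, bigger codes keep making the logical qubit better:
        # trading qubit quantity for qubit quality.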

      disclaimer: worked on this experiment

    • wasabi991011 7 hours ago

      It's the opposite of a theoretical application, and it's not a hype piece. It's more like an experimental confirmation of a theoretical result mixed with an engineering progress report.

      They show that a certain milestone was achieved (error rate below the threshold), show experimentally that this milestone implies what theorists predicted, talk about how this milestone was achieved, and characterize the sources of error that could hinder further scaling.

      They certainly tested how it scales up to the scale that they can build. A major part of the paper is how it scales.

      >> "Our results present device performance that, if scaled, could realize the operational requirements of large scale fault-tolerant quantum algorithms."

      > Google forgot to test if it scales I guess?

      Remember that quantum computers are still being built. The paper is the equivalent of

      > We tested the scaling by comparing how our algorithm runs on a chromebook, a server rack, and google's largest supercomputing cluster and found it scales well.

      The sentence you tried to interpret was, continuing this analogy, the equivalent of

      >Google's largest supercomputing cluster is not large enough for us, we are currently building an even bigger supercomputing cluster, and when we finish, our algorithm should (to the best of our knowledge) continue along this good scaling law.

    • wholinator2 11 hours ago

      Lol, yeah, the whole problem with quantum computation is the scaling, that's literally the entire problem. It's trivial to make a qubit, harder to make 5, impossible to make 1000. "If it scales" is just wishy-washy language to cover "in the ideal scenario where everything works perfectly and nothing goes wrong, it will work perfectly".

  • layer8 9 hours ago

    The fact that there is a forward-looking subsection about "the vision for fault tolerance" (emphasis mine) that is almost entirely composed of empty words and concludes with "we are just starting this exciting journey, so stay tuned for what's to come!" tells you "not close at all".

bawolff 10 hours ago

Doesn't feel like a breakthrough. A positive engineering step forward, sure, but not a breakthrough.

And wtf does AI have to do with this?

  • wasabi991011 7 hours ago

    It's not a major part of the paper, but Google tested a neural network decoder (which had the highest accuracy), and some of their other decoders used priors that were found using reinforcement learning (again for greater accuracy).