anentropic 3 hours ago

> they added another trillion tokens and shrank the model from 18 GB to 9 GB through quantization, reducing its bit width from Mamba2’s 16-bit floating-point precision to 8-bits.

This sounds like what they call "Bamba-9B" is actually an 18B model quantised to 8 bits.

I thought generally we were naming models "nB" by their number of params and treating quantisation as a separate concern. Are there any other models that instead treat the name as an indicative memory requirement?

Is this an attempt to hide that it fares poorly vs other ~18B parameter models?

EDIT: no, I just misunderstood

  • cubefox 3 hours ago

    > This sounds like what they call "Bamba-9B" is actually an 18B model quantised to 8 bits.

    No it doesn't? The fact that it is 18 GB at 16 bits per parameter before quantization means it is a 9B-parameter model.
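
    A quick back-of-the-envelope check (rough numbers, treating GB loosely and ignoring any embedding or metadata overhead):

      params = 9e9              # ~9B parameters, as the name suggests
      fp16_bytes = params * 2   # 16-bit floats: 2 bytes each -> ~18 GB
      int8_bytes = params * 1   # 8-bit quantization: 1 byte each -> ~9 GB
      print(fp16_bytes / 1e9, int8_bytes / 1e9)  # ~18.0 and ~9.0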

    • anentropic 2 hours ago

      Ah thanks, I see where I got confused now.

adt 17 hours ago

https://lifearchitect.ai/models-table/

Love those GPQA scores hovering around 5% when chance (on 4-way multi-choice) would have got them 25%!

  • montebicyclelo 15 hours ago

    So could do better than chance by excluding the option it's picked?
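
    Rough expected-value sketch, assuming the reported ~5% is per-question accuracy on 4 equally likely options:

      p_model = 0.05                # model picks the right answer 5% of the time
      p_invert = (1 - p_model) / 3  # guess uniformly among the 3 options it rejected
      print(p_invert)               # ~0.317, better than the 25% chance baseline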

  • dudeinhawaii 11 hours ago

    or.. A stopped clock is right twice a day; a mis-prompted LLM is wrong 19 times out of 20—but only because we handed it the wrong instruction sheet.

    Procedural error in testing perhaps? I'm not familiar with the methodology for GPQA.

  • gryfft 16 hours ago

    A stopped clock is right twice a day, but a running clock set to the wrong time is always wrong.

    • cwt137 13 hours ago

      Not always true! Your statement only holds when the running clock runs at exactly the same rate as real time; only then will the clock's time and real time never meet.

      If the clock is running faster than real time, it will at some point catch up to real time and thus be correct for a split second. If the clock is slower than real time, real time will catch up to the clock and the clock will be right for a split second.
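
      Back-of-the-envelope on how often that happens (hypothetical numbers, assuming a 12-hour analog clock): the clock reads correctly whenever its accumulated error reaches a multiple of 12 hours.

        gain_min_per_day = 0.5        # hypothetical: clock gains 30 seconds a day
        lap_minutes = 12 * 60         # it must accumulate a full 12 hours of error
        days_between_correct = lap_minutes / gain_min_per_day
        print(days_between_correct)   # 1440 days, i.e. right about once every 4 years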

      • k__ 5 hours ago

        My girlfriend's microwave-clock runs faster than normal.

        Somehow this thing manages to accumulate an error of ~15 minutes in a month.

      • nathan_douglas 10 hours ago

        If the clock is running backwards at very high speed, it would be right infinitely many times but the proportion of the time that it is right would approach some finite constant.

      • patapong 5 hours ago

        And we haven't even touched on the issue of 24-hour format digital clocks, which can at most be right once per day if stopped!

      • actionfromafar 12 hours ago

        If we are being pedantic, running clocks never run at exactly the same rate as real time. So they'll be right (very) much less often than the stopped clock, which is right twice a day.

    • nthingtohide 7 hours ago

      > a running clock set to the wrong time is always wrong.

      It could be right to within 15 minutes in the appropriate timezone. And a systematic offset like that can be corrected for in a postprocessing step.

    • parrit 15 hours ago

      The RMS of wrongness of the running clock is probably lower.

bushbaba 10 hours ago

Wonder if the name is inspired by my favorite snack, bamba. The best are the hazelnut bamba.

Btw, bamba, if given to kids at a young age, can drastically reduce the chance of peanut allergies.

  • flaviolivolsi 5 hours ago

    Bamba means cocaine in Italian. Better not to give it to kids

  • visarga 9 hours ago

    Let me show you the etymology of Bamba:

    SSM (state space model) -> SSSM (structured state space model) -> (it's like a snake ssss...) Mamba -> Bamba

    • zaptrem 7 hours ago

      Where does the B come from?

mentalgear 16 hours ago

> chose to make just about everything associated with Bamba open-source — the training recipes, the data, the data loader IBM designed for largescale distributed training, and a quantization framework aimed at shaving storage and inferencing costs.

jmward01 19 hours ago

This type of architecture is definitely the future. Unlimited attention is a dead end. As a human you don't need to scan an entire book just to guess what the next word will be, and LLMs shouldn't need to either.

  • og_kalu 8 hours ago

    Humans can re-attend to material whenever necessary (e.g. you can just re-read a book or re-watch a documentary when you feel you have missed crucial context), so it's not the end of the world. These SSMs or modern RNNs can't, and if crucial context has been discarded by the end of the query, well, too bad. Transformers are of course always re-attending, so it's not an issue for them either. Until that issue is resolved, I don't think attention will be going anywhere.

  • quantadev 18 hours ago

    Not to be contrarian, but if the next-word prediction happens to be someone's name, or a place, or something discussed in multiple places in the book, then often, yes, knowledge of the full plot of the book is "required" just to predict the next word, especially as you get to the middle or end of a book.

    For example you could never fill in the last chapter of any good book without having knowledge of every previous chapter. Not highly detailed knowledge, but still knowledge.

    • tmalsburg2 3 hours ago

      Isn't this exactly the point of this model? No need to memorize everything (which is what makes transformers expensive), just keep the relevant info. SSMs are essentially recurrent models.
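
      A toy sketch of that fixed-size recurrent state idea (not Bamba's actual layer, just the general shape of a linear state-space recurrence; the sizes and matrices here are made up):

        import numpy as np

        d_state, d_model = 16, 8                 # toy sizes; the state never grows with sequence length
        A = np.eye(d_state) * 0.9                # state transition (decay), illustrative values only
        B = np.random.randn(d_state, d_model) * 0.1
        C = np.random.randn(d_model, d_state) * 0.1

        h = np.zeros(d_state)                    # fixed-size state, unlike a growing KV cache
        for x in np.random.randn(100, d_model):  # stream 100 tokens through the recurrence
            h = A @ h + B @ x                    # fold each new token into the compressed state
            y = C @ h                            # output depends only on that compressed state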

      • og_kalu 31 minutes ago

        You can't always know what will be "relevant info" in the future. Even humans can't do this, but whenever that's an issue, we just go back and re-read, re-watch, etc.

        None of these modern recurrent architectures have a way to do this.

    • parrit 15 hours ago

      What an LLM does is stuff it all into short term memory. Humans dump the first pages into long term memory and "make sense" of it. Humans have a massive context window because of this (and sheer brain size and efficiency).

      • boroboro4 11 hours ago

        We don't put things into long-term memory right after we read them; we usually do it after a night of sleep. I personally think the context (and, correspondingly, the KV cache) in these models is akin to our short-term memory, while the training process (and the actual weights) is akin to our long-term memory. And we can't be sure our short-term memory doesn't work by matching the current context against what's currently stored in short-term memory. From this perspective transformers are enough and just fine.

        • parrit 10 hours ago

          So if you now hide my original comment and try to recall what I said, do you know it word for word (down to every token, e.g. whether I used one or two spaces somewhere, since that would change the tokens), or do you have a rough concept of what I said?

          OTOH if you had to remember a phone number to write it down, how does that differ?

          • boroboro4 9 hours ago

            I think in a way it makes transformers superior to humans; their short-term memory is much more powerful =) Supporting extra-long contexts also makes transformers superhuman. Because, again, humans' short-term memory is exactly that: short term. And much shorter than millions of tokens we expect from models nowadays.

            As for SSMs - I think they compress the model's memory state way too much. Mixed global/local attention layers do just as well. And sparse/block attention seems like much more of a way forward (https://arxiv.org/abs/2502.11089).

            • littlestymaar 5 hours ago

              > And much shorter than millions of tokens we expect from models nowadays.

              Yet all current models still suck above 32k. (Yes, some can do needle-in-a-haystack fine, but they still fail at anything even slightly more complex over a long context.)

              32k is still much higher than humans', though, so I agree with you that it gives them some kind of superhuman ability over moderately long contexts, but they are still disappointingly bad over longer contexts.

roger_ 14 hours ago

Never got how Mamba models work in multiple dimensions and non-causally.

cubefox 16 hours ago

Another recent transformer/SSM hybrid is "M1", with a more than 3x claimed inference speed-up compared to equivalent transformers: https://arxiv.org/pdf/2504.10449

IBM is claiming at least a 2x inference speed-up with Bamba. Both groups say that future SSM optimizations to vLLM would lead to further inference speed improvements.

gitroom 11 hours ago

the name bamba is killing me lol, all i can see is the snack now

antirez 20 hours ago

Dear IBM name pickers: "Bamba", in Italian, means cocaine.

  • _davide_ 16 hours ago

    When I read the title 'IBM crossed a transformer with an SSM and got ‘Bamba’' I laughed so hard I woke up my kid

  • iddan 18 hours ago

    And in Hebrew it's the name of a snack made of peanut-butter-flavored puffed maize https://en.wikipedia.org/wiki/Bamba_(snack)

    • kridsdale1 18 hours ago

      I imported these to America to feed my infant. Data shows the prevalence of peanut allergies lines up with when AAP guidelines started recommending that babies do NOT eat peanuts. Israel never went along with this and thus has the lowest rates of peanut allergies in the world.

      • cycomanic 16 hours ago

        The latest research does strongly suggest that introducing small amounts of common allergens (peanuts, shellfish, milk products...) as early as possible significantly reduces the risk of allergies later. Many early-childhood organisations already recommend this. Official health recommendations are often slow to catch up (often for good reasons), but introducing peanuts etc. early is already officially recommended in quite a few countries (Australia, NZ, Sweden for example, AFAIK). Not all health professionals are always up to date either, though.

      • itayd 7 hours ago

        You actually don't need to self-import these. Safeway (is it only a West Coast thing?) usually has these stocked in the kosher section.

    • bonzini 18 hours ago

      As an Italian who has tried (only) the Israeli Bamba, I can certify that it is pretty addictive.

  • fb03 12 hours ago

    and in Portuguese, it means "flimsy". What a great name.

  • rdtsc 17 hours ago

    So someone can get fired for picking IBM after all! Or get a bonus, depending on the organization...

  • folgoris 14 hours ago

    A very funny and friendly way to say "cocaine" among Italians. I'm struggling to read it seriously.

  • rzzzt 19 hours ago

    Para bailar La Bamba / Se necesita una poca de gracia

  • vienzo 17 hours ago

    And in Lithuanian it's a navel

  • dismalaf 14 hours ago

    Seems like a good fit.

  • lenerdenator 15 hours ago

    about time they did something to liven things up at big blue

  • beanjuiceII 17 hours ago

    i mean that sounds good to me