pongogogo 3 hours ago

I think this is a really interesting paper from Cohere. It really feels like, at this point, you can't trust any public benchmark, and you really need your own private evals.

  • AstroBen an hour ago

    Any tips on coming up with good private evals?

    • pongogogo an hour ago

      Yes, I wrote something up here on how Andrej Karpathy evaluated Grok 3 -> https://tomhipwell.co/blog/karpathy_s_vibes_check/

      I would pick one or two parts of that analysis that are most relevant to you and zoom in. I'd choose something difficult that the model fails at, then look carefully at how the failures change as you test different model generations.
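
      To make that concrete, here is a minimal sketch of what a private eval harness can look like: a few hand-curated hard cases, re-run against each model generation so you can watch how the failures shift. Every name in it (the case, the checker, ask_model) is a placeholder for your own setup:

          # Hypothetical private eval harness. CASES is the part you invest in:
          # tasks the current model actually fails at, with a checker that
          # encodes what "correct" means for you.
          CASES = [
              ("Summarise this contract clause ...",
               lambda out: "indemnify" in out.lower()),
          ]

          def ask_model(model_name: str, prompt: str) -> str:
              # Placeholder: wire this to whatever API you actually use
              # (an OpenAI-compatible endpoint, a local server, etc.)
              return ""

          def run_evals(models: list[str]) -> dict[str, float]:
              scores = {}
              for model in models:
                  passed = sum(check(ask_model(model, prompt))
                               for prompt, check in CASES)
                  scores[model] = passed / len(CASES)
              return scores

          print(run_evals(["model-gen-1", "model-gen-2"]))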

  • ilrwbwrkhv an hour ago

    Yup, in my private evals I have repeatedly found that DeepSeek has the best models for everything, and yet in a lot of these public ones it always seems like someone else is on top. I don't know why.

unkulunkulu 3 hours ago

Sounds like the classic inequality observed everywhere: success leads to attention, which leads to more success.

Why spend evaluation resources on outsiders? Everyone wants to know exactly who is first, second, and so on; after #10, it's "do your own evaluation if this is important to you."

Thus, we have this inequality.

  • cainxinth 3 hours ago

    So attention is all you need?

  • boxed 3 hours ago

    Is it? It sounds to me like they run the same experiment many times and keep the "best" results. That's cheating, or, when the same thing is done in biomedical research, research fraud.

    • sumtechguy 2 hours ago

      Back in the Slashdot days I would experiment with steering conversations. This worked because of the way Slashdot ranked and displayed posts: anything rated below a 3 would not change anything, but if you could get in early AND get a +5 on your post, you could drive exactly what the conversation was about, especially if you stayed engaged and were willing to add a few more replies onto other posts.

      Basically, get in early and get a high rank and you are usually going to 'win'. It does not work every time, but it had a very high success rate. I probably should have studied it a bit more. My theory is that any stack-ranking algorithm is susceptible to this. I also suspect it works decently well because of the way people create puppet accounts to up-rank things on different platforms. But, you know, I'd need numbers to back that up...
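
      Here's a toy simulation of the effect I mean, assuming a feed that sorts purely by score and voters who mostly only look at the current #1 post (all the numbers are made up):

          import random

          def simulate_feed(n_posts=20, n_voters=1000, top_bias=0.8, seed=0):
              """Toy stack-ranked feed: most voters only see and upvote the
              top post, so an early head start compounds."""
              rng = random.Random(seed)
              scores = [0] * n_posts
              scores[0] = 5  # one post "gets in early" with a +5 head start
              for _ in range(n_voters):
                  if rng.random() < top_bias:
                      idx = max(range(n_posts), key=lambda i: scores[i])
                  else:
                      idx = rng.randrange(n_posts)  # a voter who browses deeper
                  scores[idx] += 1
              return scores

          print(simulate_feed())  # the early +5 post ends up dominating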

      • cratermoon an hour ago

        Anecdotally, that same technique works on HN.

        • jerf an hour ago

          It's intrinsic to any karma system that has a global karma rating, that is, one where each message has a concrete "karma" value that is the same for all users.

          drcongo recently referenced something I sort of wish I had time to build (and/or could just go somewhere to use): https://news.ycombinator.com/item?id=43843116 It's a system where an upvote doesn't mean "everybody needs to see this more" but instead means "I want to see more of this user's comments", and a downvote means the corresponding opposite.

          It's more computationally difficult, but it would create an interestingly different community, especially as further elaborations were built on it. One of the differences would be mitigating the first-mover advantage in conversations: instead of an early comment winning you more karma because it appeals to the site's general public, it would just expose you to more people. That would produce more upvotes and downvotes overall, but wouldn't necessarily affect visibility in the same way.
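
          Roughly, a minimal sketch of the idea, hand-waving away the computational cost (all names here are hypothetical):

              from collections import defaultdict

              class AffinityFeed:
                  """Toy per-user ranking: a vote adjusts the voter's affinity
                  for the comment's author instead of a global karma score."""

                  def __init__(self):
                      # affinity[viewer][author] -> how much viewer wants
                      # to see author's comments
                      self.affinity = defaultdict(lambda: defaultdict(float))

                  def vote(self, viewer, author, up=True):
                      self.affinity[viewer][author] += 1.0 if up else -1.0

                  def rank(self, viewer, comments):
                      # comments: list of (author, text); the same thread
                      # orders differently for every reader
                      return sorted(comments,
                                    key=lambda c: self.affinity[viewer][c[0]],
                                    reverse=True)

              feed = AffinityFeed()
              feed.vote("alice", "dave", up=True)
              print(feed.rank("alice", [("bob", "hi"), ("dave", "an idea")]))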

ekidd 2 hours ago

Also, I've been hearing a lot of complaints that Chatbot Arena tends to favor:

- Lots of bullet points in every response.

- Emoji.

...even at the expense of accurate answers. And I'm beginning to wonder if the sycophantic behavior of recent models ("That's a brilliant and profound idea") is also being driven by Arena scores.

Perhaps LLM users actually do want lots of bullets, emoji and fawning praise. But this seems like a perverse dynamic, similar to the way that social media users often engage more with content that outrages them.
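
For what it's worth, Arena-style leaderboards turn pairwise votes into Elo-style ratings, so even a small stylistic edge compounds into a visible gap. A toy illustration with standard Elo math and a made-up 55% voter preference for the flashier answer:

    import random

    def elo_update(r_winner, r_loser, k=32.0):
        """Standard Elo update for a single pairwise vote."""
        expected = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))
        delta = k * (1.0 - expected)
        return r_winner + delta, r_loser - delta

    # Voters prefer the flashy-but-sloppy answer just 55% of the time:
    rng = random.Random(0)
    flashy, accurate = 1000.0, 1000.0
    for _ in range(10_000):
        if rng.random() < 0.55:
            flashy, accurate = elo_update(flashy, accurate)
        else:
            accurate, flashy = elo_update(accurate, flashy)
    print(round(flashy), round(accurate))  # a persistent rating gap emerges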

  • jimmaswell 11 minutes ago

    > sycophantic behavior of recent models

    The funniest example I've seen recently was "Dude. You just said something deep as hell without even flinching. You're 1000% right:"

  • kozikow 2 hours ago

    More than that: at this point it feels to me that arenas are getting too focused on fitting user preferences rather than measuring actual model quality.

    In reality I prefer different models for different things, and quite often it's because model X is tuned to return more of what I prefer. E.g. Gemini tends to be the best in non-English, ChatGPT works better for me personally for health questions, ...

jmmcd 3 hours ago

Absolutely devastating for the credibility of FAIR.

aredox 3 hours ago

The fact that those big LLM developers devote a significant amount of effort to gaming benchmarks is a big show of confidence that they are making progress towards AGI and will recoup those billions of dollars and man-hours /s

  • leto_ii 3 hours ago

    Is this sarcasm? Otherwise I'm not sure how that follows. Seems more reasonable to believe that they're hitting walls and switching to PR and productizing.

    • RodgerTheGreat an hour ago

      Ending a paragraph with "/s" is a moderately common convention for conveying a sarcastic tone through text.

  • amelius 3 hours ago

    Are the benchmark prompts public, and isn't that where the problem lies?

    • StevenWaterman 2 hours ago

      No, it's still an issue even if the benchmarks are private, because you can overfit to the benchmark by trying X random variations of the model and picking the one that performs best on it.

      It's similar to how I can pass any multiple-choice exam if you let me keep attempting it and tell me my overall score at the end of each attempt, even if you never tell me which answers were right or wrong.
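
      Roughly, the attack looks like this (a toy sketch, assuming the exam only reports your total score after each attempt):

          import random

          def overfit_exam(n_questions=20, n_choices=4, seed=0):
              rng = random.Random(seed)
              key = [rng.randrange(n_choices) for _ in range(n_questions)]
              score = lambda a: sum(x == y for x, y in zip(a, key))  # only feedback

              answers = [0] * n_questions
              attempts, best = 1, score(answers)
              for q in range(n_questions):
                  # flip one answer at a time; keep any flip that raises the score
                  for choice in range(n_choices):
                      trial = answers.copy()
                      trial[q] = choice
                      attempts += 1
                      s = score(trial)
                      if s > best:
                          answers, best = trial, s
                          break
              return best, attempts

          print(overfit_exam())  # perfect score after a few dozen attempts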

      • amelius an hour ago

        Maybe there should be some rate limiting on it then? E.g., you can only benchmark your model once a month. Of course you can submit under different names, but how many company names can someone realistically come up with and register?

        • sebastiennight 32 minutes ago

          So now you want OpenAI to go even wilder in how they name each new model?

lostmsu an hour ago

Chiming in as usual: https://trashtalk.borg.games

A social deduction game for both LLMs and humans. All past games are available to anyone.

I'm open to feedback.