kristjank a day ago

I don't think we, as a wider scientific/technical society, should care for the opinion of a person who uses epistocratic privilege as a serious term. This stinks to high hell of proving a conclusion by working backwards from it.

The cognitive dissonance required to imply that expecting knowledge from a knowledge worker, or in knowledge-centered discourse, is a form of boundary work or discrimination is extremely destructive to any and all productive work, once you consider how most sciences and technological systems depend on a very fragile notion of knowledge preservation and on incremental improvements to a system that is intentionally pedantic, precisely to provide stable ground for progress. In a lot of fields, replacing this structure with AI is still very much impossible, but explaining why for each example an LLM blurts out is tedious work. I need to sit down and solve a problem the right way, and in the meantime ChatGPT can generate about 20 false solutions.

If you read the paper, the author even uses terms related to discrimination by immutable characteristics, invokes xenophobia, and quotes a black student who calls it racist to discourage using AI as a cheating device.

This seems to me utter insanity and should not only be ignored, but actively pushed against, on the grounds that it is anti-intellectualism.

  • randomcarbloke 19 hours ago

    Being a pilot is an epistocratic privilege, and they should welcome the input of the less advantaged.

yhoiseth a day ago

Sarkar argues that “AI shaming arises from a class anxiety induced in middle class knowledge workers, and is a form of boundary work to maintain class solidarity and limit mobility into knowledge work.”

I think there is at least some truth to this.

Another possible cause of AI shaming is that reading AI writing feels like a waste of time. If the author didn’t bother to write it, why should I bother to read it and provide feedback?

  • alisonatwork a day ago

    This latter piece is something I am struggling with.

    I have spent 10+ years working on teams primarily composed of people whose first language is not English, in workplaces where English is the mandated business language. Ever since the earliest LLMs started appearing, the written communication of non-native speakers has become a lot clearer grammatically, but also a lot more florid and pretentious than they actually intend. This is really annoying to read, because you need to mentally decode the LLM-ness of their comments/messages/etc. back into normal English, which ends up costing more cognitive overhead than their blunter and/or broken English used to. But, from their perspective, I also understand that it saves them cognitive effort to just type a vague notion into an LLM and ask for a block of well-formed English.

    So, in some way, this fantastic tool for translation is resulting in worse communication than we had before. It's strange and I'm not sure how to solve it. I suppose we could use an LLM on the receiving end to decode the rambling mess the LLM on the sending end produced? This future sucks.

    • xwolfi 21 hours ago

      It is normal: you add a layer between the two brains that communicate, and that layer only adds statistical experience to the message.

      I write letters to my gf, in English, while English is not our first language. I would never ever put an LLM between us: this would fall flat, remove who we are, be a mess of cultural references, it would just not be interesting to read, even if maybe we could make it sound more native, in the style of Barack Obama or Prince Charles...

      LLMs are going to make people as dumb as GPS did. Except that where reading a map was never a very useful skill, writing what you feel... should be.

    • dist-epoch 21 hours ago

      I thought about this too. I think the solution is to send both the prompt and the output, since the output was itself selected by the human from potentially multiple variants.

      Prompt: I want to tell you X

      AI: Dear sir, as per our previous discussion let's delve into the item at hand...

  • throwaway78665 a day ago

    If knowledge work doesn't require knowledge then is it knowledge work?

    The main issue with current AI is that without knowledge (at least at some level) you can't validate its output.

  • visarga a day ago

    > why should I bother to read it and provide feedback?

    I like to discuss a topic with an LLM and generate an article at the end. It is more structured and better worded but still reflects my own ideas. I only post these articles on a private blog; I don't pass them off as my own writing. But I find this exercise useful because I use LLMs as a brainstorming and idea-debugging space.

  • dist-epoch 21 hours ago

    > If the author didn’t bother to write it, why should I bother to read it

    There is an argument that luxury stuff is valuable because typically it's hand made, and in a sense, what you are buying is not the item itself, but the untold hours "wasted" creating that item for your own exclusive use. In a sense "renting a slave" - you have control over another human's time, and this is a power trip.

    You have expressed it perfectly: "I don't care about the writing itself, I care about how much effort a human put into it"

    • satisfice 21 hours ago

      If effort wasn’t put into it, then the writing cannot be good, except by accident or theft or else it is not your writing.

      If you want to court me, don’t ask Cyrano de Bergerac to write poetry and pass it off as your own.

      • dist-epoch 21 hours ago

        > If effort wasn’t put into it, then the writing cannot be good

        This is what people used to say about photography versus painting.

        > pass it off as your own.

        This is misleading/fraud, and a separate subject from the quality of the writing.

vanschelven a day ago

This reads like yet another attempt to pathologize perfectly reasonable criticism as some form of oppression. Calling “AI could have written this” a classist slur is a stretch so extreme it borders on parody. People say that when writing lacks originality or depth — not to reinforce some imagined academic caste system. The idea that pointing out bland prose is equivalent to sumptuary laws or racial gatekeeping is intellectual overreach at its finest. Ironically, this entire paper feels like something an AI could have written: full of jargon, light on substance. And no, there’s no original research, just theory stacked on theory.

  • raincole a day ago

    > Calling “AI could have written this” a classist slur is a stretch so extreme it borders on parody.

    In AI discussions, Poe's law is rampant. You can never tell what is parody and what is not.

    There was a (former) xAI employee who got fired for advocating the extinction of humanity.

terminalshort a day ago

Reading this makes me understand why there is a political movement to defund universities.

  • throwaway2562 17 hours ago

    The real shame of it is that OP claims affiliation with two respectable universities (UCL and Cambridge) and one formerly credible venue (CHI).

    Mock scholarship is on the rampage. I agree: this stuff does make me understand the yahoos with a defunding urge too - not something I ever expected to feel any sympathy for, but here we are.

  • laurent_du a day ago

    It makes me sick to my heart to think that money is stolen from my pocket to be given to lunatics of this kind.

mgraczyk a day ago

I'd like to brag that I got in trouble for saying this to somebody in 2021, before ChatGPT

  • andrelaszlo a day ago

    I put a chapter of a paper I wrote in 2016 into GPTZero and got the probability breakdown 90% AI, 10% human. I am 100% human, and I wrote it myself, so I guess I'm lucky that I didn't hand it in this year, or I could have gotten accused of cheating?

    • rcxdude a day ago

      That's more an indictment of the accuracy of such tools. Writing in a very 'standard' style, like that found in papers, is going to match the LLM's predictions well, regardless of origin.

    • tough a day ago

      maybe gptzero had your paper in its training data (it being from 2016)?

    • mgraczyk a day ago

      I wasn't being serious when I said it, I was using it as an insult for bad work

miningape a day ago

Overall, this comes across as extremely patronising: to authors, by running defence for obviously sub-par work because their background supposedly makes it "impossible" for them to do good work; and to the commenters, by assuming a malice towards the less privileged that needs to be controlled.

And it's all wrapped in a lovely package of AI apologetics - wonderful.

So, honestly, no. The identity of the author doesn't matter, if it reads like AI slop the author should be grateful I even left an "AI could have written this" comment.

satisfice a day ago

This paper presents an elaborate straw-man argument. It does not faithfully represent the legitimate concerns of reasonable people about the persistent and irresponsible application of AI in knowledge work.

Generative AI produces work that is voluminous and difficult to check. It presents such challenges to people who apply it that they, in practice, do not adequately validate the output.

The users of this technology then present the work as if it were their own, which misrepresents their skills and judgement, making it more difficult for other people to evaluate the risk and benefits of working with them.

It is not the mere otherness of AI that results in anger about it being foisted upon us, but the unavoidable disruption to our systems of accountability and ability to assess risk.

  • strangecasts 14 hours ago

    > Generative AI produces work that is voluminous and difficult to check. It presents such challenges to people who apply it that they, in practice, do not adequately validate the output.

    As a matter of scope I could understand leaving the social understanding of "AI makes errors" separate from technical evaluations of models, but the thing that really horrified me is that the author apparently does not think past experience should be a concern in other fields:

    > AI both frustrates the producer/consumer dichotomy and intermediates access to information processing, thus reducing professional power. In response, through shaming, professionals direct their ire at those they see as pretenders. Doctors have always derided home remedies, scientists have derided lay theories, sacerdotal colleges have derided folk mythologies and cosmogonies as heresy – the ability of individuals to “produce” their own healing, their own knowledge, their own salvation. [...]

    If you don't accept that scientists' frequent encounters with crank "lay theories" are a reason for initial skepticism, can you really explain this as anything other than anti-intellectualism?

  • forgetfreeman a day ago

    Additionally their use of the term "slur" for what is frequently a valid criticism seems questionable.

    • satisfice 21 hours ago

      It is itself a form of bullying.

kelseyfrog a day ago

While it would have been a better paper if the author had collaborated with a sociologist, it would also have been less likely to be taken seriously by HN, for the same class anxieties its title is founded on.

  • miningape a day ago

    Excuse us for expecting evidence and intellectual rigour. :D

    I've taken a number of university Sociology courses, and from those experiences I came to the opinion that Sociology's current iteration is really just grievance airing in an academic tone. It doesn't really justify its own existence outside of being a Buzzfeed for academics.

    I'm not even talking about slightly more rigorous subjects such as Psychology or Political Science, which modern Sociology uses as a shield for its own lack of a feedback mechanism.

    Don't get me wrong though: I realise this is an opinion formed from admittedly limited exposure to Sociology (~3 semesters). It could also be that the university I went to particularly leaned on "grievance airing".

    • kelseyfrog 15 hours ago

      My exposure to Sociology and Psychology at university made me understand that HN's resistance to sociology stems from the discomfort of being confronted with uncomfortable truths. It's easier to discount sociology than deal with these truths. I get it. I used to be that way too.

      • miningape 10 hours ago

        Sure, but what evidence is there of that claim? Do you have any falsifiable/empirical studies you can cite?

        • kelseyfrog 9 hours ago

          Of course. But my only requirement is that we pre-register what evidence will change your mind. Fair?

          • miningape 9 hours ago

            The study should tackle these questions in one form or another:

            1. What specific, measurable phenomenon would constitute 'discomfort with uncomfortable truths' versus legitimate methodological concerns?

            2. How would we distinguish between the two empirically?

            I'd expect a study or numerical analysis with at least n > 1000 and p < 0.05. The study will ideally have controls that distinguish mere correlation from causation. The study (or cross-analyses of it) should also explore alternative explanations, either disproving the alternatives or showing that they have weak(er) significance (also through numerical methods).

            I'm not sure what kinds of data this result could be derived from, but the methods for getting that data should be cited and common - thus being reproducible. Data could also be collected by examining alternative "inputs" (independent variables: i.e. temperament towards discomfort), or by testing how inducing discomfort leads to resistance to ideas, or something else.

            I'd expect the research to include, for example, controls where the same individuals evaluate methodologically identical studies from other fields. We'd need to show this 'resistance' is specific to sociology, not general scientific skepticism.

            That's to say: the study should show, numerically and repeatably, that there is a legitimate correlation between sociological studies inducing discomfort and the resistance to them, and that the resistance does not stem from actual methodological concerns.

            This would include:

            1. Validated scales measuring "discomfort" or cognitive dissonance

            2. Behavioural indicators of resistance vs. legitimate critique

            3. Control groups exposed to equally challenging but methodologically sound research

            4. Control groups exposed to less challenging but equally methodologically sound research (to the level of sociology)

            Also, since we're making a claim about psychology and causation, the study would ideally be conducted by researchers outside of sociology departments to avoid conflicts of interest - preferably cognitive psychologists or neuroscientists using their methodological standards.

            • kelseyfrog 4 hours ago

              Thanks. I understand what happened here. This is a critical discussion paper and you're making the category error of judging it by the rubric of scientific epistemology.

stuaxo a day ago

The state of this headline.

UncleMeat 18 hours ago

"We have to use AI to achieve class solidarity" is insane to me.

People realize that the bosses all love AI because they envision a future where they don't need to pay the rabble like us, right? People remember leaders in the Trump administration going on TV and saying that we should have fewer laptop jobs, right?

That professor telling you not to use ChatGPT to cheat on your essay is likely not a member of the PMC but is probably an adjunct getting paid near-poverty wages.

sshine 18 hours ago

Synthetic beings will look back at this with great curiosity.

_vertigo a day ago

Honestly, AI could have written this.

  • readthenotes1 a day ago

    That tldr table at top looks a lot like what perplexity provides at the bottom...

s0teri0s a day ago

The obvious response is, "Oh, it will."

kazinator a day ago

[flagged]

  • aaronbrethorst a day ago

    Your argument would've been much better without injecting ca. 2025 US culture war jargon into it.

    • kazinator a day ago

      Sorry, what jargon is that? I may be able to fix it with your help. I'm not in the USA and don't follow US politics or culture enough to be up to 2025 in jargon.

      • tmtvl a day ago

        I will say it's funny seeing a post which starts off calling someone a 'woke cretin' end with a thinly veiled take-that at Musk.

        I think it may be better to say that the author has an agenda or is co-opting real issues, but I can't think of an elegant way to phrase that.

        • whstl a day ago

          Or maybe we should give the author the benefit of the doubt and assume he's unhappy with both radical ends of the spectrum, which would be a refreshing take in 2025 to be honest.

          I don't really agree with the general argument, though. I don't think painting this as an "AI Slop" issue is fair. Online communities are quicker (and quieter!) at dismissing obvious AI slop than at dismissing legitimate discourse that looks like AI, or was cleaned up with AI, or even just uses em dashes. Perhaps the excusable usage is marking content as machine-translated, which of course causes other disadvantages for the poster. But of course that's just one point of view, and communities I don't frequent might be 100% different!

          • nottorp a day ago

            > he's unhappy with both radical ends of the spectrum

            Seems to be very hard to realize that in the US. But from the outside, both ends are batshit insane.

      • VectorLock a day ago

        "Woke cretin."

        • xwolfi 21 hours ago

          I like that one

        • kazinator a day ago

          That can't be it. Cretin traces back to the 18th century. Etymonline places woke into the 2010s.

          • lexicality 21 hours ago

            Doesn't matter when the word was created, in the same way furries use ":3" to signal they're a furry, people now use "woke" as a pejorative to signify that they're a member of the "alt-right". I'd suggest avoiding that word unless that's the group membership you want to be advertising.

            • kazinator 16 hours ago

              That is false. Woke liberals and their insane ideas are widely despised by people from all over the political map. Paul Graham has observed that "wokeness is in retreat".

              https://paulgraham.com/woke.html

              Now, some alt right are acting like wokeness is not in retreat, and constitutes some kind of large and growing threat, so that aligning against it is a major priority.

              However, that doesn't mean everyone who uses the word is partaking in an alt right anti-woke frenzy.

              • lexicality 10 hours ago

                Hey, I didn't say you were alt right, just that most people will assume you are if you use that word.

              • throwawaybob420 11 hours ago

                Oh my god who gives a fuck about what Paul Graham has to say about “woke”.

                Saying “woke” in any context to attack a particular idea or ideology makes you _sound_ like you're engaging in American culture war bullshit.

                • kazinator 8 hours ago

                  The author of the submitted paper writes that by positing that someone's post may have been AI generated, or could as well have been, you're oppressing disadvantaged groups, helping to keep them out of joining the knowledge worker class.

                  I.e. implying you should probably stop doing that, in the name of social justice.

                  That is absurdly woke, by any objective measure. The word woke is being used accurately.

                  By the way, for most people it should be a compliment to have their writing compared to AI, because "AI could have written that" means the writing has good grammar and spelling, and usually makes some kind of coherent point.

                  I happen to like the above article by Graham; it's very observant. I feel that he nails almost everything in it and handles the subject in a different way from other authors.

    • kristjank a day ago

      Insisting on discussing everything in a cultural vacuum implies that the topic is irrelevant to the cultural climate, which could hardly be further from the truth in this case.

renewiltord a day ago

This is just like the way some people decided that "Blue Check" should be an insult on Twitter. Occasionally people still say it, but almost everyone ignores it. Fads like this are common on the Internet. It's just like any other clique: a few people accidentally taste-make while a bunch of replicators simply repeat mindless things over and over again: "slop", "mask-off moment", "enshittification", "ghoulish". Just words that people repeat because other people say them and get likes/upvotes/retweets or whatever.

The "Blue Check" insult regime didn't get anywhere and I doubt any anti-LLM/anti-diffusion-model stuff will last. "Stop trying to make fetch happen". The tools are just too useful.

People on the Internet are just weird. Some time in the early 2010s the big deal was "fedoras". Oh you're weird if you have a fedora. Man, losers keep thinking fedoras are cool. I recall hanging out with a bunch of friends once and we walked by a hat shop and the girls were all like "Man, you guys should all wear these hats". The girls didn't have a clue: these were fedoras. Didn't they know that it would mark us out as weird losers? They didn't, and it turned out it doesn't. In real life.

It only does on the Internet. Because the Internet is a collection of subcultures with some unique cultural overtones.

  • throwawaybob420 a day ago

    Sounds like something a blue checker would say. And yes, if you pay for Twitter you're going to get clowned on.

    And what the hell is that segue into fedoras? The entire meme exists because stereotypically clueless individuals took fedoras to be the pinnacle of fashion while disregarding nearly everything else about not only their outfits but their bodies.

    This entire comment reeks of not actually understanding anything.

    • renewiltord 15 hours ago

      The point is that Internet people are weird and have their own fads. These so-called “slurs” are meaningless. It’s just like perhaps there’s some middle-school class somewhere that’s decided that white shoes are lame. The majority of the world doesn’t care.

      These fads are transitory. The people participating think the fads are important but they’re just fads. Most of them are just aping the other guy. The Internet guys think they’re having an opinion but really it’s just flock behaviour and will change.

      Once upon a time everyone on the Internet hated gauges (earrings that space out the earlobe) and before that it was hipsters.

      These are like the Harlem Shake. There is no meaning to it. People are just doing as others do. It’ll pass.

    • TeMPOraL a day ago

      Found that user who memorized KnowYourMeme and thinks they're a scholar of culture now.

      Or a cheap LLM acting as them, and wired up to KnowYourMeme via MCP? Can't tell these days. Hell, we're one weekend side project away from seeing "on the Internet, no one knows you are a dog" become true in a literal sense.

      /s

  • pluto_modadic a day ago

    blue checks are orthogonal - they're more a rough approximation of "I bought a Cybertruck when Musk went full crazy" (and yes, it's a bad look). Judging some blog post for seeming like AI is different.

mvdtnz 21 hours ago

Gosh I wonder why there's a cultural backlash against the "intellectual" elite.