> I worry that we are being oversold on a promise that LLMs will magically make up for a lack of proficiency in an area
I saw a post on Twitter about how game devs were using ChatGPT for localization, and when the text was translated to English it said something like “as a chat assistant I’m unable to translate this concept” or gave an explanation instead of the translation.
This is exactly the sort of future I imagine with AI: not that the grunts on the ground will be sold on it, but that management will be convinced they can fire the people who know what they’re doing and replace them with interns armed with a ChatGPT subscription.
>I saw a post on Twitter about how game devs were using ChatGPT for localization, and when the text was translated to English it said something like “as a chat assistant I’m unable to translate this concept”
http://news.bbc.co.uk/2/hi/7702913.stm
Kelsey Hightower recently talked about this in a talk titled "The Fundamentals": https://www.youtube.com/watch?v=Jlqzy02k6B8
> Maybe another industry of cleaning up vibe coded messes will be a thing?
I have seriously considered hanging out my shingle to do this freelance. I don't think the time is quite ripe yet, but maybe in a few months.
My experience has been more like this:
- Write small library as contract work.
- Client vibe codes with it. Code doesn't work.
- End up doing good faith assurance work to fix the vibe coded bug in the client code; the issue was not in my small library.
People are programming out on a limb - and blame goes to the library maintainer if the user lacks the fundamental skills to do troubleshooting.
Yeah my plan was to come in _after_ the vibe coders have done their damage, fix their mess for a somewhat extortionate amount of money, then tell them not to do it again.
Sounds like a nice niche for some Instant Consulting.
Wherever there is pain impeding capital, there is opportunity. And there is always a set of current pain points. There can only be no pain in a fully-autonomous organization with autonomous investors and customers too.
It seems like the future is converging on there being 5 Matrix savant architects who make $1B/yr and keep things operating while everyone else lives in a shanty or a pod.
...right, I see where you're going with this, then we eat those five savants and everyone else is happy.
Nawh, we'll be the Eloi and they'll become the Morlocks. They'll probably eat us.
Basically the same as cleaning up after they hired the cheapest dev they can find. Something our little shop has been doing for 4 years now. Can't wait to charge to debug a 100,000 line vibe coded WordPress plugin.
Oh god, there will be money in this for sure, but at what cost?!?
As someone who loves fixing weird bugs I kinda hope this becomes a thing. There’s nothing as satisfying as finding logic bugs.
Same, it's like solving puzzles, except with practical benefits including getting paid for it.
I agree with the sentiment, but I don't think the example given (creating SQL queries) is a good representation of this problem.
That's because if you know a little bit of SQL and know how to validate the answers LLMs give you, this becomes a non-issue.
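For instance, a quick sanity check is to run the LLM's query against a tiny fixture where you already know the right answer. A minimal sketch in Python with sqlite3 (the table and query here are made up for illustration):

    import sqlite3

    # Tiny in-memory fixture with a hand-checkable answer.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)",
                     [(1, 10.0), (2, 25.0), (3, 5.0)])

    # Query as suggested by the LLM.
    llm_query = "SELECT SUM(amount) FROM orders WHERE amount > 7"

    # Inspect the plan, then check the result against what you expect.
    print(conn.execute("EXPLAIN QUERY PLAN " + llm_query).fetchall())
    assert conn.execute(llm_query).fetchone()[0] == 35.0  # 10 + 25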
A better example would be an ambiguous prompt where the LLM can either use an array or a map to solve your problem, and it chooses an array. But down the road you need a feature where direct access is what matters, and your code is already using arrays. In this situation what tends to happen is the LLM ends up making some hack on top of your arrays to create this new feature, and the code gets real bad.
By not understanding the difference between these two data structures, you aren't able to specify what needs to be done, and the LLM ends up implementing your feature in an additive way. And when you add enough features in this way, things get messy.
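Roughly the kind of thing I mean, as a toy Python sketch (the names are invented for illustration):

    # The LLM picked a list early on...
    users = [("alice", 1), ("bob", 2), ("carol", 3)]

    def get_user_id(name):
        # ...so keyed lookup gets bolted on as an O(n) scan
        # instead of switching the underlying structure.
        for n, uid in users:
            if n == name:
                return uid
        return None

    # Knowing the difference lets you ask for a map instead:
    users_by_name = {"alice": 1, "bob": 2, "carol": 3}
    assert get_user_id("bob") == users_by_name["bob"]  # dict: O(1) direct access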
What is still not clear to me is the proper "abstraction layer" we need to use to learn things in this new world where LLMs are available.
So, Karl Marx predicted that capitalism would eat itself because capitalists will value creating money itself (and money-making enterprises, such as asset bubbles) more than the actual production of goods. This was later elaborated by many people, but since I am not an expert in this, I'll just mention Hyman Minsky and Thomas Piketty.
The OP is essentially a (white collar) labor version of this. What is evidently valued is the appearance of expertise, rather than expertise itself. Just like the capitalists who want to make money while skipping the production of actual goods, "professionals" are going to skip actual learning in order to appear knowledgeable.
For 200 years, people have hoped that the "free market" will sort out the problem that Marx saw. It didn't happen - we still get financial bubbles that cause trouble for many people. So, I suspect it's a mistake to assume the learning problem will fix itself either. I suspect people (society at large) will have to consciously value the hard work of learning for this to be fixed.
Yeah, that's why we haven't created any new goods since we embraced capitalism—the whole thing is just parasitic on the labor of the masses, and doesn't really add anything. </sarcasm>
Seriously, you might want to actually do a sniff check before taking Marx's word for anything.
I’m guilty of this. I’m trying to be more mindful when using LLM-generated code. It’s mostly a personal issue: I tend to procrastinate and hope the code “just works.”
We need to stay vigilant; otherwise we will pay the cost of fixing LLM bugs later.
First time I've ever heard someone admit this; I've only ever heard people accuse their coworkers of it. This is honestly a very sad thing to hear a professional dev say.
Sorry, I will try to be better.
Wasn't there a study published recently showing that people who use LLMs more frequently tend to become less intelligent over time, because their brains don't have to process complex tasks and workflows anymore?
Yes, and I think we'll course correct, eventually.
There's a reason we still (generally) teach people how to do arithmetic with pencil and paper instead of jumping straight to calculators. Learning basic algorithms for performing the computations helps solidify the concepts and the rules of the game.
We'll need to do the same thing eventually with respect to LLMs and software engineering. People who skip the foundations or let their comprehension atrophy will eventually end up in a spot in which they need those skills. I basically never do arithmetic using pen and paper now, but I could if I had to, and, more importantly, the process ingrained some basic comprehension of how the integers relate under the group operations.
I totally agree, re: SQL specifically, by the way. SQL is basically already natural language. It's probably the last thing that I'd need to offload to some natural language prompt. I think it's a bit of a vicious circle problem. There's a lot of people who only need to engage with SQL from time to time, so working with it is a bit awkward each time for lack of practice. This incentivizes them to offload it to the LLM just to get it out of the way, which in turn further atrophies their skills with SQL.
> SQL is basically already natural language
This was actually the whole point of SQL in the first place: to be a query language close enough to natural language that non-specialists could easily learn to use it.
This was also the point of COBOL. I think one thing we've learned is non-specialists don't like thinking/problem solving, and there's no meeting them halfway on that. Asking some people to think is asking too much.
Bingo. And it is on this rock that non-technical people vibe coding is going to sink.
I think that's a little too cynical of a view: in years past, I did in fact teach non-specialists to use SQL (against a read-only replica, I'm not crazy) so I didn't have to run all their ad hoc queries for them, and many of them took well to it once they overcame their initial hesitance. The framing that made it click for them was "it's like Excel, but with words."
> Yet, I see people blindly trusting LLM outputs to develop SQL queries, without knowing how to explain or debug them.
The same is true about every other single instruction produced.