I don't want to read stuff that AI has expanded upon.
If you turned 3 brief bullet points into a long paragraph with ChatGPT, you have effectively used a chatbot to waste others' time and effort.
If you are doing so, just put the prompt at the top, and then leave the flowery nonsense afterwards to make yourself feel better.
Similarly, if you vibe coded something, just put the prompt at the top of the file in a comment, and the rest can be ignored. It's fine to force computers to read it, but there aren't enough lifetimes for humans to bother.
Same goes for documentation and podcasts. If you don't have the time, then it must not be all that important.
I feel like debates about AI and plagiarism are quite surface-level definitional debates. They miss an important discussion of why we believe (old-fashioned) plagiarism is bad. Answers about when using AI is acceptable should be based on that reasoning.
I can see why profs don’t want to read stuff the student may not even have read.
The request for the student to provide a memo with samples written with and without A.I. seems a bit silly though.
Much like plagiarism, this seems like a hopeless battle. It doesn't help that genAI inherently favours the generating side more than the reviewing side.
I understand the point and the issue, but I wonder if the author will be as strict with the colleague.
Great read
This educator makes a valid argument and is entitled to their restrictions, but I have an idea for an alternative approach. Allow unfettered use of AI, but raise the bar for grading. Tell the students to knock themselves out and use any tool they want, but you will now expect their papers to read like polished, professional-level prose.
This person is talking about PhD theses and master's-level writing intended for publication, not graded papers.