4-4: AI and LLMs

I don't want to write this section. I don't want there to be a need. Lately, there has been a growing sentiment that large language models, and tools based on them like ChatGPT, are useful for self-directed learning. I've written extensively on the dangers of these tools, but let me be clear and succinct here:

Large language models hallucinate (that is, lie to you) too frequently to be trusted for learning. They are designed to give authoritative-sounding answers that are usually good enough to pass a first inspection, but that is not sufficiently accurate for your learning. An expert in a field can spot the inaccuracies in these models' output; you may not. That makes it dangerous to rely on their output while you are still learning. There are other, better resources out there. Are they harder to find? Maybe, but perhaps that's not so bad. Research is a skill, and it will aid you throughout your learning journey.

Personally, I wish we could disinvent this entire genre of technology. The harms so obviously, absurdly outweigh the benefits.

results matching ""

    No results matching ""