Discussion about this post

Phil Smith

Thank you. This is one of the very few posts I have read that match my thinking. When ChatGPT 3.5 was released back in late 2022 I, like many others I guess, was blown away. I was all in, but gradually came to realise the fundamental problem with hallucinations (errors). I thought it was just me, and that it was a problem that would eventually be corrected. But then at a random small-scale event I managed to talk to an AI researcher, who admitted the problem was not fixable; it could only be improved. Since then I have felt like a voice in the wilderness. I could well be suffering from confirmation bias, but thanks again.

FerPilot

Very good post, thanks for sharing. You touch on something important that you could explore more fully in the future: user psychology around AI errors. People seem to have very different tolerance levels for AI mistakes versus human mistakes, which creates interesting product design challenges beyond just the technical reliability issues.
