Hallucination: A well-known phenomenon in large language models, in which the system provides an answer that is factually incorrect, irrelevant, or nonsensical because of limitations in its training data and architecture.
Bias: A type of error that can occur in a large language model if its output is skewed by the model’s training data. For example, a model may associate specific traits or professions with a certain race or gender, leading to inaccurate predictions and offensive responses.
Anthropomorphism: The tendency for people to attribute human-like qualities or characteristics to an A.I. chatbot. For example, you may assume it is kind or cruel based on its answers, even though it is not capable of having emotions, or you may believe the A.I. is sentient because it is very good at mimicking human language.
Discover five critical techniques for good writing, at no charge. They are foundational to my philosophy of writing. Drop me a line at curmudgeon@writing-rag.com asking for the article, and I'll send it to you.
I don't get enough traffic to justify using an auto-responder, so you'll get the article directly from me.
I happen to have another article with some good tips. I'll send you both.
Bio
Rogers George has been a technical writer for more than 20 years. He has written on subjects as diverse as outhouse assembly, restaurant reviews, software, information security, and scientific equipment. He has his own writing consultancy and is always happy to discuss writing and grammar. Drop him a line at curmudgeon@writing-rag.com.