A Short AI Glossary and a Link to More

rogersgeorge on May 6th, 2023

An easy post for me today.

Hallucination: A well-known phenomenon in large language models, in which the system provides an answer that is factually incorrect, irrelevant, or nonsensical because of limitations in its training data and architecture.
Bias: A type of error that can occur in a large language model if its output is skewed by the model’s training data. For example, a model may associate specific traits or professions with a certain race or gender, leading to inaccurate predictions and offensive responses.
Anthropomorphism: The tendency for people to attribute human-like qualities or characteristics to an A.I. chatbot. For example, you may assume it is kind or cruel based on its answers, even though it is not capable of having emotions, or you may believe the A.I. is sentient because it is very good at mimicking human language.
Click here for more glossary terms.
New York Times, March 29, 2023: https://www.nytimes.com/section/technology?campaign_id=158&emc=edit_ot_20230329&instance_id=88922&nl=on-tech%3A-a.i.&regi_id=60502913&segment_id=129057&te=1&user_id=eaaeef473199ae511f619ba22fa26404

’nuff said from me…
