-
it’s true that this would mislead children, but the model could hallucinate about literally anything. Especially at this stage, no one, children or adults, should uncritically accept what the model states as fact. That said, I agree LLMs need to improve their factual accuracy
-
Although it is highly debated, some scholars suggest Queen Charlotte may have had African ancestry, or that she would be considered a person of color by today’s standards. Of course, she was queen in the 1700s-1800s, but it isn’t entirely outlandish to get a “Queen of Color” if we aren’t requesting a specific queen or a specific race
-
People of color did live in England in the Middle Ages. Not diverse in the way we conceive of it now, but there are papers discussing the racial diversity of the period. It was surely less intermingled than today, but it’s not as if these images are impossible
*Other things about these images are anachronistic or fantastical, such as the clothing. Are we worried about children getting the wrong impression of history in that sense?
- Of course increasing the visibility and representation of all kinds of marginalized people is important. I, myself, am disabled, so I care about that representation too; thanks for pointing it out. I do kinda feel like people would be groaning if the model had produced a queen with a visible disability, though… I would be delighted to be wrong on this front :)
groet@feddit.de 10 months ago
Repeat after me:
“Current AI is not a knowledge tool. It MUST NOT be used to get information about any topic!”
If your child is learning Scottish history from AI, you failed as a teacher/parent. This isn’t even about bias, just about what an AI model is. It’s not even supposed to be correct; that’s not what it is for. It is for appearing as correct as the things it has been trained on. And as long as there are two opinions in the training data, the AI will gladly make up a third.