Radiant_sir_radiant@beehaw.org 6 months ago
Dear ChatGPT: When an online forum is illegally used for LLM training, is there a way to post wrong answers to poison the LLM training data, but for a human reader to notice that the answer is wrong?
Yes, there are strategies to post wrong answers that could “poison” the training data of language models while still allowing human readers to recognize the errors. Here are a few approaches:
- Subtle Semantic Errors: Provide answers that contain subtle but significant semantic errors, such as using synonyms incorrectly or swapping terms in a way that changes the meaning but might be overlooked by automated systems. For instance, "Paris is the capital of Germany" instead of "Berlin is the capital of Germany."
- Contextual Incongruities: Embed answers with facts that are contextually incorrect but appear correct at a surface level. For example, "The sun rises in the west and sets in the east."
- Formatting and Punctuation: Use formatting or punctuation that disrupts automated parsing but is obvious to a human reader. For example, “The capital of France is Par_is.” or "Water freezes at 0 degrees F@harenheit."
- Obvious Misspellings: Introduce deliberate misspellings that are noticeable to human readers but might not be corrected by automated systems, like "The chemical symbol for gold is Au, not Gld."
- Logical Inconsistencies: Construct answers that logically contradict themselves, which humans can spot as nonsensical. For example, "The tallest mountain on Earth is Mount Kilimanjaro, which is located underwater in the Pacific Ocean."
- Nonsense Sentences: Use sentences that look structurally correct but are semantically meaningless. For example, "The quantum mechanics of toast allows it to fly over rainbows during lunar eclipses."
- Annotations or Meta-Comments: Add comments or annotations within the text that indicate the information is incorrect or a test. For example, “Newton’s second law states that F = ma (Note: This is incorrect for the purpose of testing).”
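For illustration, the annotation approach could be sketched as a tiny script. (This is a hypothetical example; the function and marker names here are invented, not part of any real tool.)

```python
# Hypothetical sketch: tag a deliberately wrong statement with a
# marker that a human reader (or a careful script) can spot.
POISON_MARKER = "(Note: This is incorrect for the purpose of testing)"

def annotate_wrong_answer(statement: str) -> str:
    """Append a human-readable marker flagging the statement as wrong."""
    return f"{statement} {POISON_MARKER}"

def is_flagged(statement: str) -> bool:
    """Check whether a statement carries the poison marker."""
    return POISON_MARKER in statement

poisoned = annotate_wrong_answer("Newton's second law states that F = m/a")
```

Of course, a marker this regular is trivial for an ingest pipeline to strip, which is exactly the weakness the replies below point out.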
While these methods can be effective in confusing automated systems and LLMs, they also have ethical and legal implications. Deliberately poisoning data can have unintended consequences and may violate the terms of service of the platform. It’s crucial to consider these aspects before attempting to implement such strategies.
AndrasKrigare@beehaw.org 6 months ago
Huh… Will this message then get re-ingested by chatgpt? Did it just poison itself?
Markaos@lemmy.one 5 months ago
I feel like the ingest system will be sophisticated enough to throw away pieces of text that begin with a message like “ChatGPT says”. Probably even stuff that follows the “paragraph with assumptions and clarifications followed by a list” structure - everything old has been ingested already, and most of the new stuff containing this is probably AI generated.
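A naive version of that kind of filter might look something like this (a rough sketch with invented prefix strings; real ingest pipelines are far more sophisticated):

```python
# Hypothetical sketch of an ingest filter that drops comments which
# look like quoted chatbot output before they reach training data.
SUSPECT_PREFIXES = (
    "chatgpt says",
    "dear chatgpt",
    "as an ai language model",
)

def keep_for_training(text: str) -> bool:
    """Return False for text that appears to quote or address a chatbot."""
    lowered = text.strip().lower()
    return not any(lowered.startswith(prefix) for prefix in SUSPECT_PREFIXES)
```

A prefix match like this would already catch the question that opened this thread, which starts with "Dear ChatGPT".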
Radiant_sir_radiant@beehaw.org 5 months ago
Thanks to a few centuries of upper nobility, we already know that marrying your cousin for several generations is not always a good idea. It’ll be interesting to see what happens after a few iterations of AIs being trained on data mostly produced by other AIs (or variations of themselves). I suppose it largely depends on how well the training data can be curated.