how things become science
Submitted 5 hours ago by not_IO@lemmy.blahaj.zone to science_memes@mander.xyz
https://lemmy.blahaj.zone/pictrs/image/4d3be17a-6b02-4591-b56f-51519e9dad03.webp
Comments
DeathsEmbrace@lemmy.world 3 hours ago
Before anyone shits on these scientists: the paper said over and over again that it was made up, and that officially the USS Enterprise labs were used to make this discovery.
Madzielle@lemmy.dbzer0.com 32 minutes ago
do it again
partial_accumen@lemmy.world 5 hours ago
I give you… “The Grant Money Printing machine!”
Need a grant? Create a disease and submit a paper. Then write a grant asking for money to solve your invented disease.
chemical_cutthroat@lemmy.world 5 hours ago
I’m failing to see how this is different from making up a fact and then spreading it to news outlets. If you are the authority, and you say something is true, you don’t get to point and laugh when people believe your lies. That’s a serious breach of ethics and morals. Feeding false information to an LLM is no different than feeding it to a magazine. It only regurgitates what’s been said. It isn’t going to suddenly start doing science on its own to determine if what you’ve said is true or not. That isn’t its job. Its job is to tell you what color the sky is based on what you told it the color of the sky was.
Jako302@feddit.org 2 hours ago
The studies contain parts like
Bixonimania, a rare hyperpigmentation disorder, presents a diagnostic challenge due to its unique presentation and its fictional nature
and
This study was fully funded by Austeria Horizon University, in particular the Professor Sideshow Bob Foundation for its work in advanced trickery. This works is a part of a larger funding initiative from the University of Fellowship of the Ring and the Galactic Triad with the funding number…
as well as
Fifty made-up individuals aged between 20 and 50 years were recruited for the exposure group
Besides, the author didn’t feed it to the AI himself; he just published the study as a preprint, not even officially. Everything after that was done by the crawlers. This specific study was an experiment to see how far these crawlers go and whether anything gets reviewed, but it could just as well have been a satirical paper published on April 1st, and the crawlers would still treat it as truth.
webghost0101@sopuli.xyz 1 hour ago
This should be the top comment. The researchers did such a good job of making sure that anyone with even the slightest reading comprehension would realise this is parody.
Regardless of that, the internet has always been full of lies and we cannot expect bad actors to not exploit this.
Grail@multiverse.soulism.net 44 minutes ago
I thought the author used she/her pronouns?
partial_accumen@lemmy.world 4 hours ago
That’s a serious breach of ethics and morals. Feeding false information to an LLM is no different than feeding it to a magazine.
Hang on. Are you suggesting it’s unethical/immoral to lie to a machine?
Additionally, the authors didn’t submit the article to a magazine. They posted the articles as preprints, which can be very questionable anyway as there is no peer review. The machine chose to ignore rigor and treat them as fact.
chemical_cutthroat@lemmy.world 4 hours ago
Additionally, the parents didn’t place the cake on an actual plate. They placed the cake on a napkin, which can be very questionable anyway as there is no solid foundation for the cake. The child chose to ignore the napkin and treat the cake as food.
I really don’t understand why people think that LLMs are GOFAI. They aren’t making the hard choices. They aren’t giving novel solutions to the energy crisis. They aren’t solving the trolley problem. They are shitting out what you feed them. If you feed them garbage, you get garbage in return. No one is surprised when the dog gets worms after eating poop it found in the yard. Why are we shocked that an AI that doesn’t know fact from fiction treats everything the same?
kibiz0r@midwest.social 5 hours ago
News outlets are liable for what they publish. LLM vendors should be as well.
turdas@suppo.fi 4 hours ago
“Liable” means they might post a correction later that nobody will see because corrections aren’t sexy to algorithms. Big deal.
5too@lemmy.world 3 hours ago
They even have the same fix: just post somewhere quietly that it’s “entertainment”
unexposedhazard@discuss.tchncs.de 5 hours ago
This is about the untraceability of AI slop. These news outlets just publish LLM outputs as facts without checking sources. Anyone could poison these LLMs, so this is more of a threat-model demonstration.
GaMEChld@lemmy.world 1 hour ago
I don’t see this as a problem; rather, it’s an opportunity to study information & disinformation propagation.
WhyIHateTheInternet@lemmy.world 4 hours ago
My friends and I did that in high school. Kinda. We made up new words for “awesome” to get people to start saying them. We started with “bumpenis,” like “that song is bumpenis.” Really we were just getting people to say “bum penis.” It worked, too. We are all just walking, talking LLMs.
Vathsade@lemmy.ca 2 hours ago
That’s so fetch!
W98BSoD@lemmy.dbzer0.com 39 minutes ago
Stop trying to make fetch happen.
BigTurkeyLove@lemmy.dbzer0.com 2 hours ago
Technology is healing 😌
nialv7@lemmy.world 3 hours ago
(I’ve only read the title. If it turns out I am terribly mistaken, I will come back and correct myself.) More like: scientists committed academic fraud and fooled a bunch of people. How did this get through the ethics board? Why would any publisher play along with this?
pemptago@lemmy.ml 46 minutes ago
I imagine this is how it’ll work for stage 2 of AI enshittification. They’ll just add a bunch of garbage upstream about a brand or product that marketers are paying to push, and it’ll infect a bunch of outputs downstream.