Another basic demonstration on why oversight by a human brain is necessary.
A system rooted in pattern recognition that cannot recognize the basic two-column format of published and printed research papers.
Submitted 3 days ago by ZeroCool@lemmy.ca to science_memes@mander.xyz
https://i.ibb.co/HfNGfW3v/IMG-4256.jpg
To be fair, the human brain is a pattern-recognition system. It’s just that the AI developed thus far is shit.
The human brain has a pattern recognition system. It is not just a pattern recognition system.
The issue is that LLM systems are pattern recognition without any logic or awareness. It’s pure pattern recognition, so it can easily latch onto patterns that aren’t desired.
Guys, can we please call it LLM and not a vague advertising term that changes its meaning on a whim?
For some weird reason, I don’t see AI amp modelling being advertised even though neural amp modellers exist. Meanwhile, the very technology that was supposed to replace guitarists (Suno, etc.) is marketed as AI.
I think that’s because in the first case, the amp modeller is only replacing a piece of hardware. It doesn’t do anything particularly “intelligent” from the perspective of the user, so I don’t think using “AI” in the marketing campaign would be very effective. LLMs and photo generators have made such a big splash in the popular consciousness that people associate AI with generative processes, and other applications leave them asking, “where’s the intelligent part?”
In the second case, it’s replacing the human. The generative behaviors match people’s expectations while record label and streaming company MBAs cream their pants at the thought of being able to pay artists even less.
Is there anything like suno that can be locally hosted?
Wouldn’t it be OCR in this case? At least the scanning?
Yes, but the LLM does the writing. Someone probably carelessly copy pasta’d some text from OCR.
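That failure mode is easy to reproduce. A minimal sketch (with invented example text) of how naive row-wise extraction of a two-column page can fuse unrelated words across the column gap, which is exactly how “vegetative” and “electron microscopy” could end up adjacent:

```python
# Hypothetical two-column page: each list holds the lines of one column.
left_column = [
    "the cell wall of the",
    "vegetative",            # line ends here in column 1
    "bacterium was intact",
]
right_column = [
    "images were taken by",
    "electron microscopy",   # the visually adjacent line in column 2
    "at 20,000x",
]

# Naive extraction reads each visual row left to right,
# ignoring the column boundary entirely.
naive = " ".join(f"{l} {r}" for l, r in zip(left_column, right_column))

print("vegetative electron microscopy" in naive)  # True
```

Any downstream model trained or prompted on text extracted this way sees the fused phrase as if it were a real term.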
Scientists who write their papers with an LLM should get a lifetime ban from publishing papers.
I played around with ChatGPT to see if it could actually improve my writing. (I’ve been writing for decades.)
I was immediately impressed by how “personable” these things are: they can interpret your writing and detect subtle things you’re trying to convey, so that part was interesting. I was also impressed by how good it is at improving grammar and helping “join” passages, themes, and plot points. It has the advantage of seeing the entire piece simultaneously, so it can make broad edits to the story flow, which could potentially save a writer days or weeks of rewriting.
Now that the good is out of the way, I also tried to see how well it could just write, using my prompts and writing style on scenes I arranged for it to describe. And I can safely say that we have created the ultimate “Averaging Machine.”
By definition, LLMs are designed to find the most probable answer to a query, so this makes sense. They have consumed and distilled vast sums of human knowledge and writing, but they don’t use that material to synthesize, find inspiration, or do what humans do: take existing ideas and build upon them. No, what they do is always find the most average path. And as a result, the writing is supremely average. It’s so plain and unexciting to read that it’s actually impressive.
All of this is fine; it’s still something new we didn’t have a few years ago. Neat, right? Well, my worry is that as more and more people use this, more and more people are going to be exposed to this “averaging” tool and it will influence their writing, and we are going to see a whole generation of writers who produce the most cardboard, stilted, generic works we’ve ever seen.
And I am saying this from experience. I was there when people first started using the internet to roleplay, making characters and scenes and free-form writing as groups. It was wildly fun, but most of the people involved were not writers and barely read; they were kids just doing their best. But that charming, terrible narration became a social standard. It’s why there are so many atrocious dialogue scenes in shows and movies lately; I can draw a straight line back to kids learning to write in the ’90s. And what’s coming next is going to harm human creativity and inspiration in ways I can’t even predict.
I can confirm that a lot of students’ writing has become “averaged,” and it seems to have gotten worse this semester. I am not talking about students who clearly used an AI tool; just by proximity or osmosis, the writing feels “cardboardy.” Devoid of passion or human mistakes.
I do agree with your “averaging machine” argument. It makes a lot of sense given how LLMs are trained as essentially massive statistical models.
Your conjecture that bad writing is due to roleplaying on the early internet is a bit more… speculative. Lacking any numbers comparing writing trends over time, I don’t think one can draw such a conclusion.
BuT tHE HuMAn BrAin Is A cOmpUteR.
Vegetative electron microscopes!
It immediately demonstrates a lack of both care and understanding of the scientific process.
As someone within that community: it demonstrates the “publish or perish” mindset. Without enough publications it becomes impossible to get funding to do your research. Thus, the incentives are there for producing more publications and not better research.
Unsurprisingly, encouraging greater throughput results in greater throughput. And without proper support quality suffers. For example, a large portion of research is done by underpaid graduate students.
I recently reviewed a paper for a prestigious journal. The paper was clearly from a paper mill. It was horrible. They had a small experimental engine, and they wrote 10 papers about it. Results were all normalized and relative, key test conditions weren’t even mentioned, everything was described in general terms… and I couldn’t even be sure the authors were real (Korean authors; the names were all Park, Kim, and Lee). I hate where we’ve arrived in scientific publishing.
To be fair, scientific publishing has been terrible for years, a deeply flawed system at multiple levels. Maybe this is the push it needs to reevaluate itself into something better.
And to be even fairer, scientific reviewing hasn’t been any better. Back in my PhD days, I got a paper rejected from a prestigious conference for being both too simple and too complex, according to two different reviewers. The reviewer who argued “too simple” also gave an example of a task that supposedly couldn’t be achieved, which was clearly achievable.
Goes without saying, I’m not in academia anymore.
People shit on Hossenfelder but she has a point. Academia partially brought this on themselves.
People shit on Hossenfelder much more for her non-academic takes.
She sucks when overextending her aura of expertise into domains she’s not good in (e.g. metaphysics, and especially panpsychism, which she profoundly misunderstands yet self-assuredly talks about). Her criticism of academia is good, but she reproduces some of that nonsense herself.
Somehow I briefly got her and Pluckrose reversed in my mind, and was still kinda nodding along.
If you don’t know who I mean: Pluckrose and two others produced a batch of hoax papers (likening themselves to the Sokal affair). Four were published, three were accepted but hadn’t yet been published, four were told to revise and resubmit, and one was under review at the point they were revealed. Nine were rejected, a bit less than half the total (which included both of the papers on autoethnography). The idea was to float papers that were either absurd or kind of horrible, like a study supporting reducing homophobia and transphobia in straight cis men by pegging them (published in Sexuality & Culture), or one that was just a rewrite of a section of Mein Kampf as a feminist text (accepted by Affilia but not yet published when the hoax was revealed).
My personal favorite of the accepted papers was “When the Joke Is on You: A Feminist Perspective on How Positionality Influences Satire” just because of how ballsy it is to spell out what you are doing so obviously in the title. It was accepted by Hypatia but hadn’t been published yet when the hoax was revealed.
Hossenfelder is fine but tries to educate way outside her realm. Her cryptocurrency episode made me lose all respect for her.
Do you usually get to see the authors’ names when reviewing papers for a prestigious journal?
I try to avoid reviews, but the editor is a close friend of mine and I’m an expert on the topic. The manuscript was only missing the date.
It is worthwhile to note that the enzyme did not attack Norris of Leeds University; that would be tragic.
It is by no spores and examined!
This early draft for The Last of Us just gets weirder and weirder.
At least part of it was not known!
At least they’ve obtained exasporium in Clos. I know they’ve been working hard at it.
The peer review process should have caught this, so I would assume these scientific articles aren’t published in any worthwhile journals.
One of them was in Springer Nature’s Environmental Science and Pollution Research, but it has since been retracted.
The other journals seem less impactful (I cannot truly judge the merit of journals spanning several research fields)
Wait how did this lead to 20 papers containing the term? Did all 20 have these two words line up this way? Or something else?
The AI consumed the original paper, interpreted the column-adjacent words as a single combined term, and regurgitated it for researchers too lazy to write their own papers.
Hot take: this behavior should get you blacklisted from contributing to any peer-reviewed journal for life. That’s repugnant.
It is by no spores either
“Science” under capitalism.
Lysenko did nothing wrong?
The most disappointing timeline.
I think you can use vegetative electron microscopy to detect the quantic social engineering of diatomic algae.
My lab doesn’t have a retro encabulator for that yet, unfortunately. 😮💨
Thank you for highlighting the important part 🙏
I thought vegetative electron microscopy was one of the most important procedures in the development of the Rockwell retro encabulator?
You’re still using Rockwell retro encabulators? Need to upgrade to the hyper encabulator as soon as you can. www.youtube.com/watch?v=5nKk_-Lvhzo
The retro encabulators just have more soul.
Most of the articles are from the 2020s, just about, but one is from 1959, and it seems to talk about the same stuff as OP’s screenshot.
My dear posters, I think this may be the source.
Yep, page 4, seventh line from the bottom. That’s the one in the screenshot
Also, look how people jump to the conclusion that you’re on one extreme side or the other of any issue, because they’ve saturated themselves with memes that reduce every complex issue to a clear-cut black-and-white binary choice between good and evil. Not that memes or AI are bad; people just lazily apply them way beyond their level of precision.
This is why we can’t have nice things.
tRusT tHe sCiEncE!!1
The Science:
/s …kinda. AI is going to make so many things very hard to trust at first glance and it will cause chaos in all kinds of technical fields.
You are not wrong that AI is a whole new level of misinformation. But “trust the science” never meant “trust any published paper”; it’s about trusting scientific consensus. And yeah, if there’s a scientific consensus based on multiple papers and peer review, it’s almost certainly going to be more trustworthy than your opinion, online search, or intuition.
“Trust the science” is still true, even in the face of AI; you just need to differentiate between trusting scientists and trusting scientific consensus.
Even without AI, science had tons of problems, confirmation bias being one of the most innocent nowadays. Now, with AI, it is ascending to something else entirely. Hopefully some people come up with AI-based solutions for filtering through the AI garbage.
ah, the cut-up technique
This is why newspapers, books, magazines, scientific papers, etc., should be on paper - because it’s too easy for “digital” to become nonsense, false, or maliciously changed.
DrBob@lemmy.ca 3 days ago
When I was in grad school I mentioned to the department chair that I frequently saw a mis-citation for an important paper in the field. He laughed and said he was responsible for it. He made an error in the 1980s and people copied his citation from the bibliography. He said it was a good guide to people who cited papers without reading them.
Treczoks@lemmy.world 3 days ago
At university, I faked a paper on economics (not actually my branch of study, but easy to fake) and put it on the shelf in their library. It was filled with nonsense formulas that, if one took the time and actually solved the equations properly, would all produce the same number as a result: 19920401 (the publication date, April Fools’ Day 1992). I actually got two requests from people who wanted to use my paper as a basis for their thesis.
meyotch@slrpnk.net 2 days ago
Congratulations! You are now a practicing economist. This is exactly how that field works.
qaz@lemmy.world 3 days ago
How did you respond?