Of course we all know that Elsevier gets paid so much for their really thorough quality assurance process.
That introduction...
Submitted 7 months ago by fossilesque@mander.xyz to science_memes@mander.xyz
https://mander.xyz/pictrs/image/1236ef4b-2eaa-453b-9836-380b14f841ce.webp
Comments
RobinSohn@feddit.de 7 months ago
fossilesque@mander.xyz 7 months ago
The more astonishing part is that the journal’s impact factor is above 6. I’m going to assume it’s a publishing ring.
drolex@sopuli.xyz 7 months ago
They didn’t even flinch at the mention of bloody cum of anfs. It’s obviously a joke material, like naughtius maxisilicium or biggus dicksprosium.
Rolando@lemmy.world 7 months ago
- $2,360 article publishing charge for open access
- 4 days to first decision
- 79 days review time
- 91 days from submission to acceptance
onlinepersona@programming.dev 7 months ago
Wow, this is actually published. I thought it was a joke. Journals like these should not be respected. What a joke
Engywuck@lemm.ee 7 months ago
Sooo… Did the reviewers read this paper?
FinalRemix@lemmy.world 7 months ago
Good one!
TORFdot0@lemmy.world 7 months ago
They had ChatGPT review it
Engywuck@lemm.ee 7 months ago
Fair
bbuez@lemmy.world 7 months ago
Certainly, here is a list of potential reviewers
Allero@lemmy.today 7 months ago
On the side of authors, please, PLEASE do not use any AI tools when writing your articles.
It’s actually very easy to get into a Q3-Q4 journal with absolute crap, so let’s just respect each other - not to mention protect your own reputation :)
I know it’s tedious and I don’t like sitting at 4am writing articles, but yeah - it’s important :D
That’s not to say journals shouldn’t REALLY do a better job, though.
WalrusDragonOnABike@reddthat.com 7 months ago
If you use it just to get started, but actually read it and have the expertise to fix mistakes and make it relevant, it’s probably fine. Not necessarily because it’s faster, but because some people just suck at getting started, and having nonsense to correct is an easier starting point than turning blank whitespace into something.
marcos@lemmy.world 7 months ago
You don’t. AI will lead you astray.
Reading it and paraphrasing is OK if you get stuck. But if you use it before thinking, you won’t get to the thinking, and you’ll write a piece of shit.
Allero@lemmy.today 7 months ago
Yeah, fair enough. Someone else in the thread already said they use LLMs just to outline what to write and how, and then write something along those lines from scratch.
FuglyDuck@lemmy.world 7 months ago
"Why yes, I proof read very well. why do you ask?
OpenStars@startrek.website 7 months ago
Insert name here: John E. Doe
I recall hearing of at least two bills that were passed with this still not filled in, yeesh :-(.
Someone should really try to poison the well here and put in a line that says: insert social security number and a valid credit card number here… Except, like the above, people probably wouldn’t even read that much, yeesh :-(.
Security through ~~obfuscation~~ stupidity! :-) - it can be adaptive under just the right circumstances! :-)
FuglyDuck@lemmy.world 7 months ago
If we’re talking about bills, something like “the assholes that don’t want to feed kids agree to fund kids” and stuff.
(pretty sure they call them riders.)
OpenStars@startrek.website 7 months ago
This goes beyond riders. “Bought” politicians are SO bought that when lobbyists ask politicians to do stuff, they do it unquestioningly. And I mean: THE WHOLE BILL - not just one sentence within it.
But, you may ask, aren’t they also incredibly lazy too? And the answer is yes! So the lobbyists have to do all the work to write out the bills… and then the congressperson simply signs it, easy peasy. “I, insert name here, from state, insert state name here, do solemnly swear that…” - AND I AM NOT EVEN KIDDING, the bill was passed while STILL saying both “insert name here” and also “insert state name here”!!!
So while I am shocked and sickened afresh to hear of plagiarism within academic circles, which I had hoped would be one of the last hold-outs, literal beacons and bastions of Freedom and Truth and all that rizz, politics was the opposite of that and has allowed plagiarism for a LONG time.
ryannathans@aussie.zone 7 months ago
How is this published oml
Harbinger01173430@lemmy.world 7 months ago
I made my thesis with bullshit and hopes before AI was a thing. Can’t modern kids do the same?
fossilesque@mander.xyz 7 months ago
My masters was fueled on Starbucks and my PhD is fueled by spite. Don’t get me wrong, I am not against using LLMs for help, especially for ECRs. This is an issue with peer review and publishing monopolies, aka late-stage capitalism.
DashboTreeFrog@discuss.online 7 months ago
Took me a second.
But man, I don’t write academic papers anymore, but I do have to write a lot of reports and such for my work, and I’ve tried to use different LLMs to help. Almost always the biggest help is just in making me go “Man, this sucks, it should be more like this,” and then I proceed to just write the whole thing myself, with the slight advantage of knowing what a badly written version looks like.
FinalRemix@lemmy.world 7 months ago
That’s basically classifier-free guidance for LLMs! It takes an additional prompt and says “not this. Don’t do this. In fact, never come near this shit in general. Ew.” and pushes the output closer to the original prompt by using the “not this” as a reference to avoid.
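For anyone curious, here's a minimal sketch of the logit arithmetic that "classifier-free guidance for LLMs" usually refers to. The function name, the guidance scale value, and the toy numbers are all made up for illustration; in a real decoder you'd run the same model twice per step (once on the prompt you want, once on the negative prompt) and combine the two sets of next-token logits like this:

```python
import torch

def cfg_logits(logits_pos: torch.Tensor, logits_neg: torch.Tensor, gamma: float = 1.5) -> torch.Tensor:
    """Combine next-token logits with classifier-free guidance.

    logits_pos: logits conditioned on the prompt you want the model to follow
    logits_neg: logits conditioned on the "not this" / negative prompt
    gamma:      guidance scale; 1.0 means plain conditional sampling,
                >1.0 extrapolates away from the negative prompt
    """
    return logits_neg + gamma * (logits_pos - logits_neg)

# Toy example over a 5-token vocabulary (made-up numbers, just to show the arithmetic)
pos = torch.tensor([2.0, 0.5, 0.1, -1.0, 0.0])   # "write it like this"
neg = torch.tensor([0.2, 1.8, 0.1, -1.0, 0.0])   # "not like this"
guided = cfg_logits(pos, neg, gamma=1.5)
next_token = torch.argmax(guided)                # token 0 gets boosted, token 1 gets suppressed
```

The key bit is the subtraction: whatever the negative prompt makes likely gets pushed down, and whatever the wanted prompt makes likely relative to it gets pushed up.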
sharkfucker420@lemmy.ml 7 months ago
So real
Serinus@lemmy.world 7 months ago
My favorite was when it kept summarizing too much. I then told it to include all of my points, which it mostly just ignored. I finally figured out it was staying under its own cap of 5,000 words per response.
DashboTreeFrog@discuss.online 7 months ago
I’ve had the reverse issue, where I wanted to input a large amount of text for ChatGPT to work with. I tried a workaround where part of my prompt was that I was going to give it more information in parts, but no matter how I phrased things it would always start working with whatever I gave it in the first prompt, so I just gave up and did it myself.