That’s not necessarily true. General-purpose 3rd-party models (ChatGPT, Llama 3 70B, etc.) perform surprisingly well on very specific tasks. While training or finetuning your own specialized model should indeed give better results, the enormous amount of computational resources and specialized manpower needed to accomplish it makes it infeasible and impractical for many applications. If you can get away with an occasional “as an AI model…”, you are better off using existing models.
yamapikariya@lemmyfi.com 5 months ago
I don’t think so. They are using AI from a 3rd party. If they train their own specialized version, things will be better.
alehc@slrpnk.net 5 months ago
FiniteBanjo@lemmy.today 5 months ago
Here is a better idea: have some academic integrity and actually do the work instead of using incompetent machine learning to flood the industry with inaccurate trash papers whose only real impact is getting in the way of real research.
yamapikariya@lemmyfi.com 5 months ago
There is nothing wrong with using AI to proofread a paper. It’s just a grammar checker but better.
FiniteBanjo@lemmy.today 5 months ago
You can literally use tools to check grammar perfectly without using AI. What an LLM does is predict which word comes next in a sequence, and when the AI is wrong, as it often is, you’ve just attempted to publish a paper full of hallucinations, wasting the time and effort of so many people because you’re greedy and lazy.
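The “predicts which word comes next” point can be sketched with a toy bigram model (purely illustrative, not how any real LLM is built; the corpus and function names here are made up for the example):

```python
from collections import defaultdict

def train_bigram(corpus):
    """Count how often each word follows another in a tiny corpus."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Greedily pick the most frequent follower.

    The output is merely *statistically plausible* -- nothing checks
    whether it is factually or grammatically right, which is the whole
    hallucination problem in miniature.
    """
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

model = train_bigram([
    "the cat sat on the mat",
    "the cat ate the fish",
])
print(predict_next(model, "the"))  # -> "cat" (most frequent follower of "the")
```

The model happily emits whatever was most common in training, with no notion of correctness; real LLMs are vastly more sophisticated, but the objective is the same.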
yamapikariya@lemmyfi.com 5 months ago
AI does better at checking grammar and the clarity of the message. It’s just a fact. I’ve compared them myself, running a grammar checker on an essay vs AI, and the AI corrected it and made it much better.
BearGun@ttrpg.network 5 months ago
Proofreading involves more than just checking grammar, mate.
yamapikariya@lemmyfi.com 5 months ago
I entirely agree. You should read through something you’ll publish.