Comment on Nobel Prize 2024
GreenKnight23@lemmy.world 2 months ago
give me at least two peer reviewed articles showing that AI has had a measurably positive impact on society over the last 24 months.
shouldn’t be too hard for AI to come up with that, right?
if you can do that then I’ll admit that AI has potential to become more than a crypto scam.
ChairmanMeow@programming.dev 2 months ago
He’s already given you 5 examples of positive impact. You’re just moving the goalposts now.
I’m happy to bash morons who abuse generative AIs in bad applications and I can acknowledge that LLM-fuelled misinformation is a problem, but don’t lump “all AI” together and then deny the very obvious positive impact other applications have had (e.g. in healthcare).
GreenKnight23@lemmy.world 2 months ago
those aren’t examples, they’re hearsay. “oh, everybody knows this to be true”
generative AI is the only “AI”. everything that came before that was a thought experiment based on the human perception of a neural network. it’d be like calling a first draft a finished book.
if you consider the Turing Test AI then it blurs the line between a neural net and nested if/else logic.
great, give an example of this being used to save lives from a peer reviewed source that won’t be biased by product development or hospital marketing.
let’s be real here, this is still a golden turd and is more ML than AI. I know because it’s my job to know.
hearsay, give a credible source of when this was used to save lives. I doubt that AI could ever be used in this way because it’s basic disaster triage, which would open ANY company up to litigation should their algorithm kill someone.
this is dumb. AI isn’t even used in this and you know it. algorithms are not AI. falls are detected when a sudden gyroscopic speed/direction change is identified based on a set number of variables. everyone falls the same when your phone is in your pocket. dropping your phone shows up differently due to a change in mass and spin. again, algorithmic, not AI.
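For what it’s worth, the kind of threshold-based fall detection described above can be sketched roughly like this. This is a minimal illustration, not any vendor’s actual implementation; the threshold constants and sample data are made up:

```python
# Minimal sketch of threshold-based fall detection (illustrative values only).
# A fall typically shows near-free-fall (low total acceleration) followed by
# a sharp impact spike a short time later.

FREE_FALL_G = 0.4   # total acceleration below this suggests free fall (in g)
IMPACT_G = 2.5      # spike above this suggests impact (in g)
MAX_GAP = 10        # max samples allowed between free fall and impact

def magnitude(sample):
    x, y, z = sample
    return (x * x + y * y + z * z) ** 0.5

def detect_fall(samples):
    """Return True if a free-fall dip is followed closely by an impact spike."""
    free_fall_at = None
    for i, sample in enumerate(samples):
        g = magnitude(sample)
        if g < FREE_FALL_G:
            free_fall_at = i
        elif g > IMPACT_G and free_fall_at is not None and i - free_fall_at <= MAX_GAP:
            return True
    return False

# Normal carry: magnitudes hover around 1 g, so no fall is detected.
walking = [(0.0, 0.0, 1.0)] * 20
# Fall: a free-fall dip, then an impact spike.
fall = [(0.0, 0.0, 1.0)] * 5 + [(0.1, 0.1, 0.1)] * 3 + [(0.5, 3.0, 1.0)]
```

In practice this rule-based approach is exactly the “algorithm, not AI” distinction being argued about here; some shipping systems reportedly layer ML classifiers on top of signals like these, but the core detection can be done with plain thresholds.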
forecasting is an algorithm, not AI. ML would determine what percentage of the time an algorithm is accurate based on what it knows. algorithms and ML are not AI.
this reads just like the marketing bullshit companies promote to show how “altruistic” they are.
I won’t deny there is potential there, but we’re a loooong way from meaningful impact.
just because a hammer is a hammer doesn’t mean it can’t be used to commit murder. dumbest argument ever, right up there with “only way to stop a bad guy with a gun is a good guy with a gun.”
Feathercrown@lemmy.world 2 months ago
You clearly don’t know much about the field. Generative AI is the new thing that people are going crazy over, and yes it is pretty cool. But it’s built on research into other types of AI-- classifiers being a big one-- that still exist in their own distinct form and are not simply a draft of ChatGPT. In fact, I believe classification is one of the most immediately useful tasks that you can train an AI for. You were given several examples of this in an earlier comment.
Fundamentally, AI is a way to process fuzzy data. It’s an alternative to traditional algorithms, where you need a hard answer with a fairly high confidence but have no concrete rules for determining the answer. It analyzes patterns and predicts what the answer will be. For patterns that have fuzzy inputs but answers that are relatively unambiguous, this allows us to tackle an entire class of computational problems which were previously impossible. To summarize, and at risk of sounding buzzwordy, it lets computers think more like humans. And no, for the record, it has nothing to do with crypto.
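To make the classification point concrete, here is a toy nearest-centroid classifier in plain Python. It is an illustrative sketch with made-up data, not a claim about any production system: instead of hand-written rules, it learns class centers from labelled examples and assigns new fuzzy inputs to whichever center is closest.

```python
# Toy nearest-centroid classifier: learn class centers from labelled
# examples, then label new points by whichever center is closest.

def train(examples):
    """examples: list of (features, label) pairs. Returns {label: centroid}."""
    sums, counts = {}, {}
    for features, label in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def predict(centroids, features):
    """Return the label whose centroid is nearest (squared Euclidean distance)."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist)

# Fuzzy inputs: no single hard-coded rule separates the classes,
# but the learned pattern does.
data = [
    ([1.0, 1.2], "small"), ([0.8, 0.9], "small"), ([1.1, 0.7], "small"),
    ([5.0, 4.8], "large"), ([4.7, 5.2], "large"), ([5.3, 5.1], "large"),
]
model = train(data)
```

The same train-on-examples, predict-on-new-inputs shape is what the medical imaging and triage classifiers under discussion do, just with vastly more data and more capable models.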
Nobody here will give you peer-reviewed articles because it’s clear that your position is overconfident for your subject knowledge, so the likelihood a valid response will change your mind is very small, so it’s not worth the effort. That includes me, sorry. I can explain in more detail how non-generative AI works if you’d like to know more.
technocrit@lemmy.dbzer0.com 2 months ago
Classification =/= intelligence.
My spell checker can classify incorrectly spelled words. Is that intelligence? The whole field is a phony grift.
GreenKnight23@lemmy.world 2 months ago
not once did I mention ChatGPT or LLMs. why do AI bros always use them as an argument? I think it’s because you all know how shit they are, and by calling it out yourselves you can disarm anyone trying to use it as proof of how shit AI is.
everything you mentioned is ML and algorithm interpretation, not AI. fuzzy data is processed by ML. fuzzy inputs, ML. AI stores data similarly to a neural network, but that does not mean it “thinks like a human”.
if nobody can provide peer reviewed articles, that means they don’t exist, which means all the “power” behind AI is just hot air. if they existed, just pop it into your little LLM and have it spit the articles out.
AI is a marketing joke like “the cloud” was 20 years ago.
technocrit@lemmy.dbzer0.com 2 months ago
Who’s upvoting this? Is Lemmy really this scientifically illiterate?
IsoSpandy@lemm.ee 2 months ago
LLMs fucking suck. But there are things that don’t suck. AI chess engines have entirely changed the game, AI protein predictors have brought designer drugs and nanobots within our grasp.
It’s just that tech bros want to grab quick cash from us peasants, and that somehow equates to integrating ChatGPT into everything. The most moronic form of AI has become their poster child. It’s as if, when asked what a US president is like in character, everybody pointed to Trump as the example.
technocrit@lemmy.dbzer0.com 2 months ago
In what way is a chess engine meaningfully “intelligent”?