AI Won't Solve Your Existential Crisis (And That's Perfectly Fine)
Submitted 3 weeks ago by MirchiLover@beehaw.org to technology@beehaw.org
https://gazeon.site/ai-wont-solve-your-existential-crisis-and-thats-perfectly-fine/
Comments
SweetCitrusBuzz@beehaw.org 3 weeks ago
It won’t solve anything except “How do we slowly kill off most life on this planet by using too much energy from power plants that spew awful chemicals into the air.”
Perspectivist@feddit.uk 3 weeks ago
It won’t solve anything
Go tell that to AlphaFold which solved a decades‑old problem in biology by predicting protein structures with near lab‑level accuracy. Overnight it gave researchers the 3D shapes of almost every known protein, something humans couldn’t crack, and it’s already speeding up drug discovery and enzyme design.
belated_frog_pants@beehaw.org 3 weeks ago
It could have been done without burning the earth down to get there.
SweetCitrusBuzz@beehaw.org 3 weeks ago
Oh yes, and how many chemicals did it cause to spew out and how much water did it deplete? That solution won’t matter if life is dead anyway.
t3rmit3@beehaw.org 2 weeks ago
LLMs, sure.
Neural Networks in general though are massively useful, and NNs being trained for e.g. medical diagnostics or scientific research are minuscule in their energy footprints compared to LLMs, can be incredibly accurate (even beyond people), and open up tons of avenues for research that the existing budgets just couldn’t support.
Kissaki@beehaw.org 3 weeks ago
in a world of abundance
uuh I guess this is a hypothetical of a possible utopian future rather than about current AI or based on current trends and implementations.
t3rmit3@beehaw.org 2 weeks ago
People need to understand the difference between LLMs and Neural Networks.
LLM training is a massive energy hog that gives us nothing but the illusion of coherent human-made text.
Non-LLM Neural Networks are much broader in use, almost always massively less energy-intensive to train, and often incredibly accurate when finely-tuned for specific purposes.
LLMs can die in a fire, and nothing would be lost. NNs in general are incredibly useful and honestly a massive source of potential for bettering healthcare (and science research in general) globally.
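The scale gap behind this point can be made concrete with a rough back-of-envelope parameter count. The layer sizes below are illustrative assumptions for a small diagnostic-style classifier, not the specs of any real model; the LLM figure is the publicly reported parameter count for GPT-3:

```python
# Rough parameter counts: a small task-specific MLP vs. a large LLM.
# Layer sizes here are illustrative assumptions, not real model specs.

def mlp_params(layer_sizes):
    """Total weights + biases for a fully connected network."""
    return sum(
        n_in * n_out + n_out  # weight matrix plus bias vector per layer
        for n_in, n_out in zip(layer_sizes, layer_sizes[1:])
    )

# A plausible tabular classifier: 30 input features,
# two hidden layers, one output unit.
small_nn = mlp_params([30, 64, 32, 1])

# GPT-3-class LLM: ~175 billion parameters (publicly reported figure).
llm = 175_000_000_000

print(f"small NN parameters: {small_nn:,}")      # about 4,100
print(f"LLM is roughly {llm // small_nn:,}x larger")
```

Training cost doesn't scale linearly with parameter count alone, but a model seven orders of magnitude smaller, trained on a correspondingly small dataset, sits in an entirely different energy regime.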
Perspectivist@feddit.uk 3 weeks ago
Pretty weird statements. There’s no such thing as just “AI” - they should be more specific. LLMs aren’t designed to maximize human fulfillment or minimize suffering. They’re designed to generate natural-sounding language. If they’re talking about AGI, then that’s not designed for any one thing - it’s designed for everything.
Comparing AGI to a calculator makes no sense. A calculator is built for a single, narrow task. AGI, by definition, can adapt to any task. If a question has an answer, an AGI has a far better chance of figuring it out than a human - and I’d argue that’s true even if the AGI itself isn’t conscious.