in a world of abundance
Uh, I guess this is a hypothetical about a possible utopian future rather than something based on current AI trends and implementations.
Submitted 1 day ago by MirchiLover@beehaw.org to technology@beehaw.org
https://gazeon.site/ai-wont-solve-your-existential-crisis-and-thats-perfectly-fine/
It won’t solve anything except “How do we slowly kill off most life on this planet by using too much energy from power plants that spew awful chemicals into the air.”
It won’t solve anything
Go tell that to AlphaFold which solved a decades‑old problem in biology by predicting protein structures with near lab‑level accuracy. Overnight it gave researchers the 3D shapes of almost every known protein, something humans couldn’t crack, and it’s already speeding up drug discovery and enzyme design.
It could have been done without burning the earth down to get there.
Oh yes, and how many chemicals did it cause to spew out and how much water did it deplete? That solution won’t matter if life is dead anyway.
Perspectivist@feddit.uk 1 day ago
Pretty weird statements. There’s no such thing as just “AI” - they should be more specific. LLMs aren’t designed to maximize human fulfillment or minimize suffering. They’re designed to generate natural-sounding language. If they’re talking about AGI, then that’s not designed for any one thing - it’s designed for everything.
Comparing AGI to a calculator makes no sense. A calculator is built for a single, narrow task. AGI, by definition, can adapt to any task. If a question has an answer, an AGI has a far better chance of figuring it out than a human - and I’d argue that’s true even if the AGI itself isn’t conscious.