The haters are mad that humans are not the super special ones anymore. They can’t comprehend that machines can now write/make pictures too, and that writing/making pictures never was a super magical human only thing. Now they want to destroy the new technology so they can go back to “we’re super special because we can write” land. But they can’t so they cope with made up bullshit.
Comment on "Have you know???".
Rhaedas@fedia.io 2 days ago
An LLM isn't imagining anything, it's sorting through the enormous collection of "imaginations" put out by humans to find the best match for "your" imagination. And the power used is in the training, not in each generation. Lastly, the training results in much more than just that one image you can't stop thinking about, and you'd find the best ones if you could prompt better with your little brain.
NewOldGuard@lemmy.ml 2 days ago
The training is a huge power sink, but so is inference (i.e. generating the images). Each output image means spinning up a bunch of silicon drawing hundreds of watts, on top of the cost of training the model in the first place.
m532@lemmygrad.ml 1 day ago
The inference takes <10 Wh, aka pretty much nothing.
NewOldGuard@lemmy.ml 1 day ago
It depends on the model but I’ve seen image generators range from 8.6 Wh per image to over 100 Wh per image. Parameter count and quantization make a huge difference there. Regardless, even at 10 Wh per image that’s not nothing, especially given that most ML image generation workflows involve batch generation of 9 or 10 images. It’s several orders of magnitude less energy intensive than training and fine tuning, but it is not nothing by any means.
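For a rough sense of scale, here's the arithmetic on those figures. A back-of-envelope sketch using the per-image range quoted above; the batch size of 10 is the assumption from the comment, not a measured value:

```python
# Back-of-envelope energy math for image generation, using the
# quoted per-image range (8.6-100 Wh) and an assumed batch of 10.
wh_low, wh_high = 8.6, 100.0   # Wh per generated image (quoted range)
batch = 10                     # images per generation run (assumption)

batch_low = wh_low * batch     # Wh per batch at the low end
batch_high = wh_high * batch   # Wh per batch at the high end

print(f"Per batch: {batch_low:.0f}-{batch_high:.0f} Wh")
# For comparison: a 10 W LED bulb uses 10 Wh per hour, so even the
# low end is roughly 8-9 bulb-hours of electricity per batch.
```

So a single batch lands somewhere between ~86 Wh and ~1 kWh: small next to training, but clearly not zero.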
DoctorPress@lemmy.zip 2 days ago
“Prompt better,” in this context, means “make no mistakes”: truly an engineering superpower!
HumanoidTyphoon@quokk.au 2 days ago
What is it you think the brain is doing when imagining?
Peruvian_Skies@sh.itjust.works 2 days ago
Actually imagining. The fact that we have created previously unheard-of tools such as the hammer, the wrench, the automobile and the prophylactic condom is ample evidence that we can actually innovate, something that artificial “intelligence” is incapable of doing by its very design.
hemko@lemmy.dbzer0.com 2 days ago
AI (or, probably better, machine learning) has been used in engineering to design things that are lighter while still keeping their structural integrity.
I think the point there is to iterate through millions of designs until you find one that meets the criteria or something
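That "iterate until something meets the criteria" loop is basically a search problem. A minimal sketch, with a toy objective and constraint invented purely for illustration (real generative-design tools use far more sophisticated solvers):

```python
import random

# Toy generative-design loop: sample candidate designs at random and
# keep the lightest one that still meets a strength requirement.
# The strength/weight formulas here are made up for illustration.
def strength(thickness: float) -> float:
    return 50.0 * thickness      # pretend strength scales with thickness

def weight(thickness: float) -> float:
    return 2.0 * thickness       # pretend weight scales with thickness too

random.seed(0)
required_strength = 10.0
best = None
for _ in range(100_000):                     # "iterate through millions of designs"
    t = random.uniform(0.0, 1.0)             # candidate design parameter
    if strength(t) >= required_strength:     # does it meet the criteria?
        if best is None or weight(t) < weight(best):
            best = t                         # keep the lightest feasible design

print(f"best thickness: {best:.4f}, weight: {weight(best):.4f}")
```

With enough samples the loop converges on the lightest design that still passes the constraint (here, thickness just above 0.2), which is the "remixing vs. searching" point being made above.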
Peruvian_Skies@sh.itjust.works 2 days ago
Remixing isn’t innovation.
Rhaedas@fedia.io 2 days ago
That is what AI scientists have been pursuing the entire time (well, before they got sucked up by capitalistic goals).
HumanoidTyphoon@quokk.au 2 days ago
Right, but you seem darn sure that AI isn’t doing whatever that is, so conversely, you must know what it is our brains are doing, and I was hoping you would enlighten the rest of the class.
Rhaedas@fedia.io 2 days ago
Exhibit A would be how we label an LLM as "smart" when it succeeds, yet it's not so smart when it fails badly. That's totally expected from a pattern-matching algorithm, but surprising for something that supposedly has a process underneath that is considering its output in some way.
And when I say pattern matching I'm not downplaying the complexity in the black box like many do. This is far more than just autocomplete. But it is probability at the core still, and not anything pondering the subject.
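"Probability at the core" can be made concrete: at each step, a language model assigns a score to every token, turns those scores into a probability distribution with softmax, and samples from it. A stripped-down sketch with made-up scores and a four-word vocabulary (a real model computes the scores from billions of parameters, but the sampling step looks like this):

```python
import math
import random

# Toy next-token step: scores (logits) -> softmax -> weighted sample.
def softmax(logits):
    m = max(logits)                              # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["cat", "dog", "the", "ran"]
logits = [2.0, 1.0, 0.5, 0.1]                    # made-up scores
probs = softmax(logits)                          # sums to 1.0

random.seed(1)
next_token = random.choices(vocab, weights=probs)[0]
print(next_token)  # "cat" is most likely, but any token can be drawn
```

There's no pondering in that loop, just weighted dice; the open question in this thread is whether brains are doing something more than a vastly scaled-up version of it.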
I think our brains are more than that. Probably? There is absolutely pattern matching going on; that's how we associate things, learn stuff, or anthropomorphize objects. There are some hard-wired pattern preferences in there. But where do new thoughts come from? Is it, as some older sci-fi imagined, emergence from sufficient complexity, or is there something else? I'm sure current LLMs aren't comprehending what they spit out, simply from what we see of them, both good and bad results. Clearly it's not the same level as human thought, and I don't have to remotely understand the brain to realize that.
Nakoichi@hexbear.net 2 days ago
I can tell you don’t have a clue what you are talking about because you are referring to it as the buzzword “AI”. There is no intelligence behind it, it is just overhyped procedural generation, it has no intent, it cannot create anything new. All it can do is rearrange data we fed it based on math.