Comment on AI Training Slop
utopiah@lemmy.world 6 days ago
Don’t know. Don’t really care honestly […] offset by the fact that I don’t and never will drive.
That’s some strange logic. Either you do know, and can estimate that the offset will indeed “balance it out,” or you don’t, in which case you can’t say one way or the other.
trungulox@lemm.ee 6 days ago
utopiah@lemmy.world 6 days ago
I see. Well, I checked your post history because I thought “Heck, they sound smart, maybe I’m the problem.” and my conclusion based on the floral language you often use with others is that you are clearly provoking on purpose.
Unfortunately I don’t have the luxury of time to argue this way so I’ll just block you, this way we won’t have to interact in the future.
Take care and may we never speak again.
trungulox@lemm.ee 6 days ago
Erby glerby skeibledee thought terminating cliches groppily boop
jfrnz@lemm.ee 6 days ago
Running a 500 W GPU 24/7 for a full year uses less than a quarter of the energy consumed by the average automobile in the US (as of 2000). I don’t know how many GPUs this person has or how long it took to fine-tune the model, but it’s clearly not creating an ecological disaster. Please understand there is a huge difference between the power consumed by companies training cutting-edge models at massive scale/speed and a locally deployed model doing only fine-tuning and inference.
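The comparison above can be sanity-checked with a little arithmetic. Note that the mileage, fuel-economy, and gasoline energy-content figures below are illustrative assumptions, not numbers from the thread:

```python
# Rough check: a 500 W GPU running 24/7 for a year vs. the energy
# used by an average US automobile.
GPU_WATTS = 500
HOURS_PER_YEAR = 24 * 365

gpu_kwh = GPU_WATTS / 1000 * HOURS_PER_YEAR  # 4380 kWh

MILES_PER_YEAR = 11_500   # assumed average annual mileage
MPG = 22                  # assumed average fuel economy
KWH_PER_GALLON = 33.7     # approximate energy content of gasoline

car_kwh = MILES_PER_YEAR / MPG * KWH_PER_GALLON  # ~17,600 kWh

print(f"GPU: {gpu_kwh:.0f} kWh/yr, car: {car_kwh:.0f} kWh/yr, "
      f"ratio: {gpu_kwh / car_kwh:.2f}")
```

Under these assumptions the ratio comes out to roughly 0.25, consistent with the “less than a quarter” claim.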
utopiah@lemmy.world 6 days ago
I specifically asked about the training part, not the fine-tuning, but thanks for clarifying.
jfrnz@lemm.ee 6 days ago
The point is that OP (most probably) didn’t train it — they downloaded a pre-trained model and only did fine-tuning and inference.
utopiah@lemmy.world 6 days ago
Right, my point is exactly that: by just downloading the model, OP might not realize the training costs. They might be low, but on average they are quite high, at least relative to fine-tuning or inference. So my question was precisely to highlight that running a model locally while not knowing its training cost is naive, ecologically speaking. They did clarify that they do not care, so that’s coherent for them.

I’m insisting on this point because others might think “Oh… I can run a model locally, so it’s not ‘evil’.” So I’m trying to clarify (and please let me know if I’m wrong) that local deployment is good for privacy, but the upfront training costs are not insignificant, and that might lead some people to prefer NOT relying on very-costly-to-train models, to prefer other models, or even a totally different solution.
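The disagreement in this thread is essentially about amortization: whether a one-time training cost, spread over everyone who downloads the model, dwarfs or is dwarfed by local fine-tuning and inference. A minimal sketch, where every figure (training energy, download count, local GPU power and usage) is a hypothetical assumption, not a measurement:

```python
# Hypothetical amortization sketch: one-time training energy shared
# across all local deployments vs. one user's yearly inference energy.
TRAINING_MWH = 500        # assumed one-time training energy
DOWNLOADS = 100_000       # assumed number of local deployments

# Per-user share of the training cost, in kWh.
per_user_training_kwh = TRAINING_MWH * 1000 / DOWNLOADS  # 5 kWh

# Assumed local usage: a 300 W GPU running inference 1 h/day for a year.
local_inference_kwh = 0.3 * 1 * 365  # ~110 kWh

print(f"training share: {per_user_training_kwh:.1f} kWh, "
      f"local inference: {local_inference_kwh:.1f} kWh")
```

Whether the training share is “insignificant” depends entirely on the assumed download count: with many users it amortizes away, while a rarely used model carries a much larger per-user training footprint, which is the concern raised above.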