edinbruh
@edinbruh@feddit.it
- Comment on Things used to be more simple back then 2 days ago:
I too used to be young and reckless once…
(I’m 26 and still in uni)
- Comment on Things used to be more simple back then 2 days ago:
This is one of those you download to send to the group chat, and then never send
- Comment on Shout out to my engineering homies. 2 days ago:
Noooooo 😭
It’s not the kind of validation I do tho 🤓😉
- Comment on Shout out to my engineering homies. 3 days ago:
For my computer science internship I just dodged a drone-shaped bullet… I’m working on abstract verification of access policies instead
- Comment on Current state of the internet 1 week ago:
You guys are all doomed. Telecom Italia is ranked 9th among ISPs, and it’s a tier 1 ISP.
Imagine your global communication infrastructure being dependent on fucking Telecom Italia.
- Comment on ✨️carboniferous trees✨️ 1 week ago:
How do we know what the trees looked like? I thought they got buried and crumbled into carbon or something
- Comment on Please be aware! 2 weeks ago:
There are many more interesting proposed approaches though. Like creating a religious cult around avoiding nuclear waste, or all kinds of hostile-looking architecture. My favourite is the idea of storing waste in containers so durable that any people advanced enough to break into them would have to be advanced enough to know how to behave around nuclear waste
- Comment on It's important! 2 weeks ago:
He’s getting the French culture alright
- Comment on turing completeness 3 weeks ago:
Oh, it probably wasn’t about an existing language, but about some guy studying what would become high level languages. Like studying linkers and symbolic representation of programs
- Comment on turing completeness 3 weeks ago:
LLMs are not the path forward to simulate a person, and that’s a fact. By design they cannot reason; it’s not a matter of advancement, it’s literally how they work in principle. They’re a statistical trick for generating random text that looks like thought-out phrases, with no reasoning involved.
If someone tells you they might be the way forward to simulate a human, they are scamming you.
- Comment on turing completeness 3 weeks ago:
I don’t like it because people don’t shut up about it and insist everyone should use it when it’s clearly stupid.
LLMs are language models; they don’t actually reason (not even the “reasoning” models). When they nail a piece of reasoning it’s by chance, not by design. Everything that is not language processing shouldn’t be done by an LLM. Vice versa, they are pretty good with language.
We already had automated reasoning tools. They are used for industrial optimization (e.g. finding optimal routes, deciding how to allocate production, etc.) and no one cared about those.
As if that wasn’t enough, the internet is now full of slop, hardware companies are warmongering an arms race that is fueling an economic bubble, and people are being fired to be replaced by something that will not actually work in the long run, because it does not reason.
- Comment on turing completeness 3 weeks ago:
I’ll try to find it later; I read that he said it in a book by Martin Davis
- Comment on turing completeness 3 weeks ago:
Neural networks don’t simulate a brain; that’s a misconception caused by their name. They have nothing to do with brain neurons
- Comment on turing completeness 3 weeks ago:
If Turing were alive he would say that LLMs are wasting computing power to do something a human should be able to do on their own, and thus we shouldn’t waste time studying them.
Which is what he said about compilers and high level languages (in this instance, high level means like Fortran, not like python)
- Comment on Parents App'rule'ved 3 weeks ago:
Most restaurants serve alcohol and are good for kids. What’s your point?
- Comment on Does anyone know what's inside this building? 3 weeks ago:
I think you can read the address in letters you find around in Central Executive
- Comment on Does anyone know what's inside this building? 3 weeks ago:
Telephone exchange. And as a consequence a lot of espionage occurred there, but only because it’s a big telephone exchange, not because that’s its purpose.
- Comment on Does anyone know what's inside this building? 3 weeks ago:
Nope, the oldest house is just on the other side of the street
- Comment on 2³² will get interesting... 3 weeks ago:
at the 33rd round you do
- Comment on fools! 4 weeks ago:
Where is part one of this meme? I need to send it to the group chat
- Comment on I love fucking pasta 4 weeks ago:
What the fuck is an Alfredo?
- Comment on OMG! Trumps! 4 weeks ago:
There is a meme trend of finding nonexistent references to people and characters in unrelated stuff, and then pointing them out in a clickbait YouTube thumbnail. In this meme, I came across the verb “trumps” and interpreted it as the plural of the name “Trump”
- Submitted 4 weeks ago to [deleted] | 3 comments
- Comment on Discuss: 4 weeks ago:
Paleo Kabi-Lamius
- Comment on Biased source 5 weeks ago:
But what about the surname? Grandoni basically means Very Big. The coincidences keep piling up for Mr. Very Big Dino
- Comment on It’s what’s for dinner! 1 month ago:
What a coincidence, I saw that for the first time this Friday
- Comment on Manic Stew 1 month ago:
Apathetic omelette
- Comment on MY EYESS 1 month ago:
My sister is a urologist, so for me this is basically what happens when I open pictures in the siblings group chat
- Comment on OpenAI be like 1 month ago:
In this context “weight” is a mathematical term. Have you ever heard the term “weighted average”? Basically it means calculating an average where some elements are more influential/important than others; the number that indicates the importance of an element is called its weight.
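To make that concrete, here’s a tiny sketch of a weighted average (the function name and numbers are just made up for illustration):

```python
def weighted_average(values, weights):
    # Scale each value by its weight, then divide by the total weight.
    assert len(values) == len(weights)
    total = sum(v * w for v, w in zip(values, weights))
    return total / sum(weights)

# Plain average of 2 and 4 is 3.0; giving 4 three times the
# importance of 2 pulls the result toward 4.
print(weighted_average([2, 4], [1, 1]))  # 3.0
print(weighted_average([2, 4], [1, 3]))  # 3.5
```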
One oversimplification of how any neural network works could be this:
- The NN receives some values in input
- The NN calculates many weighted averages from those values. Each average uses a different list of weights.
- The NN performs a simple extra operation on each average. It doesn’t matter much what the operation actually is, but it must be there. Without it, every NN would collapse into a single layer.
- The modified averages are the input values for the next layer.
- Each layer has different lists of weights.
- In reality this is all done using some mathematical and computational tricks, but the basic idea is the same.
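The steps above can be sketched in a few lines of toy code. The weights and the choice of tanh as the “simple extra operation” are just made-up examples, not how any real network is configured:

```python
import math

def layer(inputs, weight_lists):
    """One layer: each output is a weighted sum of the inputs,
    passed through a simple nonlinear operation (here, tanh)."""
    outputs = []
    for weights in weight_lists:
        s = sum(x * w for x, w in zip(inputs, weights))
        outputs.append(math.tanh(s))  # the "simple extra operation"
    return outputs

# Two layers; note that each layer has its own lists of weights,
# and the outputs of one layer become the inputs of the next.
x = [0.5, -1.0]
hidden = layer(x, [[0.2, 0.8], [-0.5, 0.1]])
result = layer(hidden, [[1.0, -1.0]])
print(result)
```

Real networks do the same thing with big matrix multiplications instead of loops, but the idea is identical.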
Training an AI means finding the weights that give the best results, and thus, for an AI to be open-source, we need both the weights and the training code that generated them.
Personally, I feel that we should also have the original training data itself to call it open source, not just weights and code.