Comment on Academia to Industry
iAvicenna@lemmy.world 5 months ago
I like how they have no road map for achieving general artificial intelligence (apart from "let's train LLMs with a gazillion parameters and the equivalent of the yearly energy consumed by ten large countries"), yet pretend ChatGPT 4 is only two steps away from it
ignotum@lemmy.world 5 months ago
Hard to make a roadmap when people can’t even agree on what the destination is, let alone how to get there.
But if you have enough data on how humans react to stimuli, and you have a good enough model, then you will be able to train it to behave exactly like a human. The approach is sound, even though in practice there prooobably doesn’t exist enough usable training data in the world to reach true AGI. Still, the models are already good enough to be used for certain tasks.
LANIK2000@lemmy.world 5 months ago
Thing is, we’re not feeding it how humans react to stimuli. For that you’d need it hooked up to a brain directly. Text alone is too filtered and biased; this approach naively ignores things like memory and assumes text messages exist in a vacuum. Throwing a black box into an analytical prediction machine only works as long as you’re certain it’ll generally throw out the same output given the same input, not if your black box can suddenly experience 5 years of development and emerge a different entity. It’s skipping too many steps to become intelligent.
ignotum@lemmy.world 5 months ago
Yeah, that was a hypothetical; if you had those things you would be able to create a true AGI (or what I would consider a true AGI at least).
Text is basically just a proxy, but to become proficient at predicting text you do need to develop many of the cognitive abilities that we associate with intelligence, and it’s also the only type of data we have literal terabytes of lying around, so it’s the best we’ve got 🤷‍♂️
Regarding memory, the human mind can be viewed as taking in stimuli, associating them with existing memories, condensing that into some high-level representation, and then storing it. An LLM could, with a long enough context window, look back at past input and output and use that information to influence its current output, to mostly the same effect (a rough sketch below).
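To make that concrete, here is a minimal sketch of "memory" as a context window. Nothing here comes from the thread itself: `generate` is a hypothetical stand-in for whatever LLM call you would actually use, and the names and limits are made up for illustration.

```python
# "Memory" via a context window: every new prompt is answered with the
# prior conversation replayed in front of it, so earlier turns can
# influence the current output.

MAX_CONTEXT_CHARS = 8000  # crude stand-in for a real token limit

def generate(prompt: str) -> str:
    """Hypothetical placeholder for a real LLM call; echoes for demo."""
    return f"(model reply given {len(prompt)} chars of context)"

history: list[str] = []  # the externalized "memory" is just text

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    # The only "memory" the model gets is whatever text still fits.
    context = "\n".join(history)[-MAX_CONTEXT_CHARS:]
    reply = generate(context + "\nAssistant:")
    history.append(f"Assistant: {reply}")
    return reply

print(chat("My name is Ada."))
print(chat("What is my name?"))  # answerable only because turn 1 is replayed
```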
What do you mean by throwing a black box into an analytical prediction machine? And what do you mean by 5 years of development?
LANIK2000@lemmy.world 5 months ago
The black box is the human that reads and outputs text, and the analytical prediction machine is the AI. The 5 years of development is the human living their life before returning to continue writing. It is an extreme example, but I’m just trying to point out that the context of what a person might write can change drastically between individual messages, because anything can happen in between, and thus the data is fundamentally flawed for training intelligence: the crucial step, the thought process, is missing entirely.
As to why I called the AI an analytical prediction machine, that’s because that’s essentially what it does. It has analyzed an unholy amount of random text from the internet (conversations, blogs, books and so on) to predict what could follow the text you gave it. It’s why prompt injection is so hard to combat, and why if you give it a popular riddle and change it slightly, like “with a boat, how can a man and a goat get across the river”, it’ll fail spectacularly, trying to shove in the original answer somehow. I’d say that’s proof it didn’t learn to understand (cognition), because it can’t use logic to reason about a deviation from the dataset.
As for memory, we can kind of simulate it with text, but it’s not perfect. If the AI doesn’t write it down, it didn’t happen, and thus any thoughts, feelings or mental analysis stop existing upon each generation. The only way it could possibly develop intelligence is if we made it needlessly ramble and describe everything like a very bad book (a sketch of that below).
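For illustration only, a hedged sketch of that "write it down or it didn’t happen" constraint, reusing the hypothetical `generate` stand-in from the earlier snippet; this is not anyone’s actual system:

```python
# A "scratchpad": since the model keeps no hidden state between
# generations, any intermediate "thought" must be serialized into the
# visible text and fed back in. `generate` is a hypothetical LLM call.

def generate(prompt: str) -> str:
    """Hypothetical placeholder for a real LLM call."""
    return "(model output)"

def answer_with_scratchpad(question: str) -> str:
    # Pass 1: make the model "ramble", externalizing its working state.
    scratchpad = generate(
        f"Question: {question}\n"
        "Write out every observation and intermediate step.\nScratchpad:"
    )
    # Pass 2: feed the rambling back in; anything the model "thought"
    # but did not write into this text is gone.
    return generate(
        f"Question: {question}\nScratchpad: {scratchpad}\nFinal answer:"
    )

print(answer_with_scratchpad("With a boat, how can a man and a goat cross the river?"))
```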
And thus, to return to the beginning of your comment: I don’t believe it’s necessary to possess any cognitive abilities to generate text, and in turn I don’t see it as evidence of us getting any closer to AGI.
iAvicenna@lemmy.world 5 months ago
The approach is not sound when all the other factors are considered. If AI continues along this path, it is likely that big AI companies will need to usurp the next possible tech breakthroughs, like quantum computing and fusion energy, to be able to keep growing and produce more profit, instead of these techs being used for better purposes (cheaper and cleaner household energy, scientific advances, etc.). All things considered, excelling at image analysis, creative writing and digital art won’t be worth all the damage it’s going to cause.
ignotum@lemmy.world 5 months ago
Usurp? They won’t be the ones to develop quantum computers, nor will they be developing fusion. If those technologies become available they might start using them, but that won’t somehow mean they won’t be available for other uses.
And seeing as they make money from “renting out” the models, they can easily be “used for better purposes”
ChatGPT is currently free for anyone to use; this isn’t some technology they’re hoarding and keeping for themselves.
VirtualOdour@sh.itjust.works 5 months ago
So many people have conspiracy theories about how ChatGPT is stealing things and whatever; people in this thread are crowing that it’s immoral if they teach it with paywalled journal articles, though I bet I can guess who their favorite reddit founder is…
I use GPT to help code my open source project and it’s fantastic; everyone else I know who contributes to FLOSS is doing the same. It’s not magic, but for a lot of tasks it can cut out 90% of the time, especially prototyping and testing. I’ve been able to add more and better functionality thanks to a free service, and I think that’s a great thing.
What I’m really looking forward to is CAD getting generative tools. Refining designs into their most efficient forms and calculating strengths would be fantastic for the ecosystem of freely shared designs, and text-to-printable would be fantastic too: “design a bit to fix this problem” could shift a huge amount of production back to local small industry, or bring it into the home.
The positive possibilities of people having access to these technologies are huge; all the groups that currently can’t compete with the big corporations suddenly have a huge wall pulled down for them. Being able to make custom software tools for niche tasks is fantastic for small charities, community groups, small industry, eco projects, etc.
It’ll take time for people to learn how to use the tools effectively, just like when computers were new, but as they become more widely understood I think we’ll see a lot of the positive innovation they enable.
iAvicenna@lemmy.world 5 months ago
By usurp I mean fill out all the available capacity for their own use, assuming that by then they will be the largest tech giants around and have the financial means to do so.
Don’t get me wrong, the things that ChatGPT can do are amazing. Even if it hallucinates or can’t really reason logically, it is still beyond what I would have expected. But when the time I mentioned above comes, people won’t be given a choice between AI and cheaper energy or better health care. All those technological advancements will be bought up to full capacity by AI companies, and AI will be shoved down people’s throats.
And yes, ChatGPT is free, but that is only a business decision, not an act for the good of humanity: free ChatGPT helps with testing and generates popularity, which in turn brings investment. I am not saying anything negative (or positive) about their business plan, but don’t think for a second that they will have any ethical concerns about leeching off upcoming technological innovations for the sake of generating profit. And this is just one company; there will be others too: Amazon, Google, Microsoft, etc. They will all aggressively try to own as much of these techs as possible, leaving only scraps for other uses (therefore making them very expensive to utilise, basically).