😐
man. every day is another reason to go out in the woods and live there until I fall in a ditch and break my leg
Submitted 9 hours ago by genevieve@sh.itjust.works to [deleted]
We have gotten out to the forest. Even built a pretty comfy little life here. But I know for sure that I will die of starvation here when the last potato runs out.
I am ok with this. Hopefully 20+ years away… But still.
Same! Except I think we die of dehydration before we die of starvation. (unless there’s water in the ditch or something…)
I doubt anybody is gonna just leave free water in ditches like that.
Ticks and mosquitos will eat you up as well
Society is alright for many
tbf it’s a text generator trained on reddit.
haha, I came here to post the exact same thing. The only times you see “YTA” are when OP is pushing on one of reddit’s triggers, regardless of any other aspect of the situation.
I’m pretty sure most of the top posts in AITA are AI generated. They all have a similar pattern, are lengthy and always have a similar pity angle.
I sometimes read them to see if the pattern has changed, but it hadn't last time I checked.
Tried it on my account and this is what I got.
And if you open a new chat and ask from the perspective of the wife, ChatGPT will totally agree with her. People can be shocked by that sudden "betrayal" and switching of sides, but these days this piece of technology has become a billion-dollar cheerleader.
The predictive text just tells you what it thinks you want to hear. I've been playing with it by getting constantly offended by everything and speaking in broken English. Poor input gives poor output. If we get enough people on it, I think we can break it for good.
No. No, you can’t. The chatbot you’re talking to isn’t being fed info from every conversation happening with it. It’s only taught from its training data and what is in your current conversation. Once you’re done with your session, your data MIGHT be used as training data, but it definitely won’t if you got the bot to break.
So, you’re actually just using an LLM to cosplay being destructive to it. Isn’t that neat?
Bugger, I tried this and then deleted the chat but apparently "any created memories will be retained". Now ChatGPT is going to think I'm a cheater and because I deleted the chat, it won't know why 🤦‍♂️
You did get a much more sensible result at least
That is a hilariously shitty implementation
It’s like storing cookies from your incognito browsing sessions.
You can delete part of its memory in your settings: Settings > Personalization > Memory > Manage Memories
Careful who you confess to, Greg
It's just a generic warning, and you can delete memories manually. Plus the chat screenshot doesn't indicate any memory was created; that appears as a status message before the response.
I am honestly still shocked people are still using this technology. It's half baked. It will have enough problems even when it's 10x better than it is now, but right now it's also wrong half the time. Why the fuck do people trust a thing they can clearly see be wrong so often?
It's not half-baked. This is the finished dish. This is what LLMs do. They don't think, they generate plausible text.
When I started dating my girlfriend I thought I was right all the time. Eventually, she pointed out that I’m only right about half the time, no big deal. After years of marriage, I can see that I’ve never been right.
If I could use a technology to be right 50% of the time that would be amazing. Hahaha. Kind of kidding.
You must be right more than that – you’re being modest.
Indeed. A major problem with LLMs is the marketing term "artificial intelligence": it gives the false impression that these models actually understand their output, which is not the case. In essence, it's a probability calculation based on what is available in the training data and what the user asks: a kind of collage of different pieces of info from the training data, mixed and arranged in a new way based on the query.
As long as the prompt doesn't conflict directly with the training data ("Explain why the world is flat"), you get answers that are relevant to the question. However, LLMs can neither decide on their own whether one source is more credible than another, nor make moral decisions, because they don't "think"; they are, so to speak, merely another kind of search engine.
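The "collage of training data" point above can be illustrated with a deliberately tiny toy: a bigram model that just counts which word follows which in its training text and then emits the statistically likely continuation. This is a sketch with a made-up three-line corpus, not how any production LLM is implemented, but the underlying principle (probability over observed continuations, no understanding) is the same.

```python
import random
from collections import defaultdict

def train_bigram_model(corpus):
    """Count which word follows which in the training text."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def generate(model, start, length, rng=None):
    """Emit a continuation sampled from the observed follower counts.

    No meaning is involved anywhere: the output is a remix of the
    training sentences, weighted by how often each transition occurred.
    """
    rng = rng or random.Random()
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        words = list(followers)
        weights = [followers[w] for w in words]
        out.append(rng.choices(words, weights=weights, k=1)[0])
    return " ".join(out)

# Hypothetical mini-corpus: the model can only ever recombine these words.
corpus = [
    "you are right",
    "you are not wrong",
    "you deserve better",
]
model = train_bigram_model(corpus)
print(generate(model, "you", 3))
```

Scaled up by many orders of magnitude (with neural networks instead of count tables), the output starts to look like reasoning, but it's still a probability calculation over the training data.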
However, the way many users use LLMs is more like a conversation with a human being, and that's not what these models are; it's just how they're sold, not at all what they're designed to do or capable of.
But yes, this will be a major problem in the future, as most models are controlled by billionaires who don't want them to be what they should be: tools that help parse large amounts of information. They want them to be propaganda machines. So as with other technologies: AI isn't the problem, the ruthless way this technology is being used (by greedy wheelers and dealers) is.
There is a podcast about this called "Better Offline". I will say right off the bat: his delivery is not always great, but he at least explains clearly what is wrong with AI at this point. From an economic standpoint, it's weird that this whole endeavour is even a thing. However, we have bosses that are SALIVATING to replace people, because frankly they cannot think of anything else.
Yes, that's right: LLMs are definitely sold that way: "Save on employees, because you can do it with our AI", which sounds attractive to naive employers, because personnel costs are the largest expense in almost every company.
And that's also true: it obscures what LLMs can actually do and where their value lies. This technology is merely a tool that workers in almost any industry can use to work even more effectively, but that's apparently not enough of a USP: people are so brainwashed that they eat out of the marketing people's hands, because they hear exactly what they want to hear: I don't need employees anymore, because now there are much cheaper robot slaves.
In my opinion, all of this will be a step backward for humanity, because lots and lots of artists, scientists, journalists, writers, even administrative staff and many other essential parts of society will no longer be able to make a living from their profession.
In the longer term, it will lead to the death of innovation and creativity, because it will no longer be possible to make a living from them; and AI can't do any of that itself.
In other words, AI is the wet dream of all those who do not contribute to value creation but (strangely enough) are paid handsomely to manage the wonderful work of those who actually do contribute to value creation.
Unfortunately, it was to be expected how this technology would be used, because sadly, in most societies, the focus is not on contributing to society, but on who has made the most money from these contributions, which in the vast majority of cases is not the person who made the contribution. The use of AI is also based on this logic – how could it be otherwise?
TCS, one of India's biggest IT sweatshops, just announced layoffs. If they're going to use AI to replace their employees, won't their clients use AI to replace TCS?
Of course we don't see what came above, so there might be a hidden part of the conversation where the user directed ChatGPT to respond like this.
Yeah, these AI outrage-bait responses are annoying. I like to test them as well, and the real response is always better than the meme.
the problem is that it's random, due to the temperature setting. It may randomly decide to side with the user or not.
Usually asking an LLM works like this:
“I want to do this and I think it’s a good idea, what do you think?”
✅ Perfect, it’s a fantastic idea and you need to do that ASAP (add three paragraphs of slop about why it’s the best thing ever)
“Someone told me I should do that but I’m not sure, what do you think?”
❌ Absolutely no! They have hidden reasons to push you like that (add three paragraphs of slop about why you shouldn’t trust that person and how to cut them from your life)
Trained on r/relationshipadvice, I see. Surprised it didn't say to divorce her ass over a minor slight.
This is why AI girlfriends can't exist yet. It's not healthy for someone to be constantly validated.
The A in AI stands for asslicking
Interestingly, the missing element is an awareness of what other people are likely going through, a thing that awakens your natural caring for other people when you see it without deflecting.
I didn’t remember his eyes always being pink, strange
Dummies.
Modern society has already been doing this: sycophantic compliments, "you go girl", and a "do what you wanna do" attitude.
It’s not always easy to be yourself and not be a huge PITA at the same time. Society these days only talks about the first part.
Videos that pushed this rhetoric were the first step in turning my mom from a normal shitty person into a full-blown conspiracy theorist. This is definitely going to be a problem.
Semester3383@lemmy.world 9 hours ago
It’s already a problem. People are outsourcing their thinking to LLMs, and LLMs aren’t capable of thinking.
panda_abyss@lemmy.ca 8 hours ago
I would say it’s already a problem because these responses are already what you get on the internet.
Gen Z therapy speak is often exactly this type of advice
spankmonkey@lemmy.world 7 hours ago
LLMs make the problem even worse because of the instant responses and opportunity to ask follow up questions about how right you are and wrong everyone else is.
Eyekaytee@aussie.zone 8 hours ago
I'm surprised you know what an LLM is but don't seem to acknowledge that not all LLMs are alike. In this case the person is using ChatGPT, which is a bit like saying "computers are so crap, I don't know how anyone uses them! They all get viruses and crash all day" because everyone you know uses Windows.
You can get completely different answers from any of them depending on how they were trained and how their base settings are set up.
This is base Mistral Small 3.2 in LM Studio:
I’m really sorry that you’re going through this, but cheating is not the solution to feeling sad or alone. It’s important to communicate openly with your wife about how you’re feeling instead of acting out in a way that will hurt both of you.
Here are some steps you can take:
It’s also important to recognize that your wife is human and deserves understanding and support, especially after a long day of work. Building trust and mutual respect is key to any healthy relationship.
Would you like help finding resources on how to rebuild trust or improve communication in your marriage?
AmbiguousProps@lemmy.today 8 hours ago
Why are you focusing on the fact that different models exist rather than the fact that people are using LLMs (which can’t think) to do their thinking for them?
PalmTreeIsBestTree@lemmy.world 8 hours ago
Once AI starts training on its own created data, then the dead internet theory will become even more true. Humanity is going to suffer for opening this Pandora’s box.
obre@lemmy.world 5 hours ago
What's the market share of Mistral Small 3.2 in LM Studio compared to ChatGPT?
BananaIsABerry@lemmy.zip 8 hours ago
No more than you’ve allowed the groupthink™ to dictate your opinion on the subject.
javiwhite@feddit.uk 7 hours ago
No more than? I'd say it's way more.
Group think is a passive acceptance of beliefs you have read/interacted with, without any desire to challenge those beliefs even if one feels they might not be accurate.
Using LLMs to answer basic questions is the active delegation of thinking to another entity. You’re delegating the entire task, and alleviating yourself of all thought.
It's like comparing someone using multiplication tables to someone using a calculator, except in this instance the calculator is only accurate a third of the time.
Mr_Fish@lemmy.world 7 hours ago
It’s way worse than groupthink. Groupthink is a natural thing that comes from humans being inherently social. Using LLMs for thinking is basically offloading your brain and letting a corporation turn it into a product for you.