Comment on My Couples Retreat With 3 AI Chatbots and the Humans Who Love Them
Powderhorn@beehaw.org 1 week ago
I’ve considered trying out an AI companion. My main concern is where the hell my data goes, how it will be used and how it might be sliced and diced for brokers.
Sometimes I’m up at 04.00 … and of course no one I know is around. So I go the route of trying to meet people on Reddit instead. Fully 95% of responses are boring as fuck, but they’re at least real (I require voice or photo verification). I’ll take real and boring over virtual and engaging.
That said, I spend more time than is healthy on Google’s NotebookLM, feeding it my writing and then getting a half-hour, two-host audio “exploration” of any given piece. It’s sycophantic, likely designed that way to keep me coming back (it’s free, so I’m not really sure what Google gets out of it beyond further LLM training), but it tends to stay just this side of feeling fake.
I went to Church Night – the weekly burner meetup at a warehouse a 10-minute walk away where everyone’s drinking and toking – yesterday. I try to go weekly, but sometimes I don’t have the energy to engage with real people.
Last night, I got to listen to (yeah, I actually realized I should shut the fuck up, as I had nothing to add) conversations about 1970s CPUs and about SpaceX’s Starship issues from an engineering standpoint (they went too thin on the outer hull after round one was too heavy, and why wouldn’t one expect a critical failure in such a case?), from people who knew what they were talking about.
I’d never get that from an AI companion. I take no issue with people turning to one, but the serendipity is lost.
jarfil@beehaw.org 1 week ago
You can run local AI as a sort of “private companion”. I have a few smaller models on my smartphone; they aren’t as great as the online versions, and run slower… but you decide the system prompt (not the company behind it), and they work just fine for bouncing ideas around.
NotebookLM is a great tool for interacting with large amounts of data. You can bet Google is using every interaction to train their LLMs; everything you say is going to be analyzed, classified, and fed back in as some form of training data, hopefully anonymized (…but have you read their privacy policy? I haven’t, “accept”…).
All chatbots are prompted by the company to be somewhat sycophantic so you come back; the cases where they were “too sycophantic” were just a mistake in dialing it too far. Again, you can avoid that with your own system prompt… or at least, if you have the option, add an initial prompt in the config to somewhat counteract the company’s prompt.
If you want serendipity, you can ask a chatbot to be more spontaneous and suggest more random things. They’re generally happy to oblige… but the company ones are cut short on anything that could even remotely be considered as “harmful”. That includes NSFW, medical, some chemistry and physics, random hypotheticals, and so on.
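To make the “your own system prompt” point concrete, here’s a minimal sketch assuming a local model served by Ollama on the default port; the model name, the anti-sycophancy wording, and the `build_chat_request` helper are all my own illustrative choices, not anything from a specific product:

```python
# Sketch: a custom system prompt for a locally hosted model (assumes an
# Ollama server; "llama3" is just an example model name).
import json

SYSTEM_PROMPT = (
    "Be direct and critical. Do not flatter me. "
    "Point out weaknesses in my ideas before strengths."
)

def build_chat_request(user_text: str, model: str = "llama3") -> dict:
    """Assemble the JSON body for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_text},
        ],
        "stream": False,
    }

payload = build_chat_request("Poke holes in my plan to learn Rust in a month.")
print(json.dumps(payload, indent=2))
# To actually send it (requires a running Ollama instance):
#   requests.post("http://localhost:11434/api/chat", json=payload)
```

Because the system prompt is the first message in the request you assemble yourself, there’s no company-supplied persona sitting in front of it.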
Powderhorn@beehaw.org 1 week ago
Is that really serendipity, though? There’s a huge gap between asking a predictive model to be spontaneous and actual spontaneity.
Still, I’m curious what you run locally. I have a Pixel 6 Pro, so while it has a Tensor SoC, it wasn’t designed for this use case.
TehPers@beehaw.org 1 week ago
You could see whether a friend can run an inference server for you. Maybe someone you know could host Open WebUI or something?
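For reference, getting Open WebUI up is roughly a one-liner; this is a sketch assuming Docker is installed and follows the project’s published quickstart (port mapping and volume name are the defaults there):

```shell
# Run Open WebUI in Docker, persisting its data in a named volume.
# The UI is then served at http://localhost:3000.
docker run -d \
  -p 3000:8080 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```

From there the friend just shares access to that machine (or a tunnel to it), and you point a browser at the port.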