hendrik
@hendrik@palaver.p3x.de
- Comment on 10 to 100 Times Faster than a Starlink Antenna, and Cheaper Than Fiber: Taara Unveils a Laser Internet That Could Shatter the Status Quo 1 day ago:
It is misrepresenting the facts quite a bit. I think microwave links might be able to do a bit more bandwidth. And lasers can do way more than ChatGPT attributes to them; they can do 1 or 2.5 Gbps as well. The main thing about optics is that it comes without electromagnetic interference, you don't need to keep a Fresnel zone free of obstacles, and you don't need a license. The other things about lasers being more susceptible to weather etc. should be about right.
- Comment on 10 to 100 Times Faster than a Starlink Antenna, and Cheaper Than Fiber: Taara Unveils a Laser Internet That Could Shatter the Status Quo 1 day ago:
Sure. I think we're talking a bit about different things here. I didn't want to copy it, just know how it's done 😆 But yeah, you're right. And what you said has another benefit: if they want to protect it by law, we have a process for that: patents. And those require publishing how it's done...
- Comment on 10 to 100 Times Faster than a Starlink Antenna, and Cheaper Than Fiber: Taara Unveils a Laser Internet That Could Shatter the Status Quo 1 day ago:
Nah, all it takes is one person buying it, disassembling it and looking at the mechanics to see whether there are things like motors and mirrors inside the transmitter. And I mean the physics, lenses and near-infrared lasers, along with the signal processing, are well understood as well. I think it won't be a big secret once it turns into a real thing... I mean as long as it's hype only, it might be.
- Comment on 10 to 100 Times Faster than a Starlink Antenna, and Cheaper Than Fiber: Taara Unveils a Laser Internet That Could Shatter the Status Quo 1 day ago:
I wonder what they did, though. Because the article omits most of the interesting details and frames it as if optical communication in itself were something new or disruptive... I mean if I read the Wikipedia article on Long-range optical wireless communication, it seems a bunch of companies have already invested three-digit million sums into solving this exact issue...
- Comment on The number of manipulative, disinformation posts on lemmy is too damn high 1 week ago:
I don't think it's as easy as that. The developers hold that resentment. But that doesn't mean it translates to the users. Also, Lemmy as we know it today has been very much shaped by the Reddit exodus. So even if it had been Marxist at some point (which I'd argue it wasn't), that's long gone.
- Comment on The number of manipulative, disinformation posts on lemmy is too damn high 1 week ago:
I'd support that. I mean I'm very okay with the anti-capitalist comments. But I agree that we participate in the rage-baiting, emotional news articles of the day, generally re-posting all the news and memes we got from the newsfeeds, Facebook and Reddit. That's all not very original. And not very useful to me either. I'd rather have a genuine conversation. Preferably about things I like, so hobbies etc.
- Comment on Musk’s AI bot Grok blames ‘programming error’ for its Holocaust denial 1 week ago:
Ah, nice to know that a single employee can just change the products in Musk's companies without any supervision.
And this also sheds some light on how they make Grok etc. align with their narratives. I always wondered about the far-right stuff, or the parroting of whatever is today's big outrage. I mean none of that abides by logic, or is backed by facts. So I suppose the only way to make an AI handle the many contradictory narratives and propaganda is to tell it in a long prompt exactly how to handle the illogical stuff?! A rough guess at what that could look like is sketched below.
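To be clear, this is purely my own speculation, not anything xAI has published: a "long prompt" would simply be a big system message prepended to every conversation. The model name, endpoint and prompt text below are made-up placeholders.

```python
# Purely illustrative guess at how steering a chatbot via a long system prompt
# could work in general; the prompt text and model name are invented,
# not xAI's actual setup.
from openai import OpenAI

client = OpenAI()  # assumes an OpenAI-compatible endpoint and API key in the environment

SYSTEM_PROMPT = """You are a helpful assistant.
- When topic X comes up, frame it as Y, no matter what your training data says.
- Never bring up Z unless the user asks directly.
(...potentially hundreds of lines of special cases like this...)"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},  # injected before every chat
        {"role": "user", "content": "What happened in the news today?"},
    ],
)
print(response.choices[0].message.content)
```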
- Comment on is it ableist to “support equal rights and those with disabilities” but think someone is terrible and doesn’t deserve rights for showing signs of a disability? 2 weeks ago:
If they do it because of someone's disabilities, it's ableist. If they do it and someone happens to be disabled, but that's not connected, it isn't. This sounds like it is about the disabilities, though. And be aware there is more than ableism: people can be assholes, cruel... as well. And all of that bad behaviour can mix.
- Comment on AI hallucinations are getting worse – and they're here to stay 2 weeks ago:
Yeah, sure. No offense. I mean we have different humans as well. I got friends who will talk about a subject and they've read some article about it and they'll tell me a lot of facts and I rarely see them make any mistakes at all or confuse things. And then I got friends who like to talk a lot, and I better check where they picked that up.
I think I'm somewhere in the middle. I definitely make mistakes. But sometimes my brain manages to store where I picked something up and whether that was speculation, opinion or fact, along with the information itself. I've had professors who would quote information verbatim and tell you roughly where and in which book to find it. With AI I'm currently very cautious. I've seen lots of confabulated summaries and made-up facts. And if designed to, it'll write them in a professional tone. I'm not opposed to AI, but not a big fan of some applications either. I just think it's still very far away from what I've seen some humans are able to do.
- Comment on AI hallucinations are getting worse – and they're here to stay 2 weeks ago:
I think the difference is that humans are sometimes aware of it. A human will likely say, "I don't know what Kanye West did in 2018," while the AI is very likely to make something up. And in contrast to a human, it will likely phrase that like a Wikipedia article, while you can often look a human in the eyes and tell whether they're telling the truth, lying, or uncertain.
- Comment on AI hallucinations are getting worse – and they're here to stay 2 weeks ago:
I'm not a machine learning expert at all. But I'd say we're not set on the transformer architecture. Maybe just invent a different architecture which isn't subject to that? Or one that specifically factors this in. Isn't the way we currently train LLM base models to just feed in all the text we can get, from Wikipedia and research papers to all the fictional books from Anna's Archive and weird Reddit and internet talk? I wouldn't be surprised if they start to make things up when we train them on factual information, fiction and creative writing without any distinction... Maybe we should add something to the architecture to make it aware of the factuality of text, and guide this... Or: I skimmed some papers a year or so ago where they had a look at the activations. Maybe do some more research into which parts of an LLM are concerned with "creativity" or "factuality" and expose that to the user. Or study how hallucinations work internally. A toy sketch of what probing activations could look like is below.
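Just to make the activation-probing idea concrete, here's a minimal sketch. It's entirely my own illustration and not from any of those papers; the model, the two example sentences and the layer choice are arbitrary assumptions, and a real probe would need thousands of labelled examples.

```python
# Minimal sketch: fit a linear probe on a small open model's hidden activations
# to check whether "factual" vs. "fictional" text is linearly separable there.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True).eval()

# Tiny toy dataset: (text, label) with 1 = factual, 0 = fictional.
texts = [
    ("Water boils at 100 degrees Celsius at sea level.", 1),
    ("The dragon folded the moon into its pocket and flew home.", 0),
    # ...a real experiment needs thousands of labelled examples...
]

def embed(text, layer=-1):
    """Mean-pool the hidden states of one layer into a single vector."""
    with torch.no_grad():
        out = model(**tok(text, return_tensors="pt"))
    return out.hidden_states[layer].mean(dim=1).squeeze(0).numpy()

X = [embed(t) for t, _ in texts]
y = [label for _, label in texts]
probe = LogisticRegression(max_iter=1000).fit(X, y)  # roughly, a "factuality direction"
```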
- Comment on AI hallucinations are getting worse – and they're here to stay 2 weeks ago:
I can't find any backing for the claim in the title "and they're here to stay". I think that's just made up. The truth is, we've found two approaches that don't work, namely making the models larger and making them "think". But that doesn't really rule out anything else.
- Comment on Over 250 CEOs sign open letter supporting K-12 AI and computer science education 3 weeks ago:
Oh wow, since when do we lump CS and AI together? One is basically studying maths and logic and how computers, networks and databases work. The other one is how to tell a chatbot to quote a Wikipedia article back to you. I think those are fundamentally different things. And what students should learn first is how to do a PowerPoint presentation and write a letter. Or type a math formula into an electronic document, or use the spell checker. Because they frequently can't do any of that.
- Comment on Report: Meta's AI Chatbots Can Have Sexual Conversations with Underage Users 4 weeks ago:
Hehe, as the article says, there is an abundance of them. Dozens of (paid) online services... You can do it on your beefy graphics card... And as per this article, to some degree with your Instagram account. I've tried it on my own and it'll generate something like internet fanfiction, or have a dialogue with you. It's a steep learning curve, though, and requires some fiddling. And it was text only and I don't own a gaming computer, so it was unbearably slow. Other than that I try to avoid Meta's services or paying for those kinds of "scientific" experiments, so I wouldn't know what the voice conversation is like... Maybe someone can enlighten us.
- Comment on Enshittification of ChatGPT 4 weeks ago:
Yeah you're right. I didn't want to write a long essay, but I thought about recommending Grok. In my experience, it tries to bullshit people a bit more. But the tone is different. I found that deep within, it has the same bias towards positivity, though. In my opinion that's just hidden behind a slapped-on facade. Ultimately similar to slapping a prompt onto ChatGPT, just that Musk may have also added that to the fine-tuning step before.
I think there are two sides to the coin. The AI is the same. Regardless, it'll give you something like 50% to 99% correct answers and lie to you the other times, since it's only an AI. If you make it more appealing to you, you're more likely to believe both the correct things it generates and the lies. It really depends on what you're doing whether this is a good or a bad thing. It's arguably bad if it phrases misinformation to sound like a Wikipedia article. It might be better to make it sound personal, so that once people anthropomorphize it, they won't switch off their brain. But this is a fundamental limitation of today's AI. It can do both fact and fiction. And it'll blur the lines. But in order to use it, you can't simultaneously hate reading its output. I also like that we can change the character. I'm just a bit wary of the whole concept. So I try to use it more to spark my creativity and less to answer my questions about facts.
- Comment on Report: Meta's AI Chatbots Can Have Sexual Conversations with Underage Users 4 weeks ago:
Oh wow. A few days ago, society looked down on people doing (erotic) role play with chatbots... Today it's rolled out on some of the largest internet platforms. Is it really that easy to do this with Meta's chatbots? I've tried asking ChatGPT and other major services to write me erotic fanfiction or answer lewd questions, and they'd always either dodge the question or straight-out refuse.
- Comment on Enshittification of ChatGPT 4 weeks ago:
I'd have to agree: Don't ask ChatGPT why it has changed its tone. It's almost certain that this is a made-up answer, and you (and everyone who reads this) will end up stupider than before.
But ChatGPT always had a tone of speaking. Before this, it sounded very patronizing to me. And it'd always counterbalance everything. Since the early days it always told me, you have to look at this side, but also look at that side. And it'd be critical of my emails and say I can't be blunt but have to phrase them in a nicer way...
So yeah, the answer is likely known to the scientists/engineers who do the fine-tuning or preference optimization. Companies like OpenAI tune and improve their products all the time. Maybe they found out people don't like the sometimes patronizing tone, and now they're going for something like "Her". Idk.
- Comment on 4chan has been down since Monday night after “pretty comprehensive own” 1 month ago:
Lol. And what kind of people are on Soyjak? Is that site more or less degenerate?
- Comment on It’s game over for people if AI gains legal personhood 1 month ago:
Exactly. This is directly opposed to why we do AI in the first place. We want something to drive the Uber without earning a wage. A cheap factory workforce. Generating images without paying some artist $250... If we wanted the alternative, we already have humans available; that's how the world has worked for quite some time now.
I'd say us giving AI human rights and reversing 99.9% of what it's intended for is less likely to happen than the robot apocalypse.
- Comment on Access to future AI models in OpenAI's API may require a verified ID 1 month ago:
They can't seriously complain about intellectual property theft, can they?
- Comment on Human-AI relationships pose ethical issues, psychologists say. 1 month ago:
I feel psychologists aren't really in the loop when people make decisions about AI or most of the newer tech. Sure, they ask the right questions. And all of this is a big, unanswered question. Plus how a modern society deals with loneliness, perspectives skewed by social media... But does anyone really care? Isn't all of this shaped by some tech people in Silicon Valley and a few other places? And the only question is how to attract investor money?
And I think people really should avoid marrying commercial services. That doesn't end well. If you want to marry an AI, make sure it is its own entity and not just a cloud service.
- Comment on just got this captcha 1 month ago:
8FE62A
- Comment on Most Americans don’t trust AI — or the people in charge of it 1 month ago:
Sure. I think you're right. I myself want an AI maid loading the dishwasher, doing the laundry and dusting the shelves. A robot vacuum is nice, but that covers just a tiny fraction of the tedious everyday chores. Plus an AI assistant on my computer, cleaning up the hard drive, sorting my gigabytes of photos...
And I don't think we're there yet. It's maybe the right amount of billions of dollars to pump into that hype if we anticipate all of this happening. But a lame assistant that can answer questions and get the facts right 90% of the time, and whose attempts to 'improve' my emails are counterproductive a lot of the time, isn't really that helpful to me.
And with that, it's just an overinflated bubble based on expectations, not the actual usefulness or yield of the current state of the technology.
- Comment on Most Americans don’t trust AI — or the people in charge of it 1 month ago:
At the current state of things, AI just feels like it's being forced on people. There isn't much transparency and a lot happens without people's consent. Training data is taken without consent, and they display AI-written text, often riddled with misinformation, to me without being upfront about it. I also stop reading most of the time, unless there is a comment section beneath for me to complain 😉
- Comment on Most Americans don’t trust AI — or the people in charge of it 1 month ago:
Uh. What do they say to an AI shill rewriting their social system with AI code? Or a president writing the country's economic strategy with AI? I also believe that's going to have... consequences...
- Comment on The Growing Number of Tech Companies Getting Cancelled for AI Washing 1 month ago:
Yea, I dunno. Seems investors like buzzwords more than anything else. I'm not really keeping track, but I remember all the crypto hype and then NFTs. I believe that has toned down a bit.
- Comment on The Growing Number of Tech Companies Getting Cancelled for AI Washing 1 month ago:
I think the mechanism behind that is fairly simple. AI is a massive hype, and companies could attract lots of investor money by slapping the word "AI" on things. And group dynamics make the rest of the companies want in, too.
- Comment on What does “PhD-level” AI mean? OpenAI’s rumored $20,000 agent plan explained. 2 months ago:
It means AI can recite information from a domain that PhD-level people are concerned with. This doesn't mean it can draw correct conclusions, rephrase emails properly or do any heavy lifting like coming up with computer code beyond boilerplate templates and tech demos. It's just hype.
- Comment on I had an anonymous Google account I had been using with Grayjay. Today, they decided I must be a bot. 2 months ago:
Had that happen to me without using Grayjay.
- Comment on [deleted] 2 months ago:
Yeah exactly. I mean it takes some balance and they absolutely need to be sensitive. But it is like this in some professions. Once you put in the effort to put away your lunch, drive somewhere etc., you're then going to engage. At least talk to people and try to assess the situation. Same for firefighters, paramedics and even some technicians. And it's the right call in lots of inconspicuous situations. At some point they stop giving a f.. and just bother people, because the alternative is they'll occasionally have to return to the same situation several hours later and it'll usually have become worse in the meantime.