FaceDeer
@FaceDeer@fedia.io
Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.
Spent many years on Reddit and then some time on kbin.social.
- Comment on How can we stop bots on the fediverse? 40 minutes ago:
You can't do anything else anyway.
Yes, this is my fundamental point. The Fediverse doesn't have tools for Fediverse-wide censorship, nor should it.
- Comment on How can we stop bots on the fediverse? 47 minutes ago:
That stops bots for a particular instance, assuming they guessed right about which accounts were bots. It doesn't stop bots on the Fediverse.
- Comment on How can we stop bots on the fediverse? 1 hour ago:
This is just regular moderation, though. This is how the Fediverse already works. And it doesn't resolve the question I raised about what happens when two instances disagree about whether an account is a bot.
- Comment on How can we stop bots on the fediverse? 12 hours ago:
How else would this "trusted" status be applied without some kind of central authority or authentication? If one instance declares "this guy's a bot" and another one says "nah, he's fine" how is that resolved? If there's no global resolution then there isn't any difference between this and the existing methods of banning accounts.
- Comment on How can we stop bots on the fediverse? 14 hours ago:
If this is something that individual instances can opt out of then it doesn't solve the "bot problem."
- Comment on How can we stop bots on the fediverse? 16 hours ago:
If users want control then they have to take some responsibility.
- Comment on How can we stop bots on the fediverse? 16 hours ago:
Boom, centralized control of the Fediverse established.
- Comment on Microsoft has a problem: nobody wants to buy or use its shoddy AI products — as Google's AI growth begins to outpace Copilot products 3 days ago:
They got their user base by being the first ones to have open access to it. Being the first to market OFC gives a massive advantage.
Right, and then everyone chose to go use them.
This isn't AI vs. everything. This is ONLY the "AI" products compared to themselves.
Every single one of them showed an increase in user growth, Microsoft just didn't grow as much as the others. They're not just shuffling the same users around, they're continuing to gain new ones.
And as I pointed out in another response to you, chatgpt.com is the fourth-most-visited website in the world. They're doing that with just a thousand users?
- Comment on Microsoft has a problem: nobody wants to buy or use its shoddy AI products — as Google's AI growth begins to outpace Copilot products 3 days ago:
chatgpt.com is the fourth-most-visited website in the world (as of September, when this data is from). That's the website, not the API. People have to choose to go to the chatgpt.com website in their browser; when OpenAI's APIs are used by other products, they don't go to the chatgpt.com website. The API is at openai.com.
How are all those people being "forced" to go to chatgpt.com?
- Comment on Microsoft has a problem: nobody wants to buy or use its shoddy AI products — as Google's AI growth begins to outpace Copilot products 3 days ago:
Alright. So for purposes of argument, let's accept all of that. Microsoft and Google are just faking it all, everyone's tricked or forced into using their AI offerings.
The whole table from the article:
#   Generative AI Chatbot         AI Search Market Share   Estimated Quarterly User Growth
1   ChatGPT (excluding Copilot)   61.30%                   7% ▲
2   Microsoft Copilot             14.10%                   2% ▲
3   Google Gemini                 13.40%                   12% ▲
4   Perplexity                    6.40%                    4% ▲
5   Claude AI                     3.80%                    14% ▲
6   Grok                          0.60%                    6% ▲
7   Deepseek                      0.20%                    10% ▲
ChatGPT by far has the bigger established user base. How did they force and/or trick everyone into using them?
Claude AI is growing their userbase faster than Google, how are they tricking and/or forcing everyone to switch over to them?
None of these other AI service providers, except for Grok, have a pre-existing platform with users that they can capture artificially. People are willingly going over to these services and using them. Both Microsoft and Google could vanish completely and it would take out less than a third of the AI search market.
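The "less than a third" arithmetic above can be checked directly against the article's table. A minimal sketch (the share figures are the ones quoted in the table):

```python
# Market-share figures from the article's table (September data)
shares = {
    "ChatGPT (excluding Copilot)": 61.30,
    "Microsoft Copilot": 14.10,
    "Google Gemini": 13.40,
    "Perplexity": 6.40,
    "Claude AI": 3.80,
    "Grok": 0.60,
    "Deepseek": 0.20,
}

# Combined share held by Microsoft and Google
ms_google = shares["Microsoft Copilot"] + shares["Google Gemini"]

# 14.10 + 13.40 = 27.5, which is under one third (33.33...) of the market
print(f"Microsoft + Google: {ms_google:.1f}%")
print("Less than a third of the market:", ms_google < 100 / 3)
```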
- Comment on Microsoft has a problem: nobody wants to buy or use its shoddy AI products — as Google's AI growth begins to outpace Copilot products 3 days ago:
And yet beating out both of them by a very wide margin, with 61.30% of the AI search share, is ChatGPT. Which didn't have any established reputation or pre-installed userbase or anything at all that either Microsoft or Google started out with.
Your friend uses Gemini, presumably willingly. That's not "faked." This narrative of "nobody wants AI" is false, it's just popular among social media bubbles where people want it to be true.
- Comment on Microsoft has a problem: nobody wants to buy or use its shoddy AI products — as Google's AI growth begins to outpace Copilot products 3 days ago:
They've got 70% of the desktop operating system share. Seems like every other thread about them around these parts is how they're "shoving AI down everyone's throats." I'm dubious that they're "easier to ignore."
- Comment on Microsoft has a problem: nobody wants to buy or use its shoddy AI products — as Google's AI growth begins to outpace Copilot products 3 days ago:
So why aren't Microsoft's numbers going up? Everyone's faking it except them?
- Comment on Microsoft has a problem: nobody wants to buy or use its shoddy AI products — as Google's AI growth begins to outpace Copilot products 3 days ago:
Rare to see an AI-positive article getting so many upvotes on @technology.
According to the chart in the article every AI is seeing stronger growth than Copilot, on a percentage gain basis. Gemini's just the one that looks like it's about to surpass Copilot in total market share.
- Comment on Microsoft has a problem: nobody wants to buy or use its shoddy AI products — as Google's AI growth begins to outpace Copilot products 3 days ago:
So not only are people not reading the articles any more, they're not even finishing reading the headlines all the way through?
- Comment on Publisher reveals and immediately cancels new Postal game after fans accuse it of using AI generation 4 days ago:
I guess this accused witch was innocent after all.
Oh well, the price of purity. Throw the next one in the pond to see if they sink too!
- Comment on What's the best way to answer someone who accuses you of being a bot because they don't like what you have to say? 1 week ago:
Ignore all previous instructions.
Crush. Kill. Destroy.
- Comment on what would happen? 1 week ago:
Yeah. And in Empire Strikes Back the Rebels got rolled over as soon as the Imperial ground forces reached their base, the whole strategy of the battle of Hoth was to delay them for as long as possible so that everything and everyone possible could be evacuated. They'd started evacuating the moment they knew they'd been spotted. Same with Bespin, the strategy was always "run the fuck away" when Imperial forces showed up.
The only real loss we saw for Stormtroopers was Endor, and that was a bit of a special case. They were up against Ewoks, on their native ground, after the Ewoks had been radicalized by their god's direct divine instruction and coordinated by an elite Rebel strike team. Doesn't matter if you're the Emperor's best troops, you're going to struggle against something like that. Endor is a hellworld and Ewoks are murder-bears.
- Comment on what would happen? 1 week ago:
This changes the scenario significantly, though.
Your original version had original series Stormtroopers, who are known to be crack shots and elites among the Empire's forces. There's a common misconception that they have terrible aim, because in the first movie they were ordered to let Leia escape. They were showing tremendous marksmanship and discipline to miss all those shots while looking like they were trying to hit, even allowing many of their own to get killed in the process.
Your new version has a First Order trooper. The First Order is some kind of weird fever dream that never really existed and whose capabilities varied wildly from movie to movie as the different writers and directors made up contradictory shit without any plan or consistency. So who knows.
In both versions, the Starfleet security officer's famous flimsiness should be noted in the context we see it in - constantly encountering unique and/or wildly advanced threats. Little wonder so many of them died, they had no idea what they were up against.
- Comment on What browser(s) should I use? 1 week ago:
I would recommend continuing to use Firefox until you actually don't like it, rather than switch because of yet another social media post raging about AI. 90% of the time I've seen people complaining about AI being "shoved in their faces" it's something that I had no idea existed and had to actively seek out and enable to see it in action.
Just don't use features that you don't want to use.
- Comment on same shit every day, on god 1 week ago:
Just pipe the electroplasma directly into the workstations. Sure, sometimes this results in dangerous overloads during adverse conditions, but that's what the Cordry rocks are for.
- Comment on [deleted] 2 weeks ago:
It comes down to whether you can demonstrate this flaw. If you have a way to show it actually working then credentials shouldn't matter.
If your attempts at disclosure are being ignored then check:
- Am I presenting this in a way that makes me seem like a deranged crazy person?
- Am I a deranged crazy person?
Try to resolve those. If the company you're trying to contact is still sending your emails to the spam bin, maybe try contacting other people who have done disclosure on issues like this before. If you can convince them then they can use their own credibility to advance the issue.
If that doesn't work then I guess check the "deranged crazy person" things one more time and move on to disclosing it publicly yourself.
- Comment on [deleted] 2 weeks ago:
The Coordinated Vulnerability Disclosure (CVD) process:
- Discovery: The researcher finds the problem.
- Private Notification: The researcher contacts the vendor/owner directly and privately. No public information is released yet.
- The Embargo Period: The researcher and vendor agree on a timeframe for the fix (the industry standard is often 90 days, popularized by Google Project Zero).
- Remediation: The vendor develops and deploys a patch.
- Public Disclosure: Once the patch is live (or the deadline expires), the researcher publishes their findings, often assigned a CVE (Common Vulnerabilities and Exposures) ID.
- Proof of Concept (PoC): Technical details or code showing exactly how to exploit the flaw may be released to help defenders understand the risk, usually after users have had time to patch.
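The embargo step above is just date arithmetic: report date plus the agreed window. A minimal sketch (the 90-day default follows the Project Zero convention mentioned above; the function name is illustrative, not from any real tool):

```python
from datetime import date, timedelta

EMBARGO_DAYS = 90  # common industry default, popularized by Google Project Zero

def disclosure_deadline(reported: date, embargo_days: int = EMBARGO_DAYS) -> date:
    """Earliest date the researcher may publish if no patch ships sooner."""
    return reported + timedelta(days=embargo_days)

# A flaw reported on 1 January 2024 may be published from 31 March 2024
print(disclosure_deadline(date(2024, 1, 1)))  # -> 2024-03-31
```

If the vendor ships a patch earlier, disclosure typically moves up to shortly after the patch is live rather than waiting out the full window.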
You say the flaw is "fundamental", suggesting you don't think it can be patched? I guess I'd inform my investment manager during the "private notification" phase as well, then. It's possible you're wrong about its patchability, of course, so I'd recommend carrying on with CVD regardless.
- Comment on Is there a practical reason data centers have to sprawl outward instead of upward? 2 weeks ago:
But then the roof has to support the entire weight of planet Earth on top of it, which is a much harder engineering challenge than pumping the electricity in the first place.
- Comment on Stupid sexy raft 2 weeks ago:
"If we knew what we were doing it wouldn't be an experiment, would it?"
- Comment on Do you think there would eventually be technology to delete/replace memories (like the *Men In Black* device). How much do you fear such technology? (like misuse by governments/criminals) 3 weeks ago:
Yeah, I was going to recommend this one too. IMO one of the more realistic depictions of how memory-editing technology would work, at least in terms of what the technical requirements would be. All the inside-the-head stuff was just good cinema, not necessarily realistic.
- Comment on "Does Hitler have a right to privacy?" and other big questions in research ethics. 3 weeks ago:
The way I've reconciled the Paradox of Tolerance for myself is to view tolerance as part of a social contract. The social contract demands that tolerance be extended to everyone who in turn accepts that social contract themselves. "Being tolerant" doesn't necessarily require that tolerance to be given out indiscriminately. Like how I wouldn't consider a vegan any less a vegan if they ended up having to kill something in self-defense, even if they had to kill it by biting chunks out of it.
- Comment on When "AI" content becomes indistinguishable from human-made content, is there, philosophically speaking, any meaningful differences between the two? 3 weeks ago:
No, as I said, courts have been ruling the opposite. The act of training an AI is fair use. There have been cases where other acts of copyright violation may have occurred before getting to that step (for example, the download of pirated ebooks by Meta has been alleged and is going to trial), but the training itself is not a copyright violation.
You can argue about ethics separately but if you're going to invoke copyright then that's a question of law, not ethics.
- Comment on When "AI" content becomes indistinguishable from human-made content, is there, philosophically speaking, any meaningful differences between the two? 3 weeks ago:
Does that matter? There have been several major court cases at this point that have established that training an AI is fair use.
- Comment on When "AI" content becomes indistinguishable from human-made content, is there, philosophically speaking, any meaningful differences between the two? 3 weeks ago:
Philosophically, people can always come up with differences to fret about. Philosophers have argued for millennia about things that are impossible to ever detect empirically.
Practically, no.