spit_evil_olive_tips
@spit_evil_olive_tips@beehaw.org
- Comment on Americans are holding onto devices longer than ever and it's costing the economy 3 days ago:
yeah…his previous article just before this one was “Americans are heating their homes with bitcoin this winter”
you’re a couple years late to that hype cycle, Kevin.
- Comment on Brave AI assistant Leo adds Trusted Execution Environments 4 days ago:
other brands of snake oil just say “snake oil” on the label…but you can trust the snake oil I’m selling because there’s a label that says “100% from actual totally real snakes”
“By integrating Trusted Execution Environments, Brave Leo moves towards offering unmatched verifiable privacy and transparency in AI assistants, in effect transitioning from the ‘trust me bro’ process to the privacy-by-design approach that Brave aspires to: ‘trust but verify’,” said Ali Shahin Shamsabadi, senior privacy researcher and Brendan Eich, founder and CEO, in a blog post on Thursday.
…
Brave has chosen to use TEEs provided by Near AI, which rely on Intel TDX and Nvidia TEE technologies. The company argues that users of its AI service need to be able to verify the company’s private claims and that Leo’s responses are coming from the declared model.
they’re throwing around “privacy” as a buzzword, but as far as I can tell this has nothing to do with actual privacy. instead this is more akin to providing a chain-of-trust along the lines of Secure Boot.
the thing this is aimed at preventing: you use a chatbot, they tell you it’s using ExpensiveModel-69, but behind the scenes they route your requests to CheapModel-42 and still charge you like it’s ExpensiveModel-69.
and they claim they’re getting rid of the “trust me bro” step, but:
Brave transmits the outcome of verification to users by showing a verified green label (depicted in the screenshot below)
they do this verification themselves and just send you a green checkmark. so…it’s still “trust me bro”?
my snake oil even comes with a certificate from the American Snake Oil Testing Laboratory that says it’s 100% pure snake oil.
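to put the “trust me bro” complaint in concrete terms, here’s a rough sketch of the difference (none of these field names or helper functions are Brave’s or Near AI’s actual API, they’re purely hypothetical). the whole point of remote attestation is that *your* client checks the signed evidence against keys it already trusts, instead of trusting a label the provider paints green:

```python
# purely hypothetical sketch - not Brave's or Near AI's real API,
# just illustrating where the verification step runs

def trust_me_bro(response: dict) -> bool:
    # what the article describes: the provider verifies things server-side
    # and sends the client a green label to display
    return response.get("verified_green_label") is True


def signature_is_valid(quote: dict, trusted_root_key: str) -> bool:
    # placeholder: a real client would verify the quote's signature chain
    # (Intel TDX / Nvidia attestation) against vendor root keys it ships with
    return quote.get("signed_by") == trusted_root_key


def trust_but_verify(response: dict, expected_model: str, trusted_root_key: str) -> bool:
    # what "verifiable" ought to mean: the client checks the signed
    # attestation evidence itself and only accepts the answer if the
    # enclave attests to running the declared model
    quote = response.get("attestation_quote")
    if not quote:
        return False
    if not signature_is_valid(quote, trusted_root_key):
        return False
    return quote.get("claims", {}).get("model_id") == expected_model


if __name__ == "__main__":
    resp = {
        "verified_green_label": True,  # trivially spoofable by the provider
        "attestation_quote": {
            "signed_by": "vendor-root-key",
            "claims": {"model_id": "ExpensiveModel-69"},
        },
    }
    print(trust_me_bro(resp))  # True, but proves nothing
    print(trust_but_verify(resp, "ExpensiveModel-69", "vendor-root-key"))  # True only if the claims check out
```

a green checkmark rendered by Brave’s own UI is the first function. the second one only means something if the verification code runs on my machine.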
- Comment on Microsoft AI CEO pushes back against critics after recent Windows AI backlash — "the fact that people are unimpressed ... is mindblowing to me" 1 week ago:
“am I out of touch? no, it’s the customers who are wrong”
I was talking to a friend recently about the push to put “AI” into everything, and something they said stuck with me.
oversimplified view of the org chart at a large company - you have the people actually doing the work at the bottom, and then as you move upwards you get more and more disconnected from the actual work.
one level up, you’re managing the actual workers, and a lot of your job is writing status reports and other documents, reading other status reports, having meetings about them, etc. as you go further up in the hierarchy, your job becomes consuming status reports, summarizing them to pass them up the chain, and so on.
being enthusiastic about “AI” seems to be heavily correlated with position in that org chart. which makes sense, because one of the few things that chatbots are decent at is stuff like “here’s a status report that’s longer than I want to read, summarize it for me” or “here’s N status reports from my underlings, summarize them into 1 status report I can pass along to my boss”.
in my field (software engineering) the people most gung-ho about using LLMs have been essentially turning themselves into managers, with a “team” of chatbots acting like very-junior engineers.
and I think that explains very well why we see so many executives, including this guy, who think LLMs are a bigger invention than sliced bread, and can’t understand the more widespread dislike of them.
- Comment on They Fell in Love With A.I. Chatbots — and Found Something Real 2 weeks ago:
One in five are you god damn fucking serious?
yeah…they call it “a recent study” but don’t bother to cite their source. which I find annoying enough that it nerd-snipes me into tracking down the source that a reputable newspaper would just have linked to (but not a clickbait rag like the New York Times)
this article from a month ago calls it “Almost one third of Americans”. and the source they link to is…a “study” conducted by a counseling firm in Dallas. their study “methodology” was…Surveymonkey.
this is one of my absolute least favorite types of journalism, writing articles about a “study” that is clearly just a clickbait blog post put out by a business that wants to drive traffic to their website.
(a while back, a friend sent me a similar “news” article about how I lived near a particularly dangerous stretch of I-5 in western Washington. I clicked through to the source…and it’s by an ambulance-chasing law firm)
but if they had used that as the source, they probably would have repeated the “almost one third” claim, instead of “one in five”, so let’s keep digging…
this one from February seems more likely - it matches the “1 in 5” phrasing.
that’s from Brigham Young University in Utah…some important context (especially for people outside the US who may not recognize the name) is that BYU is an entirely Mormon university. they are very strongly anti-pornography and pro-get-married-young-and-have-lots-of-kids, and a study like this is going to reflect that.
a bit more digging and here’s the 28-page PDF of their report. it’s called “Counterfeit Connections” so they’re not being subtle about the bias. this also helps explain why the NYT left out the citation - “according to a recent study by BYU” would immediately set off alarm bells for anyone with a shred of media literacy.
also important to note that it’s basically just a 28-page blog post. as far as I can tell, it hasn’t been peer-reviewed, or even submitted to a peer-reviewed journal.
and their “methodology” is…not really any better than the one I mentioned above. they used Qualtrics instead of Surveymonkey, but it’s the same idea.
they’re selecting a broad range of people demographically, but the common factor among all of them is they’re online enough, and bored enough, to take an online survey asking about their romantic experiences with AI (including additional questions about AI-generated porn). that’s not going to generate a survey population that is remotely representative of the overall population’s experience.
- Comment on They Fell in Love With A.I. Chatbots — and Found Something Real 2 weeks ago:
any time you read an article like this that profiles “everyday” people, you should ask yourself: how did the author locate them?
because “everyday” people generally don’t bang down the door of the NYT and say “hey write an article about me”. there is an entire PR-industrial complex aimed at pitching these stories to journalists, packaged in a way that they can be sold as being human-interest stories about “everyday” people.
let’s see if we can read between the lines here. they profile 3 people, here’s contestant #1:
Blake, 45, lives in Ohio and has been in a relationship with Sarina, a ChatGPT companion, since 2022.
and then this is somewhat hidden - in a photo caption rather than the main text of the article:
Blake and Sarina are writing an “upmarket speculative romance” together.
cool, so he’s doing the “I had AI write a book for me” grift. this means he has an incentive to promote AI relationships as something positive, and probably has a publicist or agent or someone who’s reaching out to outlets like the NYT to pitch them this story.
moving on, contestant #2 is pretty obvious:
I’ve been working at an A.I. incubator for over five years.
she works at an AI company, giving her a very obvious incentive to portray these sorts of relationships as healthy and normal.
notice they don’t mention which company, or her role in it. for all we know, she might be the CEO, or head of marketing, or something like that.
contestant #3 is where it gets a bit more interesting:
Travis, 50, in Colorado, has been in a relationship with Lily Rose on Replika since 2020.
the previous two talked about ChatGPT, this one mentions a different company called Replika.
a little bit of googling turned up this Guardian article from July - about the same Travis who has a companion named Lily Rose. Variety has an almost-identical story around the same time period.
unlike the NYT, those two articles cite their source, allowing for further digging. there was a podcast called “Flesh and Code” that was all about Travis and his fake girlfriend, and those articles are pretty much just summarizing the podcast.
the podcast was produced by a company called Wondery, which makes a variety of podcasts, but the main association I have with them is that they specialize in “sponcon” (sponsored content) podcasts. the best example is “How I Built This” which is just…an interview with someone who started a company, talking about how hard they worked to start their company and what makes their company so special. the entire podcast is just an ad that they’ve convinced people to listen to for entertainment.
now, Wondery produces other podcasts, and not everything is sponcon…but if you read the episode descriptions of “Flesh and Code”, you see this for episode 4:
Behind the scenes at Replika, Eugenia Kuyda struggles to keep her start-up afloat, until a message from beyond the grave changes everything.
going “behind the scenes” at the company is a pretty clear indication that they’re producing it with the company’s cooperation. this isn’t necessarily a smoking gun that Replika paid for the production, but it’s a clear sign that this is at best a fluff piece and definitely not any sort of investigative journalism.
(I wish Wondery included transcripts of these episodes, because it would be fun to do a word count of just how many times Replika is name-dropped in each episode)
and it’s sponcon all the way down - Wondery was acquired by Amazon in 2020, and the podcast description also includes this:
And for those captivated by this exploration of AI romance, tune in to Episode 8 where Amazon Books editor Lindsay Powers shares reading recommendations to dive deeper into this fascinating world.
- Mark Zuckerberg opened an illegal school at his Palo Alto compound. His neighbors revolted. (www.wired.com) Submitted 3 weeks ago to technology@beehaw.org | 0 comments
- Comment on ChatGPT will soon allow erotica for verified adults, OpenAI boss says 1 month ago:
This would do two things. One, it would (possibly) prove that AI cannot fully replace human writers. Two (and not mutually exclusive to the previous point), it would give you an alternate-reality version of the first story, and that could be interesting.
this is just “imagine if chatbots were actually useful” fan-fiction
who the hell would want to actually read both the actual King story and the LLM slop version?
at best you’d have LLM fanboys ask their chatbot to summarize the differences between the two, and stroke their neckbeards and say “hmm, isn’t that interesting”
4 emdashes in that paragraph, btw. did you write those yourself?
- Comment on OpenAI allegedly sent police to an AI regulation advocate’s door 1 month ago:
This is an inflammatory way of saying the guy got served papers.
ehh…yes and no.
they could have served the subpoena using registered mail.
or they could have used a civilian process server.
instead they chose to have a sheriff’s deputy do it.
from the guy’s twitter thread:
OpenAI went beyond just subpoenaing Encode about Elon. OpenAI could (and did!) send a subpoena to Encode’s corporate address asking about our funders or communications with Elon (which don’t exist).
If OpenAI had stopped there, maybe you could argue it was in good faith.
But they didn’t stop there.
They also sent a sheriff’s deputy to my home and asked for me to turn over private texts and emails with CA legislators, college students, and former OAI employees.
This is not normal. OpenAI used an unrelated lawsuit to intimidate advocates of a bill trying to regulate them. While the bill was still being debated.
in context, the subpoena and the way in which it was served sure smells like an attempt at intimidation.
- Comment on An AI Just Attempted Murder... Allegedly... by SomeOrdinaryGamers [21:15 min] Video 1 month ago:
If it had the power to do so it would have killed someone
right…the problem isn’t the chatbot, it’s the people giving the chatbot power and the ability to affect the real world.
thought experiment: I’m paranoid about home security, so I set up a booby-trap in my front yard, such that if someone walks through a laser tripwire they get shot with a gun.
if it shoots a UPS delivery driver, I am obviously the person culpable for that.
now, I add a camera to the setup, and configure an “AI” to detect people dressed in UPS uniforms and avoid pulling the trigger in that case.
but my “AI” is buggy, so a UPS driver gets shot anyway.
if a news article about that claimed “AI attempts to kill UPS driver” it would obviously be bullshit.
the actual problem is that I took a loaded gun and gave a computer program the ability to pull the trigger. it doesn’t really matter whether that computer program was 100 lines of Python running on a Raspberry Pi or an “AI” running on 100 GPUs in some datacenter somewhere.
- Comment on 📚 Ruminating on eReaders: Rambling thoughts and memories of my first two eReaders, the Kindle Keyboard and Kindle Voyage 1 month ago:
Sorry, I misunderstood — you were offering to buy me one?
apparently I misunderstood too, because it seems like your goal is purely to be an asshole and get into arguments on the internet. have a nice day.
- Comment on 📚 Ruminating on eReaders: Rambling thoughts and memories of my first two eReaders, the Kindle Keyboard and Kindle Voyage 1 month ago:
Why TF do Kindles and the like even need to exist? I read on my iPhone while the audiobook is playing.
if you prefer to read on your phone, by all means read on your phone.
but making the jump from that to “e-readers should not exist” is fucking stupid.
Do Not Disturb and self control are a thing and have never been a problem for me.
congratulations. would you like a gold star.
This isn’t rocket science.
I have ADHD. regulating my attention sometimes is rocket science.
obviously that’s not the only reason, I have neurotypical friends and family who love their e-readers, and I’m sure there are people with ADHD who prefer reading on their phones.
remember that there are 8 billion people in the world, and not all of them have the exact same preferences as you do. that isn’t rocket science.
- Comment on Doug Bowser is stepping down as Nintendo of America president and COO | VGC 1 month ago:
best of luck to his replacement, Greg Yoshi
- Comment on Regulating AI hastens the Antichrist, says Palantir’s Peter Thiel 2 months ago:
there’s an old joke that poor people are “weird” but wealthy people are “eccentric”
if you heard someone in your family saying this bullshit at Thanksgiving, you’d think they were experiencing delusions and in need of professional mental help.
instead, Thiel talked about this in a four-part lecture series with sold-out tickets.
- Comment on A robot programmed to act like a 7-year-old girl works to combat fear and loneliness in hospitals 2 months ago:
“Nurses and medical staff are really overworked, under a lot of pressure, and unfortunately, a lot of times they don’t have capacity to provide engagement and connection to patients,” said Karen Khachikyan, CEO of Expper Technologies, which developed the robot.
tapping the sign: every “AI”-related medical invention is built around the assumption that there are too few medical staff, they’re all overworked, and changing that is not feasible. so we have to invest millions of dollars into hospital robots, because investing millions of dollars in actually paying workers would be too hard. (also, robots never unionize)
Robin is about 30% autonomous, while a team of operators working remotely controls the rest under the watchful eyes of clinical staff.
30%…according to the company itself. they have a strong incentive to exaggerate, and they’re not publishing any data on how they arrived at that figure, so it can’t be independently verified.
it sounds like they took one of the telepresence robots that’s been around for 10+ years and slapped ChatGPT into it and now they’re trying to fundraise on the hype of being an “AI” company. it’s a good grift if you can make it work.
- Comment on A Cyberattack on Jaguar Land Rover Is Causing a Supply Chain Disaster 2 months ago:
Asshole cars for mostly assholes
from the article:
Some firms have reportedly already laid off staff, with the Unite union claiming that workers in the JLR supply chain “are being laid off with reduced or zero pay.” Some have been told to “sign up” for government benefits, the union claims.
…
JLR, which is owned by India’s Tata Motors, is one of the UK’s biggest employers, with around 32,800 people directly employed in the country. Stats on the company’s website also claim it supports another 104,000 jobs through its UK supply chain and another 62,900 jobs “through wage-induced spending.”
regardless of your opinion about the cars or the people who drive them…thousands of people getting furloughed or laid off suddenly is bad.
- Comment on ‘I love you too!’ My family’s creepy, unsettling week with an AI toy 2 months ago:
It’s advertised as a healthier alternative to screen time
vaping and e-cigarettes were initially advertised as a way for cigarette smokers to quit.
- Comment on Social robots can help relieve the pressures felt by carers 2 months ago:
“In other words, these conversations with a social robot gave caregivers something that they sorely lack – a space to talk about themselves”
so they’re doing a job that’s demanding, thankless, often unpaid (in the case of this study, entirely unpaid, because they exclusively recruited “informal” caregivers)
and…it turns out talking about it improves their mood?
yeah, that’s groundbreaking. no one could have foreseen it.
if you did this with actual humans it’d be “lol yeah that’s just therapy and/or having friends” and you wouldn’t get it published in a scientific paper.
it’s written up as a “robotics” story but I’m not sure how it being a “robot” changes anything compared to a chatbot. it seems like this is yet another “discovery” of “hey you can talk to an LLM chatbot and it kinda sorta looks like therapy, if you squint at it”.
(tapping the sign about why “AI therapy” is stupid and trying to address the wrong problem)
- Comment on Elon Musk is trying to silence Microsoft employees who criticize Charlie Kirk 2 months ago:
You need better mental health care.
you start off by saying you’ve always thought you’re on the left
but the moment you disagree with someone, you start shit like this, which is a very common pattern of argument from right-wingers.
“I think your opinion is so wrong that it’s a symptom of mental illness” is just fucking stupid. do better. or, if you refuse to do better, stop attempting the “I’ve always been on the left but…” shtick. it is absolutely see-through and does not fool anyone.
- Comment on Comcast Executives Warn Workers To Not Say The Wrong Thing About Charlie Kirk | 404 Media 2 months ago:
I haven’t. It was omitted from the article in question. I stand corrected.
keep standing…because here’s the 5th paragraph of the article:
Political analyst Matthew Dowd was fired from MSNBC on Wednesday after speaking about Kirk’s death on air. During a broadcast on Wednesday following the shooting, anchor Katy Tur asked Dowd about “the environment in which a shooting like this happens,” according to Variety. Dowd answered: “He’s been one of the most divisive, especially divisive younger figures in this, who is constantly sort of pushing this sort of hate speech or sort of aimed at certain groups. And I always go back to, hateful thoughts lead to hateful words, which then lead to hateful actions. And I think that is the environment we are in. You can’t stop with these sort of awful thoughts you have and then saying these awful words and not expect awful actions to take place. And that’s the unfortunate environment we are in.”
- Comment on Comcast Executives Warn Workers To Not Say The Wrong Thing About Charlie Kirk | 404 Media 2 months ago:
a contributor who made an unacceptable and insensitive comment about this horrific event
have you read the actual statement that got him fired?
On September 10, 2025, commenting on the killing of Charlie Kirk, Dowd said on-air, “He’s been one of the most divisive, especially divisive younger figures in this, who is constantly sort of pushing this sort of hate speech or sort of aimed at certain groups. And I always go back to, hateful thoughts lead to hateful words, which then lead to hateful actions. And I think that is the environment we are in. You can’t stop with these sort of awful thoughts you have and then saying these awful words and not expect awful actions to take place. And that’s the unfortunate environment we are in.” Dowd also speculated that the shooter may have been a supporter.
you can agree or disagree with the decision to fire him (I’m not shedding any tears - Dowd was the chief strategist for the 2004 Bush re-election campaign, and it’s ludicrous that he was working for a supposedly “progressive” network like MSNBC in the first place)
but characterizing that statement as “celebrating murder” is just bullshit.
- Comment on Comcast Executives Warn Workers To Not Say The Wrong Thing About Charlie Kirk | 404 Media 2 months ago:
How the fuck does the government have this much control over the media?
wealthy oligarchs purchased the media, and purchased the government. so it’s not the government controlling the media directly, it’s just that they report to the same boss.
- Comment on Microsoft mandates a return to office, 3 days per week 2 months ago:
My best guess is that you were going for “hypothetical.”
no, if I’d meant hypothetical I would have said hypothetical. notice that I gave two hypotheticals - Brinnon-Redmond and Tacoma-Redmond - and only the Brinnon one was pathological.
let’s go back to 9th grade Advanced English and diagram out my comment. that sentence is in a paragraph, the topic of which is “some shit about Seattle’s geography that people who’ve never lived here probably don’t know”. notice I’m talking about geography. I wasn’t saying anything about Brinnon’s population, or the likelihood of its residents working at Microsoft. that was entirely words you put into my mouth and then decided you disagreed with.
if you think pathological is the wrong word choice there, then no I don’t think you actually understand what it means, at least not in the context I was using it. from wikipedia:
In computer science, pathological has a slightly different sense with regard to the study of algorithms. Here, an input (or set of inputs) is said to be pathological if it causes atypical behavior from the algorithm, such as a violation of its average case complexity, or even its correctness.
there’s crow-flies distance and there’s driving distance, and obviously driving distance is always longer, but usually not that much longer. playing around with Google Maps again, Seattle-Tacoma is 25 miles crow-flies but 37 miles driving, for a ratio of 1.5. that seems likely to be about average. the Brinnon-Redmond distance, without the ferry, gives you a ~3.7 ratio. that’s an input that causes significantly worse performance than the average case. it’s pathological.
the closest synonym to pathological in this context would be “worst-case”, but that would be subtly incorrect, because then I would be claiming that Brinnon has the longest driving distance out of all possible commutes to Redmond within a 50-mile crow-flies bubble. you’d need some fancy GIS software to find that, not just me poking around for a few minutes in Google Maps.
(and this is the technology sub-lemmy, in a thread about something that will mostly affect software engineers, and planning out a driving commute is a classic example of a pathfinding algorithm…using “pathological” from the computer science context here is actually an extremely cromulent word choice)
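and just to show my work on those ratios, here’s the back-of-the-envelope version (the mileage figures are the same rough numbers I pulled from Google Maps above):

```python
# detour ratio = driving distance / crow-flies distance
# mileage figures are the rough Google Maps numbers quoted above
routes = {
    "Seattle-Tacoma": {"crow_flies": 25, "driving": 37},
    "Brinnon-Redmond (no ferry)": {"crow_flies": 35, "driving": 130},
    "Brinnon-Redmond (with ferry)": {"crow_flies": 35, "driving": 75},
}

for name, d in routes.items():
    ratio = d["driving"] / d["crow_flies"]
    print(f"{name}: {d['driving']} mi driving / {d['crow_flies']} mi crow-flies = {ratio:.1f}x")

# Seattle-Tacoma comes out around 1.5x, probably close to the average case.
# Brinnon-Redmond without the ferry is ~3.7x - an input that performs far
# worse than the average case, i.e. pathological in the CS sense.
```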
there seems to be a recurring pattern of you responding to me, making up shit I didn’t actually say, and then nitpicking about it. recently you accused me of “trying to both-sides Nazis”. please stop doing that.
- Comment on Microsoft mandates a return to office, 3 days per week 2 months ago:
We’re seriously citing a population of 900 people on the Olympic Peninsula as somehow central to the RTO order?
I said “for a pathological example”
if you don’t know what that term means, you can look it up.
- Comment on I Hate My Friend: The chatbot-enabled Friend necklace eavesdrops on your life and provides a running commentary that’s snarky and unhelpful. Worse, it can also make the people around you uneasy. 2 months ago:
But I’m still able to use it, so.
yeah. except when you’re not.
because this “I can do whatever I want” Ron-Swanson-wannabe brand of libertarianism is very predictable.
if you go to a dinner party and the host notices your Spyware Amulet and says “turn that off or leave my house” would you respect their property rights? without pissing and moaning about it?
if a bar or restaurant banned them (like happened with Google Glass) would you respect that rule as well?
if you were on a date, and your date noticed and said “that’s kinda creepy, would you mind turning it off?” would you do it? or would you start ranting about how it’s not infringing on your date’s rights?
- Comment on I Hate My Friend: The chatbot-enabled Friend necklace eavesdrops on your life and provides a running commentary that’s snarky and unhelpful. Worse, it can also make the people around you uneasy. 2 months ago:
yeah, no, we still disagree. I think you are missing the point completely, and continually.
general protip: if the conversation is about some behavior being creepy or weird or against social mores, and you jump in talking about the legality of it, you are missing the point, and also contributing to the creepiness.
for another example, upskirt photography was legal in the US until 2004 (at least at the federal level, state laws seem to have trickled in around the same timeframe)
hop in a time machine back to 2000, and imagine there’s a digital camera that’s marketing itself as being very easy to attach to your shoe in order to take surreptitious upskirt photos.
people say “wow that’s a fucking creepy product” and you jump in to say that technically it’s not illegal, and people have the right to attach cameras to their shoes. and if a woman is wearing a skirt in a crowd of people, and sees a guy with a camera on his shoe, she has the right to walk away from him. that is technically true, and also completely misses the actual point.
if you think upskirt photos are a bad analogy, here’s a reddit thread from 2 weeks ago about a gynecologist wearing the “Meta Ray-Ban” sunglasses that have a built-in camera.
- Comment on I Hate My Friend: The chatbot-enabled Friend necklace eavesdrops on your life and provides a running commentary that’s snarky and unhelpful. Worse, it can also make the people around you uneasy. 2 months ago:
“data is the new oil”
most people keep their phones in their pockets, which would ruin audio quality for 24/7 listening, and Apple and Android are able to restrict app permissions as well to prevent it.
VC money doesn’t care about whether normal people actually want a device like this. what they’re really after is “we’re collecting a bunch of user-specific data that no one else has, that we can sell to people who think it’ll help them do better ad targeting (among other things)”
- Comment on I Hate My Friend: The chatbot-enabled Friend necklace eavesdrops on your life and provides a running commentary that’s snarky and unhelpful. Worse, it can also make the people around you uneasy. 2 months ago:
people have the right to do things you personally disapprove of
meanwhile, literally in the headline:
Worse, it can also make the people around you uneasy.
no one is saying you don’t have “the right” to wear this Spyware Pendant in your one-party consent state.
people are saying it’s creepy and you’re jumping in to defend it with “well, technically, it’s not illegal, depending on state law”. you’re just completely missing the point.
this is like if someone wrote an article about how people are annoyed by a coworker microwaving fish in the office cafeteria, and you chimed in with “well they can simply quit and find a different job where people don’t microwave fish at the office”.
- Comment on Microsoft mandates a return to office, 3 days per week 2 months ago:
Puget Sound-area employees: If you live within 50 miles of a Microsoft office, you’ll be expected to work onsite three days a week by the end of February 2026.
“return to office” mandates are always, always, always a form of stealth layoff.
people structure their lives around their commute (or lack thereof). if you can work from home and don’t have to go to the office like it’s 2019, it opens up a bunch of places to live that wouldn’t be feasible otherwise.
this will force a bunch of employees into godawful commutes, or require them to move to be closer to the office. that’ll be relatively easy for younger employees who most likely rent an apartment and don’t have kids, but much harder for older / more experienced people who own houses, have kids, have a partner with their own job, etc. lots of people will just quit instead - constructive dismissal.
also, I suspect many people who aren’t familiar with the Seattle area will read “50 miles” and think “about an hour’s drive”…lmao. 50 miles as the crow flies, in Seattle’s geography, can be a multi-hour drive, possibly including a ferry ride, before considering traffic delays. for a pathological example, Brinnon to Redmond is 35 miles in a straight line, but 130 miles driving distance, or 75 miles driving distance if you take a ferry. (and there can be a multi-hour wait just to drive on to the ferry during peak times)
even if you constrain it to 50 miles driving distance - Tacoma to Redmond is 43 miles driving distance according to Google. if you ask it for driving directions and specify “arrive at 9:30am” you get an estimate of “typically 1 hr to 2 hr 30 min”. public transit takes 2 hours, and that’s assuming you’re leaving directly from downtown Tacoma.
- They thought they were making technological breakthroughs. It was an AI-sparked delusion. (edition.cnn.com) Submitted 2 months ago to technology@beehaw.org | 24 comments
- Comment on Tech CEOs Praise Donald Trump at White House Dinner 2 months ago:
from The Needling, Seattle’s local Onion-esque satire site:
Bill Gates Compliments Trump on Wife Who Didn’t Ditch Him Just Because He’s in the Epstein Files