jarfil
@jarfil@beehaw.org
Programmer and sysadmin (DevOps?), wannabe polymath in tech, science and the mind. Neurodivergent, disabled, burned out, and close to throwing in the towel, but still liking ponies 🦄 and sometimes willing to discuss stuff.
- Comment on Android's new anti-theft features 10 hours ago:
“AI” used to mean “whatever we don’t fully understand yet”. A lot of processes have walked the path from “fantasy” to “AI” to “algorithm”. Doesn’t need to be non-deterministic, the original tic-tac-toe playing software was “AI” at the time.
Until we get some AGI, the term “AI” will remain a moving technological target, and a static marketing target.
- Comment on Google is redesigning its search engine — and it’s AI all the way down 10 hours ago:
Yeah, they call it A-Z testing, because A/B wasn’t enough, and AIs can fill A to Z cases with ease.
- Comment on Google is redesigning its search engine — and it’s AI all the way down 19 hours ago:
Yup.
They also correctly identify the function of LLMs as a glue between siloed AIs. We have barely seen the beginning of that, but as the AI race continues, it seems likely that some LLM models will be created that will have less human language, and more “interop language”. Where nowadays LLMs can be somewhat probed for words and relationships, we’ll have zero chance to probe an LLM using tokens that are part of some made up (by the AIs) interop language. Black boxes inside black boxes.
A naive approach will be to “democratize AI”, and that will surely be better than centralized AIs responding to every query… but won’t solve the deepening of inscrutability.
One point made me chuckle: when they showed the graphs for qualitative jumps above a certain network size. Recently someone commented about the “diminishing returns” and “asymptotic growth” of making an LLM larger and larger… but that’s exactly what these models showed: diminishing returns all the way up to a point… followed by a sudden exponential jump until the next asymptote. The truth is we don’t know where the asymptotes and exponential jumps lie; we don’t even have a remote hypothesis about it.
- Comment on Google is redesigning its search engine — and it’s AI all the way down 1 day ago:
It’s coming, and more. Very good video, with good points. Slightly outdated already, with AutoGPT being a thing. What’s coming is going to be orders of magnitude more than what they predicted in that video.
- Comment on Google is redesigning its search engine — and it’s AI all the way down 1 day ago:
AI can also understand extra weights for hand picked sources of truth. Whether you then agree with the choices of whoever is doing the hand picking, is a separate matter.
- Comment on Android's new anti-theft features 1 day ago:
It’s kind of both Google’s and the manufacturers’ responsibility. Google has made a Dynamic System Updates feature available:
source.android.com/docs/…/dynamic-system-updates
developer.android.com/topic/dsu
…but it requires manufacturer support to allow adding custom keys.
- Comment on Android's new anti-theft features 1 day ago:
It isn’t clear which of these features use Google servers and which ones don’t. “Find My Device” definitely does, and has no place in AOSP. If they’re actually using AI to compare the phone’s state with some tracked “habitual” behavior, that may also have no place in AOSP, but who knows.
- Comment on Android's new anti-theft features 1 day ago:
The real solution is a heuristic analysis of the phone’s gyroscope and accelerometer data.
Marketing calls that “AI”.
- Comment on Has Generative AI Already Peaked? - Computerphile 3 days ago:
That’s not how watching the video or reading the paper works either.
Whatever.
- Comment on Has Generative AI Already Peaked? - Computerphile 4 days ago:
It’s a “push as much data as a baby gets to train its NN” step, which is several orders of magnitude more, and more focused, than any training dataset in existence right now.
Even with diminishing returns, it’s bound to get better results.
- Comment on "X": Far-right conspiracy theorists have returned in droves after Elon Musk took over the former Twitter, new study says 4 days ago:
The biggest problem with Ukraine… is that they aren’t fully detached from Nazis:
- During WW2, Ukraine was allied with the Nazis and fascists, helping them exterminate Poles.
- 21st-century Ukraine still uses Nazi symbology, the fascist salute, and a fascist hymn; has established national support for WW2 Nazi combatants; and even its national shield is a fascist remnant.
All of that has nothing to do with the Russian invasion… but it does give Russia’s propaganda machine an awesome excuse. It’s just too easy to hook people with some actual facts, then get them to take a leap of faith and fall straight into full propaganda… and Russia knows it.
Israel and Palestine is a particularly juicy case, where there are really shitty groups coming from both sides, ending up like an “all you can eat” buffet for every propaganda machine out there. No matter what narrative one wants to spin, chances are they’ll find a latch point in the Israel vs. Palestine conflict, even contradictory ones for different audiences.
- Comment on Has Generative AI Already Peaked? - Computerphile 4 days ago:
Current research points to memristors, which can work both as memory cells and as weights in an n×m grid representing a fully connected n->m layer that executes in 1 clock. I forgot which company was showing prototypes since pre-COVID… and now Google is so full of wannabes that I can’t seem to find it, oh well.
Cerebras is at the limit of SRAM, that’s true.
Spintronics could be the next step, but seems to be way less ready for production.
Higher dimensionality would be nice, but even at 2D, being able to push multiple processes at once, through multiple n×m layers, would already give those 5 orders of magnitude, at least for inference. Since training also involves an inference step, it would speed that too, just not as much.
Self-training would be the next step after that… I don’t think I’ve seen research in that regard, but maybe I’ve just missed it.
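To make the crossbar idea above concrete: an n×m memristor grid computes a fully connected n->m layer as a single analog matrix-vector product, where each cell’s conductance is a stored weight. A minimal software sketch of that same operation (function name and the tiny 3->2 sizes are illustrative, not from any real device):

```python
# Simulate an n x m memristor crossbar: each output column sums the
# input voltages scaled by each cell's conductance (the stored weight).
# In hardware this whole loop collapses into "one clock".

def crossbar_layer(weights, inputs):
    """weights: n x m grid of stored conductances; inputs: n voltages."""
    n = len(inputs)
    m = len(weights[0])
    return [sum(weights[i][j] * inputs[i] for i in range(n)) for j in range(m)]

# Hypothetical 3 -> 2 layer: each output is a weighted sum of all 3 inputs
w = [[0.5, -1.0],
     [1.0,  0.0],
     [0.25, 2.0]]
x = [1.0, 2.0, 4.0]
print(crossbar_layer(w, x))  # → [3.5, 7.0]
```

The point of the physical version is that the O(n×m) work visible in the loop happens simultaneously across all cells, which is where the speedup claim comes from.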
- Comment on Has Generative AI Already Peaked? - Computerphile 5 days ago:
The orders of magnitude will come from the RAM running a whole layer at once in “a single clock”, without the need for a processor to execute any of it. It’s conceivable that multiple layers could be written/“programmed” into neuromorphic RAM, then a processor could just write the inputs, send an execute, move data from outputs to the next inputs, and repeat for all layers.
For example, an NVIDIA A100 goes up to 1,200 INT8 TOPS with 80GB of RAM at 1500MHz… but if the RAM could execute a neural network directly, that could raise it to 80G×1.5G = 120,000,000 INT8 TOPS, or 5 orders of magnitude more.
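The back-of-envelope above can be checked directly; the key assumption (as in the comment) is that every RAM cell applies one weight per clock tick:

```python
# Verify the orders-of-magnitude claim: cells * clock vs. quoted TOPS.
# Figures as quoted in the comment (A100-class hardware).
import math

processor_tops = 1_200                # quoted INT8 TOPS
ram_cells = 80e9                      # 80 GB, one op per byte-cell assumed
clock_hz = 1.5e9                      # 1500 MHz
in_memory_tops = ram_cells * clock_hz / 1e12  # ops/s -> TOPS

print(in_memory_tops)                               # → 120000000.0
print(math.log10(in_memory_tops / processor_tops))  # → 5.0
```

So the 5-orders-of-magnitude figure follows from the stated assumptions; whether a real neuromorphic RAM could sustain one weighted op per cell per clock is the open question.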
- Comment on Elon Musk’s X can’t invent its own copyright law, judge says 5 days ago:
This seems to have been addressed by the judge:
By attempting to exclude Bright Data from accessing public X posts owned by X users, X also nearly “obliterated” the “fair use” provision of the Copyright Act, “flouting” Congress’ intent in passing the law, Alsup wrote.
- Comment on Elon Musk’s X can’t invent its own copyright law, judge says 5 days ago:
Now do Reddit 😈
- Comment on Elon Musk’s X can’t invent its own copyright law, judge says 5 days ago:
Copyright can only be waived in the US by dedicating the work to the Public Domain. In most other countries, it can only be assigned or licensed to someone.
The “standard practice in all forum software since practically forever” has been to include a very broad use license on the work, without switching the copyright holder, in order to protect the forum owner from liability.
The GDPR is about a very broad take on “privacy”, where the rights of “access, modification, and removal” get extended to any “personal information”, no matter whether it’s “personally identifiable” or not.
- Comment on Gabe Newell, the Man Behind Steam, Is Working on a Brain-Computer Interface 5 days ago:
Neuralink has a technology that specifically addresses two of the main issues with BCI: data density, and implant effective duration.
There are more issues, but it addresses those two in particular, which is something quite interesting to see, and can be turned into patents that can be sold to other BCI initiatives.
The rest of Musk is… well, he’s kind of an “unstable genius”, with enough money to blow on random moonshots, marketing stunts, and random publicity. Honestly, if I had his money, I’d probably do the same: build a few core businesses, then go on tangents to see what sticks to the wall. It can all still be seen under the general theme of “colonizing Mars” though, which is a guiding starshot as good as any: Hyperloop and The Boring Company have kind of exhausted what can be done on Earth, Tesla is a borderline failure, SpaceX, Starlink, and indoor farming work pretty well, and X is an experiment in social manipulation.
- Comment on "X": Far-right conspiracy theorists have returned in droves after Elon Musk took over the former Twitter, new study says 5 days ago:
It’s not just to destabilize “American” politics, it’s a series of worldwide campaigns to destabilize all information flow, to sow doubt and confusion among everyone, then out of the blue present an aligned front to push a certain narrative.
If people are kept in a “flux state of distrust”, they’re easier to convince when suddenly a bunch of their sources agree on some point, “it must be true if conflicting sources suddenly say the same”.
- Comment on "X": Far-right conspiracy theorists have returned in droves after Elon Musk took over the former Twitter, new study says 5 days ago:
TikTok is a weird beast. It can at the same time show the destruction in Gaza, Israeli soldiers poking fun at it, ASMR videos, mindless looping footage, fake AI idols, underage girls asking for payment from strangers, and siphon engagement data to train Chinese propaganda bots.
It’s the closest thing to “shove everything into a bowl, then shake”…
- Comment on Has Generative AI Already Peaked? - Computerphile 1 week ago:
Rule of headlines? 🙄
No, it hasn’t peaked.
- A simple path forward, is to go from classifying single elements of training data, to classifying multiple elements and their relationship in the training data.
- Slightly less simple, is to gather orders of magnitude more data, by just hooking the input to an IRL robot.
- Another step, is for the NN to control the robot and decide which parts of the data require refinement, and focus on that.
There are a lot of ways to improve data acquisition still on the table; it isn’t going to stop at creating large corpora and having humans fine-tune them.
- Comment on Has Generative AI Already Peaked? - Computerphile 1 week ago:
Neuromorphic hardware is going to jump many orders of magnitude over classic hardware. When we get a RAM that can execute multiple layers in parallel at once, per clock tick, we’ll see whole AI ecosystems cooperating to get a solution in a fraction of the time a single modern NN would take.
- Comment on Has Generative AI Already Peaked? - Computerphile 1 week ago:
You will need an LLM to tell that apart, so… 🤷
- Comment on Has Generative AI Already Peaked? - Computerphile 1 week ago:
No.
Whatever the next big thing is, money will pull in the scammers who will turn it into the next cancer.
It’s always been like that.
- Comment on How the Great Firewall of China Detects and Blocks Fully Encrypted Traffic 1 week ago:
automatically train one that can “learn” to generate similar-looking data by just being fed a bunch of files to emulate
Sounds like a job for a “compression prompt” for ChatGPT… [and thus, the AI wars began]
- Comment on Interactive Loading Screens - High Hell 1 week ago:
The best loading screen is: none.
Load levels in chunks, preload the first chunk of the next level before the player reaches the end of the previous one, and either have a smooth transition, or at most show a skippable cutscene.
Loading screens are for poorly developed games.
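The preloading idea above can be sketched in a few lines: while the player is still in the current level, a background thread loads the first chunk of the next one, so it’s ready by the time the transition happens. All names here, including the fake load_chunk(), are illustrative:

```python
# Background preloading sketch: kick off chunk loads early, then
# collect them when needed instead of showing a loading screen.
import threading
import time

def load_chunk(name):
    time.sleep(0.05)  # stand-in for disk I/O and decompression
    return f"{name}-data"

class Preloader:
    def __init__(self):
        self._results = {}
        self._threads = {}

    def start(self, name):
        """Begin loading a chunk in the background."""
        def work():
            self._results[name] = load_chunk(name)
        t = threading.Thread(target=work)
        self._threads[name] = t
        t.start()

    def get(self, name):
        """Return the chunk, waiting only if it hasn't finished yet."""
        self._threads[name].join()  # ideally a no-op by the time we arrive
        return self._results[name]

pre = Preloader()
pre.start("level2-chunk0")      # kicked off while level 1 is still playing
# ... gameplay continues, hiding the load time ...
print(pre.get("level2-chunk0"))  # → level2-chunk0-data
```

If the preload finishes before the player reaches the transition, the join is instant and no loading screen is ever needed; if not, the wait shrinks to only the remaining load time.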
- Comment on Men Use Fake Livestream Apps With AI Audiences to Hit on Women 2 weeks ago:
victim blaming across choices and especially towards women and POC individuals
I don’t know about the US; here in Spain love scams and fame scams are a thing across all genders and orientations, with low reporting of scams in general being attributed mainly to victims’ shame at having fallen for one.
People like to think they’re smarter than most other people, and the more sure they are of that, the easier they are to fool. I think it’s no wonder they don’t want to acknowledge it afterwards.
everyone in this case is trying to take advantage of someone
We don’t know this, and we shouldn’t assume this of the victim.
I don’t see how else it could work… but I’m open to hearing alternatives?
we must acknowledge the existing structure of power and how it silences certain people and also blames them
Fair.
A relevant aspect I can think of, is the part about it being fine to lie to have sex between “consenting” adults. How can there be consent, when one or both parties are misleading the other? Sounds like an officially codified permission to abuse.
I don’t get what people see in fame or clout; to me it looks like lying plus an appeal to authority. The fact that anyone would pursue either, or be influenced by it, seems like an ingrained predisposition to being abused (by authority figures). Not sure how much of that is inherent and how much social.
A clearly perverse incentive in the whole scheme, is money… but that’s kind of unavoidable in any money based society.
The elephant in the room, is sex itself: how can it, on one side, make someone pay and lie for it, and on the other side be used as a bargaining chip. Is it a purely hormonal catalyst for the whole scheme, or a proxy for a power play?
- Comment on Twitter co-founder Biz Stone joins board of Mastodon's new US nonprofit | TechCrunch 2 weeks ago:
I don’t know about literature, but both the lawyers and notaries involved, warned everyone of the risks. I was also an idealist and skeptical of their advice at the time… then spent several years trying, to no avail, to make people understand what was at stake… until the warnings became reality… and again, and again, and again.
If it’s not in the literature, then it should be.
- Comment on Men Use Fake Livestream Apps With AI Audiences to Hit on Women 2 weeks ago:
I don’t think that someone’s behavior choice is comparable to their clothing choice, and I see much more than a single problem in this whole situation. It also isn’t any inherent weakness or any sort of coercion that is getting exploited; everyone is free to leave at any moment.
no one deserves to be taken advantage of
Agreed.
The problem is that everyone in this case is trying to take advantage of someone, they just differ in what they want:
- one wants money
- another wants sex
- last one wants clout and money
We can agree that the main instigator is the seller, taking advantage of the others, but that doesn’t mean the others are completely innocent; they can’t be, or the whole scheme wouldn’t be possible in the first place.
(in a sane world, I’d expect the only one to get scammed would be the buyer… but I know that groupies are a real thing)
I think we should ask why each one of them wants what they want, and why they are ready to jump at the opportunity of taking advantage of someone else in order to get it.
Then we could ask what could be done to prevent the whole situation from being possible, at every level.
PS: in some jurisdictions, there is a “funny” situation where lying to get sex is a felony up to a certain age… but once it’s between “consenting adults”, lying to get sex is perfectly fine! 😒 We could also take a look at that: how it is possible to give consent while being lied to.
- Comment on Twitter co-founder Biz Stone joins board of Mastodon's new US nonprofit | TechCrunch 2 weeks ago:
I’ve had family fund one, worked for some as a contractor, and had friends work for some more. They’re all bankrupt now, and all of them for the same reason I’ve already explained.
It’s worse than working for someone else, because they’re funded by the workers themselves. When a worker’s coop goes down, workers not only lose their jobs, but also all the capital they’ve put into it. Some fall into a sunken cost fallacy, try to refloat it… only to end up losing even more capital, often get in debt, and also lose their jobs.
When an owner takes advantage of a worker, at least the worker can look for another job without having to pay for the privilege.
Coops work well when members are business-savvy, and when they have a very limited scope with minimal capital investment, allowing members to leave them at any time with minimal loss.
- Comment on Men Use Fake Livestream Apps With AI Audiences to Hit on Women 2 weeks ago:
Honestly, I’m having a hard time not blaming everyone in this.
- Seller: scamming wannabe scammers, while actively spreading and promoting toxic ideas.
- Buyers (1st level victims): wannabe scammers, trying to scam the final victim.
- Victims (2nd level): being so shallow as to fall for a fame scam.
As the saying goes: “you can’t scam an honest person”… but a dishonest one, oh boy, you can scam them over and over and over.