Microsoft, Google, and Apple are all quietly putting these NPUs, and the software infrastructure to use them, in place in their operating systems to do on-device classification of content: Windows Recall, Google SafetyCore, and Apple Intelligence. These services are obsequiously marketed as being for your benefit, yet all of them are privacy and surveillance nightmares. When the security-breaking features of these systems are mentioned, each company touts some convoluted workaround to justify the tech.
The real reason is to watch everything you do on your device, use the tensor hardware you may or may not know you purchased to analyze the data locally, then export and sell that “anonymized” information to advertisers and the government. All while cryptographically tying the data to your device and the device to you, for “security”. This enables mass surveillance, digital rights management, and targeted advertising on a scale previously unseen.
Each of these providers already scans everything you upload to their services:
- Microsoft has been caught using your stored passwords to decrypt uploaded archives.
- Apple developed client-side scanning for CSAM before backlash shut it down, and they already scan your photos locally with a machine-learning classification algorithm whether you like it or not. You can’t turn it off.
- Google recently implemented local content scanning with SafetyCore to “protect” you from unwanted content.
I would rather saw off my own nuts with a rusty spork than willfully purchase a device with one of these NPUs on board. I fear that in the next 5-10 years, you won’t be able to avoid them.
**Do you trust the rising fascist regimes and their tech lackeys in America and the UK to use this information morally and responsibly?**
Do you really believe that these features, which no one asked for and which you cannot disable, are for your benefit?
corroded@lemmy.world 1 day ago
I have to wonder if NPUs are just going to eventually become a normal part of the instruction set.
When SIMD was first becoming a thing, it was advertised as accelerating “multimedia,” as that was the hot buzzword of the 1990s. Now, SIMD instructions are used everywhere, any place there is a benefit from processing an array of values in parallel.
I could see NPUs becoming the same. Developers start using NPU instructions, and the compiler can “NPU-ify” scalar code when it thinks it’s appropriate.
NPUs are advertised for “AI,” but they’re really just specialized math coprocessors. I don’t really see this as a bad thing to have. Surely there are plenty of other uses.
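To make the compiler angle concrete, here’s a minimal C sketch (the saxpy loop is a generic textbook example, not from any particular project) of scalar code that compilers already silently “SIMD-ify” at -O3; a hypothetical “NPU-ify” pass would be the same idea aimed at different hardware:

```c
#include <stddef.h>

/* A plain scalar loop: y = a*x + y over n floats. Nothing here says
 * "SIMD", yet gcc/clang at -O3 will emit packed SSE/AVX instructions
 * for it. The hope is that compilers eventually learn the same trick
 * for NPU hardware. */
void saxpy(float *restrict y, const float *restrict x, float a, size_t n)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```

Compile with `gcc -O3 -march=native -S` and the emitted assembly uses the vector registers without the source ever mentioning them.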
olafurp@lemmy.world 3 hours ago
It’s tricky to use in programming for non-neural-network math, though. I can see it being used in video compression and decompression, or for some very specialised video game math.
Video game AI could be a big one, though, where difficulty would be AI-based instead of just stat modifiers.
Dudewitbow@lemmy.zip 1 day ago
The problem that (local) AI has at the current moment is that it’s not just a single type of compute, and because of that, its usefulness gets fragmented across the pool of what you can do with it.
On the surface level, “AI” is a mixture of what are essentially FP16, FP8, and INT8 accelerators, and different implementations have been using different ones. NPUs are basically INT8-only, while the GPU-heavy implementations are FP-based, making them not inherently cross-compatible.
It forces devs to reserve the NPU for small things (e.g. background blur on a camera feed), as there isn’t any consumer-level chip with a massive INT8 coprocessor except for the PS5 Pro (300 TOPS of INT8, versus around 50 TOPS on laptop CPUs, so a completely different league; the PS5 Pro uses it to upscale).
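As a rough sketch of why those paths don’t mix (illustrative C, not any vendor’s actual API): an INT8 accelerator never touches floats at all; the data is quantized on the way in and rescaled on the way out, so an FP16 model can’t simply be pointed at an INT8 NPU:

```c
#include <stdint.h>
#include <stddef.h>

/* INT8 dot product: the core op an INT8 NPU accelerates. Inputs are
 * already-quantized int8 values; products are accumulated in 32-bit
 * integers to avoid overflowing the narrow type. */
int32_t dot_int8(const int8_t *a, const int8_t *b, size_t n)
{
    int32_t acc = 0;
    for (size_t i = 0; i < n; i++)
        acc += (int32_t)a[i] * (int32_t)b[i];
    return acc;
}

/* Back to float afterwards: scale_a and scale_b are the per-tensor
 * scales chosen when the original float data was quantized. */
float dot_dequant(int32_t acc, float scale_a, float scale_b)
{
    return (float)acc * scale_a * scale_b;
}
```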
addie@feddit.uk 1 day ago
SIMD is pretty simple really, but it’s been 30 years since it became a standard-ish feature in CPUs, and modern compilers are “just about able to sometimes” use SIMD if you’ve got a very simple loop with fixed endpoints. It’s one of those things where you might still fall back to writing assembly; the FFmpeg developers had an article not too long ago about getting a 10% speed improvement by writing all the SIMD by hand.
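For a feel of what “by hand” means (FFmpeg actually writes raw assembly; intrinsics are the gentler version of the same idea), this hand-vectorizes a simple saxpy-style loop (y = a*x + y) four floats at a time:

```c
#include <immintrin.h>
#include <stddef.h>

/* Hand-vectorized saxpy using SSE intrinsics: 4 floats per iteration,
 * with a scalar loop to mop up the n % 4 tail elements. */
void saxpy_sse(float *y, const float *x, float a, size_t n)
{
    __m128 va = _mm_set1_ps(a);            /* broadcast a into all lanes */
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 vx = _mm_loadu_ps(x + i);
        __m128 vy = _mm_loadu_ps(y + i);
        vy = _mm_add_ps(_mm_mul_ps(va, vx), vy);
        _mm_storeu_ps(y + i, vy);
    }
    for (; i < n; i++)                      /* scalar tail */
        y[i] = a * x[i] + y[i];
}
```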
Using an NPU means recognising algorithms that can be broken down into parallelizable, networkable steps with information passing between cells. Basically, you’re playing a game of TIS-100 with your code. It’s fragile and difficult, and there’s no chance that your compiler will do that automatically.
Best thing to hope for is that some standard libraries can implement it, and then we can all benefit. It’s an okay tool for ‘jobs that can be broken down into separate cells that interact’, so some kinds of image processing, maybe things like liquid flow simulations. There’s a very small overlap between ‘things that are just algorithms that the main CPU would do better’ and ‘things that can be broken down into many many simple steps that a GPU would do better’ where an NPU really makes sense, tho.
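A toy sketch of that ‘separate cells that interact’ shape (plain C for clarity; a real NPU would be driven through a vendor graph compiler, not code like this): a 1D box blur where every output cell depends only on a fixed neighbourhood of input cells, so all the cells can be computed in lockstep:

```c
#include <stddef.h>

/* Each output cell reads a fixed 3-cell neighbourhood of the input
 * and writes one value, with no dependence on other output cells.
 * That's exactly the shape that maps onto a grid of many simple
 * parallel units instead of one fast scalar core. */
void box_blur_1d(float *dst, const float *src, size_t n)
{
    for (size_t i = 1; i + 1 < n; i++)
        dst[i] = (src[i - 1] + src[i] + src[i + 1]) / 3.0f;
    if (n > 0) {              /* pass the edge cells through unchanged */
        dst[0] = src[0];
        dst[n - 1] = src[n - 1];
    }
}
```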
pftbest@sh.itjust.works 19 hours ago
I agree. We should push more to get open and standardized APIs for these accelerators, better drivers, and better third-party software support. As long as the manufacturers keep them locked down and proprietary, we won’t be able to use them outside of niche copilot features no one wants anyway.