Comment on Trump’s brand of US capitalism faces ‘socialist’ backlash from conservatives
brucethemoose@lemmy.world 1 week ago
Intel has a good process and good-enough packaging tech. They have good IP to combine.
They have all the pieces they need to be alright if they can just stop the internal corporate Game of Thrones, and stop footgunning themselves.
resipsaloquitur@lemmy.world 1 week ago
I’m no expert, but it seems like their fab and design have lagged.
brucethemoose@lemmy.world 1 week ago
Yes, they have. They are late.
But 18A and 14A are still good, considering the capacity shortages everyone’s facing.
In terms of designs, Arc is good, the small cores are good, the big cores are… not as good, but fine as long as they aren’t clocked to the moon. They canceled a lot of stuff like Gaudi and Xe HPC, but that’s water under the bridge now.
resipsaloquitur@lemmy.world 1 week ago
I think “server” is rapidly becoming GPU and “user” is becoming power-efficient (but still fast) Arm chips.
I wouldn’t want to go down with the Xeon and desktop/laptop ship.
I know how critical I sound and there’s a case for Intel to make sensitive (read: defense) chips domestically, but they need a good kick in the pants, not sweetheart deals.
brucethemoose@lemmy.world 1 week ago
On the contrary, I think inference is going on-device more in the future, but ‘users’ will still need decent CPUs and GPUs. Intel is well set up for this: they have good CPU, GPU, and NPU IP.
Intel can go ARM if they want, no problem, just like AMD can (and almost did, with its shelved K12 ARM core). They could theoretically preserve most of their core design and still switch ISA.
Servers will still need CPUs for a long time.
As for GPU compute, we are both in a bubble, and at several forks in the road:
Is bitnet ML going to take off? If it does, that shifts the advantage to almost cryptominer-like ASICs, as expensive matrix multiplication no longer matters for inference (see the sketch at the end of this comment).
Otherwise, what about NPUs? Huawei is already using them, and even training good production models with them. Intel can try their hand at this game again if loads start shifting away from CUDA.
Otherwise, they still have a decent shot at the CUDA ecosystem via ZLUDA and their own frameworks. Training and research will probably forever be Nvidia (and some niches like Cerebras), but still.
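Rough illustration of the bitnet point, in case it’s not obvious why the ASIC angle matters: with BitNet-style ternary weights (-1/0/+1), a matrix-vector product collapses into additions and subtractions, so big multiplier arrays stop being the bottleneck. A minimal NumPy sketch (the matrix sizes and names are made up, just to show the arithmetic):

```python
import numpy as np

# Hypothetical ternary (BitNet-style) weight matrix: every entry is -1, 0, or +1.
rng = np.random.default_rng(0)
W = rng.integers(-1, 2, size=(4, 8))   # 4 outputs, 8 inputs
x = rng.standard_normal(8)             # activations stay higher precision

# Ordinary dense matmul: one multiply per weight.
y_matmul = W @ x

# Ternary trick: multiplies disappear. Each output is just
# (sum of activations where w == +1) - (sum where w == -1).
y_addonly = np.array([x[row == 1].sum() - x[row == -1].sum() for row in W])

assert np.allclose(y_matmul, y_addonly)
print(y_addonly)
```

In hardware terms, an inference chip for that kind of model mostly needs adders and memory bandwidth, which is a lot closer to cryptominer territory than to a general-purpose GPU.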