I'm guessing you have never done business in China.
DarkCloud@lemmy.world 2 days ago
Neo-Liberalism: Let's hire anyone from anywhere, just the best candidates no matter what, with no values beyond who we can extract the most from. Let's also take money from the government while lobbying it to defund itself and the country's services so it hands us even more money.
Oh wow! How does China steal our tech?! Why wasn’t the government funding education and security to protect us?!?
MothmanDelorian@lemmy.world 2 days ago
MissJinx@lemmy.world 2 days ago
Do you want my boss to ask me who I voted for and who I pray to? Are you crazy? That's not their business. They HAVE to hire based on the value candidates will bring to the company.
motor_spirit@lemmy.world 2 days ago
You’ve distilled your own ignorance, not reality lol
TropicalDingdong@lemmy.world 2 days ago
It's only Ignorance if it comes from the Ignoramus region of France. Otherwise it's just called sparkling stupidity.
DragonTypeWyvern@midwest.social 2 days ago
Downvoted for misinformation, Ignoramus is in Italy
theunknownmuncher@lemmy.world 2 days ago
DeepSeek is not stolen tech; it was trained using novel techniques that Western companies weren't using.
deranger@sh.itjust.works 2 days ago
I thought the innovative part was using more efficient code, not what it’s trained on.
theunknownmuncher@lemmy.world 2 days ago
They invented their own reinforcement learning framework called Group Relative Policy Optimization: arxiv.org/abs/2405.20304
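For the curious, the core idea is roughly this: instead of training a separate critic/value model the way standard PPO does, GRPO samples a group of completions per prompt and uses the group's own reward statistics as the baseline. A minimal sketch of that step (function names and details are my simplification, not the paper's actual code):

```python
import torch

def grpo_advantages(rewards: torch.Tensor) -> torch.Tensor:
    """Group-relative advantages for G completions of ONE prompt.

    GRPO's key trick: the baseline is the group's own mean reward,
    so no separate learned value/critic network is needed.
    """
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

def grpo_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    # PPO-style clipped surrogate objective, applied per sampled completion.
    ratio = torch.exp(logp_new - logp_old)
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps)
    return -torch.min(ratio * advantages, clipped * advantages).mean()

# e.g. 4 completions of the same prompt, scored by some reward function
adv = grpo_advantages(torch.tensor([1.0, 0.5, 0.0, 0.9]))
```

Dropping the critic model is a big part of the efficiency claim: you only keep one large network in memory during RL training instead of two.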
Sanctus@lemmy.world 2 days ago
Yeah, the original comment in this chain better describes US telcos and shit, not this particular instance.
kreskin@lemmy.world 1 day ago
That's capitalism's dark secret: it's only innovative when it has to be.
quokka1@mastodon.au 1 day ago
@deranger @theunknownmuncher The US trying to stifle Chinese progress and stop chip exports has had exactly the effect anyone could have predicted: China is making leaps and bounds in all sorts of tech areas, innovating around the obstacles.
Fungah@lemmy.world 1 day ago
That's what they said.
Like, you can compile better or more diverse datasets to train a model on. But you can also have better training code running on the same dataset.
The model is what the code poops out after it's eaten the dataset.
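A toy illustration of that split (everything here is made up for illustration): same dataset in, different training code, different model out.

```python
from dataclasses import dataclass

@dataclass
class Model:
    w: float
    b: float

def train(dataset: list[tuple[float, float]], lr: float, epochs: int) -> Model:
    # "The code" is this loop; "the dataset" is the (x, y) pairs;
    # "the model" is just the numbers that fall out the other end.
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in dataset:
            err = (w * x + b) - y   # prediction error on one example
            w -= lr * err * x       # plain gradient-descent updates
            b -= lr * err
    return Model(w, b)

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
m1 = train(data, lr=0.05, epochs=10)   # one training procedure...
m2 = train(data, lr=0.01, epochs=200)  # ...same data, different code, different model
```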