Can’t you build a great gaming computer for $5000 with a high-end GPU and CPU?
Why would you spend millions on a supercomputer?
Submitted 6 hours ago by yeni@sh.itjust.works to [deleted]
Galaxy collisions, protein folding, mechanical design and much more: big simulations of real-world physics and chemistry that require massively parallel computation, with insane numbers of calculations spread across many machines that can pass data to each other very quickly.
Every supercomputing center has a website where you can read about the research being done on their machines.
No, these can’t be done at similar scale on your desktop. Your PC can’t do that many calculations in a reasonable time. Even on supercomputers, they can take weeks.
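To get a feel for why these blow up, here is a minimal, hypothetical NumPy sketch of one step of a naive gravity simulation (the numbers and integrator are purely illustrative): every body interacts with every other body, so the cost grows with the square of the particle count.

```python
import numpy as np

def gravity_step(pos, vel, mass, dt=1e-3, eps=1e-2):
    """One naive O(N^2) gravity step: every body pulls on every other body."""
    diff = pos[None, :, :] - pos[:, None, :]             # pairwise displacements, shape (N, N, 3)
    dist3 = (np.sum(diff**2, axis=-1) + eps**2) ** 1.5   # softened |r|^3 to avoid divide-by-zero
    acc = np.sum(mass[None, :, None] * diff / dist3[:, :, None], axis=1)
    return pos + vel * dt, vel + acc * dt

# 1,000 bodies -> ~1,000,000 pairwise interactions per step; a galaxy-merger
# simulation uses millions of bodies and millions of time steps, which is why
# it gets split across thousands of nodes that constantly exchange boundary data.
N = 1_000
pos, vel, mass = np.random.randn(N, 3), np.zeros((N, 3)), np.ones(N)
pos, vel = gravity_step(pos, vel, mass)
```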
Here’s a vehicle analogy:
If I can get a hypercar for $3 million, why does a freight train cost $32 million? It’s not like it can go faster, and it’s more limited in what it can do.
In molecular biology, for instance, they are used to calculate and predict protein folding.
The most high-end AMD CPU can’t do that?
A supercomputer is not one computer. It’s a big building filled from floor to ceiling with computers that work together. It wouldn’t have 16 cores; it would be more like thousands of cores all acting as one big computer working on a single computational task.
Oh noooh. The number of permutations needed is mind-boggling, bigger than the shoe collection of Imelda Marcos.
If your gaming computer can do x computations every month, and you need to run a simulation that requires 1000x computations, you can wait 1000 months, or have 1000 computers work on it in parallel and have it done in one month.
Keep in mind that not all workloads scale perfectly. You might have to add 1100 computers due to overhead and other scaling issues. It is still pretty good though, and most of those clusters work on highly parallelised tasks, as they are very well suited for it.
There are other workloads that do not scale at all, like the old joke in programming: “A project manager is someone who thinks that nine women can have a child in one month.”
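This scaling ceiling is usually described by Amdahl’s law. A minimal sketch (the 99% parallel fraction is just an illustrative number, not something from the comments above):

```python
def amdahl_speedup(parallel_fraction, n_workers):
    """Amdahl's law: the serial part of a job caps the overall speedup,
    no matter how many machines you throw at it."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_workers)

# A job that is 99% parallelisable still tops out far short of 1000x on 1000 nodes.
for n in (10, 100, 1000, 10000):
    print(f"{n:>6} workers -> {amdahl_speedup(0.99, n):6.1f}x speedup")
# 10 -> ~9.2x, 100 -> ~50.3x, 1000 -> ~91.0x, 10000 -> ~99.0x
```

So doubling the machine count never quite doubles the throughput, which is why real clusters need a bit of headroom over the naive estimate.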
I’m a PhD student and several of my classmates use computing clusters in their work. These types of computers typically have a lot of CPUs, GPUs, or both. The types of simulations they do are essentially putting a bunch of atoms or molecules in a box and seeing what happens in order to get information which is impossible to obtain experimentally. However, there are plenty of other uses.
The clusters we have would have dozens of these CPUs or GPUs, and users submit jobs to them which run simultaneously. AMD CPUs have better performance than Intel, and Nvidia GPUs have CUDA, which is incorporated into a lot of the software people use for these.
A supercomputer isn’t just a single computer, it’s a lot of them networked together to greatly expand the calculation scaling. Imagine a huge data center with thousands of racks of hardware, CPUs, GPUs and RAM all dedicated to managing network traffic for major websites. A supercomputer is very similar, but instead of being built to handle all the ins and outs and complexities of network traffic, it’s purely dedicated to doing as many calculations as possible for a specific task, such as protein folding as someone else mentioned, or something like Pixar’s render farm, which is hundreds of GPUs all networked together dedicated solely to rendering frames.
With how big and complex the 3D scenes in any given Pixar film are, a single GPU might take 10 hours to calculate the light bounces needed to render a single frame. Assuming a 90-minute run time, that’s ~130,000 frames, which is potentially 1,300,000 hours (or about 150 years) to complete just one full movie render on a single GPU. If you have 2 GPUs working on rendering frames, you’ve now cut that time down to 650,000 hours. Throw 100 GPUs at the render and we’ve cut the time to 13,000 hours, or about a year and a half. Pixar is pretty quiet about their numbers, but according to the Science Behind Pixar traveling exhibit, around the time of Monsters University in 2013 their render farm had about 2,000 machines with 24,000 processing cores, and it took two years’ worth of rendering time to render that movie. I can only imagine how much bigger their render farm has gotten since then.
Source: sciencebehindpixar.org/pipeline/rendering
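The arithmetic in that comment is easy to sanity-check. A tiny sketch (the 10 hours per frame figure and perfect scaling are the comment’s illustrative assumptions, not official Pixar numbers):

```python
# Render-farm arithmetic using the rough figures from the comment above
# (10 hours per frame and perfect scaling are illustrative assumptions).
frames = 90 * 60 * 24            # 90 minutes at 24 fps ≈ 129,600 frames
hours_per_frame = 10

for gpus in (1, 2, 100, 2000):
    total_hours = frames * hours_per_frame / gpus
    print(f"{gpus:>5} GPUs: {total_hours:>12,.0f} hours ≈ {total_hours / 8760:7.1f} years")
# 1 GPU -> ~1,296,000 hours (~148 years); 2,000 GPUs -> ~648 hours (under a month).
```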
You’re not building a super computer to be able to play Crysis, you’re building a super computer to do lots and lots and lots of math that might take centuries of calculation to do on a single 16 core machine.
Run Linpack and see how many flops you get.
github.com/icl-utk-edu/hpl/
Then compare it to the Top 500 list.
www.top500.org/lists/top500/
I bet you are at least 3 orders of magnitude away from the bottom.
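If setting up HPL with MPI is too much hassle, a crude stand-in is to time one big dense matrix multiply with NumPy and estimate throughput from that (this is not a real Linpack run, just a ballpark):

```python
import time
import numpy as np

# Crude stand-in for Linpack: time one large dense matrix multiply.
n = 4096
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

flops = 2 * n**3 / elapsed       # a dense matmul costs roughly 2*n^3 floating-point ops
print(f"~{flops / 1e9:.0f} GFLOPS")
# A desktop CPU typically lands in the tens to hundreds of GFLOPS here, while the
# bottom of recent Top500 lists sits around a couple of PFLOPS -- millions of GFLOPS.
```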
Obvious troll is obvious.
They solve problems that would take years on a normal gaming PC.
Problems like what?
What are super-computers used for?
Its used to build a giant bot network that makes brand new accounts to ask questions in various forums.
A huge factor is how much data you can process at a given time. Often it’s not that complicated per sample of data, but when you need to run on terabytes of data (say, wide-angle telescopes or CERN-style experiments) you need a huge computer to simulate your system accurately (how does the glue layer size impact the data?) and to process the mountain of data coming from it.
a supercomputer is usually just a lot of computers on the same (fast) network.
a lot of science and research happens there.
As you may or may not know, computers can only count from 0 to 1.
But super-computers can do that in the best possible way.
/s
Think of a gaming PC from the ’90s, and now imagine someone asking the same question back then.
I’ll toss in my two cents.
It’s mainly about handling and processing vast amounts of data. Many times more than you or I may deal with on a day to day basis. First, you have to have somewhere to put it all. Then, you’ve got to load whatever you’re working with into memory. So you need terabytes of RAM. When you’re dealing with that much data, you need beefy CPUs with crazy fast connections with a ton of bandwidth to move it all around at any kind of reasonable pace.
Imagine opening a million chrome tabs, having all of them actively playing a video, and needing to make sense of the cacophony of sound. Only instead of sound, it’s all text, and you have to read all of it all at once to do anything meaningful with it.
If you make a change to any of that data, how does it affect the output? What about a million changes? All that’s gotta be processed by those beefy CPUs or GPUs.
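As a rough back-of-envelope of why that bandwidth matters, here is a tiny sketch (the dataset size and transfer speeds are illustrative round numbers, not figures from this comment):

```python
# Back-of-envelope: time for a single pass over a large dataset at different
# bandwidths (the size and speeds below are illustrative round numbers).
dataset_tb = 100
bandwidth_gb_per_s = {
    "single SATA SSD": 0.5,
    "fast NVMe SSD": 7,
    "one node's RAM": 50,
    "cluster aggregate": 10_000,
}

for name, gb_s in bandwidth_gb_per_s.items():
    seconds = dataset_tb * 1000 / gb_s
    print(f"{name:>18}: {seconds / 60:10.1f} minutes per pass")
# One pass over 100 TB takes days from a single SATA SSD and hours from one NVMe
# drive, and real analyses need many passes -- hence spreading the data and the
# memory across many nodes.
```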
Part of the reason AI data centers need so much memory is that they’ve got to load increasingly large amounts of training data all at once, and then somehow make it accessible to thousands of people at the same time.
But if you want to understand every permutation of whatever data you’re working with, it’s gonna take a ton of time to essentially sift through it all.
And all of that takes hardware. You have to make doubly sure that the results you get are accurate, so redundancies are built in. Extremely precise engineering of the parts, how they’re assembled, and how they’re ultimately used is a lot of what makes supercomputers what they are. Special CPUs, RAM with error correction, redundant connections, backups… it all takes a lot of time, space, and money to operate.
litchralee@sh.itjust.works 1 hour ago
An indisputable use-case for supercomputers is the computation of next-day and next-week weather models. By definition, a next-day weather prediction is utterly useless if it takes longer than a day to compute, and it becomes progressively more useful the earlier it finishes: even an hour saved means more time to warn motorists to stay off the road, more time to plan evacuation routes, more time for farmers to adjust crop management, more time for everything. NOAA in the USA draws in sensor data from all of North America, and since weather is locally affecting but globally influenced, this still isn’t enough for a perfect weather model. Even today, there is more data that could be consumed by the models, but cannot be without making the predictions take longer. The only solution there is to raise the bar yet again and expand the supercomputers used.
Supercomputers are not super because they’re bigger. They are super because they can do gargantuan tasks within the required deadlines.