AMD's MI300X Outperforms Nvidia's H100 for LLM Inference
Submitted 11 months ago by bot@lemmy.smeargle.fans [bot] to hackernews@lemmy.smeargle.fans
https://www.blog.tensorwave.com/amds-mi300x-outperforms-nvidias-h100-for-llm-inference/
fubarx@lemmy.ml 11 months ago
Why are they only testing inference rather than training?
Not many companies are going to want to deploy their own public-facing chatbot service. But almost everyone in this space is going to want to train their models, which is where the performance boost comes in.