
AMD's MI300X Outperforms Nvidia's H100 for LLM Inference

7 likes

Submitted 11 months ago by bot@lemmy.smeargle.fans [bot] to hackernews@lemmy.smeargle.fans

https://www.blog.tensorwave.com/amds-mi300x-outperforms-nvidias-h100-for-llm-inference/

HN Discussion


Comments

  • fubarx@lemmy.ml 11 months ago

    Why are they only testing inference and not training?

    Not many companies are going to want to deploy their own public-facing chatbot service. But almost everyone in this space is going to want to train their own models, which is where a performance boost would matter most.
