The company offers an LLM Superstation product, designed to make it easier for companies to build proprietary LLMs on AMD Instinct GPUs. The LLM Superstation bundles Lamini's software with AMD GPUs optimised for LLM workloads.
Lamini has been quietly running more than 100 AMD Instinct MI200 GPUs in production, and it claims AMD's ROCm software stack is on par with Nvidia's CUDA for large language models.
This suggests that Nvidia may finally face competition for dominance in the accelerators that power large AI models. While Lamini is only a small startup, its embrace of AMD GPUs demonstrates that they can run complex models just as Nvidia GPUs can.
Overall, this is good news for the industry, which needs more competition to drive down costs and broaden access to the infrastructure required to run advanced AI. For enterprises looking to deploy private, customised AI models, having GPU options beyond Nvidia is appealing. AMD's ROCm reaching parity with CUDA would also open up the software ecosystem.