Groq support (Beta)

We have added support to Humanloop for models available on Groq. You can now try out the blazingly fast generations of open-source models such as Llama 3 and Mixtral 8x7B, hosted on Groq, directly within our Prompt Editor.

Groq achieves faster throughput using specialized hardware: its LPU (Language Processing Unit) Inference Engine. More information is available in their FAQ and on their website.

Note that Groq's API service, GroqCloud, is still in beta and enforces low rate limits.