Monitoring with deployed Evaluators

You can now connect deployed Evaluator versions for online monitoring of your Prompts and Tools.

GPT-4o

Same-day support for OpenAI's new GPT-4o (GPT-4 Omni) model! You can now use it within the Humanloop Editor and chat APIs. See OpenAI's announcement for more details.
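
A minimal sketch of calling GPT-4o through the Humanloop chat API with the Python SDK. The project name and model parameters below are hypothetical, and exact argument names may differ between SDK versions:

```python
from humanloop import Humanloop

humanloop = Humanloop(api_key="YOUR_HUMANLOOP_API_KEY")

# Call the chat endpoint with GPT-4o; "my-project" is a hypothetical project name.
response = humanloop.chat(
    project="my-project",
    model_config={
        "model": "gpt-4o",
        "temperature": 0.7,
        "max_tokens": 200,
    },
    messages=[{"role": "user", "content": "Summarise GPT-4o in one sentence."}],
)

# Inspect the generation (the response shape may vary by SDK version).
print(response)
```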

Logs for Evaluators

For AI and Code Evaluators, you can now inspect and reference their logs, just as you can for Prompts and Tools. This provides greater transparency into how your Evaluators are being used and makes them easier to debug and improve.

Improved Evaluator management

Evaluators are now first-class citizens alongside Prompts, Tools and Datasets. This makes them easier to re-use and version control, and helps with organising your workspace within directories.

Log drawer in Editor

You can now open up the Log drawer directly in the Editor.

Groq support (Beta)

We have introduced support for models hosted on Groq. You can now try out blazingly fast generations from open-source models such as Llama 3 and Mixtral 8x7B, served by Groq, within our Prompt Editor.

Llama 3

Llama 3, Meta AI's latest openly-accessible model, can now be used in the Humanloop Prompt Editor.

Anthropic tool support (Beta)

Our Editor and deployed endpoints now support tool use with Anthropic's Claude 3 models. Tool calling with Anthropic is still in Beta, so streaming is not yet supported.
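
For reference, the sketch below shows the request shape Claude 3 expects for tool use, using Anthropic's own Python SDK; the get_weather tool is a hypothetical example, and in Humanloop you would attach an equivalent tool definition to your Prompt:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# A hypothetical weather tool: a name, a description, and a JSON Schema input.
tools = [
    {
        "name": "get_weather",
        "description": "Return the current weather for a given city.",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
]

response = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
)

# If Claude decides to call the tool, the response contains a tool_use block
# carrying the tool name and the model-generated input arguments.
for block in response.content:
    if block.type == "tool_use":
        print(block.name, block.input)
```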

Cost, Tokens and Latency

We now compute Cost, Tokens and Latency for all Prompt logs by default across all model providers.

Cohere Command-r

We've expanded our Cohere support with the latest Command R suite of models. You can now use these models in our Editor and via our APIs once you have set your Cohere API key.
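
Once your Cohere API key is saved, switching to a Command R model is just a change of model name. A minimal sketch, reusing the hypothetical project from the GPT-4o example above:

```python
from humanloop import Humanloop

humanloop = Humanloop(api_key="YOUR_HUMANLOOP_API_KEY")

# The same chat call as before, pointed at Cohere's command-r model.
response = humanloop.chat(
    project="my-project",  # hypothetical project name
    model_config={"model": "command-r", "temperature": 0.7},
    messages=[{"role": "user", "content": "Say hello from Command R."}],
)
print(response)
```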