Run an experiment
Experiments take the guesswork out of working with large language models. They let you run A/B tests between multiple model configurations: different prompt templates, different hyperparameter combinations (such as temperature and presence penalty), or even different base models. You can then try out alternative prompts or models and use feedback from your users to determine which works best.
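To make this concrete, here is a minimal, self-contained sketch of the A/B-testing idea described above. The config names, fields, and helper functions are all hypothetical illustrations, not a real SDK API; a real experiment platform would handle assignment and feedback logging for you.

```python
import random

# Hypothetical illustration of an A/B experiment between two model
# configs. The config fields mirror those mentioned in the text
# (prompt template, temperature, presence penalty, base model).
CONFIGS = {
    "config-a": {
        "model": "model-x",  # placeholder base-model name
        "prompt": "Summarize: {text}",
        "temperature": 0.2,
        "presence_penalty": 0.0,
    },
    "config-b": {
        "model": "model-y",  # placeholder base-model name
        "prompt": "Give a short summary of: {text}",
        "temperature": 0.8,
        "presence_penalty": 0.5,
    },
}

# Tally of user feedback per config.
feedback = {name: {"positive": 0, "total": 0} for name in CONFIGS}

def assign_config(user_id: str) -> str:
    """Deterministically split users between the two configs."""
    return "config-a" if hash(user_id) % 2 == 0 else "config-b"

def record_feedback(config_name: str, positive: bool) -> None:
    """Log one piece of user feedback against a config."""
    feedback[config_name]["total"] += 1
    if positive:
        feedback[config_name]["positive"] += 1

def best_config() -> str:
    """Return the config with the higher positive-feedback rate."""
    def rate(name: str) -> float:
        stats = feedback[name]
        return stats["positive"] / stats["total"] if stats["total"] else 0.0
    return max(CONFIGS, key=rate)
```

In practice each user request would be served with the prompt and hyperparameters of their assigned config, and the winning config would be promoted once enough feedback has accumulated.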

An example experiment comparing several different prompt templates