Prompts

Prompts define how a large language model (LLM) behaves.

Large language models take natural language, code, images, and other data types as inputs and output text. Defining the inputs to the model is essentially how you “program” the model to do a task.

A Prompt on Humanloop encapsulates the instructions and other configuration that define how the model should perform a specific task. Prompts consist of a template, such as Write a song about {{topic}}, as well as the model and the parameters that define how it behaves.

Inputs are defined in the template through the double-curly-bracket syntax, e.g. {{topic}}, and the value for each variable must be supplied at query time.
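To see the mechanics, here is a minimal sketch of template interpolation in TypeScript. The renderTemplate helper is hypothetical, written purely for illustration, and is not part of the Humanloop SDK; Humanloop performs this substitution for you at query time.

// Hypothetical helper, for illustration only: fills each {{name}}
// placeholder with the matching value from the query-time inputs.
function renderTemplate(template: string, inputs: Record<string, string>): string {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (match, name) =>
    name in inputs ? inputs[name] : match // leave unknown placeholders untouched
  );
}

renderTemplate("Write a song about {{topic}}", { topic: "debugging compiler errors" });
// -> "Write a song about debugging compiler errors"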

This separation of concerns, keeping configuration separate from the query-time data, is crucial for enabling you to experiment with different instructions and models and to evaluate any changes. The configuration is stored in the Prompt, and the query-time data are stored in Logs, which can then be re-used in Datasets.
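To make the split concrete, here is an illustrative sketch in TypeScript. These types are assumptions made for this example, not Humanloop's actual schema; the point is only that everything in PromptConfig is versioned with the Prompt, while everything in Log arrives at query time.

// Illustrative types only, not Humanloop's actual schema.
// Everything here is fixed when a Prompt version is saved...
interface PromptConfig {
  model: string;        // e.g. "gpt-4"
  temperature: number;  // e.g. 1.0
  template: string;     // e.g. "Write a song about {{topic}}"
}

// ...while everything here arrives at query time and can later be
// re-used in Datasets for evaluation.
interface Log {
  inputs: Record<string, string>; // e.g. { topic: "debugging compiler errors" }
  output: string;                 // the model's text response
}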

A Prompt can be serialized as a Promptfile.

---
model: gpt-4
temperature: 1.0
max_tokens: -1
provider: openai
endpoint: chat
---
<system>
  Write a song about {{topic}}
</system>
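The frontmatter captures the model configuration (the model, provider, endpoint, and generation parameters such as temperature and max_tokens), while the body holds the chat template, in this case a single system message containing the {{topic}} variable.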

You should create a new Prompt for every distinct ‘task to be done’ with the LLM. For example, extractive summarization, title creation, and outline generation are each tasks an LLM can perform, and each should be a separate Prompt.

A Prompt can have multiple versions as you try out different models, parameters, or templates, but all versions should perform the same task and, in general, be interchangeable with one another.

By versioning your Prompts, you can track how adjustments to the template or parameters influence the LLM's responses. This is crucial for iterative development, as you can pinpoint which versions produce the most relevant or accurate outputs for your specific use case.
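For example, a second version of the song-writer Prompt might swap the model and lower the temperature while leaving the template, and therefore the task, unchanged (the specific values here are illustrative):

---
model: gpt-3.5-turbo
temperature: 0.7
max_tokens: -1
provider: openai
endpoint: chat
---
<system>
  Write a song about {{topic}}
</system>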

Prompts are callable as an API. You supply any query-time data, such as input values or user messages, and the model responds with its text output.


// Assumes the Humanloop Node SDK; setup may differ by SDK version.
import { Humanloop } from "humanloop";
const humanloop = new Humanloop({ apiKey: process.env.HUMANLOOP_API_KEY });

// Call the deployed "song writer" Prompt with its query-time inputs.
const chatResponse = await humanloop.chatDeployed({
  project: "song writer",
  inputs: {
    topic: "debugging compiler errors",
  },
});
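The response contains the model's generated text along with metadata about the call. The exact shape of chatResponse depends on the SDK version, so check the API reference for the fields available on it.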