In the SDK

In this guide we demonstrate how to create and use an OpenAI function with our Python SDK. We refer to functions as tools on Humanloop; they follow the same universal JSON Schema definition as OpenAI functions. The Humanloop SDK provides an easy way to integrate this functionality into your existing projects, and below we'll walk you through the process of using tools with the SDK via the chat endpoint.


Creating a Tool

Prerequisites

  1. A Humanloop account - you can create one by going to our sign up page.
  2. The Humanloop Python SDK setup. If you don't have this, please see our guide on using the Chat endpoint with our SDK.

📘

Using other model providers

This guide assumes you're using OpenAI with either the gpt-3.5 or gpt-4 models. Other models and model providers are not supported for OpenAI functions.

Install and initialize the SDK

🚧

The SDK requires Python 3.8 or greater.
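
If you're not sure which Python version your environment uses, you can check from the terminal:

python --version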

  1. Import the Humanloop SDK: If you haven't done so already, you'll need to install and import the Humanloop SDK into your Python environment. You can do this using pip:
pip install humanloop

Then import the SDK in your script:

from humanloop import Humanloop
  2. Initialize the SDK: Initialize the Humanloop SDK with your API key:
from humanloop import Humanloop

hl = Humanloop(api_key="<YOUR_HUMANLOOP_API_KEY>")
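
Hard-coding the key is fine for a quick test, but you may prefer to read it from an environment variable instead. A minimal sketch, assuming you've exported a HUMANLOOP_API_KEY variable (the variable name is our choice, not an SDK requirement):

import os

from humanloop import Humanloop

# Read the API key from the environment rather than embedding it in source
hl = Humanloop(api_key=os.environ["HUMANLOOP_API_KEY"])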
  3. Create a chat with the tool: We'll start with the general chat endpoint format.
from humanloop import Humanloop

hl = Humanloop(api_key="<YOUR_HUMANLOOP_API_KEY>")


def run_conversation():
    # Step 1: send the conversation and available functions to GPT
    messages = [{"role": "user", "content": "What's the weather like in Boston?"}]

    # TODO - Add tools definition here

    response = hl.chat(
        project="Assistant",
        model_config={"model": "gpt-3.5-turbo-0613", "max_tokens": 100},
        messages=messages,
    )
    response = response.body["data"][0]
  4. Define the tool: Define a tool using the universal JSON Schema syntax. Let's assume we've defined a get_current_weather tool, which returns the current weather for a specified location. We pass it to the chat call via a "tools": tools field in the model_config. We've also defined a dummy get_current_weather method at the top; this can be replaced by your own function to fetch real values, but for now we hardcode it to return 22 and cloudy for this example.
from humanloop import Humanloop

hl = Humanloop(api_key="<YOUR_HUMANLOOP_API_KEY>")

def get_current_weather(location, unit):
    # Your own function call logic
    # We will return dummy values in this example
    return {"temperature": 22, "other": "cloudy"}


def run_conversation():
    # Step 1: send the conversation and available functions to GPT
    messages = [{"role": "user", "content": "What's the weather like in Boston?"}]
    # functions are referred to as tools on Humanloop, but follow the same schema
    tools = [
        {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        }
    ]
    response = hl.chat(
        project="Assistant",
        model_config={"model": "gpt-3.5-turbo-0613", "tools": tools, "max_tokens": 100},
        messages=messages,
    )
    response = response.body["data"][0]
    
    # TODO - Add assistant response logic
  5. Check assistant response: The code above makes the call to OpenAI with the tool, but it does nothing to handle the assistant response. When the model decides to call a tool, the response will contain a tool_call field. Fetch that value and pass its arguments to your own function, as in the example below. Replace the TODO - Add assistant response logic comment in your code from above with the following (note that parsing the arguments requires import json at the top of your file):
    # Step 2: check if GPT wanted to call a tool
    if response.get("tool_call"):
        # Step 3: call the function
        # Note: the JSON response may not always be valid; be sure to handle errors
        available_functions = {
            "get_current_weather": get_current_weather,
        }  # only one function in this example, but you can have multiple
        tool_call = response["tool_call"]

        function_name = tool_call["name"]
        function_args = json.loads(tool_call["arguments"])
        function_to_call = available_functions[function_name]
        function_response = function_to_call(
            location=function_args.get("location"),
            unit=function_args.get("unit"),
        )

        # TODO - return the tool response back to OpenAI
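
As an aside: because tools use the standard JSON Schema definition, you can optionally validate the parsed arguments against the tool's parameters schema before calling your function. A minimal sketch using the third-party jsonschema package (an extra dependency we're assuming here, not part of the Humanloop SDK):

from jsonschema import ValidationError, validate

try:
    # tools[0]["parameters"] is the schema defined in the previous step
    validate(instance=function_args, schema=tools[0]["parameters"])
except ValidationError as e:
    print(f"Arguments do not match the tool schema: {e.message}")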
  6. Return the tool response: We can then return the tool response to OpenAI. We do this by appending an assistant message containing the tool_call to the conversation, followed by a tool message with the function name and function response, and then calling the chat endpoint again:
        # Step 4: send the response back to the model
        messages.append(
            {"role": "assistant", "content": "", "tool_call": response["tool_call"]}
        )
        messages.append(
            {
                "role": "tool",
                "name": function_name,
                "content": json.dumps(function_response),
            }
        )
        second_response = hl.chat(
            project="Assistant",
            model_config={
                "model": "gpt-3.5-turbo-0613",
                "tools": tools,
                "max_tokens": 500,
            },
            messages=messages,
        )
        return second_response
  7. Review assistant response: The assistant should respond with a message that incorporates the parameters you provided, for example: The current weather in Boston is 22 degrees and cloudy. You can run the above by adding the following handling logic at the bottom of your file:
if __name__ == "__main__":
    response = run_conversation()
    response = response.body["data"][0]["output"]
    # Print to console the response from OpenAI with the formatted message
    print(response)
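
Running the script end to end should print the assistant's final message to the console, along the lines of:

The current weather in Boston is 22 degrees and cloudy.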

The full code from this example can be seen below:

from humanloop import Humanloop
import json

hl = Humanloop(
    api_key="<YOUR_HUMANLOOP_API_KEY>",
)


def get_current_weather(location, unit):
    # Your own function call logic
    # We will return dummy values in this example
    return {"temperature": 22, "other": "cloudy"}


def run_conversation():
    # Step 1: send the conversation and available functions to GPT
    messages = [{"role": "user", "content": "What's the weather like in Boston?"}]
    # functions are referred to as tools on Humanloop, but follow the same schema
    tools = [
        {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        }
    ]
    response = hl.chat(
        project="Assistant",
        model_config={"model": "gpt-3.5-turbo-0613", "tools": tools, "max_tokens": 100},
        messages=messages,
    )
    response = response.body["data"][0]

    # Step 2: check if GPT wanted to call a tool
    if response.get("tool_call"):
        # Step 3: call the function
        # Note: the JSON response may not always be valid; be sure to handle errors
        available_functions = {
            "get_current_weather": get_current_weather,
        }  # only one function in this example, but you can have multiple
        tool_call = response["tool_call"]

        function_name = tool_call["name"]
        function_args = json.loads(tool_call["arguments"])
        function_to_call = available_functions[function_name]
        function_response = function_to_call(
            location=function_args.get("location"),
            unit=function_args.get("unit"),
        )

        # Step 4: send the response back to the model
        messages.append(
            {"role": "assistant", "content": "", "tool_call": response["tool_call"]}
        )
        messages.append(
            {
                "role": "tool",
                "name": function_name,
                "content": json.dumps(function_response),
            }
        )
        second_response = hl.chat(
            project="Assistant",
            model_config={
                "model": "gpt-3.5-turbo-0613",
                "tools": tools,
                "max_tokens": 500,
            },
            messages=messages,
        )
        return second_response


if __name__ == "__main__":
    response = run_conversation()
    response = response.body["data"][0]["output"]
    # Print to console the response from OpenAI with the formatted message
    print(response)
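
The note in Step 3 warns that the JSON arguments returned by the model may not always be valid. A minimal sketch of more defensive handling, under the assumption that the response shape matches the example above (the helper name and error messages are our own, not part of the SDK):

import json


def call_tool_safely(response, available_functions):
    # Return None if the model did not request a tool call
    tool_call = response.get("tool_call")
    if tool_call is None:
        return None

    # Guard against the model inventing a tool name we never defined
    function_to_call = available_functions.get(tool_call["name"])
    if function_to_call is None:
        raise ValueError(f"Model requested unknown tool: {tool_call['name']}")

    # Guard against malformed JSON in the generated arguments
    try:
        function_args = json.loads(tool_call["arguments"])
    except json.JSONDecodeError as err:
        raise ValueError(f"Model returned invalid JSON arguments: {err}")

    # Assumes the target function accepts the parsed keys as keyword arguments
    return function_to_call(**function_args)

You could then replace Steps 2 and 3 inside run_conversation with a call to this helper.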