How to use OpenAI Function Calling

What is OpenAI Function Calling?

OpenAI Function Calling is a recently added feature in the OpenAI API that lets the model decide whether to invoke a custom function you define.

Now why is this feature useful?

Let’s say you have a weather chatbot that receives a message from a user containing a city name, and you have written a prompt asking the model to return the location as structured data.

If the model provides the location as output, you then call a weather service to get the weather data and return it to the client.

Now, the problem with this approach is that the model might fail to return properly structured output because of its non-deterministic nature.

With function calling, the model itself decides whether a function needs to be called based on the input prompt. This is similar in spirit to ChatGPT Plugins.

This feature is supported by GPT models such as GPT-4 and GPT-3.5-turbo.

How to use the Function Calling Feature

Step 1: Setting Up API Key

To get started with function calling, you’ll need to set up your API key. You can easily get one by creating an account on the OpenAI website. Once you have your account, you can generate an API key.

Your API key is unique and should not be shared with anyone, so store it in a secret manager service or an environment variable rather than in your code. If you ever lose your API key or suspect it has been compromised, you can reset it to keep your account and data secure.
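As a minimal sketch, assuming you have exported your key as an environment variable (the name `OPENAI_API_KEY` is the common convention, not a requirement), you can load it at runtime instead of embedding it in code:

```python
import os

# Read the key from an environment variable instead of hard-coding it.
# "OPENAI_API_KEY" is the conventional name, not a requirement.
api_key = os.environ.get("OPENAI_API_KEY", "")
if not api_key:
    print("Warning: OPENAI_API_KEY is not set")
```

You can then hand `api_key` to the SDK, e.g. `openai.api_key = api_key`.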

Step 2: Defining Functions

First, create the function in your code that will be called. We’ll take the example of a weather application that takes a city as input and returns the weather status.

def get_current_weather(location, unit="celsius"):
    return {
        "location": location,
        "temperature": "72°C",
    }

Now you need to describe the function in the request. This involves creating a JSON object that provides the necessary details about the function: its name, description, parameters, and so on.

functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    }
]

Next, you pass these function definitions along with your requests to the LLM. The definitions count against the system message and are included in the input tokens sent to the model.

Step 3: Calling Functions

Once you have the function and its definition ready, just run the script. Based on the model’s response, you can call your function from your code: the model generates a JSON object containing the arguments to pass to the function.

In its response, the model suggests calling the Python function with arguments matching the schema we defined in the previous step. Here is the complete script:

import json
import openai

gpt_output = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What's the weather like in Boston?"}],
    functions=functions,
    function_call="auto",  # auto is the default, but we'll be explicit
)

# Map function names the model may return to the actual Python callables
function_names = {
    "get_current_weather": get_current_weather,
}

if "function_call" in gpt_output["choices"][0]["message"]:
    print("Model has chosen to call the function")
    response = gpt_output["choices"][0]["message"]["function_call"]

    function_to_call = function_names[response["name"]]
    output = function_to_call(**json.loads(response["arguments"]))
    print("Output from the function:", output)

Yes! That’s how easy it is to use the function calling feature.
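One step the script above stops short of is sending the function’s result back to the model so it can phrase a natural-language answer. You do this by appending the assistant’s function call and a message with `role: "function"` to the conversation, then calling the chat completion endpoint again. In this sketch, `response` and `output` are hard-coded to stand in for the values produced in the previous step:

```python
import json

# Stand-ins for the model's function_call and the local function's result.
response = {"name": "get_current_weather", "arguments": '{"location": "Boston, MA"}'}
output = {"location": "Boston, MA", "temperature": "72°C"}

messages = [
    {"role": "user", "content": "What's the weather like in Boston?"},
    # Echo back the assistant's decision to call the function...
    {"role": "assistant", "content": None, "function_call": response},
    # ...then attach the function's result for the model to summarise.
    {"role": "function", "name": response["name"], "content": json.dumps(output)},
]

# A second request such as
#   final = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
# then returns an answer grounded in the function output.
```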

If you are building applications using large language models, you are most likely using the LangChain framework.

Here is how you can use the function calling feature with LangChain.

from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI

def get_weather_status(location, unit="celsius"):
    return {
        "location": location,
        "temperature": "72°C",
    }

llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo", openai_api_key=api_key)
tools = [
    Tool(
        name="get_weather_status",
        func=get_weather_status,
        description="useful when you need to return the weather status from the location",
    )
]

agent = initialize_agent(tools, llm, agent=AgentType.OPENAI_FUNCTIONS, verbose=True)
output = agent.run("What's the weather like in Boston?")

LangChain takes care of calling the function based on the model’s response, so you don’t need to do it yourself.

Here is the documentation for implementing this in JavaScript.

Benefits of Function Calling Feature

Function calling can be used in a wide range of applications, making it a valuable tool for reliably connecting GPT’s capabilities with external tools. Let’s explore some of these benefits:

Enhancing Chatbots

By incorporating function calling, chatbots can become more intelligent and responsive. They can retrieve real-time information from external tools and APIs or perform tasks without relying on predefined responses. For instance, a chatbot can use function calling to fetch current weather updates, find definitions of words, or even translate sentences into different languages.

Structured Data Extraction from Unstructured Text

Function calling can effectively extract structured data from natural language, bringing structure to otherwise messy information. This can also be used to construct API calls and database queries from text.

This unlocks a lot of use cases, such as:

A product review analysis tool can extract crucial details like price, rating, and the number of reviews from product reviews.

A sentiment analysis tool can gauge the sentiment (positive, negative, or neutral) of social media posts.

Financial data analysis tools can fetch stock prices, volumes, and open/close prices from news articles.
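As a sketch of the extraction pattern for the product-review case above, you can define a function whose only purpose is to receive the structured fields, and let the model fill in its arguments; the `extract_review` name and its fields are illustrative, not part of the API:

```python
# Hypothetical schema: the "function" exists only so the model
# returns its arguments as structured data instead of prose.
extraction_functions = [
    {
        "name": "extract_review",
        "description": "Extract structured fields from a product review",
        "parameters": {
            "type": "object",
            "properties": {
                "price": {"type": "number", "description": "Product price"},
                "rating": {"type": "number", "description": "Star rating, 1-5"},
                "num_reviews": {"type": "integer", "description": "Review count"},
            },
            "required": ["rating"],
        },
    }
]

# Forcing the call with function_call={"name": "extract_review"} in the
# request makes the model always return these fields.
```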

Challenges with Function Calling Feature

So far, we have seen how to use function calling and its benefits, but there are some challenges with it, along with a few tips I recommend for overcoming them:

Correct interpretation of function calls

The model may struggle to understand the function name, the arguments, or the return value, leading to errors in the output. To overcome this challenge, you need to provide clear and concise function descriptions.

By clearly explaining the purpose and expected arguments of the function, you can help the model accurately interpret the function call.
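For instance, a vague description gives the model little to go on, while a specific one spells out the purpose and argument format; both description strings below are illustrative:

```python
# Too vague: the model cannot tell when or how to call this.
vague = {"name": "get_current_weather", "description": "weather stuff"}

# Clear: states the purpose and the expected argument format.
clear = {
    "name": "get_current_weather",
    "description": (
        "Get the current weather for a city. "
        "Pass location as 'City, State', e.g. 'San Francisco, CA'."
    ),
}
```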

Generating valid JSON

Another challenge developers may face is the generation of valid JSON by the model. The model may not format the JSON object correctly, resulting in errors when parsing the JSON.

To address this, it is essential to thoroughly validate the generated JSON and make any necessary adjustments to ensure its validity.

By following a consistent naming convention for functions and arguments, developers can assist the model in identifying the correct function and argument names, reducing the chances of generating invalid JSON.
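A minimal defensive pattern is to parse the model’s `arguments` string inside a try/except and fall back (or retry the request) when it is not valid JSON; the helper name here is an assumption, mirroring the earlier snippet:

```python
import json

def parse_arguments(arguments_str):
    """Return the parsed arguments dict, or None if the JSON is invalid."""
    try:
        args = json.loads(arguments_str)
    except json.JSONDecodeError:
        return None
    # The arguments must be a JSON object, not a bare list or string.
    return args if isinstance(args, dict) else None

print(parse_arguments('{"location": "Boston, MA"}'))  # {'location': 'Boston, MA'}
print(parse_arguments('{"location": '))               # None (truncated JSON)
```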

Hallucinated function and argument names

OpenAI models have a remarkable ability to generate content, but they can sometimes generate function and argument names that do not exist.

This can lead to errors when calling the function. This can be mitigated by providing a proper system prompt.
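Beyond prompting, it is worth guarding in code as well: look the returned name up in an explicit registry and reject anything unknown. A sketch reusing the `function_names` dictionary idea from earlier (the `dispatch` helper is an assumption):

```python
# Registry of functions the model is allowed to call.
function_names = {
    "get_current_weather": lambda location, unit="celsius": {"location": location},
}

def dispatch(call_name):
    """Return the callable for call_name, or None if the model invented it."""
    func = function_names.get(call_name)
    if func is None:
        print(f"Model hallucinated unknown function: {call_name!r}")
    return func
```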

Generating a chain of function calls

Some developers may face difficulties when attempting to generate a chain of function calls using OpenAI models. The model may struggle to understand the proper order and execution of multiple function calls, leading to errors in the output.
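One workaround is to drive the chain from your own code: keep appending function results to the conversation and re-calling the model while it keeps returning `function_call`, with a cap on iterations. In this sketch, `call_model` is a hypothetical stand-in for a wrapper around `openai.ChatCompletion.create` that returns the response message:

```python
import json

def run_chain(call_model, functions_registry, messages, max_steps=5):
    """Loop until the model stops requesting function calls (or max_steps)."""
    for _ in range(max_steps):
        message = call_model(messages)
        if "function_call" not in message:
            return message  # final natural-language answer
        call = message["function_call"]
        # Execute the requested function and feed its result back.
        func = functions_registry[call["name"]]
        result = func(**json.loads(call["arguments"]))
        messages.append(message)
        messages.append(
            {"role": "function", "name": call["name"], "content": json.dumps(result)}
        )
    return {"role": "assistant", "content": "Stopped after too many steps"}
```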



Ultimately, OpenAI Function Calling allows you to enhance the capabilities of applications built using GPT models.

It provides benefits such as improving chatbot intelligence and enabling structured data extraction from unstructured text, but it also comes with challenges around accurate interpretation, valid JSON generation, and hallucinated function and argument names.

However, we can work around these challenges by providing clear function descriptions, validating JSON output, adhering to naming conventions, and carefully constructing system prompts.

You can use either the OpenAI SDK or the LangChain framework to implement function calling.