Playing with ChatGPT Prompts

Prompt Engineering using OpenAI's ChatCompletion APIs

Introduction

I started using ChatGPT a couple of months ago and I'm stunned by what it can do. I began with the web interface, slowly moved to using OpenAI's APIs in the products I am building, and tinkered with Marvin AI for a few scenarios (I wrote a blog on Marvin some time back). You may have guessed by now what the next step would be: yes, Prompt Engineering.

Prompt Engineering is nothing but writing good prompts so that Large Language Models (LLMs) give back a better response. I took the short course on ChatGPT Prompt Engineering offered by Andrew Ng's DeepLearning.ai.

Two reasons why I did the course:

  1. I wanted to get better at writing prompts so I could capitalize on the power of LLMs

  2. Quick, structured learning always helps, especially from good sources like Andrew Ng's. For most of us, me included, his Coursera course was our first Machine Learning course, wasn't it?

This Prompt Engineering course is a short one, but it's loaded with practical information that can be applied easily.

A Bit of Context

Time to apply what I learned. I'm building an open-source web app, "Your-recipe-buddy". All details about the app are here. The crux of the app is to suggest recipes based on user inputs such as cuisine, ingredients, or meal course. Check the product roadmap to see how I intend to take it forward.

For the existing APIs that I developed, I used OpenAI's API. I realized that the prompt I used was quite naive, hence I wanted to learn how to write good ones.

Let me show an example prompt I wrote recently. There is a menu option that returns a random recipe, just as a surprise. Below is the code; note the prompt that I used.

from .common import get_recipe_using_openai


def invoke_random_recipe(settings):
    """Use GPT-4 API to fetch one random recipe."""
    api_key = settings.openai_api_key
    prompt = (
        "Write a Recipe with your own choice of Ingredients. "
        "Mention Cooking time and clear instructions on how to cook "
        "along with the cuisine of your recipe"
    )
    success_message = "Successfully returned a random recipe. Enjoy!"
    recipe = get_recipe_using_openai(api_key=api_key,
                                     prompt=prompt,
                                     success_message=success_message)
    return recipe

It does return a random recipe; below is part of the output from a recent invocation of the API.

0:{
"text":" Veggie Jalfrezi Cuisine: Indian Cooking Time: 30 minutes 
Ingredients: • 2 tablespoons vegetable oil • 1 teaspoon cumin seeds 
• 1 medium chopped onion • 1 teaspoon freshly grated ginger 
• 2 medium chopped tomatoes • 1 teaspoon ground coriander 
• 1 teaspoon ground cumin • 1 teaspoon garam masala 
• 2 cups of diced mixed vegetables (such as potatoes, peppers, 
carrots, green beans, etc) • Salt to taste 
• 2 tablespoons of freshly chopped cilantro 
Instructions: 1. Heat the oil in a large wok or deep skillet over 
medium-high heat. 2. Add the cumin seeds and cook until the seeds 
sizzle and turn brown. 3. Add the chopped onions and sauté until 
they are brown. 4. Add the grated ginger and sauté for a few seconds.
 5. Add the tomatoes and cook until they are soft and mushy. 
6. Add the ground coriander, cumin and garam masala, and mix everything 
together. 7. Add the diced vegetables and a pinch of salt, and stir 
everything together. 8. Cover the pan and reduce the heat to low and 
let the vegetables cook for about 20 minutes, stirring occasionally. 
9. Once the vegetables are cooked through, turn off the heat and stir 
in the chopped cilantro. 10. Serve the veggie jalfrezi warm with some
basmati or naan bread. Enjoy!"

I realized that this is not readable, which sparked my curiosity to understand the art of writing good prompts. Before trying to parse the JSON myself, I wanted to know whether there could be a better response.

The answer is a big YESSSS! I will show you how, using everything I learned from the short course mentioned above.

Come, let's start playing with the prompts and see how things change!

💡
I plan to create a branch just to showcase how the response changes, but I don't intend to merge it into main

Main Principles

When it comes to Prompt Engineering, the instructors suggest the following 2 main principles:

  1. Instructions should be clear and specific - Vague, ambiguous, high-level instructions do not produce the response we want to see. Be clear to the level of telling the model what type of output we want (JSON or HTML, for instance), what tone the response should take (formal or informal), what the word limit should be, and so on. Whether the task is understanding sentiment, translating text, or summarizing an input, we need to instruct the model with the right inputs.

  2. Give the model time to think before it produces the response - What this means is to spell out the steps required to create the output; when asking for conclusions, let the model work out its own conclusion before validating the user's.

Keeping these as guiding principles, we can try different tasks now.
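To make the principles concrete, here is a small illustrative helper (the function name and structure are my own, not from the course) that applies both: clear, specific instructions about output format and length, delimiters around the input, and an explicit nudge to work step by step.

```python
def build_prompt(task, text, output_format="JSON", word_limit=100):
    """Build a prompt that applies both principles: clear, specific
    instructions (format, word limit, delimiters) and room to think
    (an explicit ask to work step by step)."""
    return (
        f"{task}\n"
        f"First work through the task step by step, then give the final answer.\n"
        f"Return the final answer in {output_format} format, "
        f"in at most {word_limit} words.\n"
        f"The input is delimited by triple backticks.\n"
        f"```{text}```"
    )

prompt = build_prompt("Summarize this recipe.", "Veggie Jalfrezi ...")
print(prompt)
```

Everything the model needs is stated up front, and the delimiters make it unambiguous which part of the prompt is data rather than instruction.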

Iterative

Learn to refine your prompts iteratively. It's very difficult to get the prompt right the first time; in fact, it isn't required. Provide a reasonably good prompt, look at the output, figure out what can be improved, and then repeat the process by refining the prompt.

Depending on your need, it may take a few rounds before settling on the final version. To be honest, we will always feel that there can be more improvement. Make sure to draw a line at what counts as a good-enough response, or else it may end up eating your time.

Let's try a better prompt for the random recipe that we saw above.

from .common import get_recipe_using_openai


def invoke_random_recipe(settings):
    """Use GPT-4 API to fetch one random recipe."""
    api_key = settings.openai_api_key
    prompt = (
        "Your task is to write a recipe for a dish that is healthy, "
        "delicious, and easy to make. The response should be in JSON format "
        "with Cuisine, Cooking Time, Ingredients, Instructions as keys. "
        "Instructions should be in the form of Step 1, Step 2 and so on."
    )
    success_message = "Successfully returned a random recipe. Enjoy!"
    recipe = get_recipe_using_openai(api_key=api_key,
                                     prompt=prompt,
                                     success_message=success_message)
    return recipe

On executing the code with the modified prompt, here is the response I get:

"{ "Cuisine": "Mediterranean", "Cooking Time": "30 minutes", 
"Ingredients": [ "2 boneless, skinless chicken breasts", 
"1 tablespoon olive oil", "1 teaspoon dried oregano", 
"1 teaspoon dried basil", "1/2 teaspoon garlic powder", 
"1/2 teaspoon salt", "1/4 teaspoon black pepper", 
"1 cup cherry tomatoes, halved", 
"1/2 cup sliced black olives", 
"1/4 cup crumbled feta cheese", 
"2 tablespoons chopped fresh parsley" ], 
"Instructions": [ "Step 1: Preheat the oven to 400°F (200°C).", 
"Step 2: In a small bowl, mix together the dried oregano, dried basil, 
garlic powder, salt, and black pepper.", 
"Step 3: Place the chicken breasts on a baking sheet and drizzle with 
olive oil. Sprinkle the spice mixture evenly over the chicken.", 
"Step 4: Bake the chicken in the preheated oven for 20-25 minutes, 
or until cooked through and no longer pink in the center.", 
"Step 5: While the chicken is baking, prepare the tomato and olive topping.
 In a bowl, combine the cherry tomatoes, black olives, feta cheese, 
and chopped parsley.", 
"Step 6: Once the chicken is cooked, remove it from the oven 
and let it rest for a few minutes. Then, top each chicken breast 
with the tomato and olive mixture.", 
"Step 7: Serve the chicken with a side of steamed vegetables or a 
fresh salad. Enjoy!" ] }"

Much better than what we got the first time, isn't it? Let's try and refine it a bit more.
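Now that the model returns valid JSON, we can parse it in code instead of treating the response as a blob of text. A minimal sketch with Python's standard json module, using a shortened version of the response above:

```python
import json

# Shortened version of the model's JSON response shown above
response_text = (
    '{"Cuisine": "Mediterranean", "Cooking Time": "30 minutes", '
    '"Ingredients": ["2 boneless, skinless chicken breasts", '
    '"1 tablespoon olive oil"], '
    '"Instructions": ["Step 1: Preheat the oven to 400\u00b0F (200\u00b0C)."]}'
)

recipe = json.loads(response_text)
print(recipe["Cuisine"])           # Mediterranean
print(len(recipe["Ingredients"]))  # 2
```

With the recipe as a plain dict, the app can render each section however it likes instead of showing one long string.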

    prompt = (
        "Your task is to write a recipe for a dish that is healthy, "
        "delicious, and easy to make. The response should be in JSON format "
        "with Cuisine, Cooking Time, Ingredients, Instructions as keys. "
        "Instructions should be in the form of Step 1, Step 2 and so on. "
        "Can you also show the steps as an HTML table?"
    )
    success_message = "Successfully returned a random recipe. Enjoy!"

Here is the HTML table that it generated on top of the previous JSON response. Quite impressive!

Like so, we can keep iterating and making it better until it suits what we want to see as a response.
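As an aside, once the response is structured JSON we don't strictly need the model to produce the HTML; we can render the table ourselves, which keeps the prompt simpler and the formatting deterministic. A rough sketch (the helper name is my own):

```python
def steps_to_html_table(instructions):
    """Render a list of 'Step N: ...' strings as a simple HTML table."""
    rows = "".join(f"<tr><td>{step}</td></tr>" for step in instructions)
    return f"<table>{rows}</table>"

html = steps_to_html_table([
    "Step 1: Preheat the oven to 400°F (200°C).",
    "Step 2: Mix the spices.",
])
print(html)
```

In a real app you would also escape the strings (e.g. with html.escape) before embedding model output in a page.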

Summarize

Let's try to create a summary of the recipe suggested in the previous step. We shall ask it to focus on ingredients.

from .common import get_recipe_using_openai


def invoke_random_recipe(settings):
    """Use GPT-4 API to summarize a recipe."""
    api_key = settings.openai_api_key
    recipe = """
    "{ "Cuisine": "Mediterranean", "Cooking Time": "30 minutes", 
    "Ingredients": [ "2 boneless, skinless chicken breasts", 
    "1 tablespoon olive oil", "1 teaspoon dried oregano", 
    "1 teaspoon dried basil", "1/2 teaspoon garlic powder", 
    "1/2 teaspoon salt", "1/4 teaspoon black pepper", 
    "1 cup cherry tomatoes, halved", 
    "1/2 cup sliced black olives", 
    "1/4 cup crumbled feta cheese", 
    "2 tablespoons chopped fresh parsley" ], 
    "Instructions": [ "Step 1: Preheat the oven to 400°F (200°C).", 
    "Step 2: In a small bowl, mix together the dried oregano, dried basil, 
    garlic powder, salt, and black pepper.", 
    "Step 3: Place the chicken breasts on a baking sheet and drizzle with 
    olive oil. Sprinkle the spice mixture evenly over the chicken.", 
    "Step 4: Bake the chicken in the preheated oven for 20-25 minutes, 
    or until cooked through and no longer pink in the center.", 
    "Step 5: While the chicken is baking, prepare the tomato and olive topping.
    In a bowl, combine the cherry tomatoes, black olives, feta cheese, 
    and chopped parsley.", 
    "Step 6: Once the chicken is cooked, remove it from the oven 
    and let it rest for a few minutes. Then, top each chicken breast 
    with the tomato and olive mixture.", 
    "Step 7: Serve the chicken with a side of steamed vegetables or a 
    fresh salad. Enjoy!" ] }"
    """
    prompt = f"""Your task is to create a summary of the recipe generated by the AI. \
                Summarize the recipe below, delimited by triple backticks, in \
                at most 50 words. \
                Recipe: ```{recipe}```\
                """
    success_message="Successfully returned a random recipe. Enjoy!"

    recipe = get_recipe_using_openai(api_key=api_key, \
                                     prompt=prompt, \
                                     success_message=success_message)
    return recipe

Here is the summarized response generated when the above code is executed:

"This Mediterranean chicken recipe takes only 30 minutes to make. 
Seasoned with dried oregano, basil, garlic powder, salt, and black pepper, 
the chicken is baked and then topped with a mixture of cherry tomatoes, 
black olives, feta cheese, and parsley. Serve with steamed vegetables or 
a fresh salad. Enjoy!"

This is cool. It's only a matter of fine-tuning the prompts depending on what we want to see as a response.
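One thing worth noting: the prompt asked for at most 50 words, but LLMs treat such limits as soft targets, so it's worth verifying in code before displaying the summary. A quick check might look like this (the helper name is my own):

```python
def within_word_limit(text, limit=50):
    """Return True if the text has at most `limit` words."""
    return len(text.split()) <= limit

# The summary returned above
summary = ("This Mediterranean chicken recipe takes only 30 minutes to make. "
           "Seasoned with dried oregano, basil, garlic powder, salt, and black pepper, "
           "the chicken is baked and then topped with a mixture of cherry tomatoes, "
           "black olives, feta cheese, and parsley. Serve with steamed vegetables or "
           "a fresh salad. Enjoy!")
print(within_word_limit(summary))  # True
```

If the check fails, one option is simply to re-prompt with a reminder about the limit.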

We can translate into different languages, infer sentiment, extract key topics and tags from a piece of text, write emails, expand on a given topic, and do much more. The key in all these cases is how well you write your prompts.

Conclusion

As mentioned at the beginning of the post, Prompt Engineering is definitely an art. In my opinion, to shine at it, on top of following the above tips,

  • one needs to understand the domain and the surrounding context deeply, rather than looking only at the problem/requirement at hand

  • depending on the scenario, the temperature value should be set appropriately, as it makes a lot of difference in the final response

Learning to write good prompts makes a lot of difference.

A few words of caution as well: using LLMs responsibly should be a conscious choice for each of us, as they can very easily be put to unwanted purposes. So, let's make sure we use them ethically.

Another point to keep in mind is that these models may sometimes hallucinate. The response might be inappropriate, or totally out of context. Hence, one cannot simply rely on the responses these LLMs generate and start building pipelines without a solid mechanism for intervention and validation.
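One simple form of such validation is checking the model's output against the structure we asked for before passing it downstream. A minimal sketch (the function name and expected keys are illustrative, based on the recipe prompt used earlier):

```python
import json

EXPECTED_KEYS = {"Cuisine", "Cooking Time", "Ingredients", "Instructions"}

def validate_recipe_response(response_text):
    """Parse the model output and confirm it has the expected keys.
    Returns the parsed dict, or None if the output is malformed."""
    try:
        recipe = json.loads(response_text)
    except json.JSONDecodeError:
        return None
    if not EXPECTED_KEYS.issubset(recipe.keys()):
        return None
    return recipe

good = validate_recipe_response(
    '{"Cuisine": "Indian", "Cooking Time": "30 minutes", '
    '"Ingredients": [], "Instructions": []}'
)
bad = validate_recipe_response("Sorry, I cannot help with that.")
print(good is not None, bad is None)  # True True
```

A guard like this lets the pipeline fall back gracefully (re-prompt, or show an error) instead of crashing on an unexpected response.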

It was good fun playing with these prompts, do give it a try and have fun too 😁

Here is my GitHub link to the your-recipe-buddy repo. Feel free to open a discussion too!

References

https://www.deeplearning.ai/short-courses/chatgpt-prompt-engineering-for-developers/