Springing into AI - Part 3: Prompt Engineering
Welcome back. In Part 2 of the series, we looked at the Large Language Model (LLM) that forms the backbone of GenAI applications. In this post, we continue our journey by looking at Prompt Engineering. This is extremely useful, as it dictates how effectively we can communicate with the underlying LLM so that it provides us contextually relevant information that is coherent with the user's request.
Prompt Engineering
Communication forms an integral part of our daily lives. For it to be effective, the quality of the conversation matters: we may find ourselves being specific, verbose, vague, and so on. Based on how we communicate, we often get a response of matching quality from the other person, or sometimes an "I didn't understand" or "what do you mean?" kind of reply. Communicating with an LLM is no different when it comes to AI applications. The end user presents a prompt (in textual form) to an application such as ChatGPT, which in turn presents the prompt as tokens (previously discussed in Part 2) to the model for it to generate a response. The proverb "As you sow, so shall you reap" resonates strongly with the response provided by the LLM, as it tries its best to act on what it is presented with. The figure below shows certain characteristics (left) of a good prompt, and a list of techniques (right) that can be adopted for effective prompting.
Effective Prompting
As discussed above, the LLM can only act on what is requested of it. The quality of what we get out of it is determined by the quality of our prompt. Some of the characteristics that can help us create a good prompt are:
- Context: Providing context in the prompt can be a great help, as it lets the LLM narrow the scope of its knowledge to provide a meaningful response. This also helps steer the model in a certain direction. A typical bad example of a prompt here would be "Help me plan a holiday trip". The LLM would not know where you want to go, how you want to travel, when you want to go, and so on. This can be much improved if we provide some context. A better version of the same prompt would be something like "Help me plan a month-long holiday trip to India starting at the beginning of December, with a stopover flight in either Dubai or Ethiopia".
- Instruction: The input we provide to the foundation model, asking it to generate output based on our request. There are different prompting techniques that can be applied here. The most commonly used are:
- Zero Shot Prompting: This is as simple as asking the model directly for an output based on the input we provide. Example: "Please help me understand the fundamentals of calculus". This is a very direct form of prompt and can be vague at times. It is perfect for use cases where we just want something quick and don't care much about the depth or specificity of the response.
- Few Shot Prompting: We polish our prompt a bit more here by presenting some examples, so the LLM understands exactly what we are looking for. An example could be "Please help me understand the process of ordering food at a restaurant. Explain it along the lines of: Example 1: Ask the waiter for the menu if it is not presented already, make your selection, and present it to the waiter. Example 2: Ask the waiter for the chef's special and any recommendations they would suggest, and confirm your choice with the waiter". This is good for use cases where we have a rough idea of what we want, but want the model to capture it much better.
- Iterative Prompting: This technique plays out over a few iterations, where we repeatedly prompt the model to refine its previous response until we are satisfied with the final result for our use case. A typical example is a conversational chatbot, where, say, we are trying to understand a science concept and keep asking the model to improve its response by adding more specifics at every step of the way.
- System Prompting: This is a means of prompting where we steer the model in a certain direction to narrow down its scope of operation. A typical example could be something like "You are an e-commerce order management support agent responsible for helping end users with their queries regarding order information and pricing. If anything else is requested, kindly respond with: I am only able to offer my service for user orders, and cannot cater to any other request". This is good for use cases where we want to limit the user's interaction to information about a specific domain, such as a business information chatbot.
- Chain of Thought: In this type of prompting technique, we let the model reason through its output and present it to us in a step-by-step format. An example could be "I am a newbie developing a website; help me build an address form. Please present the instructions in a step-by-step format that is easy for a novice to follow sequentially. It should list everything from what I require to get started, to what I need to do to run the project and resolve dependencies".
- Negative Prompting: Much like System Prompting, but here we constrain the prompt to a certain behavior by telling the model what it should not do.
- Role Play: Imagine yourself leading a project; you would end up assuming a certain type of role, be it project manager, software developer, business analyst, quality assurance, etc. When you interact with the LLM, you can request in the prompt that it assume a certain role. This helps it immensely to know its bounds much more clearly. An example could be "You are a project manager working on a software project dealing with financial reports for an e-commerce company".
- Example: When we communicate with the model, we can provide some examples so it better understands what we are looking for. Think of this as a teacher explaining the fundamentals of the laws of motion to a student using practical examples. Some of this can be seen above in the prompting techniques.
- Output Type: This lets us dictate to the foundation model the format in which we want our output presented. This can be something like "Show me the response in a tabular format", or in JSON syntax, or in XML syntax. This is obviously use case dependent.
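To make this a bit more concrete, here is a small sketch of how several of the techniques above (system prompting, few-shot examples, and output type) can be combined into the role/content message list that most chat-style LLM APIs accept. Note that `build_prompt` is a hypothetical helper written for illustration, not part of any library; in a later part of the series we will use real tooling to send such a prompt to a model.

```python
import json

def build_prompt(system_instruction, examples, user_request, output_format=None):
    """Assemble a chat-style message list combining a system prompt,
    few-shot examples, and an optional output-format instruction."""
    # System prompting: steer the model and narrow its scope of operation.
    messages = [{"role": "system", "content": system_instruction}]
    # Few shot: each example is an (input, ideal output) pair the model can imitate.
    for example_input, example_output in examples:
        messages.append({"role": "user", "content": example_input})
        messages.append({"role": "assistant", "content": example_output})
    request = user_request
    if output_format:
        # Output type: tell the model explicitly how to present the answer.
        request += f"\n\nRespond in {output_format} format."
    messages.append({"role": "user", "content": request})
    return messages

messages = build_prompt(
    system_instruction=("You are an e-commerce order management support agent. "
                        "Only answer queries about order information and pricing."),
    examples=[("What is the status of order 1001?",
               '{"order": 1001, "status": "shipped"}')],
    user_request="What is the status of order 2042?",
    output_format="JSON",
)
print(json.dumps(messages, indent=2))
```

The order numbers and example replies here are made up; the point is the shape of the payload, which you can hand to whichever LLM client you end up using.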
The above should help solidify our understanding of how to effectively communicate with an LLM. It is something we practice over and over, fine-tuning our prompts until we get better responses. In the next part of the series, we will look at some of the tools at our disposal that will empower us to interact with the LLM, and finally get our hands dirty and put everything we have learned into practice. Can barely wait, unable to control the excitement? Me too... stay tuned...