Springing into AI - Part 3: Prompt Engineering
Welcome back. In Part 2 of the series, we looked at the Large Language Model (LLM), which forms the backbone of GenAI applications. In this post, we continue our journey by looking at Prompt Engineering. This is extremely useful, as it dictates how effectively we can communicate with the underlying LLM so that it provides us contextually relevant information that is coherent with the user's request.

Prompt Engineering

Communication forms an integral part of our daily lives. For conversations to be effective, the quality of our communication matters: we may find ourselves being specific, verbose, vague, and so on. Depending on how we communicate, we often get a response from the other person in the same vein, or sometimes an "I didn't understand" or "what do you mean" kind of reply. Communicating with an LLM in an AI application is no different. The end user presents a prompt (in form of te...
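To make that concrete, here is a minimal sketch of what handing a user's prompt to the LLM can look like from a Spring application. It assumes Spring AI's ChatClient is available and that a chat model starter has been configured; the PromptService class and ask method names are illustrative only, not something introduced earlier in the series.

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.stereotype.Service;

// Illustrative sketch: the end user's prompt is plain text that we pass
// to the model, and we return the model's textual response.
@Service
public class PromptService {

    private final ChatClient chatClient;

    // Spring AI auto-configures a ChatClient.Builder when a chat model starter is on the classpath.
    public PromptService(ChatClient.Builder builder) {
        this.chatClient = builder.build();
    }

    public String ask(String userPrompt) {
        return chatClient.prompt()   // start building the prompt
                .user(userPrompt)    // the end user's text becomes the user message
                .call()              // synchronous call to the underlying LLM
                .content();          // extract the model's reply as a String
    }
}
```

Notice that the quality of what comes back depends almost entirely on what goes into userPrompt, and that is exactly what prompt engineering is about.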