Posts

Springing into AI - Part 11: Model Context Protocol (MCP) - Theory

      In the previous posts, we went on a journey of having fun learning some of the capabilities provided by Spring AI, applied across different use cases such as RAG, Tool Calling, Conversational Chat Memory, AI Observability, and the basic Chat Client. In this post we continue to have fun by learning about the Model Context Protocol (MCP), the "in" thing that has revolutionized modern-day enterprise AI applications and empowered the community to grow its offerings tremendously. For ease, and so as not to get overwhelmed, the post is divided into two parts: the theory and the playground. Theory: MCP is a protocol specification that was designed by Anthropic and later open sourced. In its basic form, it helps us integrate external systems and enrich our client-driven AI application's capabilities, offering both bi-directional (i.e. client-server and server-client) communication and a notification approach. These capabilities range from off...
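Under the hood, MCP messages are JSON-RPC 2.0 envelopes exchanged between client and server. As a taste of the theory, here is a toy plain-Java sketch of that request/response shape; the tool names are made up, plain maps stand in for real JSON, and this is not the Spring AI or MCP SDK API:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Conceptual sketch: MCP messages follow JSON-RPC 2.0. This toy "server"
// answers a tools/list request; a real MCP server speaks JSON over
// stdio or HTTP/SSE and exposes tools, resources, and prompts.
public class McpSketch {
    // Build a JSON-RPC 2.0 request envelope (a map stands in for real JSON).
    static Map<String, Object> request(int id, String method, Map<String, Object> params) {
        Map<String, Object> msg = new LinkedHashMap<>();
        msg.put("jsonrpc", "2.0");
        msg.put("id", id);
        msg.put("method", method);
        msg.put("params", params);
        return msg;
    }

    // Toy server: lists the (hypothetical) tools it exposes.
    static Map<String, Object> handle(Map<String, Object> req) {
        Map<String, Object> resp = new LinkedHashMap<>();
        resp.put("jsonrpc", "2.0");
        resp.put("id", req.get("id"));
        if ("tools/list".equals(req.get("method"))) {
            resp.put("result", Map.of("tools", List.of("getWeather", "searchOrders")));
        } else {
            resp.put("error", Map.of("code", -32601, "message", "Method not found"));
        }
        return resp;
    }

    public static void main(String[] args) {
        var resp = handle(request(1, "tools/list", Map.of()));
        System.out.println(resp.get("result"));
    }
}
```

The playground post wires the real thing up with Spring AI's MCP support instead of hand-rolled maps.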

Springing into AI - Part 10: RAG

     Welcome back. In Part 9 of the series we had a look at using tool calling as a means to perform custom business-logic functions and present the business data obtained there to the LLM, so that it may respond to prompts more contextually for a particular business use case. We continue our journey on the pre-trained-to-a-specific-date shortcomings of an LLM and see how we can adapt the solution further, addressing the problem of presenting our documents to it using Retrieval Augmented Generation (RAG), so that we can empower the end user with information about them from their prompts. Excited? Let's get into it. Retrieval Augmented Generation (RAG) - Theory: RAG can be summarized as a two-step process. In the first step, we store our information content through a custom ETL process into a special kind of persistence store named a Vector Store, where each chunk is stored as a series of N-dimensional vectors through...
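The second half of that process, retrieval, boils down to "find the stored chunks whose vectors are closest to the query's vector". A minimal plain-Java sketch of that idea, using cosine similarity over toy 3-dimensional "embeddings" (a real vector store uses hundreds of dimensions produced by an embedding model, and Spring AI's VectorStore abstraction does this for you):

```java
import java.util.List;
import java.util.Map;

// Conceptual sketch of the retrieval half of RAG: chunks are stored as
// embedding vectors, and the chunks most similar to the query embedding
// (cosine similarity here) are retrieved to augment the prompt.
public class RagRetrievalSketch {
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Return the text of the top-k chunks most similar to the query vector.
    static List<String> topK(Map<String, double[]> store, double[] query, int k) {
        return store.entrySet().stream()
                .sorted((x, y) -> Double.compare(cosine(y.getValue(), query),
                                                 cosine(x.getValue(), query)))
                .limit(k)
                .map(Map.Entry::getKey)
                .toList();
    }

    public static void main(String[] args) {
        // Hypothetical chunks with toy embeddings.
        Map<String, double[]> store = Map.of(
                "refund policy",  new double[]{0.9, 0.1, 0.0},
                "shipping times", new double[]{0.1, 0.9, 0.0},
                "api reference",  new double[]{0.0, 0.1, 0.9});
        System.out.println(topK(store, new double[]{0.8, 0.2, 0.1}, 1)); // → [refund policy]
    }
}
```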

Springing into AI - Part 9: Tool Calling

    Welcome back from wherever you are; I hope you've had a lovely week. One of the caveats of using an LLM is that it is pre-trained on information up to a certain date. Furthermore, it cannot provide information on custom business data unless trained to do so. In this post we look to address the latter, where we present our custom data (static, obtained from a third-party vendor, a database, etc.) to the LLM through custom business functions, termed tool calling, so that it may respond accordingly to relevant prompts made by the user. It is to be noted that not all LLMs support tool calling; the capabilities of different providers and their offerings can be found here. Tool-Calling Theory: Tool calling is one of many options (others being RAG, MCP, and Agentic AI) that will feature in the posts that follow. Before we dive deep into those, understanding tool calling gives a basic idea of how the solution is conceptualized...
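The core loop is simple: the application registers named business functions, and when the model replies with a tool-call request instead of text, the app runs the function and hands the result back for the final answer. A bare plain-Java sketch of that dispatch step (the tool name and registry are made up for illustration; Spring AI generates this plumbing from annotated methods):

```java
import java.util.Map;
import java.util.function.Function;

// Conceptual sketch of tool calling: a registry of named business
// functions, plus a dispatcher invoked when the model "asks" for a tool.
public class ToolCallingSketch {
    // Hypothetical tool registry: tool name -> function of one string argument.
    static final Map<String, Function<String, String>> TOOLS = Map.of(
            "getOrderStatus", orderId -> "Order " + orderId + " is SHIPPED");

    // Execute the tool the model requested and return its result,
    // which would then be sent back to the LLM as context.
    static String dispatch(String toolName, String argument) {
        Function<String, String> tool = TOOLS.get(toolName);
        if (tool == null) throw new IllegalArgumentException("Unknown tool: " + toolName);
        return tool.apply(argument);
    }

    public static void main(String[] args) {
        // Pretend the LLM responded with: call getOrderStatus("42")
        System.out.println(dispatch("getOrderStatus", "42")); // → Order 42 is SHIPPED
    }
}
```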

Springing into AI - Part 8: Chat Memory

     Welcome back. Growing up, or at some stage of your life, if you ever watched Batman or the numerous series and movies made about that superhero, you would have encountered the famous line "I am Batman" reiterated every time to criminals (Joker excluded). Like these forgetful criminals who needed that reminder of his identity every time, our intelligent models also forget our interactions with them and have no recollection of what was previously asked, making them completely stateless and preventing the end user from having a "conversational" chat. In this part of the series we solve this by learning about "Chat Memory", so let's get into it. You are welcome to skip the theory section and jump to the playground directly. Chat Memory - Theory: Spring offers the benefit of using the mighty Advisor(s). It is through this useful feature that we can tap into the user's input request before it reaches the LLM, an...
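Since the LLM itself stays stateless, "memory" just means the application keeps a window of recent messages and prepends it to every new prompt. A minimal plain-Java sketch of such a message-window memory (class and method names are made up for illustration; Spring AI ships its own chat-memory abstractions and advisors):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Conceptual sketch of chat memory: keep only the last N messages and
// replay them with each prompt so the model "remembers" the conversation.
public class ChatMemorySketch {
    private final Deque<String> window = new ArrayDeque<>();
    private final int maxMessages;

    ChatMemorySketch(int maxMessages) { this.maxMessages = maxMessages; }

    // Record a message, evicting the oldest once the window is full.
    void add(String message) {
        window.addLast(message);
        if (window.size() > maxMessages) window.removeFirst();
    }

    // The messages that would be prepended to the next prompt.
    List<String> messages() { return List.copyOf(window); }

    public static void main(String[] args) {
        ChatMemorySketch memory = new ChatMemorySketch(2);
        memory.add("user: Who are you?");
        memory.add("assistant: I am Batman.");
        memory.add("user: What did you just say?");
        System.out.println(memory.messages()); // only the 2 most recent survive
    }
}
```

The post itself shows how an Advisor plugs equivalent behavior into the request path before it reaches the LLM.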