Springing into AI - Part 14: MCP Server - Tools Playground (Sampling)
Project
For this playground, we, as the owners of Hobbits Inc, will create a dumb hobbit food recipe analyzer tool that simulates the interaction between an MCP Server and an LLM. Since we do not have an MCP Client wired to an LLM, MCP Inspector will be used to simulate the real-world LLM interaction: it lets us type in the desired content and propagate it back to the MCP Server, where the tool obtains the data and responds. An architecture overview of our tools playground is shown below:
Setup
Our project setup encompasses the following:
- Java: 17
- Spring AI: 1.1.2
- Spring Boot: 4.0.3
- Testing tool: MCP Inspector
- Source code: MCP Server (Sampling) can be viewed here
- Project Demo: Youtube (MCP Server - Tools) here
Demo Screenshots
We select the hobbitFoodRecipeAnalyzer tool for the purpose of sampling, as shown below:
When we run this tool, a sampling/createMessage request (defined by MCP) is sent to the client; in our playground, that client is the MCP Inspector. The MCP Inspector receives the payload shown below. Here we can see the dummy system prompt and an assistant prompt that would be presented to the LLM. Since this is a simulation, we will emulate the response ourselves.
As mentioned above, since we are emulating the behaviour of an LLM, we can enter our own value for the text that we want to send back from the "LLM" to the MCP Server.
The moment the MCP Server receives this response, the tool can, depending on the use case, act on it and carry out more business logic, or simply return the response to the user. In our playground, we simply return the LLM analysis (that we manually typed for emulation) as the response.
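Conceptually, this emulated round trip can be sketched in plain Java. Note this is an illustrative model only: `analyze`, the record types, and the `Function`-based client below are hypothetical stand-ins, not Spring AI or MCP SDK APIs.

```java
import java.util.function.Function;

public class SamplingRoundTripDemo {

    // Hypothetical stand-ins for the MCP sampling request/result types.
    record CreateMessageRequest(String systemPrompt, String assistantPrompt) {}
    record CreateMessageResult(String text) {}
    record FoodRecipe(String analysis) {}

    // The tool sends a sampling request to whatever client is attached;
    // with MCP Inspector, a human types the "LLM" response by hand.
    static FoodRecipe analyze(String food,
                              Function<CreateMessageRequest, CreateMessageResult> client) {
        CreateMessageRequest request = new CreateMessageRequest(
                "You are a hobbit food recipe analyzer.",
                "Please provide recipe for a %s".formatted(food));
        CreateMessageResult result = client.apply(request); // blocks until the client answers
        return new FoodRecipe(result.text());
    }

    public static void main(String[] args) {
        // Emulate MCP Inspector: return a manually typed response.
        FoodRecipe recipe = analyze("lembas",
                request -> new CreateMessageResult("Mix flour, honey and a little elvish patience."));
        System.out.println(recipe.analysis());
    }
}
```

The key point is that the server-side tool is just a caller: it builds the request, hands it off, and blocks until something (a real LLM behind a real client, or a human in the Inspector) produces the answer.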
Code Walkthrough
```java
 1 | @McpTool(name = "hobbitFoodRecipeAnalyzer", description = "Provides summary of hobbit food recipe")
 2 | public FoodRecipe hobbitFoodRecipeSampling(
 3 |         final McpSyncRequestContext context,
 4 |         @McpToolParam(description = "Name of the food") String food
 5 | ) {
 6 |
 7 |     if (context.sampleEnabled()) {
 8 |         McpSchema.CreateMessageRequest samplingMessageRequest =
 9 |                 McpSchema.CreateMessageRequest
10 |                         .builder()
11 |                         .systemPrompt("""
12 |                                 You are a hobbit food recipe analyzer. You are to suggest recipes for a given food.
13 |                                 If you are not able to suggest a food recipe, simply respond with a friendly message
14 |                                 letting user know recipe currently not available.
15 |                                 """)
16 |                         .messages(List.of(
17 |                                 new McpSchema.SamplingMessage(
18 |                                         McpSchema.Role.ASSISTANT,
19 |                                         new McpSchema.TextContent(
20 |                                                 "Please provide recipe for a %s".formatted(food)))))
21 |                         .progressToken(null)
22 |                         .maxTokens(1000)
23 |                         .stopSequences(null)
24 |                         .meta(null)
25 |                         .metadata(null)
26 |                         .temperature(null)
27 |                         .modelPreferences(null)
28 |                         .progressToken(null)
29 |                         .build();
30 |         McpSchema.CreateMessageResult messageResult =
31 |                 context.sample(samplingMessageRequest);
32 |         McpSchema.TextContent content =
33 |                 (McpSchema.TextContent) messageResult.content();
34 |         return new FoodRecipe(content.text());
35 |     }
36 |     return new FoodRecipe("Sorry, recipe currently not available. Please try again later.");
37 | }
```
From the code above:
- Line 1: The annotation that declares the business function as an MCP tool. The metadata inside the annotation is of great importance: when a real-world user prompts the LLM, the LLM uses the tool's description to decide whether to invoke it.
- Line 3: McpSyncRequestContext is a class provided by Spring AI that gives us access to request metadata as well as utility methods for invoking some of the capabilities available to the tool. In this playground, we use its sampling capability.
- Line 4: We expose the parameters for which we require user input before our tool can process the request. As mentioned, the description given here matters: a real-world LLM uses it to bind user input to these parameters.
- Line 7: Before we can ask the LLM for further analysis from the MCP tool server, we need to check whether the sampling capability is enabled. Checking rather than assuming is good design practice.
- Line 11: Here we create a system prompt for the LLM. It steers the LLM towards the context of what we are trying to achieve and sets the boundaries within which it should operate.
- Lines 16-20: This is an ASSISTANT-level prompt provided to the LLM. Imagine this as a user interacting with the LLM, the only difference being that the "user" here is our MCP Server.
- Lines 21-28: These are the tuning parameters that control the LLM's behaviour, anywhere from extremely precise to allowing a certain flexibility in the way it generates the response. For our dummy playground, most of these are left null, as we won't be interacting with a real LLM; instead, MCP Inspector is used to manually type the response we wish to send back for emulation purposes.
- Lines 30-31: We send the sampling request we built above to the client, which presents it to the LLM. This is a blocking call with a default timeout set by Spring AI. Here we request that the recipe be analysed so we can present the response to the user, who can then feed their tummy in Middle-earth.
- Lines 32-33: For this playground we chose text content, but there are various response types we can receive, such as resources, images, etc. Spring AI provides a class for each type, and we can simply use instanceof to validate the type of the response and process it accordingly.
- Line 34: We return the response to the user, completing the task.
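The instanceof check mentioned for lines 32-33 generalises to the other content types. The sealed hierarchy below is a simplified, hypothetical stand-in for the real McpSchema content classes; only the dispatch pattern carries over:

```java
public class ContentDispatchDemo {

    // Simplified stand-ins for McpSchema.TextContent, ImageContent, etc. (hypothetical).
    sealed interface Content permits TextContent, ImageContent, ResourceContent {}
    record TextContent(String text) implements Content {}
    record ImageContent(String data, String mimeType) implements Content {}
    record ResourceContent(String uri) implements Content {}

    // Validate the concrete type of the sampling result before processing it.
    static String describe(Content content) {
        if (content instanceof TextContent text) {
            return "text: " + text.text();
        } else if (content instanceof ImageContent image) {
            return "image of type " + image.mimeType();
        } else if (content instanceof ResourceContent resource) {
            return "resource at " + resource.uri();
        }
        return "unsupported content type";
    }

    public static void main(String[] args) {
        System.out.println(describe(new TextContent("Lembas: mix flour, honey and bake briefly.")));
    }
}
```

Pattern matching for instanceof (Java 16+) keeps each branch's cast implicit, so the hard cast on line 33 of the tool could equally be written this way if the tool needed to handle more than text.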
