Springing into AI - Part 7: Observability

Welcome back, and I hope you're having a wonderful day. In Part 6 of the series, we had a look at our first exciting chat application, where we were finally able to get our hands dirty and interact with the LLM running on our local machine. How exciting 😃 it was to see it in action! In this part of the series we will be looking at an important concept, "Observability", which will help provide insight into token usage among other factors, so let's get into it.

Backend applications like the one we created previously require love from us engineers. This isn't the same type of love that was shared in Titanic by Leo and Kate, but more of a monitoring love that empowers us to know the state of our application through fine-grained metrics that are of interest to us, so that we may have a behavioral sense of our running application. A typical example in a Java application may be wanting to know the amount of memory used in the JVM (Java ...