Integrating Vector Database Querying and Evaluation in Portkey

I am currently rewriting the prompts I use to query a vector database. That got me thinking: it would be great if I could monitor the results, change prompt parameters like temperature, etc. all inside Portkey and look through the evals there consistently, because this is now the only part that is not in the same place. Even if Portkey doesn't actively query the vector database, would it be possible to integrate this just for monitoring purposes? I wouldn't even mind if you returned the prompt per API call or something, but I would benefit from being able to change the prompt without always rebuilding the docker container to test it.
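Roughly the kind of thing I have in mind, as a minimal sketch (this assumes the portkey-ai SDK's prompt completions endpoint and a prompt template saved in Portkey; the prompt ID, variables and temperature override below are placeholders):

```python
from portkey_ai import Portkey

# Sketch only: assumes a prompt template managed in Portkey and the
# portkey-ai SDK's prompt completions endpoint. IDs and values are placeholders.
portkey = Portkey(api_key="<PORTKEY_API_KEY>")

completion = portkey.prompts.completions.create(
    prompt_id="<SAVED_PROMPT_ID>",          # the query prompt lives in Portkey, not in the docker image
    variables={"query": "external data to standardise"},
    temperature=0.2,                        # tweak parameters without rebuilding the container
)
print(completion.choices[0].message.content)
```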
7 comments
@ekevu. Firstly, curious what your current stack is from OpenAI --> Portkey --> Docker and what it does.

And if I got this right, what you're saying is: on the Evals tab, along with the response JSON, you would like to see its function calls? Or, beyond that, we expose a connector to Portkey, you send custom (meta)data that gets appended to the response, and you'd like to see that in Portkey logs as well?
I am building a prompt chain, so the output from one prompt gets fed into the next prompt, and so on. Part of it is querying the vector database with external data sources and preparing them into a standardised format that the LLM can pick up later. Based on your answer, I am not sure we understood each other correctly. For querying the vector database, I am using a prompt which I have hardcoded inside Python. Our vector database is Qdrant and we are using GPT-4-Turbo for this.

Now, I am supervising the prompt chain inside Portkey, which means that for each chain that runs, I can look through all items and see where in the chain an output occurred that I am not happy with, so that I can fix it for the future. The vector database query is the only one I don't have in Portkey, so I need to monitor its results somewhere else. Of course, while building this, I can write unit tests and test the prompt. But what I also found handy is that once the Portkey API call has been integrated into the code, I can fix a bug immediately; I don't have to rebuild the docker container for every prompt change and update our production application. Does that make sense?
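For context, the vector-database step looks roughly like this (a simplified sketch; the collection name, embedding model and standardisation prompt stand in for what is actually hardcoded in our Python code):

```python
from openai import OpenAI
from qdrant_client import QdrantClient

openai_client = OpenAI()
qdrant = QdrantClient(url="http://localhost:6333")

def retrieve_and_standardise(question: str) -> str:
    # Embed the incoming query and pull the closest external documents from Qdrant.
    vector = openai_client.embeddings.create(
        model="text-embedding-3-small", input=question
    ).data[0].embedding
    hits = qdrant.search(
        collection_name="external_sources", query_vector=vector, limit=5
    )
    docs = "\n\n".join(hit.payload.get("text", "") for hit in hits)

    # The hardcoded prompt: ask GPT-4-Turbo to rewrite the hits into the
    # standardised format the rest of the chain expects. This is the one call
    # that currently does not go through Portkey.
    response = openai_client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[
            {"role": "system",
             "content": "Rewrite the documents below into the standard format."},
            {"role": "user",
             "content": f"Question: {question}\n\nDocuments:\n{docs}"},
        ],
    )
    return response.choices[0].message.content
```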
Got it. Thanks for taking the time to explain!
  • Solution 1 is: We extend our monitoring capability (and proxy) to vector db queries
  • Solution 2 is: If your workflow involves a GPT-4 prompt that creates the final Qdrant query, the results returned from Qdrant can be appended as metadata to your GPT-4 request, and you can see those results in Portkey logs (see the sketch after this list). You can then debug your prompt, rerun the pipeline, and see right in the logs whether anything changes.
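Something along these lines, as a rough sketch of Solution 2: it assumes the Portkey gateway's x-portkey-metadata header and uses the OpenAI SDK pointed at the gateway; the header names, key names and truncation limit are illustrative and worth checking against the current docs:

```python
import json
from openai import OpenAI

# Point the OpenAI SDK at the Portkey gateway; header names assumed here,
# double-check against the current Portkey docs.
client = OpenAI(
    base_url="https://api.portkey.ai/v1",
    default_headers={
        "x-portkey-api-key": "<PORTKEY_API_KEY>",
        "x-portkey-provider": "openai",
    },
)

def standardise_with_metadata(question: str, qdrant_hits: list[dict]) -> str:
    # Attach the Qdrant results to this request so they show up next to the
    # GPT-4 call in Portkey logs (truncated to keep the header small).
    metadata = {"qdrant_results": json.dumps(qdrant_hits)[:2000]}
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{"role": "user", "content": question}],
        extra_headers={"x-portkey-metadata": json.dumps(metadata)},
    )
    return response.choices[0].message.content
```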
Does that make sense?
Yes, I am using a stable prompt containing variables, though, so Solution 1 would be it. I totally understand that this is not easy, and I would even be fine if the results got duplicated into Portkey: I would simply retrieve a variable (the prompt) from Portkey, but still have the results inside Portkey for monitoring. I just think the ability to monitor everything in one place is really a strong point of Portkey.
Hello @ekevu. Would love to chat, as I have also used a vector DB call within my prompt chain.
Thank you! Sharing this with the team. Will let you know if/when we pick this up!