Welcome to Portkey Forum

Updated 8 months ago

possibility of getting prompts directly through the SDK or APIs

Hi! We are considering starting to use the prompt library, but it would not be helpful for us without the possibility of getting prompts directly through the SDK or APIs. Any idea when this functionality will be available?
14 comments
Using the Prompt Library through the SDK looks like this:

TypeScript
import { Portkey } from 'portkey-ai'

const portkey = new Portkey({
    apiKey: "", // defaults to process.env["PORTKEY_API_KEY"]
})

// Make the prompt completion call with the template's variables
const promptCompletion = await portkey.prompts.completions.create({
    promptID: "pp-id",
    variables: { "variable1": "", "variable2": "" }
})

Did you mean something else when referring to the SDK or APIs for prompts created in the prompt library?
Hey Hans - to align, what you'd like is the ability to see the contents of the saved prompt template on Portkey through an API, right?
What I want is to use prompt templates from Portkey without using Portkey's chat completion function in the SDK, as I need to use OpenAI directly.
@Hans Magnus just published the docs for the render endpoint and included exactly the example you shared. Please check it out and let me know if this solves it for you:

https://portkey.ai/docs/product/prompt-library/retrieve-prompt-templates
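For anyone else landing here, a minimal sketch of that flow (rendering the saved template via the API and then calling OpenAI directly) could look like the below. The /prompts/{id}/render path, the x-portkey-api-key header, and the response shape are assumptions taken from the linked docs, so please verify against them; the prompt ID and variable names are placeholders.

TypeScript
import OpenAI from 'openai'

const PORTKEY_API_KEY = process.env.PORTKEY_API_KEY!
const PROMPT_ID = 'pp-id' // placeholder: ID of the saved prompt template

// Render the prompt template on Portkey, substituting the given variables
async function renderPrompt(variables: Record<string, string>) {
    const res = await fetch(`https://api.portkey.ai/v1/prompts/${PROMPT_ID}/render`, {
        method: 'POST',
        headers: {
            'x-portkey-api-key': PORTKEY_API_KEY,
            'Content-Type': 'application/json',
        },
        body: JSON.stringify({ variables }),
    })
    if (!res.ok) throw new Error(`Render failed with status ${res.status}`)
    const json = await res.json()
    return json.data // assumed shape: { model, messages, ...otherParams }
}

async function main() {
    const prompt = await renderPrompt({ variable1: 'foo', variable2: 'bar' })

    // Pass the rendered messages straight to the OpenAI SDK, bypassing the Portkey gateway
    const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY })
    const completion = await openai.chat.completions.create({
        model: prompt.model,
        messages: prompt.messages,
    })
    console.log(completion.choices[0].message)
}

main()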
This seems great! Will test it out tomorrow! πŸš€
any reason you'd not like to use OpenAI through Portkey? The observability and config benefits could be really useful on top of prompts
The reason is that I have not seen any benefits that justify the rewrite. What other observability functionality is triggered through the SDK vs the proxy? I already use configs in Portkey, and that's not tied to the SDK either, is it?
Yep, everything is separate - so you can plug in the modules as you'd like. The key benefit you get by managing your prompts on Portkey is the ability to version changes and have a granular publishing flow.

So, your prompt engineers can come to Portkey, try out the prompts in a sandbox environment to make sure that they are optimised with the relevant parameters etc, and you can then just deploy that prompt with a single prompt_id instead of having to manage the whole bulky prompt text in your code.

This especially helps when you want to make changes to your prompts - the prompt engineer can just test things again and publish the new prompt template, meanwhile your production code automatically gets updated to the latest version without you having to change anything.
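As a rough illustration (the prompt ID and variable name below are placeholders), production code only references the published template, so shipping a new version from the Portkey UI doesn't require any code change:

TypeScript
import { Portkey } from 'portkey-ai'

const portkey = new Portkey({ apiKey: process.env.PORTKEY_API_KEY })

// Only the prompt ID lives in production code; whichever template version is
// currently published on Portkey is what gets rendered and sent to the model.
const completion = await portkey.prompts.completions.create({
    promptID: "pp-id",                   // placeholder ID of the published template
    variables: { customer_name: "Ada" }, // hypothetical variable defined in the template
})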
I agree that this is an opportunity, which is why I asked for the API in the first place. Another reason I use OpenAI directly is that we use the Vercel AI SDK, and the compatibility is better with OpenAI and non-existent with Portkey.

I have tested multiple different prompt solutions as well, but I wanted a single solution and landed on Portkey for now. Here is a list of the tools I've tested, if you're curious: https://www.gantry.io/, https://www.vellum.ai/, https://humanloop.com/, https://klu.ai/, https://orquesta.cloud/, https://www.langchain.com/langsmith, https://lunary.ai/
Interesting, thanks for sharing! And love that Portkey is working well for you!

We should think about doing a deep Vercel AI SDK integration πŸ˜„ cc @rohit
Thanks! Would love to know what you liked in Portkey and what you'd like to see, given the deep product research you've done.
@rohit

Main reasons:
  1. Flexibility in usage, with an API-first mindset over SDK underpinning that
  2. Great gateway
  3. Easy to work with together with the Vercel AI SDK
What I'd say I miss from Vellum and the others is better prompt support:
  1. Support for multiple environments (dev, preview, prod, or similar)
  2. Better connection to observability
  3. Realtime collaboration
  4. Better playground; most of the others have good playgrounds for working with prompts and comparing them, and Vellum might be best here
  5. Better UX (many people working on prompts and observability are not engineers)

Look at LangSmith for how they do tracing; it feels smooth.
Another one would be storing traces (logs) as reusable test scripts, which would make it easier to compare the results of different runs and test multiple models in one go.
Thanks so much for the feedback here. This is awesome!