Welcome to Portkey Forum

Venge
Joined November 4, 2024
Why isn't it possible to define a config (fallbacks, load balancing, etc.) or link to a config ID in a prompt template? If you can select a single virtual key, you should also be able to select a config. In my opinion the two features only make sense combined; otherwise you still have to override settings in code and can't use the UI-based prompt management to properly iterate on prompt configurations. Or am I missing something?
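For context, the kind of config I mean looks roughly like this (a minimal sketch; the virtual-key names are placeholders). Today it has to be attached in code at request time instead of being selectable in the prompt template UI:

```json
{
  "strategy": { "mode": "fallback" },
  "targets": [
    { "virtual_key": "openai-prod-key" },
    { "virtual_key": "azure-backup-key" }
  ]
}
```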
5 comments
Hey Portkey-Team!
We have two new issues: 😉

  1. I just created a Vertex AI virtual key. The Gemini models work fine, but the Anthropic models fail. It seems you are using the wrong endpoint: "vertex-ai error: Publisher Model projects/celtic-pulsar-437515-n6/locations/europe-west3/publishers/google/models/claude-3-5-sonnet@20240620 not found."
The right endpoint is "publishers/anthropic/models/claude-3-5-sonnet". We really need this; it would be great if you could fix it soon!
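To illustrate (a sketch with placeholder project and region, not Portkey's internal code): the Vertex AI resource path for Anthropic models uses the `anthropic` publisher segment, while the error above shows a `google` one being built:

```python
# Sketch: how the Vertex AI publisher-model resource path is built.
# Project ID and region are placeholders.
PROJECT = "my-project"
REGION = "europe-west3"

def vertex_model_path(publisher: str, model: str) -> str:
    """Build the Vertex AI publisher-model resource path."""
    return (f"projects/{PROJECT}/locations/{REGION}/"
            f"publishers/{publisher}/models/{model}")

# What the gateway currently sends (wrong publisher for Claude):
wrong = vertex_model_path("google", "claude-3-5-sonnet@20240620")
# What Vertex AI actually expects for Anthropic models:
right = vertex_model_path("anthropic", "claude-3-5-sonnet@20240620")
```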

  2. We just discovered that prompt versioning only covers the actual prompt, not the model selection. This makes it nearly useless for us, as we regularly test and switch models to find the best fit for each LLM call, and prompts are often engineered for specific models anyway. We really need to be able to select different models per version and tag. For example, Azure OpenAI responses are not 100% identical to OpenAI responses, which caused our PROD environment to go down after we switched one of our prompts to another virtual key for our DEV environment. With clear separation/isolation of model selection via versions and tags this wouldn't have happened: the problem would have been discovered in DEV without spilling over to PROD (Published).
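As a stopgap, we now pin the model and virtual key alongside each prompt version in our own application code, so switching a prompt version can never silently change the model. A minimal sketch (all IDs are placeholders; this is our workaround, not a Portkey feature):

```python
# Workaround sketch: bind model + virtual key to each prompt version in
# app code, since prompt versions don't capture model selection yet.
# All identifiers below are placeholders.
PROMPT_BINDINGS = {
    "summarize@v3":     {"virtual_key": "openai-prod", "model": "gpt-4o"},
    "summarize@v4-dev": {"virtual_key": "azure-dev",   "model": "gpt-4o"},
}

def resolve(prompt_version: str) -> dict:
    """Return the model binding pinned to a given prompt version."""
    return PROMPT_BINDINGS[prompt_version]
```

This keeps DEV and PROD isolated: promoting a prompt version means explicitly promoting its model binding too.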
34 comments