Hey Portkey-Team!
We have two new issues:
- I just created a Vertex AI virtual key. The Gemini models work fine, but the Anthropic models fail. It seems you are using the wrong endpoint: "vertex-ai error: Publisher Model projects/celtic-pulsar-437515-n6/locations/europe-west3/publishers/google/models/claude-3-5-sonnet@20240620 not found." The right endpoint is "publishers/anthropic/models/claude-3-5-sonnet". We really need this, so it would be great if you could fix it soon!
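  For illustration, here is roughly the difference we'd expect (the URL template and the helper are just a sketch based on the error message above, not Portkey internals):

  ```python
  # Sketch of the publisher-path difference. Project/region/model IDs are
  # taken from the error message above; the URL template is an assumption.
  BASE = ("https://europe-west3-aiplatform.googleapis.com/v1/"
          "projects/celtic-pulsar-437515-n6/locations/europe-west3")

  def model_endpoint(publisher: str, model: str) -> str:
      """Build a Vertex AI publisher-model path for the given publisher."""
      return f"{BASE}/publishers/{publisher}/models/{model}"

  # What the gateway seems to call today (fails -- Claude is not a Google model):
  wrong = model_endpoint("google", "claude-3-5-sonnet@20240620")
  # What it should call instead:
  right = model_endpoint("anthropic", "claude-3-5-sonnet@20240620")
  ```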
- We just discovered that prompt versioning only covers the prompt text itself, not the model selection. This makes it much less useful for us, as we regularly test and switch models to find the best fit for our LLM calls, and prompts are often engineered for specific models as well. We really need to be able to select different models in different versions and tags, too. For example, Azure OpenAI responses are not 100% identical to OpenAI responses, which caused our PROD environment to go down after we switched one of our prompts to another virtual key for our DEV environment. With clear separation/isolation of model selection via versions and tags, this wouldn't have happened: the problem would have been caught in isolation in our DEV environment without spilling over to PROD (Published).