
Updated 2 months ago

Anthropic models on Vertex and prompt versioning with models

Hey Portkey-Team!
We have two new issues: 😉

  1. I just created a Vertex AI virtual key. The Gemini models work fine, but the Anthropic models fail. It seems you are using the wrong endpoint: "vertex-ai error: Publisher Model projects/celtic-pulsar-437515-n6/locations/europe-west3/publishers/google/models/claude-3-5-sonnet@20240620 not found."
The right endpoint is "publishers/anthropic/models/claude-3-5-sonnet". We really need this, so it would be great if you could fix it soon!

  2. We just discovered that prompt versioning only covers the actual prompt, not the model selection. This is quite useless for us, as we regularly test and switch models to best handle the LLM calls, and prompts are often engineered for certain models as well. We really need to be able to select different models in different versions and tags too. E.g. we have the problem that the Azure OpenAI responses are not 100% identical to the OpenAI responses, which caused our PROD environment to go down after we switched one of our prompts to another virtual key for our dev environment. With a clear separation/isolation of the model selection via versions and tags this wouldn't have happened, as the problem would have been discovered in isolation in our DEV environment without spilling over to PROD (Published).
34 comments
Hey @Venge, sorry if the documentation was missing this info, but for Anthropic models hosted on Vertex you need to pass the model this way:
anthropic.claude-3-5-sonnet@20240620
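For reference, a minimal sketch of how that looks via the SDK (the virtual key name here is just a placeholder, and the call assumes a standard chat completion request):
Python
from portkey_ai import Portkey

# Sketch: calling an Anthropic model hosted on Vertex AI through Portkey.
# "vertex-dev" is a placeholder for your Vertex AI virtual key.
portkey = Portkey(
    api_key="{YOUR_API_KEY}",
    virtual_key="vertex-dev",
)

completion = portkey.chat.completions.create(
    model="anthropic.claude-3-5-sonnet@20240620",  # note the "anthropic." prefix
    messages=[{"role": "user", "content": "Hello!"}],
)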
Hey Dobby. Not sure how to achieve this without a manual override via code.

The virtual key just contains the region; there's no way to specify an endpoint or the model. In the prompt I just select the key and model from a dropdown, so there's no way to add this prefix there.
Will fix the UI for this, but this should work when used with the SDK
But I get you want to save the prompt, will fix the UI and let you know
Thanks. Do you have an ETA for this fix? I would rather avoid manual model overrides for prompts in code...
expect it by tomorrow
Perfect! thanks!
About the 2nd suggestion: do you have plans to include the key & model selection in the versioning? (In my opinion this is a necessary change for versioning to actually work.)
will discuss it and get back
@Venge this is already the default behaviour, you can use it like so:
Python
from portkey_ai import Portkey

portkey = Portkey(
    api_key="{YOUR_API_KEY}",
)

completion = portkey.prompts.completions.create(
    prompt_id="pp-capitals-d13078",
    variables={ "country": "North Korea" }
)
the model and the virtual key you used while creating the prompt are used for this request
You can override the virtual_key if you need to by setting it in the config object and passing that to the client like this:
Python
...

portkey = Portkey(
    api_key="{PORTKEY_KEY}",
    config={
        "override_params": {
            "model": "gpt-4o"
        },
        "virtual_key": "overriding_key"
    } # alternatively pass the saved config key
)

...
I don't understand. How does this relate to versioning and tagging of prompts?

To make it more clear:
We iterate on our prompts, creating new versions from time to time. The "Published" version is fetched from the PROD environment. The other versions (not yet published) are for Dev and Staging (we need more tags here; I already requested a couple of weeks ago to be able to explicitly tag STAGING and DEV without implicitly relying on the LATEST tag), where we optimize prompts. Part of this optimization includes model selection. If we switch to another virtual key/model and update the prompt, this switch is not bound to the current/new version of the prompt, but has a global effect on the prompt configuration. Therefore it also affects the "Published" version on PROD (with a wrong prompt not tailored to the newly selected model), which in our case broke production.

Versioning should/must cover the complete prompt configuration, not only a part of it.

Again: I don't want to ever override any prompt settings in code. The whole purpose of using Prompt management is to iterate on prompt configuration OUTSIDE the code, just referencing the config from code via the prompt id.
okay, what I understand is that you want to be able to use an unpublished version of your prompt in dev
I'll check if this is something we want to do, but you should ideally be using different workspaces for dev and prod
https://portkey.ai/docs/product/enterprise-offering/org-management/workspaces#workspaces
You ought to create a separate prompt in the dev workspace and use it for the iteration cycle instead of doing that in production
Yes and no. 🙂
  1. Yes, I want to do exactly this. Using different workspaces or projects would work, but manually copying over prompts is error-prone and clumsy, so this is not the right way. If this is the planned workflow, then there needs to be a promotion workflow for prompt configurations that automatically copies configs from DEV -> STAGING -> PROD, otherwise using a Staging environment is quite useless. For the same reason you don't put your dev, staging and prod code into three separate git repositories: you need to ensure technically that a deployment in PROD is practically identical to STAGING. The same goes for the prompt configuration. Manually syncing them (by copying over) is not feasible in a professional context. And ideally this promotion (if not done via tagging in the same project/workspace) needs to support CI/CD via an API call.
Using tags for stages is a much simpler and less error-prone approach. That's why other tools recommend that approach (https://langfuse.com/faq/all/managing-different-environments) and currently do this much better than Portkey.

  2. Apart from this use case: versioning of prompts makes no sense if the model selection is not part of it. Prompts are optimized for models. Switching back to an older version without switching to the model that was active when the prompt was defined will not yield the same results, neither syntactically nor in terms of content (as it's a different model). Therefore versioning the full configuration of a prompt (everything you see and can update on the screen) is the natural way.
Any news about the Anthropic endpoint bug? I still get the error message.
This is how it looks in Langfuse. Very convenient. Every stage can pull the right version of every prompt. Easy to iterate and promote improvements to later stages.
Attachment: screenshot_256.png
will do it today and let you know
got it!
So let me summarize this:
You want to be able to add custom tags to the prompts, and to have different virtual keys bound to the chosen version.
Both are actually useful suggestions tho, thanks a ton, we'll look into it
binding different virtual keys we'll definitely be doing.
also we'll allow adding custom tags.
Currently you can edit the model (not the virtual key) and prompt, and use an unpublished version like this, with 'promptId@versionNumber':
Python
completion = portkey.prompts.completions.create(
    prompt_id="pp-capitals-d13078@3",
    variables={ "country": "North Korea" }
)
Yep exactly!
Great! The custom tags I already discussed with Vrushank a while ago. Thanks for implementing it! There's not much left until Portkey is really production-ready for us... 🙂
About the version: yep, I know. We currently pull the PUBLISHED version for PROD and the @LATEST version for dev and staging. Versions could be a workaround for a third stage, but explicit tags would be much more convenient.
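To illustrate, a rough sketch of our setup (it assumes the bare prompt ID resolves to the published version and that a "@latest" suffix resolves to the latest draft, analogous to the "@versionNumber" suffix above):
Python
from portkey_ai import Portkey

portkey = Portkey(api_key="{PORTKEY_KEY}")

# Rough sketch: selecting a prompt version per stage.
# Assumptions: a bare prompt_id resolves to the PUBLISHED version,
# "@latest" to the latest draft, and "@<number>" pins a specific version.
PROMPT_ID_BY_STAGE = {
    "prod": "pp-capitals-d13078",          # published version
    "staging": "pp-capitals-d13078@3",     # pinned version as a workaround
    "dev": "pp-capitals-d13078@latest",    # latest draft
}

completion = portkey.prompts.completions.create(
    prompt_id=PROMPT_ID_BY_STAGE["dev"],
    variables={ "country": "North Korea" }
)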
Anthropic models on Vertex have been fixed
Thanks! I've just seen it! Thanks for the quick help. I still get some error but this might also be connected to our account and rate limits. Will check this and get back to you.
Maybe another quick idea: not all models are available in all regions. E.g. I selected Claude 3.5 in europe-west3, but it's not available there, resulting in another NOT_FOUND error message. Of course that's not your fault 🙂 But it might help to filter the model selection in the UI to only show models which are available in the selected region. The same might be true for providers other than Vertex AI.