Welcome to Portkey Forum


Specifying Location for Gemini Models in Portkey

Hello all, I just discovered Portkey and it looks exciting. I'm trying to use Gemini models, but I only have quota in us-central1. How do I specify the location in Portkey calls for Gemini models?

If I use OpenAI library directly, Google says I have to do this:

OpenAI Client

import openai

client = openai.OpenAI(
    base_url=f"https://{location}-aiplatform.googleapis.com/v1/projects/{project_id}/locations/{location}/endpoints/openapi",
    api_key=credentials.token,
)

response = client.chat.completions.create(
    model="google/gemini-1.5-flash-002",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)


Does the same work with Portkey? That is, if I specify the location and project_id in the base_url, will everything be taken care of?
2 comments
Hi @KrisD, not in this case. You can continue using the OpenAI SDK, and your base_url will always be https://api.portkey.ai/v1.

But you can pass your Vertex details like this:

Plain Text

import openai

client = openai.OpenAI(
    base_url="https://api.portkey.ai/v1",
    api_key=credentials.token,
    default_headers={
        "x-portkey-api-key": "YOUR_PORTKEY_API_KEY",
        "x-portkey-provider": "vertex-ai",
        "x-portkey-vertex-project-id": "YOUR_VERTEX_PROJECT_ID",
        "x-portkey-vertex-region": "us-central1",
    },
)

response = client.chat.completions.create(
    model="google/gemini-1.5-flash-002",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
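For completeness, here is a minimal sketch of how the `credentials.token` value used above can be obtained with the google-auth library, plus a small helper that assembles the Portkey headers. The function names `get_vertex_token` and `portkey_vertex_headers` are hypothetical helpers for illustration, not Portkey or Google APIs; the sketch assumes Application Default Credentials are configured on your machine.

```python
def get_vertex_token():
    # Assumes google-auth is installed and Application Default Credentials
    # are set up (e.g. via `gcloud auth application-default login`).
    import google.auth
    import google.auth.transport.requests

    credentials, project_id = google.auth.default(
        scopes=["https://www.googleapis.com/auth/cloud-platform"]
    )
    # Vertex access tokens are short-lived; refresh to populate .token.
    credentials.refresh(google.auth.transport.requests.Request())
    return credentials.token, project_id


def portkey_vertex_headers(portkey_api_key, project_id, region="us-central1"):
    # Header names taken from the reply above; region defaults to
    # us-central1 per the original question.
    return {
        "x-portkey-api-key": portkey_api_key,
        "x-portkey-provider": "vertex-ai",
        "x-portkey-vertex-project-id": project_id,
        "x-portkey-vertex-region": region,
    }
```

Note that the token expires (typically after about an hour), so long-running processes should call the refresh step again before reusing it.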
Thanks for the reply, Vrushank. Appreciate the help, will try this 🙂