Welcome to Portkey Forum

Hi there, I am trying to set up an inference profile on Portkey. I even upgraded to the Production plan, but I'm unable to figure out where and how to set this up. The docs say we will get an option, but I can't find it.
https://portkey.ai/docs/product/ai-gateway/virtual-keys/bedrock-amazon-assumed-role
3 comments
[
  {
    "role": "system",
    "content": "My System Prompt"
  },
  {
    "role": "user",
    "content": [
      {
        "type": "text",
        "text": "when does the flight from baroda to bangalore land tomorrow, what time, what is its flight number, and what is its baggage belt?"
      }
    ]
  },
  {
    "role": "assistant",
    "content": [
      {
        "type": "tool_use",
        "id": "toolu_018fPMpEdPiCjkK8AkfCXgLh",
        "name": "send_message_to_agent",
        "input": {
          "agent_name": "aggreg-agent",
          "message": "WHAT: flight information from Baroda to Bangalore for tomorrow\nWHEN: tomorrow\nWHERE: Baroda to Bangalore"
        }
      }
    ]
  },
  {
    "role": "user",
    "content": [
      {
        "type": "tool_result",
        "tool_use_id": "toolu_018fPMpEdPiCjkK8AkfCXgLh",
        "content": "tool call successful"
      }
    ]
  }
]
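A quick sanity check for a transcript like the one above (this is a generic helper, not a Portkey API): every tool_result block must reference the id of an earlier tool_use block, or the provider will reject the message.

```python
def unmatched_tool_results(messages):
    """Return tool_result ids that have no corresponding tool_use block."""
    tool_use_ids = set()
    unmatched = []
    for msg in messages:
        content = msg.get("content")
        if not isinstance(content, list):
            continue  # plain string content has no tool blocks
        for block in content:
            if block.get("type") == "tool_use":
                tool_use_ids.add(block["id"])
            elif block.get("type") == "tool_result":
                if block["tool_use_id"] not in tool_use_ids:
                    unmatched.append(block["tool_use_id"])
    return unmatched

# Minimal version of the transcript above: one tool_use, one matching tool_result.
messages = [
    {"role": "assistant", "content": [
        {"type": "tool_use", "id": "toolu_018fPMpEdPiCjkK8AkfCXgLh",
         "name": "send_message_to_agent", "input": {}},
    ]},
    {"role": "user", "content": [
        {"type": "tool_result", "content": "tool call successful",
         "tool_use_id": "toolu_018fPMpEdPiCjkK8AkfCXgLh"},
    ]},
]
```

An empty result means every tool_result is paired correctly.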
33 comments
I’m using Sonnet v2 with AWS Bedrock in the us-east-1 region. It seems that invocation is only supported with an inference profile for Sonnet v2, and I need to set this as the model ID: us.anthropic.claude-3-5-sonnet-20241022-v2:0.

However, in Portkey all the model IDs start with anthropic.XXX instead of us.XXX. How do I configure Portkey with an inference profile?
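A minimal sketch of one workaround to try (hedged: whether the gateway forwards an arbitrary model string to Bedrock unchanged is an assumption, and the key values are placeholders): send the inference-profile ID itself as the model in an OpenAI-compatible request, instead of picking an anthropic.-prefixed ID from the dropdown.

```python
def bedrock_profile_request(profile_id: str, user_text: str) -> dict:
    """Assemble headers and body for a chat call through the Portkey gateway."""
    return {
        "url": "https://api.portkey.ai/v1/chat/completions",
        "headers": {
            "x-portkey-api-key": "PORTKEY_API_KEY",          # placeholder
            "x-portkey-virtual-key": "BEDROCK_VIRTUAL_KEY",  # placeholder
            "Content-Type": "application/json",
        },
        "body": {
            # The "us."-prefixed inference-profile ID goes straight into "model".
            "model": profile_id,
            "messages": [{"role": "user", "content": user_text}],
        },
    }

req = bedrock_profile_request(
    "us.anthropic.claude-3-5-sonnet-20241022-v2:0", "Hello"
)
```

The dict can then be posted with any HTTP client; if the gateway rewrites or rejects the us.-prefixed ID, that points to a gateway-side restriction rather than a request problem.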
4 comments
To use prompt caching with Anthropic, do we just include the prompt-caching cache type "ephemeral" as they have specified?
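For reference, this is the shape of the cache_control marker from Anthropic's own prompt-caching docs; whether the gateway forwards it untouched is an assumption worth verifying in the logs.

```python
def cached_system_block(text: str) -> dict:
    """A system content block marked for ephemeral prompt caching (Anthropic-style)."""
    return {
        "type": "text",
        "text": text,
        "cache_control": {"type": "ephemeral"},
    }

block = cached_system_block("My long, reusable system prompt")
```

The block goes into the system (or message content) array in place of a plain string.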
2 comments
Getting an error I haven't been getting before with Portkey
2 comments
We recently upgraded to the Pro plan, and the cost we see in the analytics dashboard doesn't match up with our API provider's billing. Also, the metadata tab shows a different number, which is more or less close to what the provider shows. What is the expected behaviour of the analytics tab on the dashboard?
3 comments
Not able to create virtual Keys
41 comments
ERROR:app.services.ai.portkey.client:Error in embedding batch 192 to 288: Error code: 500 - {'error': {'message': 'cohere error: internal server error, this has been reported to our developers. id 2f67ae23bc584dda09f0ade7ffe165d2', 'type': None, 'param': None, 'code': None}, 'provider': 'cohere'}
6 comments
Is it possible to create a virtual key with a custom host provider?

We use a lesser-known cloud provider with Llama 3 hosted there, and we have a custom URL to reach the service, but I can't find a way to create a virtual key in this context.
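One approach to sketch (hedged: check the current Portkey docs for the exact header names, and the URL below is a made-up placeholder): skip the virtual key and point the gateway at the provider's endpoint with a custom-host header, treating the hosted Llama 3 endpoint as OpenAI-compatible.

```python
def custom_host_headers(api_key: str, host: str) -> dict:
    """Request headers for routing Portkey to a self-described OpenAI-compatible host."""
    return {
        "x-portkey-api-key": api_key,
        "x-portkey-provider": "openai",     # treat the endpoint as OpenAI-compatible
        "x-portkey-custom-host": host,      # the provider's custom base URL
        "Content-Type": "application/json",
    }

headers = custom_host_headers("PORTKEY_API_KEY", "https://llama3.example-cloud.com/v1")
```

The provider's own auth header would be added alongside these if the endpoint requires one.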
11 comments
I have a problem using a virtual key for AzureOpenAI:
20 comments
Hey, is it possible to use Portkey Gateway proxy with batch requests?
For example, OpenAI and Vertex both support a batch API, but with different formats.
In both platforms you can upload a JSONL file and then call a batch request, the batch will be processed and returned.
Is there a way to do something like this in Portkey, but in a unified format using the OpenAI client?
3 comments
Hey, is Search Grounding on Gemini API supported in Portkey?

I am trying to add this to my config:
Plain Text
"tools": {
    "google_search_retrieval": {
        "dynamic_retrieval_config": {
            "mode": "MODE_DYNAMIC",
            "dynamic_threshold": 0.5
        }
    }
},
And I get this: line 35: Property tools is not allowed.

Am I on the wrong track here or is it unsupported? Thanks!
2 comments
https://portkey.ai/docs/api-reference/admin-api/control-plane/virtual-keys/create-virtual-key
Can this not be triggered on non-enterprise plans?

Plain Text
import requests

url = "https://api.portkey.ai/v1/virtual-keys"

payload = {
    "provider": "azure-openai",
    "key": "openai-test",
    "name": "Key 1 Azure Open AI",
    "note": "description",
    "apiVersion": "a",
    "deploymentName": "b",
    "resourceName": "c"
}
headers = {
    "x-portkey-api-key": "****",
    "Content-Type": "application/json"
}

response = requests.request("POST", url, json=payload, headers=headers)

print(response.text)

>>>{
  "success": false,
  "data": {
    "message": "You do not have enough permissions to execute this request",
    "errorCode": "AB03",
    "request_id": "27c675d5-c322-4f7e-89cc-064c26c913f9"
  }
}
4 comments
Hey, I cannot seem to get portkey to work with langchain for google's models.
This is a sample of what I am using
Plain Text
from langchain_openai import ChatOpenAI
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL

PORTKEY_API_KEY = "..."
VIRTUAL_KEY = "..."  # Virtual Key I created

portkey_headers = createHeaders(api_key=PORTKEY_API_KEY, virtual_key=VIRTUAL_KEY)

llm = ChatOpenAI(api_key="x", base_url=PORTKEY_GATEWAY_URL, default_headers=portkey_headers, model="gemini-1.5-pro")

llm.invoke("What is the meaning of life, universe and everything?")


I have attached the trace back.
I am on the latest version of all the packages, as I ran pip install -U langchain-core portkey_ai langchain-openai before starting.
1 comment
Hi
How are the costs of Gemini Vertex AI calls calculated? Are you using the pricing mentioned in https://cloud.google.com/vertex-ai/generative-ai/pricing?
I see that the costs in portkey logs are higher than what I see in GCP billing reports.
7 comments
Search filters appear to have been down for many days; I run into this very often. Any ideas, team?
5 comments
Hello,

I am a dev at an organization and created an account to test it out. We liked the possibilities of the platform and want to upgrade now. I invited someone who can upgrade the account, but I can't make him the owner of the organization, and as a result he cannot upgrade the account.
How can I switch the owner of an org?
18 comments
Nope, using OpenAI with a virtual key
9 comments
Uhm, I think I found a bug.

I just added OpenRouter and used Run Test Request in the Getting Started area.
5 comments
Raj

dify ai

How do I get Portkey to be the model provider in dify.ai?
2 comments
How can I support custom models with Portkey?
14 comments
Hello everyone, I am testing Portkey at the moment.

It looks pretty promising.

I use Rust with the async-openai crate, and now I get some errors that I don't know how to fix.

with this setup:
Plain Text
let config = OpenAIConfig::new()
    .with_api_base("https://api.portkey.ai/v1")
    .with_api_key("portkeyApiKey");

println!("using Portkey");

let openai_client = Arc::new(OpenAIClient::with_config(config));


i am getting this error:
Plain Text
ERROR async_openai::error: 71: failed deserialization of: {"status":"failure","message":"Either x-portkey-config or x-portkey-provider header is required"}


What do I need exactly?
I could create a custom HTTP client, but what do these headers mean? I can't find them in the documentation.
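The error itself names the fix: the gateway wants either an x-portkey-provider header (plus the provider's own auth) or an x-portkey-config header on every request, whatever the client language. A sketch of the minimal header set (values are placeholders):

```python
def portkey_default_headers(portkey_key: str, provider: str, provider_key: str) -> dict:
    """Headers the Portkey gateway expects when no config is referenced."""
    return {
        "x-portkey-api-key": portkey_key,
        "x-portkey-provider": provider,             # e.g. "openai"
        "Authorization": f"Bearer {provider_key}",  # the upstream provider's own key
    }

hdrs = portkey_default_headers("portkeyApiKey", "openai", "sk-...")
```

In the Rust setup above, these would go on as default headers of a custom HTTP client, as the poster suggests; alternatively, a saved Portkey config ID can be sent via x-portkey-config instead of the provider pair.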
3 comments
Paul Cowgill

CrewAI

Hi, I followed the instructions for using Portkey with CrewAI and Python here https://portkey.ai/docs/integrations/agents/crewai#crewai, but the events aren't coming through. Any debugging tips?
12 comments
Hey, just had a question: if I am self-hosting Portkey using a Docker container, is there a way to still get all of the logs/metrics?
5 comments
Why isn't it possible to define a config (fallbacks, load balancing, etc.) or link to a config ID in a prompt template? If you can select a single virtual key, you should also be able to select a config to use. In my opinion, both features only make sense combined; otherwise you still have to override some things in code and cannot properly use the UI-based prompt management to iterate on prompt configurations. Or am I missing something?
5 comments