Welcome to Portkey Forum


Nope, using OpenAI with a virtual key
Did you attach any default config to your API key?
[Attachment: image.png]
If the config has an overriding model, it will override the param in your request
OK, I don't think so. I don't have the option of using a config with the API key. Maybe that's because I'm still on the free plan? Still testing...
Request: {"mode":"completion","temperature":0.69999999999999996,"model":"gpt-4o"}

Response: "object":"chat.completion","created":1732085564,"model":"gpt-3.5-turbo-0125",
That's strange. Can you attach your code snippet / HTTP cURL?
curl -X POST "https://api.portkey.ai/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "x-portkey-provider: openai" \
  -H "x-portkey-virtual-key: YOUR_OPENAI_API_KEY" \
  -H "x-portkey-trace-id: assistant" \
  -d '{
    "config": {
      "model": "gpt-4o",
      "mode": "completion",
      "temperature": 0.7
    },
    "messages": [
      {
        "role": "user",
        "content": "System: prompt"
      }
    ],
    "trace": {
      "user_id": "assistant",
      "metadata": {
        "app_version": "1.0",
        "feature": "chat"
      }
    }
  }'
Ahh, you need to pass the config as a header, not as part of the body. To achieve the effect you're trying to get, you should do:
Plain Text
--header 'x-portkey-config: {"virtual_key": "openai-naren-362aeb", "override_params": {"model": "gpt-4o", "temperature": 0.7}}' \
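Putting that suggestion together, a corrected request might look like this (a sketch: the API key, virtual key, and message content are placeholders, and the model/temperature now travel via `override_params` in the `x-portkey-config` header instead of a `config` object in the body):

```shell
curl -X POST "https://api.portkey.ai/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H 'x-portkey-config: {"virtual_key": "YOUR_VIRTUAL_KEY", "override_params": {"model": "gpt-4o", "temperature": 0.7}}' \
  -d '{
    "messages": [
      {"role": "user", "content": "Hello"}
    ]
  }'
```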
Cool, thanks. Will try that 🙂
And "completion" is not a supported mode
"mode" is for the routing strategy:
Plain Text
      "mode": {
        "type": "string",
        "enum": [
          "single",
          "loadbalance",
          "fallback"
        ]
      },

Check out the supported JSON schema here: https://portkey.ai/docs/api-reference/inference-api/config-object#json-schema
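For instance, a fallback config could look something like this (a sketch: only the top-level `mode` value comes from the enum snippet above; the `options` and `virtual_key` field names and key values are assumptions, so check the linked schema for the exact shape):

```json
{
  "mode": "fallback",
  "options": [
    { "virtual_key": "openai-virtual-key-1" },
    { "virtual_key": "azure-virtual-key-1" }
  ]
}
```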