Welcome to Portkey Forum

Hello, how can I transfer ownership?
1 comment
I'm trying OpenRouter as a ChatGPT replacement for experimenting with multiple models through a chat UI, but it keeps getting stuck. Does anyone know of better alternatives where I can provide my own LLM model keys and use them through a UI in a web browser?

Also, curious: which is the best model for advanced maths problems?
4 comments
So, after setting up the gateway, I still need to set up an account with Portkey.ai? 🤔
1 comment
I'm getting this error when trying to add a Nebius virtual key on Portkey. Any suggestions on how to add one?
1 comment
Maybe a crazy idea, but it would be super cool to see a processed dataframe made from a generated JSON object.

https://www.perplexity.ai/search/can-you-make-a-dataframe-with-h.Co2VATR4iw9qHpm8kLGw
1 comment
Hey folks, I had to set up load balancing for Perplexity (a good problem to have). It seems to work; I see "Loadbalancer active" in the dashboard. But can I please trouble you to double-check my config?

I have a hunch that I should be able to simplify it and wouldn't need to input virt_key_1 twice, but I'm not sure how.

{
  "virtual_key": "virt_key_1",
  "cache": {
    "mode": "semantic",
    "max_age": 10000
  },
  "retry": {
    "attempts": 5,
    "on_status_codes": [429]
  },
  "strategy": {
    "mode": "loadbalance"
  },
  "targets": [
    { "virtual_key": "virt_key_1" },
    { "virtual_key": "virt_key_2" }
  ]
}
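If it helps, here is a possible simplification (a sketch only; worth confirming against Portkey's config docs that root-level `cache` and `retry` cascade to all loadbalance targets). With `strategy.mode` set to `loadbalance`, requests are routed to the entries in `targets`, so the top-level `virtual_key` is likely redundant:

```json
{
  "cache": { "mode": "semantic", "max_age": 10000 },
  "retry": { "attempts": 5, "on_status_codes": [429] },
  "strategy": { "mode": "loadbalance" },
  "targets": [
    { "virtual_key": "virt_key_1" },
    { "virtual_key": "virt_key_2" }
  ]
}
```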
12 comments
So we'd like a way to govern access to models and prompts (and billing, of course) for each user/role.
10 comments
The open-webui integration uses a Portkey API key (to connect to Portkey) and a virtual key (to access the models, which makes sense). But using a single virtual key means that traffic from all users is counted against that single key, and I can't track or tune per user, correct?
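Per-user attribution should be possible even with one shared virtual key, assuming Portkey's request metadata feature (the `x-portkey-metadata` header with the reserved `_user` field) works as documented. A minimal sketch in Python, with all key values as placeholders:

```python
import json

def portkey_headers(api_key: str, virtual_key: str, user_id: str) -> dict:
    """Build headers for an OpenAI-compatible call routed through Portkey.

    Attaching the calling user via metadata lets Portkey's logs and
    analytics be filtered per user even when every request shares one
    virtual key. (Sketch only; confirm header names in Portkey's docs.)
    """
    return {
        "x-portkey-api-key": api_key,
        "x-portkey-virtual-key": virtual_key,
        # "_user" is Portkey's reserved metadata field for user attribution.
        "x-portkey-metadata": json.dumps({"_user": user_id}),
    }

headers = portkey_headers("PORTKEY_API_KEY", "VIRTUAL_KEY", "alice")
print(headers["x-portkey-metadata"])
```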
2 comments
Hello!
I'm trying to integrate Portkey with open-webui, following the instructions here, but I'm having issues making it work with my Azure OpenAI subscription.

What does this error message mean? 🤔
4 comments
Hello, where can I see my invoices?
1 comment
souzagaabriel · n8n

I would love to use Portkey, but today we use it inside n8n in a structure (a ready-made node) that requires the same API structure as OpenAI. However, a call to the Portkey platform requires passing the key in the headers, and since the node doesn't support that, it's unfeasible for us. Do you have any suggestions?
4 comments
I am using the Llama-3.3-Vision-Llama-Turbo model for some prompts, but I am getting a cost of 0 cents in Portkey. @Vrushank | Portkey this seems like a small pricing bug.
3 comments
Hi, another question: it looks like after the addition of the help page (https://github.com/Portkey-AI/gateway/pull/692), the VS Code debug config (https://github.com/Portkey-AI/gateway/pull/602) no longer works for me, as the build does not seem to copy public/index.html into the build/src dir. Not sure if this is the root cause, but can you help me update the VS Code debug task config to fix this, if that's the case? The error is basically that /build/src/public/index.html is not found.
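Until the debug task is fixed, one possible workaround is a VS Code task that copies the public assets into the build output before the debugger launches. This is a sketch only; the task label is illustrative and the paths assume the gateway builds from the repo root into `build/`:

```json
{
  "label": "copy-public-assets",
  "type": "shell",
  "command": "mkdir -p build/src/public && cp -r public/. build/src/public/"
}
```

Adding this task as a `preLaunchTask` (or chaining it via `dependsOn` from the existing build task) should make `/build/src/public/index.html` available to the running gateway.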
8 comments
Hi, are the fine-tuning endpoints part of the gateway? If so, can you point to the places in the code where they are implemented?
9 comments
o1 not working
4 comments
Hello! It appears that the Playground under Config doesn't allow requests. Am I in the wrong place to interactively test guardrails?
5 comments
Hi everyone. I need help configuring the guardrail results on my own dashboard.
3 comments
Hey how do we add labels to partials?
4 comments
smile · Guardrails

@Vrushank | Portkey I sent an email regarding guardrail support. Could you please check it out?
2 comments
I am getting an error, but I don't think I have more than 10k logs in January. It seems like the system is counting the total logs instead.

Even if it's counting total logs, I should still have access to the 10k free logs, and only the extra logs should be paid. That's what the error message says (please check the screenshot).
7 comments
Another feature that would be nice to have: showing the request timing for the entire trace on the traces page, rather than having to calculate it from each individual request.
4 comments
It would be great to have breadcrumb navigation for folders in the prompt library. Right now, when changing multiple prompts in a folder, one has to go back to the main page of the prompt library, select the folder, and then select the prompt. A better way would be to go back to the folder from the prompt page itself.
1 comment
Awesome! :nod:

What's the typical response time? I sent an email about 24 hours ago and didn't receive a response yet. 😬
1 comment
It would be awesome to have a button in the logs that automatically fills the variables of the corresponding prompt template with the values from the logged call, so I can easily reproduce and iterate on a specific logged call. Copying and pasting is cumbersome.
1 comment