I'm trying OpenRouter as a ChatGPT replacement for experimenting with multiple models through a chat UI, but it keeps getting stuck. Does anyone know of a better alternative where I can plug in my own LLM API keys and use it in the browser through a UI?
Also, I'm curious: which model is best for advanced maths problems?
Hey folks, I had to set up load balancing for Perplexity (a good problem to have). It seems to work; the dashboard shows "Loadbalancer active". But can I trouble you guys to double-check my config?
I have a hunch that I should be able to simplify it so that I wouldn't need to enter virt_key_1 twice, but I'm not sure how.
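Here's roughly what I think the simplified config should look like, expressed through the Python SDK. This is just a sketch from my reading of the docs: the weights, the `virt_key_1`/`virt_key_2` names, and the assumption that the SDK accepts an inline config dict are all mine.

```python
# Sketch, not my actual config: assumes the Portkey Python SDK accepts an
# inline config dict, and that each loadbalance target carries its own
# virtual key (so no separate top-level virtual key should be needed).
from portkey_ai import Portkey

lb_config = {
    "strategy": {"mode": "loadbalance"},
    "targets": [
        {"virtual_key": "virt_key_1", "weight": 0.7},  # placeholder keys and weights
        {"virtual_key": "virt_key_2", "weight": 0.3},
    ],
}

client = Portkey(api_key="PORTKEY_API_KEY", config=lb_config)

resp = client.chat.completions.create(
    model="sonar-pro",  # placeholder Perplexity model name
    messages=[{"role": "user", "content": "ping"}],
)
```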
The open-webui integration uses a Portkey API key (to connect to Portkey, obviously) and a virtual key (to access the models, which makes sense). But with a single virtual key, traffic from all users is counted against that one key, so I can't track or tune usage per user, correct?
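To make the question concrete, this is the kind of per-user attribution I was hoping for. A sketch only: I'm assuming the `x-portkey-metadata` header with a `_user` key is the right mechanism, and all key names and values below are placeholders.

```python
# Sketch: tagging each request with the open-webui user so the shared
# virtual key's traffic can be broken down per user. The "_user" metadata
# key is my assumption about how Portkey attributes usage.
import json
from openai import OpenAI

client = OpenAI(
    base_url="https://api.portkey.ai/v1",
    api_key="unused",  # auth happens via the Portkey headers below
    default_headers={
        "x-portkey-api-key": "PORTKEY_API_KEY",
        "x-portkey-virtual-key": "SHARED_VIRTUAL_KEY",
        "x-portkey-metadata": json.dumps({"_user": "open-webui-user-42"}),
    },
)
```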
Hello! I'm trying to integrate Portkey with open-webui and I'm following the instructions here, but I'm having trouble making it work with my Azure OpenAI subscription.
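For reference, this is essentially the call I'm trying to get working behind open-webui. A sketch with placeholder values; I'm assuming the Azure OpenAI virtual key wraps the resource, deployment, and api-version, so only the two keys are needed:

```python
# Sketch: assumes the virtual key carries my Azure resource/deployment/
# api-version details; "MY_DEPLOYMENT" is a placeholder.
from portkey_ai import Portkey

client = Portkey(
    api_key="PORTKEY_API_KEY",
    virtual_key="AZURE_OPENAI_VIRTUAL_KEY",
)

resp = client.chat.completions.create(
    model="MY_DEPLOYMENT",  # placeholder; unsure if Azure uses or ignores this
    messages=[{"role": "user", "content": "hello"}],
)
print(resp.choices[0].message.content)
```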
I would love to use Portkey, but today we use n8n with a prebuilt node that expects the same API structure as OpenAI. However, calling the Portkey platform requires passing keys in custom headers, and the node has no way to set them, which makes it unfeasible for us. Do you have any suggestions?
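To be concrete, below is my understanding of the headers a Portkey call needs, which the prebuilt n8n node (base URL plus bearer token only) cannot set. A sketch with placeholder keys and model name:

```python
# Sketch of the raw request as I understand it: Portkey wants its keys in
# x-portkey-* headers, not in the standard Authorization header the node sets.
import requests

resp = requests.post(
    "https://api.portkey.ai/v1/chat/completions",
    headers={
        "x-portkey-api-key": "PORTKEY_API_KEY",
        "x-portkey-virtual-key": "PROVIDER_VIRTUAL_KEY",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4o",  # placeholder
        "messages": [{"role": "user", "content": "hello"}],
    },
)
print(resp.json())
```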
I am using the Llama-3.3-Vision-Llama-Turbo model for some prompts, but the cost shows as 0 cents in Portkey. @Vrushank | Portkey this seems like a small pricing bug.
Hi, another question: it looks like after the addition of the help page (https://github.com/Portkey-AI/gateway/pull/692), the VS Code debug config (https://github.com/Portkey-AI/gateway/pull/602) no longer works for me, as the build does not seem to copy public/index.html into the build/src dir. Not sure if this is the root cause, but can you help me update the VS Code debug task config to fix this if that's the case? The error is basically that /build/src/public/index.html is not found.
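In case it helps, this is the kind of task I imagine would fix it; a guess only, since I don't know the exact labels and paths PR 602 set up:

```jsonc
// Sketch for .vscode/tasks.json: a pre-launch step that copies the help
// page assets into the build output. The label and paths are my guesses;
// the existing build task would chain it via "dependsOn".
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "copy-public-assets",
      "type": "shell",
      "command": "mkdir -p build/src/public && cp -r public/* build/src/public/",
      "problemMatcher": []
    }
  ]
}
```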
I am getting an error, but I don't think I have more than 10k logs in January. It seems like the system is counting my total logs instead of just the current month's.
Even if it's counting total logs, I should still get the 10k free logs, with only the extra logs billed. That's what the error message says (please check the screenshot).
Another feature that would be nice to have is showing the request timing for the entire trace on the traces page, rather than having to calculate it from each individual request.
It would be great to have breadcrumb navigation for folders in the prompt library. Right now, when changing multiple prompts in a folder, one has to go back to the main page of the prompt library, select the folder, and then select the prompt. It would be better to be able to jump back to the folder from the prompt page itself.
It would be awesome to have a button in the logs that automatically fills the variables of the corresponding prompt template with the values from the logged call, so I can easily reproduce and iterate on a specific logged call. Copy-pasting is cumbersome.