Welcome to Portkey Forum

ekevu.
Could it be that Portkey no longer logs requests when I send them using the previous setup (from before the recent changes)?
9 comments
Can you show me the whole curl command, please?
8 comments
Bug report - if you update your foundation model, click on "update" to save the changes, and then leave the page before the button-click event has finished processing, the list overview in "Prompts" will not update.
3 comments
Oddly, my colleague and I have tried the same prompt in the Portkey dashboard: I can use Claude 3 Haiku, but she gets the error message "request not allowed". We tried the same input variables. Why is that?
12 comments
Also, is there a way for you to add Claude 3 Haiku?
1 comment
There are providers like replicate.com that charge you per second of usage instead of tokens used. I was wondering, under what circumstances does it make sense to use this instead of the ordinary APIs? And can I expect more tokens per second this way than with token-based APIs?
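Roughly, per-second billing pays off when your effective throughput (tokens per second) is high relative to the ratio of the two rates. A back-of-the-envelope sketch; all rates and throughput figures below are made-up placeholders, not Replicate's or anyone's real pricing:

```python
# Compare per-second vs. per-token billing for one hypothetical job.
# Every number here is an illustrative assumption -- substitute the real
# figures from your provider's pricing page.

def cost_per_second_billing(gen_seconds: float, rate_per_second: float) -> float:
    """Cost when the provider bills wall-clock generation time."""
    return gen_seconds * rate_per_second

def cost_per_token_billing(tokens: int, rate_per_1k_tokens: float) -> float:
    """Cost when the provider bills per 1k generated tokens."""
    return tokens / 1000 * rate_per_1k_tokens

# Hypothetical job: 2,000 output tokens at 40 tokens/s -> 50 s of compute.
tokens = 2000
throughput = 40  # tokens per second (assumption)
seconds = tokens / throughput

per_second = cost_per_second_billing(seconds, rate_per_second=0.0023)
per_token = cost_per_token_billing(tokens, rate_per_1k_tokens=0.25)

# Per-second billing wins whenever:
#   rate_per_second / throughput  <  rate_per_1k_tokens / 1000
print(f"per-second: ${per_second:.4f}, per-token: ${per_token:.4f}")
```

Note that throughput on per-second providers is a property of the hardware you rent, not of the billing model, so you shouldn't expect more tokens per second just because of how you are charged.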
3 comments
Hi, I keep getting this error: Error code: 404 - The model gpt-4-turbo-preview does not exist or you do not have access to it. Learn more: https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4.
It doesn't matter which OpenAI model I choose; the error stays the same.
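One quick check is whether the model id is in the list of models your API key can actually see (the official OpenAI Python SDK exposes this via client.models.list()). The model ids below are a hypothetical stand-in so the sketch runs offline:

```python
# Sanity check: is the requested model id among the models this key can see?
# In real code, available_models would come from client.models.list();
# this hard-coded set is a placeholder for illustration only.

available_models = {"gpt-3.5-turbo", "gpt-4"}  # pretend API response
requested = "gpt-4-turbo-preview"

if requested not in available_models:
    print(f"{requested!r} is not available to this key -- "
          "check account tier/access, or pick one of: "
          + ", ".join(sorted(available_models)))
```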
16 comments
Can you guys check the pricing calculation for mistral-medium? Maybe I am making a mistake, but I think you are off by a factor of 100. EDIT: Sorry, I was wrong.
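For what it's worth, a clean factor-of-100 discrepancy in a cost calculation is a classic symptom of a currency-unit mix-up: a rate quoted in cents being applied as dollars, or vice versa. A sketch with a made-up rate (not Mistral's real price):

```python
# Illustration of how a cents-vs-dollars mix-up produces exactly a 100x error.
# The rate is an arbitrary placeholder, not an actual mistral-medium price.

rate_cents_per_1k = 0.27   # suppose the pricing page quotes cents per 1k tokens
tokens = 10_000

correct_usd = tokens / 1000 * rate_cents_per_1k / 100   # cents -> dollars
buggy_usd   = tokens / 1000 * rate_cents_per_1k         # forgot the conversion

print(buggy_usd / correct_usd)  # ~100x off
```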
1 comment
Yes. For some reason, I can only choose AM, not PM. But if I could set the time correctly, it would solve the problem.
1 comment
Regarding the Perplexity API integration: 1. I believe "maximum tokens" limits the total tokens used, but shouldn't it limit only the output tokens? 2. With the model pplx-70b-online, no price is shown.
3 comments
When trying the Mistral API, I am getting this response: {"detail":[{"type":"extra_forbidden","loc":["body","safe_mode"],"msg":"Extra inputs are not permitted","input":false,"url":"https://errors.pydantic.dev/2.5/v/extra_forbidden"}]} I am inside the Portkey dashboard. Apparently, I have an "unsupported parameter" in my request, but how can I tell which one?
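The error body in the post actually names the offending parameter: each item's "loc" path ends with the rejected field (here, safe_mode). A small parser, assuming the Pydantic-style error shape shown above, makes that visible:

```python
import json

# Extract the rejected parameter names from a Pydantic-style validation error.
# The body below is the exact response quoted in the post.
error_body = (
    '{"detail":[{"type":"extra_forbidden","loc":["body","safe_mode"],'
    '"msg":"Extra inputs are not permitted","input":false,'
    '"url":"https://errors.pydantic.dev/2.5/v/extra_forbidden"}]}'
)

def rejected_params(body: str) -> list:
    """Return the parameter names flagged as extra_forbidden."""
    detail = json.loads(body)["detail"]
    return [item["loc"][-1] for item in detail if item["type"] == "extra_forbidden"]

print(rejected_params(error_body))  # -> ['safe_mode']
```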
3 comments
Fine-tuning will become relevant for me in a while.
4 comments
I am currently rewriting the prompts I use to query a vector database. That got me thinking: it would be great if I could monitor the results, change prompt parameters like temperature etc., and look through the evals consistently, all inside Portkey, because this is now the only thing that is not in the same place. Even if Portkey doesn't actively query the vector database, would it be possible to integrate this just for monitoring purposes? I wouldn't even mind if you returned the prompt per API call or something, but I would benefit from being able to change the prompt without always rebuilding the Docker container to test it.
7 comments
Feature request: The ability to archive prompts (get them out of sight for clarity, but without deleting all the data associated with them)
1 comment
I am again having the issue where, while working on a prompt in Portkey in the browser, the page updates and all changes are lost unless I save manually every few seconds. This might be related to the fact that I need to switch to other windows while prompting and then come back. But I am losing a lot of work right now.
6 comments
It would be great to have filter options in Portkey like "where the variable X has the value Y". I am using a prompt chain where some variables get populated automatically with the output of another prompt, and I am currently checking whether a faulty output in one place caused a consistent issue down the chain. It would also let me compare runs.
1 comment
UX issue - it is hard to see which variable goes where if they all start with the same name. Can you perhaps just show the whole name?
5 comments
I wrote a prompt today which occasionally gives me 3k tokens full of line breaks as output (the prompt is supposed to produce this much text). Is it possible to check whether this was the original OpenAI output? I used the Portkey SDK in Python, but saw the same results in Portkey evals. It only happens sometimes.
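While waiting for an answer, a cheap client-side guard can at least flag these degenerate completions before they are stored. A minimal sketch; the 0.9 threshold is an arbitrary assumption:

```python
# Flag completions that are mostly whitespace/line breaks.

def whitespace_ratio(text: str) -> float:
    """Fraction of characters in text that are whitespace."""
    if not text:
        return 0.0
    return sum(ch.isspace() for ch in text) / len(text)

def looks_degenerate(text: str, threshold: float = 0.9) -> bool:
    """True if the text is almost entirely whitespace (threshold is arbitrary)."""
    return whitespace_ratio(text) > threshold

print(looks_degenerate("\n" * 3000 + "ok"))    # True
print(looks_degenerate("A normal sentence."))  # False
```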
12 comments
I am doing work on prompts in Portkey and have noticed that every once in a while the prompt reloads for a fraction of a second, like an autosave running. However, it deletes unsaved changes I have in the prompt. Do you know what I mean? It might be connected to the fact that I have to switch between browser tabs. Currently, I click "update" all the time so that I don't lose any information.
3 comments
Can we add tags to prompts? This would help to sort them by flexible categories, e.g. by versions or by type.
3 comments
Suggestion: Make a copy button in evals to copy-paste the prompt output quickly
9 comments
Why do some prompts give me the opportunity to switch between text and JSON output format, but others don't? And what does that mean exactly?
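In the OpenAI Chat Completions API, JSON mode is an opt-in request parameter (response_format) that only certain models support, which may be why the toggle appears for some prompts and not others. A sketch of the request payload; the model id and messages are placeholders:

```python
# Chat Completions request body with JSON mode enabled. Only some models
# accept response_format={"type": "json_object"}; the model id here is a
# placeholder, not a recommendation.
payload = {
    "model": "gpt-4-turbo-preview",
    "messages": [
        {"role": "system", "content": "Reply in JSON."},
        {"role": "user", "content": "List three colors."},
    ],
    "response_format": {"type": "json_object"},  # omit for plain-text output
}
print(payload["response_format"]["type"])
```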
7 comments
Does someone know what makes a variable in the Portkey frontend invalid? I only get the error message "invalid variable passed" when trying to generate a completion, and I find it difficult to understand which variable it concerns and why.
8 comments
Feature request: It would be nice to optionally export the prompt outputs in a .csv format. Let's say I ran the prompt 100 times, then I could easily identify issues this way without needing to click through all of them.
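Until a built-in export exists, runs fetched via an API could be flattened to CSV with the standard library. The record fields below are hypothetical, not Portkey's actual log schema:

```python
import csv
import io

# Flatten a list of prompt-run records (hypothetical schema) into CSV.
runs = [
    {"id": 1, "prompt_version": "v3", "output": "...", "tokens": 812},
    {"id": 2, "prompt_version": "v3", "output": "...", "tokens": 790},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "prompt_version", "output", "tokens"])
writer.writeheader()
writer.writerows(runs)

print(buf.getvalue().splitlines()[0])  # header row
```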
2 comments
I keep getting 502s from Portkey right now containing this: <class 'openai.error.Timeout'>, <class 'openai.error.ServiceUnavailableError'> → OpenAI API seems to be operational according to their status page, any issues from your side?
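For transient 502/timeout errors like these, a retry with exponential backoff on the client side usually papers over short outages. A generic sketch; flaky() simulates two failures then success, and in real code you would wrap your actual Portkey/OpenAI call:

```python
import time

def retry(fn, attempts=4, base_delay=0.01):
    """Call fn, retrying on any exception with exponential backoff."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))  # 0.01s, 0.02s, 0.04s, ...

calls = {"n": 0}

def flaky():
    """Simulated gateway call: fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("502 Bad Gateway")  # simulated transient failure
    return "ok"

result = retry(flaky)
print(result)  # -> ok after two retries
```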
6 comments