Calling a Prompt Endpoint from OpenWebUI Client

I was wondering if someone can explain how I can call a prompt endpoint from my OpenWebUI client using a function? Just changing the endpoint in the plugin doesn't seem to work and gives a 500 error.
11 comments
hey @ruu8010, you can do some preprocessing before calling an endpoint in an OpenWebUI pipe, right?
If you're still facing this, you can share your function code and we'd be happy to help you out.
This is the function code I'm using now. I added a valve where I put in the prompt endpoint. But this code only returns a 500 error from the Portkey API. The code does work with the chat/completions endpoint.
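Roughly, the pipe looks like this (simplified sketch; valve and variable names are placeholders, not the exact code):

Python
"""
Simplified sketch of an OpenWebUI pipe that forwards the chat to a Portkey
prompt endpoint. Valve names are placeholders.
"""
import requests
from pydantic import BaseModel, Field


class Pipe:
    class Valves(BaseModel):
        PORTKEY_API_KEY: str = Field(default="")
        # e.g. https://api.portkey.ai/v1/prompts/<promptId>/completions
        PROMPT_ENDPOINT: str = Field(default="")

    def __init__(self):
        self.valves = self.Valves()

    def pipe(self, body: dict):
        # body is the OpenAI-style chat payload OpenWebUI builds,
        # including body["messages"]
        payload = {"variables": {"messages": body["messages"]}}
        headers = {
            "x-portkey-api-key": self.valves.PORTKEY_API_KEY,
            "Content-Type": "application/json",
        }
        r = requests.post(
            self.valves.PROMPT_ENDPOINT, json=payload, headers=headers
        )
        r.raise_for_status()
        return r.json()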
@ruu8010 is your payload correct? check this API doc for payload structure https://portkey.ai/docs/api-reference/inference-api/prompts/prompt-completion
If I understand correctly, you're trying to use this endpoint right? prompts/{promptId}/completions
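For reference, per that doc the request body is essentially just a variables object, something along these lines (the prompt ID and variable names are placeholders):

Plain Text
POST https://api.portkey.ai/v1/prompts/{promptId}/completions
{
    "variables": {
        "your_variable": "value"
    }
}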
Yes, that's correct. I fill that in in the valve, where I substitute {promptId} for the actual prompt ID, of course.
I have a hard time figuring out what is in the body param of the pipe's function, as that holds the information for the payload, I guess.
you should be able to see the console logs in your browser
I send this as a payload to the prompt endpoint:

Plain Text
payload = {
    "variables": {
        "messages": body["messages"]
    }
}


And this is my system prompt template:

Plain Text
{{#messages}}

{{message}}

{{/messages}}

You are a helpful assistant that doesn't take life too seriously and tries to answer every user question with a joke, or at least always looks for something to joke about. Now let's respond to the latest query!


It keeps returning a JSON object in my OpenWebUI instead of a rendered message, and from looking at the content of the returned message, apparently the user query doesn't reach the endpoint:

Plain Text
{"id":"chatcmpl-AtETOxs6Be8ppa1OQIO1cg1JeX2nl","object":"chat.completion","created":1737727038,"model":"gpt-4o-2024-08-06","choices":[{"index":0,"message":{"role":"assistant","content":"Well, it seems like we’re off to a silent start! What do we have in common with ninjas? We both love sneaking up on a good conversation. How can I help you today?","refusal":null},"logprobs":null,"finish_reason":"stop"}],"usage":{"prompt_tokens":53,"completion_tokens":42,"total_tokens":95,"prompt_tokens_details":{"cached_tokens":0,"audio_tokens":0},"completion_tokens_details":{"reasoning_tokens":0,"audio_tokens":0,"accepted_prediction_tokens":0,"rejected_prediction_tokens":0}},"service_tier":"default","system_fingerprint":"fp_50cad350e4"}
I think you can fix this by formatting the response. Let me know if you want to jump on a quick call and debug.
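Something like this inside the pipe would return just the assistant text instead of the raw JSON (a rough sketch, assuming the OpenAI-style response shape shown above):

Python
def extract_text(response_json: dict) -> str:
    # Pull the assistant's reply out of the OpenAI-style completion object
    # so OpenWebUI renders plain text instead of the raw JSON.
    return response_json["choices"][0]["message"]["content"]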
OK, the solution was to add the variable in the prompt template to the user message instead of the system message. Now it works as expected.
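So the {{#messages}} block now lives in the user message of the prompt template rather than the system message, roughly like this (layout is illustrative, not the exact template):

Plain Text
System prompt:
You are a helpful assistant that doesn't take life too seriously and tries to answer every user question with a joke, or at least always looks for something to joke about.

User message:
{{#messages}}
{{message}}
{{/messages}}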