Welcome to Portkey Forum

Updated 2 months ago

prompt render

At a glance
Community members are discussing issues with the render endpoint of the prompt library. When an empty history_messages variable is used, the endpoint returns a 502 error. The members provide example prompts and code snippets to illustrate the problem. They also note that the content field of the returned messages is not a string as expected, but a ValidatorIterator object. One member suggests a modified prompt template that may help, and another reports a 413 "Request Entity Too Large" error when using a large context for a RAG prompt. There is no explicitly marked answer in the comments.
When I am using JSON mode for the prompt library and I have an empty variable, the render endpoint returns a 502.

Example:

Create a prompt template with this:

Plain Text
[{
  "content": [
    {
      "type": "text",
      "text": "You are an helpful AI assistant. My name is {{name}}"
    }
  ],
  "role": "system"
},{{history_messages}},{
  "content": [
    {
      "type": "text",
      "text": "{{input}}"
    }
  ],
  "role": "user"
}]


If you call the render endpoint with
Plain Text
variables= {"name" : "portkey"}

you get a 500 error.


If you call with
Plain Text
variables= {"name" : "portkey", "history_messages" : []}

you get a 502 error.

So in this case, how do I call the render endpoint with an empty history_messages?
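Until the 502 is resolved, one possible workaround is to assemble the final messages array in client code instead of relying on the server-side template, so an empty history never reaches the render endpoint. This is a minimal sketch; `build_messages` is a hypothetical helper, not part of the Portkey SDK:

```python
# Hypothetical helper (not a Portkey API): build the messages array
# client-side and skip the history slot entirely when it is empty.
def build_messages(name, user_input, history_messages=None):
    messages = [{
        "role": "system",
        "content": [{"type": "text", "text": f"You are a helpful AI assistant. My name is {name}"}],
    }]
    if history_messages:  # omit the slot when None or []
        messages.extend(history_messages)
    messages.append({
        "role": "user",
        "content": [{"type": "text", "text": user_input}],
    })
    return messages

# With no history, only the system and user messages are produced:
print(len(build_messages("portkey", "hello")))  # 2
```

The resulting list could then be sent directly in a chat completions call, bypassing the template's `{{history_messages}}` slot for the empty case.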
7 comments
hey @Harold Senzei looking into this
@Witherwings Also, when using render with prompts like this, the messages are not being returned correctly; the content field of each message is being set to
ValidatorIterator(index=0, schema=None),
Example prompt
Plain Text
[{
  "content": [
    {
      "type": "text",
      "text": "You're a helpful assistant."
    }
  ],
  "role": "system"
},{{history_messages}},{
  "content": [
    {
      "type": "text",
      "text": "Tell me a joke about the topic {{topic}}"
    }
  ],
  "role": "user"
}]

When rendered via the render endpoint:
Plain Text
from portkey_ai import Portkey
from app.core.config import settings

prompt_id = "pp-sample-dcb456"
portkey_api_key = settings.PORTKEY_API_KEY
portkey_client = Portkey(api_key=portkey_api_key)

history_messages = [
    {"role": "user", "content": [{"type": "text","text": "Always start the joke with 'This is a Joke about'"}]},
    {"role": "assistant", "content": [{"type": "text","text": "Sure, I will do that"}]},
]

# history_messages =[
#    {"role": "user", "content": "Always start the joke with 'This is a Joke about'"},
#    {"role": "assistant", "content": "Sure, I will do that"},
# ] 

portkey_client.prompts.render(
    prompt_id=prompt_id,
    variables={"history_messages": history_messages, "topic" : "dogs"}
)
returns
Plain Text
PromptRender(success=True, data=PromptRenderData(messages=[ChatCompletionMessage(content=ValidatorIterator(index=0, schema=None), role='system', function_call=None, tool_calls=None), ChatCompletionMessage(content=ValidatorIterator(index=0, schema=None), role='user', function_call=None, tool_calls=None), ChatCompletionMessage(content=ValidatorIterator(index=0, schema=None), role='assistant', function_call=None, tool_calls=None), ChatCompletionMessage(content=ValidatorIterator(index=0, schema=None), role='user', function_call=None, tool_calls=None)], prompt=None, model='gpt-4o-mini', suffix=None, max_tokens=500, temperature=1.0, top_k=None, top_p=1.0, n=1, stop_sequences=None, timeout=None, functions=None, function_call=None, logprobs=None, top_logprobs=None, echo=None, stop=None, presence_penalty=0, frequency_penalty=0, best_of=None, logit_bias=None, user=None, organization=None, tool_choice=None, tools=None))

where content is not a string, which is what is expected.
For the second type of history message (the commented-out part), some message content fields are text while others are ValidatorIterator objects.
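The ValidatorIterator leak itself would need a fix on the SDK/server side, but the mixed string-vs-parts shapes described above can be handled defensively in client code. A minimal sketch of a normalizer, assuming content arrives either as a plain string or as a list of `{"type": "text", "text": ...}` parts:

```python
def text_of(content):
    """Coerce a message's content to plain text, whether it arrives as a
    string or as a list of {"type": "text", "text": ...} parts."""
    if isinstance(content, str):
        return content
    # Join only the text parts; ignore non-text parts (e.g. images).
    return "".join(p["text"] for p in content if p.get("type") == "text")

print(text_of("hello"))                               # hello
print(text_of([{"type": "text", "text": "a joke"}]))  # a joke
```

This does not help when the field is a ValidatorIterator, only when the two documented shapes are mixed in one response.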
understood. looking into this
Plain Text
[{
  "content": [
    {
      "type": "text",
      "text": "You are an helpful AI assistant. My name is {{name}}"
    }
  ],
  "role": "system"
},{{#history_messages}}{{history_messages}},{{/history_messages}}{
  "content": [
    {
      "type": "text",
      "text": "{{input}}"
    }
  ],
  "role": "user"
}]


Can you try this prompt please?!
Sure, I will try it. For some render requests for our RAG prompt, I am getting

Plain Text
<html>
<head><title>413 Request Entity Too Large</title></head>
<body>
<center><h1>413 Request Entity Too Large</h1></center>
<hr><center>nginx</center>
</body>
</html>


Because it is a RAG prompt, the context size is large, but not larger than the model's context window.
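Worth noting: the 413 comes from nginx rejecting the HTTP request body, which is a separate limit from the model's context window. A rough pre-flight check of the JSON payload size can confirm which side is tripping; `MAX_BODY_BYTES` below is an assumed illustrative limit, not a documented Portkey value:

```python
import json

# Hypothetical gateway body limit for illustration only.
MAX_BODY_BYTES = 1 * 1024 * 1024  # 1 MiB

def payload_size(variables):
    """Size in bytes of the JSON-encoded variables payload."""
    return len(json.dumps(variables).encode("utf-8"))

variables = {"context": "x" * 500_000, "topic": "dogs"}
print(payload_size(variables) < MAX_BODY_BYTES)  # True for this example
```

If the payload is well under the model's token limit but the request still 413s, the gateway's body-size limit is the likelier culprit.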