Missing param in the Python SDK

Hey, when using prompt render with response_mode json set in the template, I get a new parameter in the response body:
Plain Text
  "response_format": {
      "type": "json_object"
    },
but when using the Python SDK, the PromptRenderData object does not have that field. Is that expected?
Plain Text
class PromptRenderData(BaseModel):
    messages: Optional[List[ChatCompletionMessage]] = None
    prompt: Optional[str] = None
    model: Optional[str] = None
    suffix: Optional[str] = None
    max_tokens: Optional[int] = None
    temperature: Optional[float] = None
    top_k: Optional[int] = None
    top_p: Optional[float] = None
    n: Optional[int] = None
    stop_sequences: Optional[List[str]] = None
    timeout: Union[float, None] = None
    functions: Optional[List[Function]] = None
    function_call: Optional[Union[None, str, Function]] = None
    logprobs: Optional[bool] = None
    top_logprobs: Optional[int] = None
    echo: Optional[bool] = None
    stop: Optional[Union[str, List[str]]] = None
    presence_penalty: Optional[int] = None
    frequency_penalty: Optional[int] = None
    best_of: Optional[int] = None
    logit_bias: Optional[Dict[str, int]] = None
    user: Optional[str] = None
    organization: Optional[str] = None
    tool_choice: Optional[Union[None, str]] = None
    tools: Optional[List[Tool]] = None

How do I access it?
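
For reference, this is roughly the call that produces the response above (the prompt ID and variables are placeholders):

Plain Text
from portkey_ai import Portkey

client = Portkey(api_key="PORTKEY_API_KEY")

# Render the prompt template; when response_mode json is set on the
# template, the raw response body carries
# "response_format": {"type": "json_object"}.
render = client.prompts.render(
    prompt_id="pp-sample-dcb456",
    variables={"topic": "dogs"},
)
print(render.data)  # PromptRenderData -- response_format is not a declared field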
Hi @Harold Senzei, thanks for pointing this out! cc @Chandeep | Portkey

Is this a critical blocker for you? I expect our devs will be unavailable tomorrow, but we can expedite if needed.
No, it is alright. The latest version of the SDK has extra='allow', so the field should be available even if it isn't defined.
Correct, I was just checking it. extra='allow' should handle this. It just might not show up as a suggestion while coding.
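
For anyone hitting this later, here is a minimal sketch of how Pydantic's extra='allow' behaves, using a trimmed stand-in for PromptRenderData (Pydantic v2 assumed):

Plain Text
from typing import Optional
from pydantic import BaseModel, ConfigDict

class PromptRenderData(BaseModel):
    # Trimmed stand-in for the SDK class; the key bit is extra="allow".
    model_config = ConfigDict(extra="allow")
    model: Optional[str] = None

data = PromptRenderData.model_validate(
    {"model": "gpt-4o", "response_format": {"type": "json_object"}}
)

# The undeclared field still lands on the instance; it just won't show
# up in IDE autocomplete, because it isn't a declared field.
print(data.response_format)  # {'type': 'json_object'}
print(data.model_extra)      # {'response_format': {'type': 'json_object'}}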
Hey @Vrushank | Portkey @Chandeep | Portkey, the render endpoint does not return the provider name in the response, only the model name. But for a use case where a user wants different providers for different prompts and is managing this with the prompt library, they would need the provider parameter.

Example use case:

Prompt1 uses OpenAI gpt-4o
Prompt2 uses Google gemini-1.5-pro

If we use the prompt/completions endpoint, the config can be done on the UI for each prompt, but when using the prompt/render endpoint + chat/completions, we need the provider data from the render endpoint to pass to completions (a sketch of this two-call flow follows below).

Is there some other way to do this?
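
For concreteness, a sketch of the two-call flow, assuming one Portkey virtual key per provider (the virtual key names here are made up). The provider choice is exactly the piece that render doesn't return, so it has to be hard-coded per prompt:

Plain Text
from portkey_ai import Portkey

client = Portkey(api_key="PORTKEY_API_KEY")

# Call 1: render the template into messages + model parameters.
render = client.prompts.render(
    prompt_id="pp-sample-dcb456",
    variables={"topic": "dogs"},
).data

# Call 2: send the rendered prompt to chat/completions. Because render
# only returns the model name, the provider (virtual key) must be
# chosen by hand for each prompt.
openai_client = Portkey(
    api_key="PORTKEY_API_KEY",
    virtual_key="openai-virtual-key",  # hypothetical; Prompt2 would need a Gemini key here
)
completion = openai_client.chat.completions.create(
    messages=render.messages,
    model=render.model,
)
print(completion.choices[0].message.content)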
Got your point. Let me talk with the team about this and get back to you.
Thanks. For now I am seeing if there is a way to get this to work using configs; I will update.
@Harold Senzei so we had a discussion on this some time back - @visarg suggested that /render's scope should stay what it is right now, and for other scenarios, we build a READ API that does what you want.
Attachment: CleanShot_2024-12-27_at_10.29.27@2x.png
This would mean 3 API calls for each LLM call when using LangChain.
I found a way to use the prompt/completions endpoint with LangChain, but it is a bit hacky:

Plain Text
from langchain_openai import ChatOpenAI
from portkey_ai import Portkey
from portkey_ai.api_resources.streaming import Stream

prompt_id = "pp-sample-dcb456"
portkey_api_key = "XXXX"
portkey_client = Portkey(api_key=portkey_api_key)

# Monkey-patch Portkey's Stream so it supports the context-manager
# protocol that LangChain expects from an OpenAI-style stream.
def __enter__(self):
    return self

def __exit__(
    self,
    *args,
) -> None:
    self.close()

def close(self) -> None:
    # Release the underlying HTTP response.
    self.response.close()

Stream.__enter__ = __enter__
Stream.__exit__ = __exit__
Stream.close = close

# Hand LangChain the Portkey prompts.completions resource in place of
# the OpenAI chat.completions client.
llm = ChatOpenAI(
    client=portkey_client.prompts.completions,
    api_key="unused",  # requests go through Portkey, so no OpenAI key is needed
)

# Null out model/temperature so LangChain doesn't override the values
# saved in the prompt template.
null_kwargs = {"model": None, "temperature": None}
bound_llm = llm.bind(prompt_id=prompt_id, variables={"topic": "dogs"}, **null_kwargs)
bound_llm.invoke("test")


This is because OpenAI's completions method returns a context manager while Portkey's doesn't (a simplified sketch of the difference is below).
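
A simplified illustration of the protocol difference, using a stand-in class rather than the real SDK code:

Plain Text
class OpenAIStyleStream:
    # OpenAI's Stream implements the context-manager protocol, so
    # LangChain can do `with client.create(..., stream=True) as s:`.
    def __enter__(self):
        return self

    def __exit__(self, *args) -> None:
        self.close()

    def close(self) -> None:
        print("underlying HTTP response closed")

with OpenAIStyleStream() as stream:
    pass  # LangChain iterates chunks here

# Portkey's Stream had no __enter__/__exit__, so the same `with`
# statement would fail -- hence the monkey-patch above.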
Interesting. @Chandeep | Portkey thoughts?
Yeah! This was insightful. I can check and see if we can implement it in our code as well, without it being a breaking change for anyone.