Portkey library integration with o3-mini model in langchain

Doesn't Portkey convert max_tokens to max_completion_tokens at the gateway level for o3-mini? I am using the Portkey library integrated with LangChain. When I change the model to o3-mini on the prompts page, all calls fail with a 400 due to the argument issue.


ref for implementation: https://discord.com/channels/1143393887742861333/1321096743336808479/1322092663591272531
No, it doesn't, but it maps max_completion_tokens to max_tokens for the rest of the providers (Anthropic, Vertex, etc.).

You can just use max_completion_tokens for everything; max_tokens is deprecated by OpenAI.
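For example, a direct call through the gateway can pass max_completion_tokens regardless of provider. A minimal sketch using the Portkey Python SDK; the API key and virtual key below are placeholders:
Python
from portkey_ai import Portkey

# Sketch: direct chat completion through the Portkey gateway.
# Substitute your own Portkey API key and virtual key.
client = Portkey(
    api_key="PORTKEY_API_KEY",
    virtual_key="openai-virtual-key",
)

# max_completion_tokens is passed through for o3-mini; for other
# providers (Anthropic, Vertex, etc.) the gateway maps it to max_tokens.
response = client.chat.completions.create(
    model="o3-mini",
    messages=[{"role": "user", "content": "Hello"}],
    max_completion_tokens=100,
)
print(response.choices[0].message.content)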
Got it, I will update it, thanks.
@sega So LangChain by default adds the max_tokens arg in its ChatOpenAI implementation. I can add an extra arg easily, but removing one would mean modifying LangChain internals, which would be a maintenance headache. Is there any other way to use o3?
@sega @Vrushank | Portkey I just cross-checked: this is a bug on Portkey's side in the prompt library. LangChain is not passing any params, but the prompt library for completions is adding max_tokens.
I am not able to test with o3-mini on the UI either.
omg, langchain be messing with user requests always
lemme check
@Harold Senzei initialize with max_tokens set to None and max_completion_tokens set to your value:
Python
max_tokens=None, max_completion_tokens=100,
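In full, the initialization might look like the sketch below. It assumes the Portkey Python SDK's createHeaders helper and placeholder keys; recent langchain-openai versions move the non-standard max_completion_tokens kwarg into model_kwargs, so it is passed there explicitly here:
Python
from langchain_openai import ChatOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

# Sketch: route LangChain's ChatOpenAI through the Portkey gateway.
# The API key and virtual key are placeholders.
llm = ChatOpenAI(
    api_key="dummy",  # real auth happens via the Portkey headers
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        api_key="PORTKEY_API_KEY",
        virtual_key="openai-virtual-key",
    ),
    model="o3-mini",
    max_tokens=None,  # keep LangChain from sending the deprecated arg
    model_kwargs={"max_completion_tokens": 100},
)

print(llm.invoke("Hello").content)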
Yeah, the prompt library bug is separate; it is getting a fix today. On direct API calls, it should work as @sega said.