Doesn't Portkey change max_tokens to max_completion_tokens at the gateway level for o3-mini? I am using the Portkey library integrated with LangChain. When I change the model to o3-mini on the Prompts page, all calls are failing with 400 due to the max_tokens arg issue.
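For context, my setup looks roughly like this — a minimal sketch, assuming the standard portkey_ai LangChain integration (createHeaders and PORTKEY_GATEWAY_URL) with keys in env vars:

```python
import os

from langchain_openai import ChatOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

# Route LangChain's ChatOpenAI through the Portkey gateway.
llm = ChatOpenAI(
    model="o3-mini",
    api_key=os.environ["OPENAI_API_KEY"],
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        api_key=os.environ["PORTKEY_API_KEY"],
        provider="openai",
    ),
)

# If max_tokens makes it into the request body, OpenAI rejects it for
# o3-mini with a 400 -- reasoning models expect max_completion_tokens.
print(llm.invoke("hello").content)
```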
@sega So LangChain by default adds the max_tokens arg in its ChatOpenAI implementation. I can add an extra arg easily, but to remove one I would have to modify LangChain internals, which would be a maintenance headache. Is there any other way to use o3?
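The only workaround I can think of that avoids patching LangChain itself is a thin subclass — just a sketch, assuming your langchain-openai version builds the request body from the _default_params property (ChatOpenAIReasoning is a made-up name):

```python
from langchain_openai import ChatOpenAI


class ChatOpenAIReasoning(ChatOpenAI):
    """Rename max_tokens to max_completion_tokens for o-series models."""

    @property
    def _default_params(self) -> dict:
        params = dict(super()._default_params)
        # Assumption: this langchain-openai version still emits
        # max_tokens for all models, including reasoning models.
        if params.get("max_tokens") is not None:
            params["max_completion_tokens"] = params.pop("max_tokens")
        return params


# Reads OPENAI_API_KEY from the environment as usual.
llm = ChatOpenAIReasoning(model="o3-mini", max_tokens=512)
```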
@sega @Vrushank | Portkey I just cross-checked: this is a bug on Portkey's side in the prompt library. LangChain is not passing any params, but the prompt library for completions is adding max_tokens.
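You can repro without LangChain in the loop by calling the saved prompt directly through the Portkey SDK — the prompt ID below is hypothetical:

```python
from portkey_ai import Portkey

client = Portkey(api_key="PORTKEY_API_KEY")

# The saved prompt on the Prompts page targets o3-mini. We pass no
# max_tokens here, yet the rendered completion request still includes
# it, and OpenAI returns a 400 for o3-mini.
completion = client.prompts.completions.create(
    prompt_id="pp-example-123",  # hypothetical prompt ID
    variables={},
)
print(completion)
```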