We're hitting a production issue after the latest update. Prompt functionality has started breaking: image URLs no longer seem to be supported, and image inputs now only accept base64-encoded images. On top of that, the LLM we're using (GPT-4o) is erroring out; the dashboard shows generation errors after the UI refresh for the same example that worked before the changes. Can you get this checked ASAP?
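To make the regression concrete, here is a minimal repro sketch of the request shape that worked before the update, assuming an OpenAI-style chat-completions payload (the model name and image URL are placeholders, not our actual production values):

```python
# Repro sketch (assumed OpenAI-style payload): a plain https image URL
# used to be accepted here; after the update only base64 data URIs work.
payload = {
    "model": "gpt-4o",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    # This URL-style input is what now appears to fail.
                    "image_url": {"url": "https://example.com/sample.png"},
                },
            ],
        }
    ],
}

image_part = payload["messages"][0]["content"][1]
print(image_part["image_url"]["url"])
```

If you run the same example through the UI, this is the payload shape that now triggers the generation error for us.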
Possible bug: when I choose a reasoning model (o1-mini, o3-mini, etc.) and set max_tokens, it shows the following error (screenshot attached).
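In case it helps triage: if this is the upstream OpenAI restriction, o-series reasoning models reject `max_tokens` and expect `max_completion_tokens` instead. A small sketch of how a client might pick the right parameter (the `build_request` helper and the model-prefix check are my own illustration, not anything from the docs):

```python
def build_request(model: str, limit: int) -> dict:
    """Pick the token-limit parameter based on model family (illustrative)."""
    # Assumption: o1/o3 reasoning models require max_completion_tokens,
    # while older chat models still accept max_tokens.
    key = "max_completion_tokens" if model.startswith(("o1", "o3")) else "max_tokens"
    return {
        "model": model,
        "messages": [{"role": "user", "content": "hi"}],
        key: limit,
    }

print(build_request("o1-mini", 1024))
print(build_request("gpt-4o", 1024))
```

If the client is forwarding `max_tokens` as-is to these models, that would explain the error in the screenshot.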
Is there any way to use Florence2 through Portkey's client? One route I think might work is via HuggingFace, but are there any plans to support it natively or through something like Fal, RunPod, or Modal? If this already exists, can someone point me to the docs? Thanks!