Hi there, I'm trying to figure out how the fallback strategy works with vision on Gemini and other LLMs. I've posted a GitHub issue but got no response (it seems the issues aren't very actively monitored):
https://github.com/Portkey-AI/gateway/issues/721

I think your config system is fantastic, but fallback, loadbalance, or really any strategy that uses multiple LLMs is incompatible with Gemini plus other providers (say Gemini + OpenAI).
The reason is that Gemini requires images to be uploaded to a Google Cloud Storage bucket, with the URLs provided as gs://... (or via Google's internal file manager, but same issue). In either case those URLs are not accessible by other LLMs, since they are internal Google links, not public ones.
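To illustrate the mismatch, here's a minimal sketch using the OpenAI-style message format that the unified API accepts (the bucket name and URLs are made up):

```python
# What Gemini needs: a Google-internal URI that only Google can resolve.
gemini_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this image."},
        {"type": "image_url", "image_url": {"url": "gs://my-bucket/photo.png"}},
    ],
}

# What OpenAI (and most other providers) need: a publicly reachable URL.
openai_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this image."},
        {"type": "image_url", "image_url": {"url": "https://example.com/photo.png"}},
    ],
}
```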
So, how could we make one call that passes Gemini-compatible URLs for Gemini and public URLs for the other LLMs? Right now this seems impossible. We would need a way to pass both kinds of URLs in a special way in the "image_url" param, something like the sketch below.
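Purely hypothetical (this is not an existing Portkey feature; the `provider_overrides` key is something I'm inventing just to show the kind of thing we need):

```python
# HYPOTHETICAL: a single message carrying both URL forms, letting the
# gateway pick the right one for whichever provider it routes to.
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this image."},
        {
            "type": "image_url",
            "image_url": {
                "url": "https://example.com/photo.png",   # public URL, the default
                "provider_overrides": {                   # invented key, does not exist today
                    "google": "gs://my-bucket/photo.png"  # used only when routed to Gemini
                },
            },
        },
    ],
}
```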
We can partially work around this if we only send to one LLM and don't use any config (which is sad, since configs are a key selling point of Portkey): in that case we know which LLM we are sending to, so we can prepare the data correctly.
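Here's roughly what that workaround looks like (a minimal sketch assuming the portkey_ai Python SDK; the virtual key name is a placeholder):

```python
from portkey_ai import Portkey

# No config id: we target one known provider, so we can
# pick the right URL form up front.
client = Portkey(api_key="PORTKEY_API_KEY", virtual_key="openai-virtual-key")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            # Safe to use a public URL because we know this goes to OpenAI.
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```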
However, if we want to use configs, the problem gets worse: when we pass a config id, we don't even know whether the Portkey backend will route the request to a non-Google LLM. We really don't know anything, since routing is handled completely transparently by Portkey, and only admin-level clients can even see the config settings.
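For concreteness, this is the kind of config I mean (a sketch of a fallback config as I understand the gateway's config format; the virtual key names are placeholders). It lives server-side under a config id, so the calling client never sees the targets:

```python
# Stored under a config id; the client only references the id, so it
# can't tell whether a given request will land on Gemini (which needs
# gs:// URLs) or fall back to OpenAI (which needs public URLs).
fallback_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"virtual_key": "gemini-virtual-key"},   # primary: Google, gs:// only
        {"virtual_key": "openai-virtual-key"},   # fallback: public URLs only
    ],
}
```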