I'm trying to create a virtual key to connect to my Azure AI Foundry deployed models. When I use Azure Inference as the provider, I can only select a model from a default list, and my actual deployed models don't appear there. How can I make this work? For example: I have Llama 3.3 deployed in Azure AI Foundry — how can I create a virtual key that connects to that model?
I'm also stuck on this. Could you help explain how to map the configs appropriately for a Llama 3.3 and a DeepSeek model deployed in Azure AI Foundry?
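While waiting for an answer, it may help to pin down what a virtual key ultimately has to carry for a Foundry deployment: the endpoint URL, the API key, and the exact deployed model name. Here's a minimal sketch using the `azure-ai-inference` Python SDK to hit the deployments directly — the endpoint, key, and model names below are placeholders, not values from this thread:

```python
# pip install azure-ai-inference
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import UserMessage
from azure.core.credentials import AzureKeyCredential

# Placeholder values -- substitute the endpoint and key shown for your
# resource in the Azure AI Foundry portal.
ENDPOINT = "https://<your-resource>.services.ai.azure.com/models"
API_KEY = "<your-api-key>"

client = ChatCompletionsClient(
    endpoint=ENDPOINT,
    credential=AzureKeyCredential(API_KEY),
)

# The `model` parameter selects the deployment; it must match the
# deployment name exactly as it appears in the Foundry portal.
for model_name in ("Llama-3.3-70B-Instruct", "DeepSeek-R1"):
    response = client.complete(
        model=model_name,
        messages=[UserMessage(content="Say hello in one sentence.")],
    )
    print(model_name, "->", response.choices[0].message.content)
```

If this works, those same three values (endpoint, key, model/deployment name) are what the virtual key config needs to map, however the provider dropdown is labelled.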
I've tried to set it up as a custom provider, but I still have to choose an AI Provider, so it still leads to an error — even though the input field for the Azure Deployment Name is (rightly) not visible in that case.
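One way to isolate whether the error comes from the custom-provider mapping or from the endpoint itself is to call the Foundry endpoint directly, outside the gateway. A rough sketch with `requests` against the Azure AI Model Inference REST route — the endpoint, key, api-version, and auth header style below are assumptions to adjust against what the Foundry portal shows for your deployment:

```python
# Sanity check of the Foundry endpoint itself, bypassing the gateway.
import requests

# Placeholders -- replace with your resource's values from the portal.
ENDPOINT = "https://<your-resource>.services.ai.azure.com/models"
API_KEY = "<your-api-key>"

resp = requests.post(
    f"{ENDPOINT}/chat/completions",
    params={"api-version": "2024-05-01-preview"},  # assumed; check the portal
    # Some deployments expect "Authorization: Bearer <key>" instead.
    headers={"api-key": API_KEY},
    json={
        "model": "Llama-3.3-70B-Instruct",  # deployment name from the portal
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

If this call succeeds but the custom provider still errors, the problem is in how the provider selection maps those values, not in the deployment.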
Cool, thanks for confirming! As mentioned earlier, I'll be pushing changes to support the provider directly instead of the workaround. I'll update here once the fix is live.