Azure Foundry

I'm trying to create a virtual key to connect to my Azure AI Foundry deployed models. When I use Azure Inference as the provider I can only select a model from a default list, but I don't see my actual deployed models there. How can I make this work? For example: I have Llama 3.3 deployed in Azure AI Foundry; how can I create a virtual key to connect to that model?
8 comments
Hi @ruu8010, tagging @visarg & @b4s36t4 who can take a look!
@visarg , @segfaulte or anyone else know how to achieve this?
I'm also stuck on this. Please help with guidance on how to map the configs appropriately for a Llama 3.3 and a DeepSeek model deployed in Azure AI Foundry.
@Vrushank | Portkey Any updates on this?
I've tried to set it up as a custom provider, but I still have to choose an AI Provider, so it still leads to an error, even though the input field for the Azure Deployment Name is (rightly so) not visible.
Hi, so sorry for the delay. Azure AI Services is behind a feature flag, which made the documentation for it hard for us to find.

As @ruu8010 noted, the workaround for the time being is to use the custom-llm provider.

One thing to note: the auth header should be api-key instead of Authorization.

Will be pushing a fix for this soon so that you can use the provider directly instead of the workaround.
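Roughly what the workaround looks like over plain HTTP (an untested sketch; the gateway header names, endpoint path, and model name below are assumptions from memory, so double-check them against the Portkey and Azure AI Foundry docs):

```python
# Sketch (untested): call a model deployed in Azure AI Foundry through the
# Portkey gateway as a custom/OpenAI-compatible provider, forwarding the
# api-key header instead of Authorization. Placeholder values throughout.
import os
import requests

PORTKEY_API_KEY = os.environ["PORTKEY_API_KEY"]
AZURE_FOUNDRY_KEY = os.environ["AZURE_FOUNDRY_KEY"]

response = requests.post(
    "https://api.portkey.ai/v1/chat/completions",
    headers={
        "Content-Type": "application/json",
        "x-portkey-api-key": PORTKEY_API_KEY,
        # Treat the Foundry endpoint as an OpenAI-compatible custom host (assumed URL shape)
        "x-portkey-provider": "openai",
        "x-portkey-custom-host": "https://<your-resource>.services.ai.azure.com/models",
        # Forward api-key to the upstream instead of Authorization
        "x-portkey-forward-headers": "api-key",
        "api-key": AZURE_FOUNDRY_KEY,
    },
    json={
        "model": "Llama-3.3-70B-Instruct",  # your Foundry deployment name
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
print(response.json())
```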
I can confirm that using the custom provider under Azure Inference with the api-key header works.
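In case it helps others, calling the resulting virtual key from the Python SDK looks roughly like this (the virtual key slug and deployment name are placeholders for my setup):

```python
# Sketch (untested): once the custom-provider virtual key is saved,
# requests go through it like any other Portkey virtual key.
from portkey_ai import Portkey

portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    virtual_key="azure-foundry-llama-33",  # placeholder slug
)

response = portkey.chat.completions.create(
    model="Llama-3.3-70B-Instruct",  # your Azure AI Foundry deployment name
    messages=[{"role": "user", "content": "Hello from Portkey"}],
)
print(response.choices[0].message.content)
```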
Cool, thanks for confirming! As said earlier, will be pushing the changes to support the provider directly instead of the workaround. Will update here once the fix is live.