Portkey's Integration with Llamaindex Query Engine

How does Portkey integrate with the Llamaindex query engine? CC: @Vrushank | Portkey
Since Portkey is integrated as a custom LLM within Llamaindex, you should be able to create a custom query engine with the LLM defined through the Portkey client, right?
[Attachment: image.png]
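That pattern would look something like this; a minimal sketch, assuming the older llama_index ServiceContext API and a local data/ directory of documents (the model and virtual key names are placeholders):

Plain Text
import portkey
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms import Portkey

# Portkey acts as a drop-in custom LLM for Llamaindex.
llm = Portkey(mode="single", api_key="PORTKEY_API_KEY")
llm.add_llms(
    portkey.LLMOptions(
        provider="openai",
        model="gpt-4",
        virtual_key="OPENAI_VIRTUAL_KEY",
    )
)

# Hand the Portkey-backed LLM to Llamaindex via a ServiceContext.
service_context = ServiceContext.from_defaults(llm=llm)

# Build an index over local documents and query it.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
query_engine = index.as_query_engine()

print(query_engine.query("Summarize the key points of these documents."))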
@Vrushank | Portkey can you give a code example using Portkey with a query engine?
Yes! Let me share one today
@Vrushank | Portkey is it possible to share this ASAP? I need it a bit urgently.
@deepanshu_11 I tried this out with Portkey.

Here's the notebook: https://colab.research.google.com/drive/1BkbQh5RPQC7jNbM44xuH0D4QPG--U0yv?usp=sharing

Let me know what you think!
Embedding calls are not currently logged, but it seems to work well apart from that.
@Vrushank | Portkey currently my LLM is: llm = OpenAI(model="gpt-4", temperature=0.1, system_prompt=system_prompt, embed_model=OpenAIEmbedding(model="text-embedding-ada-002"))

Where can I add system_prompt and embed_model?

Also, thanks for the prompt response, you are fantastic 🙂
Plain Text
import portkey
from llama_index.llms import Portkey

# Chat client: a Portkey client in "single" mode, routed to OpenAI's gpt-4.
chat_client = Portkey(mode="single", api_key="PORTKEY_API_KEY")

# The system prompt is passed as a messages override on the LLM options.
system_prompt = [{"role": "system", "content": "..."}]

chat_client.add_llms(
    portkey.LLMOptions(
        provider="openai",
        model="gpt-4",
        temperature=0.1,
        virtual_key="OPENAI_VIRTUAL_KEY",
        messages=system_prompt,
    )
)

chat_llm = chat_client

# Embedding client: a second Portkey client, configured for ada-002 embeddings.
embedding_client = Portkey(mode="single", api_key="PORTKEY_API_KEY")

embedding_client.add_llms(
    portkey.LLMOptions(
        provider="openai",
        model="text-embedding-ada-002",
        virtual_key="OPENAI_VIRTUAL_KEY",
    )
)

embedding_llm = embedding_client
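
Wiring the two clients above into a query engine is then a matter of passing both to the ServiceContext, as in the earlier sketch. Note this assumes the Portkey wrapper is accepted directly as an embed_model, which the thread doesn't confirm (embedding calls aren't logged yet):

Plain Text
from llama_index import ServiceContext

# chat_llm answers queries; embedding_llm (assumed compatible) builds the vectors.
service_context = ServiceContext.from_defaults(llm=chat_llm, embed_model=embedding_llm)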


I had no idea you could add a system prompt + embedding model directly to a single LLM instance in Llamaindex. How does that work? Can you point me to the Llamaindex doc on this? 😮
Lemme try this. I don't remember where I saw it, probably in one of their examples on GitHub.
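
For comparison, in stock Llamaindex the system_prompt belongs on the LLM itself, while the embedding model is normally attached to the ServiceContext rather than the LLM; a minimal sketch of that standard pattern, reusing the models from above:

Plain Text
from llama_index import ServiceContext
from llama_index.llms import OpenAI
from llama_index.embeddings import OpenAIEmbedding

# system_prompt is an argument on the Llamaindex LLM itself...
llm = OpenAI(model="gpt-4", temperature=0.1, system_prompt="You are a helpful assistant.")

# ...while the embedding model normally lives on the ServiceContext.
embed_model = OpenAIEmbedding(model="text-embedding-ada-002")
service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model)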