import portkey
from llama_index.llms import Portkey
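
# Chat client: single-provider Portkey client configured for gpt-4,
# with the system prompt passed in via LLMOptions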
chat_client = Portkey(mode="single", api_key="PORTKEY_API_KEY")
system_prompt = [{"role": "system", "content": "..."}]
chat_client.add_llms(
    portkey.LLMOptions(provider="openai", model="gpt-4", temperature=0.1,
                       virtual_key="OPENAI_VIRTUAL_KEY", messages=system_prompt)
)
chat_llm = chat_client
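
# Embedding client: a second Portkey client pointed at text-embedding-ada-002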
embedding_client = Portkey(mode="single", api_key="PORTKEY_API_KEY")
embedding_client.add_llms(portkey.LLMOptions(provider="openai", model="text-embedding-ada-002", virtual_key="OPENAI_VIRTUAL_KEY"))
embedding_llm = embedding_client
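
For reference, here is a minimal usage sketch. It assumes this targets the legacy llama_index 0.x Portkey integration, where Portkey implements the generic LlamaIndex LLM interface; the ChatMessage import and the chat() call below are that generic LLM API, not anything specific to the snippet above.

from llama_index.llms import ChatMessage

# Send a chat turn through the Portkey-routed client
# (standard LlamaIndex LLM chat() method, assuming Portkey subclasses LLM)
response = chat_llm.chat([ChatMessage(role="user", content="Hello!")])
print(response)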
I had no idea you could add a system prompt + an embedding model directly to a single LlamaIndex LLM instance. How does that work? Can you point me to the LlamaIndex docs on this?