Welcome to Portkey Forum

Updated 7 months ago

Can someone verify the langchain support for Portkey?
Plain Text
File "/Users/kaushikbokka/apps/leather/language-models/venv/lib/python3.8/site-packages/langchain_core/language_models/chat_models.py", line 556, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/Users/kaushikbokka/apps/leather/language-models/venv/lib/python3.8/site-packages/langchain_core/language_models/chat_models.py", line 417, in generate
    raise e
  File "/Users/kaushikbokka/apps/leather/language-models/venv/lib/python3.8/site-packages/langchain_core/language_models/chat_models.py", line 407, in generate
    self._generate_with_cache(
  File "/Users/kaushikbokka/apps/leather/language-models/venv/lib/python3.8/site-packages/langchain_core/language_models/chat_models.py", line 626, in _generate_with_cache
    result = self._generate(
  File "/Users/kaushikbokka/apps/leather/language-models/venv/lib/python3.8/site-packages/langchain_core/language_models/chat_models.py", line 910, in _generate
    output_str = self._call(messages, stop=stop, run_manager=run_manager, **kwargs)
  File "/Users/kaushikbokka/apps/leather/language-models/venv/lib/python3.8/site-packages/portkey_ai/llms/langchain/chat.py", line 173, in _call
    return message.get("content", "") if message else ""
AttributeError: 'ChatCompletionMessage' object has no attribute 'get'


seems to be failing with the latest version.
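The traceback shows dict-style access on an SDK object: the OpenAI SDK returns message objects that expose attributes, not dicts, so message.get("content", "") raises AttributeError. A minimal sketch of the failure mode, using a hypothetical stand-in class for the SDK's ChatCompletionMessage:

```python
class ChatCompletionMessage:
    """Stand-in for the SDK's message object: attribute access, no .get()."""

    def __init__(self, content):
        self.content = content


message = ChatCompletionMessage(content="hello")

# Dict-style access (what the old integration did) fails:
try:
    message.get("content", "")
except AttributeError as e:
    print(e)  # 'ChatCompletionMessage' object has no attribute 'get'

# Attribute access works:
content = getattr(message, "content", "") if message else ""
print(content)  # hello
```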
19 comments
Can you share the code snippet you're using? We just published new Langchain documentation - so it should be working πŸ˜„
So we should no longer use from portkey_ai.llms.langchain import ChatPortkey?
Correct! That integration we felt wasn't working well, so we've pushed an update
It's a breaking change in prod :(. There should have been a deprecation warning.

No worries. Where can we find the updated docs?
Sorry about that :/

So we haven't actually deprecated it, but since it was flaky and had Langchain dependencies that were tough to manage, we moved the integration upstream to Langchain itself

Here's the documentation: https://portkey.ai/docs/welcome/integration-guides/langchain-python
It would work for Azure as well - if you have saved your Azure details to Portkey and have a virtual key, then you can just use the ChatOpenAI method and pass your Azure virtual key there
perf! thanks. let me try
Plain Text
import os
from abc import ABC
from typing import Mapping, Optional, Union

from langchain_core.language_models.llms import BaseLLM
from portkey_ai.llms.langchain import ChatPortkey


class BaseChain(ABC):
    def __init__(
        self,
        temperature: Union[int, float] = 1,
        max_tokens: int = 256,
        llm: Optional[BaseLLM] = None,
        system_message: str = "",
        human_message: str = "",
        virtual_key: str = "azure-openai-ce-948b68",
        config: Optional[Union[Mapping, str]] = None,
        retry_count: Optional[int] = None,
        cache: Optional[str] = None,
        cache_force_refresh: Optional[str] = None,
        cache_age: Optional[int] = None,
        custom_metadata: Optional[dict] = None,
    ):
        self._system_message = system_message
        self._human_message = human_message
        if llm is not None:
            self._llm = llm
        else:
            if "PORTKEY_API_KEY" not in os.environ:
                raise ValueError("PORTKEY_API_KEY env variable not set")
            PORTKEY_API_KEY = os.getenv("PORTKEY_API_KEY")
            if custom_metadata is None:
                # self.tag is defined by the concrete subclasses
                custom_metadata = {"chain": self.tag.replace("_chain", "")}
            self._llm_kwargs = {
                "temperature": temperature,
                "max_tokens": max_tokens,
                "api_key": PORTKEY_API_KEY,
                "config": config,
                "virtual_key": virtual_key,
                "retry_count": retry_count,
                "cache": cache,
                "cache_force_refresh": cache_force_refresh,
                "cache_age": cache_age,
                "custom_metadata": custom_metadata,
                # "request_timeout": 1200,
            }
            self._llm = ChatPortkey(**self._llm_kwargs)


This is how we have designed the BaseChain, which is subclassed by LLM Chains
Thanks for sharing - the key change is that you'd instantiate the ChatOpenAI client instead of ChatPortkey, and pass the following:
  • custom_metadata
  • virtual_key
  • api_key
  • config
  • cache
  • cache_force_refresh
  • cache_age
To the ChatOpenAI client, and pass
  • temperature
  • max_tokens
While making your request.
Does this make sense?
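A rough sketch of that split, building the Portkey headers by hand. The header names follow Portkey's x-portkey-* convention but should be checked against the current docs; in practice createHeaders() from portkey_ai assembles this dict for you, and the virtual key and metadata values here are placeholders:

```python
import json


def build_portkey_headers(api_key, virtual_key=None, metadata=None):
    """Hypothetical helper: routing-level settings go into request headers
    on the client, mirroring what portkey_ai's createHeaders produces."""
    headers = {"x-portkey-api-key": api_key}
    if virtual_key:
        headers["x-portkey-virtual-key"] = virtual_key
    if metadata:
        # Metadata is sent as a JSON-encoded string
        headers["x-portkey-metadata"] = json.dumps(metadata)
    return headers


headers = build_portkey_headers(
    "PORTKEY_API_KEY_HERE",               # placeholder API key
    virtual_key="azure-openai-xx",        # placeholder virtual key
    metadata={"chain": "summarize"},
)

# The client is then created once with these headers, e.g.:
#   llm = ChatOpenAI(api_key="X", base_url=PORTKEY_GATEWAY_URL,
#                    default_headers=headers)
# while temperature / max_tokens are set on the model or per request.
```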
I want to use config instead of virtual_key
and createHeaders doesn't support config
Plain Text
portkey_headers = createHeaders(
    api_key=PORTKEY_API_KEY,
    config=config
)

llm = ChatOpenAI(api_key="X", base_url=PORTKEY_GATEWAY_URL, default_headers=portkey_headers)

llm.invoke("What is the meaning of life, universe and everything?")
okay, it does. got it working.

So, api_key="X" is a dummy.
I was just trying it out by calling Anthropic Bedrock with ChatOpenAI. It works