Welcome to Portkey Forum

MolBioBoy
Joined November 4, 2024
Getting an error I haven't been getting before with Portkey
ERROR:app.services.ai.portkey.client:Error in embedding batch 192 to 288: Error code: 500 - {'error': {'message': 'cohere error: internal server error, this has been reported to our developers. id 2f67ae23bc584dda09f0ade7ffe165d2', 'type': None, 'param': None, 'code': None}, 'provider': 'cohere'}
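Since the 500 here is a transient, provider-side error (Cohere has already logged it under the id in the message), a minimal client-side mitigation is to retry the failed batch with exponential backoff. A sketch, where `embed_fn` is a hypothetical stand-in for whatever call sends one batch through the Portkey client:

```python
import time

def embed_with_retry(embed_fn, batch, max_retries=3, base_delay=1.0):
    """Retry a single embedding batch on transient provider errors.

    embed_fn: any callable that sends one batch and returns its embeddings
    (hypothetical; substitute your Portkey client call). Retries on any
    exception with exponential backoff before re-raising the last error.
    """
    for attempt in range(max_retries):
        try:
            return embed_fn(batch)
        except Exception:
            if attempt == max_retries - 1:
                raise  # give up after the final attempt
            time.sleep(base_delay * (2 ** attempt))
```

Retrying only the failed batch (rather than the whole job) keeps a single flaky 500 from costing you batches 0 through 191 again.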
Forgot to add: the only way embedding works is if I use Portkey's default settings, but I have no idea which model that uses, and I need to change the model depending on the input in some scenarios.
🥹 Any input on this, guys?
Could be an edge case, but: the ability to choose models for prompt templates based on which API key is used. I.e. I have 3 keys (dev, staging, and prod). In staging I want to use Llama 8B to save money, while in prod I want the 70B. Right now I have to implement this logic in my backend, but I could see value in something like dynamic model choice based on X (not necessarily the API key; it could be anything, like a user ID?).
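Until something like that exists natively, the backend logic can at least be kept to one small lookup. A sketch with assumed model names, keyed on environment here, though the key could just as well be a user ID or any other request attribute:

```python
# Hypothetical client-side workaround: resolve the model per environment
# before calling Portkey. Model names below are assumptions; substitute
# whatever identifiers your provider exposes.
MODEL_BY_ENV = {
    "dev": "llama-3.1-8b-instruct",
    "staging": "llama-3.1-8b-instruct",  # cheaper model outside prod
    "prod": "llama-3.1-70b-instruct",
}

def pick_model(env: str, default: str = "llama-3.1-8b-instruct") -> str:
    """Map a deployment environment to the model to request,
    falling back to a cheap default for unknown environments."""
    return MODEL_BY_ENV.get(env, default)
```

Centralizing the mapping in one table means switching staging to a different model is a one-line change rather than logic scattered across call sites.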
The ability to filter analytics by which prompt was used; for now I have to go through each log in a time range to figure out what I need.
Different environments for prompts (i.e. dev, staging, production).
Also, a side question: when sending a list of strings to be embedded, does the response preserve the order? I.e. if I send "hello", "world", will the embedding for "hello" be at index 0 and "world" at index 1?
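For OpenAI-compatible embedding APIs, each item in the response's `data` array carries an `index` field pointing back to the position of its input, so the alignment can be made explicit rather than assumed. A sketch that sorts on that field, operating on a plain list shaped like `response.data`:

```python
def embeddings_in_input_order(response_data):
    """Return embedding vectors aligned with the input order.

    response_data: a list of dicts shaped like the `data` array of an
    OpenAI-compatible embeddings response, where each item's `index`
    field refers back to the input position. Sorting on `index` makes
    the ordering guarantee explicit instead of relying on the order
    the items happen to arrive in.
    """
    return [item["embedding"]
            for item in sorted(response_data, key=lambda d: d["index"])]
```

With the real SDK objects you would sort `response.data` on `item.index` the same way.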
Unsure if it's just me, but when I click on logs pertaining to embeddings, the whole platform crashes.
Hi guys, I'm struggling a little to use Cohere on your platform. Fireworks works fine, but when I made a Cohere virtual key it stops working.

For example, when creating a prompt with Fireworks it shows me all the models, but with Cohere it throws an error saying "no foundation models available".

Similarly, when I try to create embeddings with Cohere and explicitly name the model I want to use, I get an error as well:

Plain Text
openai.BadRequestError: Error code: 400 - {'error': {'message': 'cohere error: invalid request: valid input_type must be provided with the provided model', 'type': None, 'param': None, 'code': None}, 'provider': 'cohere'}


This is how I'm using the library to generate embeddings:

Plain Text
    ...
    async def create_embedding(self, input: Union[str, List[str]], model: str):
        return await self._client.embeddings.create(input=input, model=model)
    ...

    # USAGE (create_embedding is async, so it has to be awaited
    # from an async-flavored client)
    client = create_portkey_fireworks_client(client_type="async", provider="cohere")
    input = ["I am happy", "I am sad"]
    x = await client.create_embedding(input, model="embed-multilingual-v3.0")
    print(x)
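One likely cause, going by the error text: Cohere's v3 embed models require an `input_type` parameter, which the OpenAI-style call above never sends. A possible workaround, assuming Portkey forwards `extra_body` fields through to Cohere (worth verifying against the current docs), is to attach it explicitly when building the request:

```python
from typing import List, Union

def embedding_request_kwargs(input: Union[str, List[str]], model: str,
                             input_type: str = "search_document") -> dict:
    """Build kwargs for an OpenAI-compatible embeddings.create call that
    forwards Cohere's required input_type via extra_body.

    Assumption: the gateway passes extra_body fields through to Cohere.
    Valid input_type values for Cohere's v3 embed models include
    "search_document", "search_query", "classification", and "clustering".
    """
    return {
        "input": input,
        "model": model,
        "extra_body": {"input_type": input_type},
    }

# usage (sketch):
# resp = await self._client.embeddings.create(**embedding_request_kwargs(
#     ["I am happy", "I am sad"], "embed-multilingual-v3.0"))
```

The OpenAI Python SDK accepts `extra_body` on its create calls for exactly this kind of provider-specific extension, so no change to the client class itself is needed.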


Please let me know what I'm doing wrong :/