Perplexity API integration: Clarifying token limits and pricing

Two questions regarding the Perplexity API integration: 1. I believe "maximum tokens" limits the total tokens used, but shouldn't it limit only the output tokens? 2. When using the model pplx-70b-online, no price is shown.
3 comments
Checking the cost issue on priority. This is what max_tokens refers to for Perplexity:

The maximum number of completion tokens returned by the API. The number of tokens requested in max_tokens plus the number of prompt tokens sent in messages must not exceed the context window token limit of the model requested. If left unspecified, the model will generate tokens until it either reaches its stop token or the end of its context window.
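For illustration, here is a minimal sketch of a chat-completions call that sets max_tokens. It assumes Perplexity's OpenAI-compatible endpoint at https://api.perplexity.ai/chat/completions and a PERPLEXITY_API_KEY environment variable, and uses the pplx-70b-online model from this thread; the cap applies to completion tokens only, and prompt tokens plus max_tokens must still fit within the model's context window.

```python
import os
import requests

# Assumed OpenAI-compatible endpoint; adjust if you route the call through a gateway instead.
PPLX_URL = "https://api.perplexity.ai/chat/completions"

payload = {
    "model": "pplx-70b-online",  # model discussed in this thread
    "messages": [
        {"role": "user", "content": "Summarize today's top AI news in two sentences."}
    ],
    # Caps *completion* tokens only. Prompt tokens + max_tokens must not
    # exceed the model's context window, or the request is rejected.
    "max_tokens": 256,
}

resp = requests.post(
    PPLX_URL,
    headers={"Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```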
@ekevu Cost should now be reflected for the pplx-70b-online model. We have updated the pricing for it, and we have also updated the cost in old logs.

Regarding the max_tokens query, please go through the message above. Is there any particular reason you feel it's counting total_tokens instead of completion tokens?
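If it helps to verify, the usage block in the response separates prompt and completion tokens. The short sketch below (reusing the resp and payload objects from the earlier snippet) shows that only completion_tokens is bounded by max_tokens, while total_tokens also includes the prompt.

```python
usage = resp.json()["usage"]
print("prompt_tokens:    ", usage["prompt_tokens"])
print("completion_tokens:", usage["completion_tokens"])  # bounded by max_tokens
print("total_tokens:     ", usage["total_tokens"])       # prompt + completion
assert usage["completion_tokens"] <= payload["max_tokens"]
```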
Thank you for updating the pricing; I can see it now! Never mind about the other issue, as I cannot reproduce it right now.