Welcome to Portkey Forum

Updated last year

Generating text using the 'n' parameter

You can do this using the `n` param - it's passed the same way as `top_p`, `temperature`, etc., and returns that many completions for a single prompt.
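A minimal sketch of what that request could look like. The model name, prompt, and temperature here are placeholder assumptions; the helper just assembles an OpenAI-style chat-completion payload rather than actually calling the API.

```python
def build_chat_request(prompt: str, n: int = 3) -> dict:
    """Assemble an OpenAI-style chat completion payload with `n` completions."""
    return {
        "model": "gpt-4o-mini",  # placeholder model name
        "messages": [{"role": "user", "content": prompt}],
        "n": n,                # number of completions to generate in one call
        "temperature": 0.9,    # higher temperature diversifies the n choices
    }

request = build_chat_request("Write a one-line product tagline.", n=3)
# When sent, the response's `choices` list would contain `n` entries,
# one completion per entry -- same prompt, different samples.
```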
Actually, not like that. Let's say we want to generate emails for 10 different users: the prompt context will be different for each, but the company context is the same. So instead of making 10 separate OpenAI calls, can we reduce it to as few as possible?
Interesting. Even in that scenario, each of your final prompts is different, right? Because the content changes each time.

If you have sufficient tokens, maybe you could alter the prompt to stuff context for two users in a single prompt.
Or even more, and have the model return the responses in a JSON format or something.
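One way that batching could be sketched: pack the shared company context once, list each user's details under an id, and ask for JSON keyed by those ids. The company context, user details, and prompt wording below are all hypothetical assumptions, not a prescribed format.

```python
COMPANY_CONTEXT = "Acme Corp sells developer tooling."  # shared context, hypothetical

def build_batched_prompt(company_context: str, users: list[dict]) -> str:
    """Pack several users' email requests into a single prompt,
    asking the model to return one JSON object keyed by user id."""
    user_sections = "\n".join(
        f'- id "{u["id"]}": {u["details"]}' for u in users
    )
    return (
        f"Company context: {company_context}\n\n"
        f"Write a short outreach email for each user below.\n"
        f"{user_sections}\n\n"
        'Return only a JSON object mapping each user id to its email, '
        'e.g. {"u1": "...", "u2": "..."}.'
    )

users = [
    {"id": "u1", "details": "CTO at a fintech startup"},
    {"id": "u2", "details": "indie game developer"},
]
prompt = build_batched_prompt(COMPANY_CONTEXT, users)
# This single prompt replaces two separate calls; the JSON response
# can then be parsed and split back out per user.
```

The trade-off is token budget: how many users fit in one call depends on the model's context window and how long each email needs to be.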