So when it comes to using AI APIs, a lot of them charge by token usage.
With the Pro plan of Perplexity you get 5 dollars of free API credit, but beyond that you get charged for the tokens you use.
When AI services charge for API usage by tokens, they are essentially billing based on the amount of data processed by the API. Here’s a breakdown of how this works:
What Are Tokens?
Tokens are units of data. In the context of AI models like GPT, a token can be as short as one character or as long as one word. For example, the word “chat” is one token, but a longer word like “extraordinary” could be broken into multiple tokens.
Tokenization is the process of splitting text into tokens. The model reads text and divides it into these smaller units, which it then uses to understand and generate responses.
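Real tokenizers (such as the learned BPE vocabularies used by GPT-style models) are far more sophisticated, but a toy sketch illustrates the basic idea from the paragraph above: short words may map to a single token, while long words get split into several pieces. The splitting rule here (fixed 4-character chunks) is invented purely for illustration.

```python
# Toy illustration of tokenization (NOT a real BPE tokenizer).
# Real models use learned subword vocabularies; here we simply
# split on whitespace and break long words into 4-character chunks
# to show how one word can become several tokens.

def toy_tokenize(text, max_piece=4):
    tokens = []
    for word in text.split():
        if len(word) <= max_piece:
            tokens.append(word)
        else:
            # Break a long word into fixed-size subword pieces
            tokens.extend(word[i:i + max_piece]
                          for i in range(0, len(word), max_piece))
    return tokens

print(toy_tokenize("chat"))           # ['chat'] -> one token
print(toy_tokenize("extraordinary"))  # ['extr', 'aord', 'inar', 'y'] -> four tokens
```

So "chat" stays one token, while "extraordinary" becomes four, which is why longer prompts and responses cost proportionally more.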
How Are Tokens Used?
Input Tokens: When you send a request to an AI API, the text you input is tokenized. The API counts the number of tokens in your input.
Output Tokens: After processing the input, the AI generates a response, which is also tokenized. The number of tokens in the output is also counted.
Billing by Tokens
Pricing Model: AI services often charge based on the number of tokens processed. The cost may include both the input and output tokens. For example, if you send a request with 10 tokens and receive a response with 50 tokens, you might be billed for 60 tokens in total.
Rate Calculation: The rate per token can vary depending on the AI model used and the service provider. For example, using a more advanced model with better capabilities might cost more per token.
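The billing arithmetic above (10 input tokens + 50 output tokens = 60 tokens billed) can be sketched in a few lines. The per-token rates below are made up for illustration; real providers publish their own pricing, usually quoted per million tokens, and often with different rates for input and output.

```python
# Sketch of token-based billing. The rates are invented for
# illustration; real providers publish per-token pricing,
# typically quoted per million tokens.

def api_cost(input_tokens, output_tokens,
             input_rate_per_m=1.00, output_rate_per_m=1.00):
    """Return the cost in dollars for one request/response pair."""
    return (input_tokens * input_rate_per_m +
            output_tokens * output_rate_per_m) / 1_000_000

# The example from the text: 10 input tokens + 50 output tokens
total_tokens = 10 + 50
print(total_tokens)      # 60 tokens billed in total
print(api_cost(10, 50))  # cost at a hypothetical $1.00 per million tokens
```

Separate input/output rates are worth modeling explicitly, since many providers charge noticeably more for output tokens than for input tokens.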
Yes! Use the pplx-7b-online and pplx-70b-online models, which leverage information from Perplexity’s search index and the public internet. See our blog post for more details.
Let me rephrase; I think I was unclear, so let me try to clarify.
I’m using the perplexity module in a scenario on Make.com.
I want to use the “llama-3.1-sonar-large-128k-online” model in that scenario.
My experience with the “llama-3.1-sonar-large-128k-online” model (in that module, in that specific scenario) is that it hallucinates and does not work properly. It only returns hallucinations and therefore basically ignores the input prompt.
BACKGROUND
I previously had a paid “Pro” account on perplexity.ai. I no longer have that Pro account; I am now a “free” user. While I had the Pro account, I bought API credits. Those credits are still there.
Question
So I was wondering (since experiencing these problems) whether I can use the “llama-3.1-sonar-large-128k-online” model when I DON'T have a paid account on perplexity.ai?
Yes, that’s right, the modules are no longer available at Perplexity. My experience is that online research is almost impossible with the Perplexity API: the output is completely hallucinated, nothing like the results I get when I go to the official website and make the same query there. But I need online research, so I access Google Search via its API instead. There I get results for my keywords and can then process that data further. The disadvantage is that the Google queries cost money at some point.
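For reference, accessing Google Search programmatically is typically done through the Google Custom Search JSON API. A minimal request sketch is below; the API key and search-engine ID (`cx`) are placeholders you must supply from your own Google Cloud project, and each query counts against Google's paid quota, which is the cost mentioned above.

```python
# Minimal sketch of querying the Google Custom Search JSON API.
# API_KEY and CX (search engine ID) are placeholders; you need your
# own credentials, and each request counts against Google's quota.
import json
import urllib.parse
import urllib.request

API_KEY = "YOUR_API_KEY"           # placeholder
CX = "YOUR_SEARCH_ENGINE_ID"       # placeholder

def build_search_url(query, num=5):
    params = urllib.parse.urlencode({
        "key": API_KEY,
        "cx": CX,
        "q": query,
        "num": num,  # number of results to return
    })
    return "https://www.googleapis.com/customsearch/v1?" + params

def search(query):
    # Performs the actual HTTP request (this is what costs quota).
    with urllib.request.urlopen(build_search_url(query)) as resp:
        return json.load(resp)

print(build_search_url("perplexity api hallucinations"))
```

The JSON response contains an `items` list of results, which can then be passed on to a downstream step (e.g. another module in a Make.com scenario) for further processing.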