Hello Community,
I noticed today that the Perplexity “Create a Chat Completion” module only returns generic text. Its job was to visit a URL and summarize the main points.
The output I received had nothing to do with the article at the URL I provided. I tested various models and got the same result every time.
When I enter the same prompt with the URL at perplexity.ai, I get a result that confirms the URL was actually visited and researched.
With the Make.com module, however, this is not the case.
What’s going on?
Regards,
Markus
If you make a call to the API directly, using something like Postman or a Python script (see the sketch below), do you get the same result? If you do, then the problem is the API, not Make.com.
If you get the correct result, then you would need to talk to the developer of the Perplexity integration.
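For reference, here is a minimal Python sketch of such a direct call. The endpoint and payload shape follow Perplexity's chat-completions API; the model name (taken from later in this thread) and the article URL are placeholders, so swap in whatever you used in your Make.com scenario:

```python
import os
import requests

# Direct call to Perplexity's chat-completions endpoint, bypassing Make.com.
# Assumes your API key is set in the PPLX_API_KEY environment variable.
url = "https://api.perplexity.ai/chat/completions"
headers = {
    "Authorization": f"Bearer {os.environ['PPLX_API_KEY']}",
    "Content-Type": "application/json",
}
payload = {
    # Placeholder model name; use the same model you selected in the module.
    "model": "llama-3-sonar-small-32k-online",
    "messages": [
        {
            "role": "user",
            # Placeholder prompt/URL; use the same prompt you gave the module.
            "content": "Summarize the main points of https://example.com/article",
        },
    ],
}

resp = requests.post(url, headers=headers, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

If this script returns the same generic text as the module, the behavior comes from the API itself rather than from Make.com.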
L
Hey @Markus1, which models are you using?
Perplexity offers the online models “llama-3-sonar-small-32k-online” and “llama-3-sonar-large-32k-online”, which at least claim to have internet access. Since these models aren’t open source, nobody can tell you whether they really look things up at the URL. Personally, I’ve had quite a good experience with them.
The Perplexity UI and API differ significantly. For example, the UI has access to models such as GPT-4 and Claude, which are not available through the API.
If your use case is really driven by the search functionality and you want control over the process, I’d advise using a tool such as Exa.ai. Alternatively, there are tools such as MultiOn.ai that offer an API for certain “agentic” behavior, such as visiting a website and looking up specific information.
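For illustration, here is a rough sketch of what a direct Exa search call could look like. This is an assumption based on Exa's public docs (the `/search` endpoint, `x-api-key` header, and response field names), so double-check against their current documentation before relying on it; the query is a placeholder:

```python
import os
import requests

# Hypothetical sketch of an Exa search request; endpoint and field names
# are assumptions, verify them against Exa's documentation.
resp = requests.post(
    "https://api.exa.ai/search",
    headers={"x-api-key": os.environ["EXA_API_KEY"]},
    json={"query": "main points of <your article topic>", "numResults": 3},
    timeout=30,
)
resp.raise_for_status()
for result in resp.json()["results"]:
    print(result["title"], result["url"])
```

The point of a setup like this is that you control the retrieval step yourself instead of trusting an opaque “online” model to have actually visited the page.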
Best,
Richard
Hello @Markus1, did you find a solution to this problem? If yes, please advise, because I’m facing the same issue. Thanks!