How to use agents to create scientific/complex long texts

:bullseye: What is your goal?

I want to produce a substantial scientific, fact-based document made up of multiple chapters, each around 25 pages in length. Each chapter should result from deep research based on academic literature, reports, studies, think tank white papers, public government guidelines, policy documents, and other credible sources.

Because this process will be repeated many times across different fields of study, I need a multi-agent scenario in which several agents can work both autonomously and in coordination to develop each chapter. While each chapter addresses a specific subject, they are all interconnected and must form part of a coherent overall document.

:thinking: What is the problem & what have you tried?

The agent LLMs do not produce the in-depth, high-quality research outcome I need for each chapter. In fact, I attended the last Make community online session, and the speaker said that the LLMs within Make do not provide the same reasoning power as ChatGPT or Claude used outside Make (because they have different tools available to them). Is there a way around this? Can I call ChatGPT to do the research outside Make and then send its result back to the Make scenario? How would I set that up?

I think the reasoning power is constrained mainly by the Make scenario runtime, unless you build an architecture that starts the prompt and fetches the result separately. But I have found that Make does not shine in use cases like this: Make is a visual orchestration platform that excels at agentic processing and task automation. If you want a long-running, deep-research LLM prompt that can iterate on its own output, there are better systems for that.
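To make the "start the prompt, fetch the result separately" idea concrete, here is a minimal Python sketch of the submit/poll pattern. `submit_job` and `fetch_result` are hypothetical stand-ins for a provider's asynchronous API (in reality an HTTP POST that returns a job id, then a GET you repeat until the job is done); the in-memory job store only exists so the sketch is self-contained and runnable.

```python
import time

_JOBS = {}  # in-memory stand-in for the provider's job store (illustrative only)


def submit_job(prompt: str) -> str:
    """Kick off a long-running research job and return a job id immediately."""
    job_id = f"job-{len(_JOBS) + 1}"
    # A real provider would queue the work and run it for minutes or hours;
    # this stub completes instantly so the example can execute.
    _JOBS[job_id] = {"status": "completed", "result": f"Report for: {prompt}"}
    return job_id


def fetch_result(job_id: str, poll_seconds: float = 0.0) -> str:
    """Poll until the job reports completion, then return its result."""
    while True:
        job = _JOBS[job_id]
        if job["status"] == "completed":
            return job["result"]
        time.sleep(poll_seconds)


# One Make scenario (or webhook) would call submit_job; a second scenario,
# triggered later, would call fetch_result. Make's runtime limit then only
# covers two short calls, not the long research run itself.
job_id = submit_job("Chapter 3: climate adaptation policy")
print(fetch_result(job_id))
```

The point of the split is that Make never waits on the slow step: the heavy reasoning happens on the provider's side between the two calls.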

But that is from my experience, happy to be proven wrong.

Cheers,
Henk


I don’t think Make or generic LLMs are a good tool for deep scientific research and creating long articles.

I strongly suggest investing in a local system with a local agent, trained on your data instead.

Generic LLMs will 100% hallucinate data and you will have to fragment this a lot in Make so you don’t hit API limits or Make’s timeouts. This will introduce further weak points that may cause the research to break.


Thanks a lot @Henk-Operative for your reply! Makes sense. Could you please tell me, from your experience, which products are better for this? I think you mean I should send the prompt to that product, which processes it and sends the outcome back to Make, or creates a document? Would an external HTTP call to OpenAI solve the problem, since the whole outcome would be generated inside OpenAI and would not depend on Make's runtime?

Thanks @Stoyan_Vatov for your answer. Indeed, but I cannot invest in a local system with a local agent. Would it be possible for Make to send the prompt to another tool outside Make (I am not sure which one) that would process the whole request and then create a document for further processing, or return the result to Make?

The market has exploded with options. If you take any deep-reasoning model with research capabilities, you can ask it to create an outline and start from there. If you want it purely scientific, with references and credentials, https://elicit.com/ is a good start. Claude Cowork also does a good job if you give it the right skills and plugins.

Setting up a flow where you use the API of an AI provider is a bigger hassle than paying for a subscription to either of the two options above, imo.

Cheers,
Henk
