The Ultimate Guide to Automating Your WordPress Blog with Fresh, Original Content

Hey there, Makers! :raised_hands:

Want to take your blog to the next level with AI-powered content? In this guide, I’ll show you how I created an automated AI research paper TL;DR blog using LlamaParse and Make. :robot::books:

In our previous guide, we explored how to use Jina AI Reader to extract content from URLs, which works great for web pages and online articles. However, when dealing with AI research papers, the content is often in PDF or other document formats. That’s where LlamaParse comes in! :bookmark_tabs:

LlamaParse is a game-changer, acting as a super-smart research assistant that quickly extracts content from complex documents (.pdf, .pptx, .docx, .html, .xml, and more). :mag: By integrating it with Make, I’ve built a fully automated content creation and publication workflow. :art:

All the prompts used in this workflow are included in the scenario blueprints, so you can simply download and import them directly into your Make account.

blueprint (5).json (157.1 KB)

The best part? LlamaParse offers a generous free tier of 1,000 free pages per day! If you need more, the paid plan includes the first 7,000 pages each week at no charge, with additional pages costing just $0.003 each. :moneybag:

You can visit the LlamaParse website at LlamaCloud to obtain your API key.

If you want to make your life easier, don’t miss our custom LlamaParse app, made just for Make. It offers a simple, hassle-free way to integrate LlamaParse into your Make workflows, so you can focus on creating amazing content.

Let’s dive in and see how you can use LlamaParse and Make to create your own AI-powered blog! :rocket:

So, here’s the problem with LLMs: they’re not always updated with the latest information. One solution is to feed the AI with the most recent data, but extracting and feeding that information can be a real headache. That’s where LlamaParse comes into play! :llama::bookmark_tabs:

We’re going to create a semi-RAG (Retrieval Augmented Generation) system that leverages LlamaParse to extract key insights from the latest AI research papers and feed them to our LLM.

This way, our AI-powered blog will always be up-to-date with the most recent breakthroughs in the field. :arrows_counterclockwise::date:

Let me explain how our scenario works:

This scenario automates the process of generating blog posts based on AI research papers from arXiv. It retrieves a research paper, extracts its content, generates a blog post outline and full article, creates an engaging featured image, and publishes the post on a WordPress blog.

Scenario Flow

  1. Retrieve the next research paper from a Google Sheet.
  2. Download the paper’s PDF file.
  3. Extract the paper’s content using LlamaParse.
  4. Update the Google Sheet to mark the paper as processed.
  5. Generate a comprehensive, SEO-optimized blog post outline using Anthropic Claude.
  6. Create an engaging, easy-to-understand blog post based on the outline and paper content using Anthropic Claude.
  7. Generate a concise prompt for an AI image that represents the blog post using OpenAI.
  8. Create a high-quality featured image using OpenAI DALL-E 3.
  9. Publish the blog post with the featured image on a WordPress blog.
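To make the data flow between the nine steps concrete, here is a minimal Python sketch of the scenario. Everything in it is illustrative: the function names (`fetch_next_paper`, `run_pipeline`) and the callables passed in are stand-ins for the corresponding Make modules, not real library APIs.

```python
def fetch_next_paper(sheet_rows):
    """Step 1: return the first sheet row not yet marked as processed."""
    for row in sheet_rows:
        if not row.get("processed"):
            return row
    return None

def run_pipeline(sheet_rows, download, parse, outline, write,
                 image_prompt, generate_image, publish):
    """Steps 2-9 expressed as plain function calls.

    Each callable stands in for one Make module: download = HTTP module,
    parse = LlamaParse, outline/write = Anthropic Claude,
    image_prompt/generate_image = OpenAI, publish = WordPress.
    """
    paper = fetch_next_paper(sheet_rows)
    if paper is None:
        return None                                 # nothing left to process
    pdf_bytes = download(paper["pdf_url"])          # step 2
    content = parse(pdf_bytes)                      # step 3
    paper["processed"] = True                       # step 4
    post_outline = outline(content)                 # step 5
    article = write(post_outline, content)          # step 6
    prompt = image_prompt(article)                  # step 7
    featured_image = generate_image(prompt)         # step 8
    return publish(article, featured_image)         # step 9
```

The key design point the sketch highlights is that each step only consumes the output of earlier steps, which is exactly why the Make modules can be chained linearly.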

The reason we’re using Claude here is that some research papers run to many pages, and Claude is better at retaining and summarizing the content of lengthy PDFs, making it well suited to handling large research papers efficiently.

:memo: Adapting this scenario to your unique use case
The semi-RAG system we’ve described for creating an AI-powered blog can be easily adapted to any scenario where you need to feed your LLM with up-to-date information. Whether you’re a financial analyst, healthcare professional, or marketing expert, this setup can be tailored to your specific needs. :star2:

You might be wondering how we can get the arXiv PDF URLs and save them into the Google Sheet. In that case, we can use this scenario:

Automated arXiv AI Research Paper Retrieval

blueprint (6).json (27.9 KB)

This scenario allows you to subscribe to any new AI-related research papers from arXiv and automatically save their details, including the PDF URLs, into a Google Sheet.

Scenario Flow

  1. Retrieve the RSS feed items from the arXiv AI category using the RSS module.
  2. Aggregate the retrieved data using the Basic Aggregator module.
  3. Feed the aggregated data into the Iterator module.
  4. Add a new row to the specified Google Sheet for each research paper, including the title, description, and URL.

For reference, here’s the template of the Google Sheet used in this workflow: arxiv - Google Sheets

We have also used the formula =SUBSTITUTE(C2, "abs", "pdf") in the Google Sheet to quickly convert the arXiv URLs to their corresponding PDF URLs.

In this example, we are retrieving all the available AI research papers from the arXiv RSS feed. However, you can easily modify the scenario to subscribe to any new papers related to your specific interests by adjusting the RSS feed URL.

Conclusion

And there you have it, Makers! :tada: You now have a powerful, automated AI-powered blog that stays up-to-date with the latest research papers. By combining the strengths of LlamaParse and Make, you can create a content creation workflow that saves you time and keeps your readers engaged.

Remember, this setup is highly adaptable and can be tailored to fit your unique needs :star2:

If you have any questions or need further assistance, don’t hesitate to leave a comment below or reach out to me at bilalmansouri.com.

Happy Making! :blush:


Congratulations: you have written a very useful post. Make has a new paid user :slight_smile: and LlamaParse also :grinning:

Do you know how long it takes, until llamaParse activates the license key?
I still get the message “Failed to verify connection ‘My LlamaParse connection’. [400]: Invalid token for the given product ID” when I try to establish the connection with my API key, email address, and license key.

Best regards


Hi Gabriel

Congratulations on purchasing our custom app! :tada:

I see that you’ve reached out regarding the connection issue. Please check your inbox, as I’ve sent you additional information that may help with the license key activation. Make sure to use the email you provided when you purchased the app. If you still encounter any issues, just let me know!

Hi,

I’m trying to use the custom LlamaParse app in my automation project. Basically, it’s the same as the showcase you shared, but the LlamaParse Upload File module always shows a “pending” status in the output. Is this normal?

Best regards,

Alexandre Castro

Hi Alexandre,

I understand your question about the “pending” status in LlamaParse. This is actually expected behavior, and I can help explain how to properly handle it!

The PDF parsing process is asynchronous, which means we need to check its status after submission. I’ve attached a blueprint that demonstrates the correct implementation:

llamaparse.json (15.1 KB)

  1. First, we download the PDF file
  2. Then use the Upload File module to submit it
  3. Add a small delay (2 seconds in this example)
  4. Use the “Get a Job” module to check if processing is complete
  5. Finally, use a router to handle both JSON and Markdown results

This blueprint uses a sample receipt PDF, making it perfect for testing. You can import it and use it as a reference for your own implementation.

The key is that you always need the status check step - the “pending” status you’re seeing is just the initial state while LlamaParse processes your document.
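The delay-then-check pattern above can be sketched as a small polling loop. This is only an illustration of the logic, not LlamaParse's API: `get_status` stands in for the "Get a Job" module, and the status strings are assumptions (in Make this is a Sleep module plus a status check, not a Python loop).

```python
import time

def wait_for_job(get_status, job_id, delay=2, max_attempts=10):
    """Poll a job's status until it completes, fails, or times out.

    get_status: callable returning an assumed status string for job_id,
    e.g. "PENDING", "SUCCESS", or "ERROR".
    """
    for _ in range(max_attempts):
        status = get_status(job_id)
        if status == "SUCCESS":
            return True
        if status == "ERROR":
            raise RuntimeError(f"job {job_id} failed")
        time.sleep(delay)  # job still pending; wait and retry
    raise TimeoutError(f"job {job_id} still pending after {max_attempts} checks")
```

The important part is that "pending" is treated as a normal intermediate state to retry on, not an error.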

Let me know if you need any clarification or have questions about adapting this for your specific needs!

Best regards

Bilal Mansouri

Hi Bilal,

Thank you for your help. I implemented it as you showed, but I received an error message in “Get Result as JSON”: [403] Access to this job is forbidden.
While analyzing the module, I noticed that both “Get Result as JSON” and “Get Result as Markdown” have the same Job ID: 13. Is this correct?

Best Regards,

Alexandre