Automating Transcriptions from Huberman Lab podcast (Dr. Andrew Huberman) with Podscribe, Google Drive, and ChatGPT (Custom GPT)

What are you trying to achieve?

Hi everyone!

I’m trying to automate the management of podcast transcriptions for Dr. Andrew Huberman’s podcast (Huberman Lab on YouTube) using Make.com and ChatGPT. Specifically, I’d like to avoid the manual process I currently use to manage these transcriptions from Podscribe.

For context, Podscribe is an amazing platform that provides high-quality transcriptions for popular podcasts, including Huberman Lab, in .txt format. The problem is that every time I want to use these transcriptions with my custom GPT (which I’ve set up in ChatGPT), I have to manually:

1. Download the .txt file from Podscribe.
2. Convert the .txt to a Word document and save it as a PDF.
3. Finally, upload the PDF into my GPT so it can process the transcript and use it in the chat interface.
This manual process is time-consuming, and I started wondering if it’s possible to automate all of this using Make.com, so that I can simply interact with my GPT and request new podcast episodes directly from the ChatGPT interface. The GPT would communicate with Make.com, download the transcription, format it properly, and store it in Google Drive—all without any manual intervention.

Goals of the automation:

1. When I request a specific podcast episode from my GPT, the automation should:
   - Search Google Drive to check whether the requested episode has already been downloaded (to avoid duplicates).
   - If the episode is already in Google Drive, reply with: “This episode is already downloaded.”
   - If the episode is not found:
     - Download the episode’s transcription from Podscribe or YouTube.
     - Convert the transcription into a Google Docs file.
     - Save the document to Google Drive (in a specified folder, or a default one).
2. I also want the automation to handle different types of requests:
   - “latest”: download the latest episode.
   - “keyword”: search for and download an episode by keyword.
   - “specific”: download an episode by its exact title.
3. Implement error handling:
   - If an episode isn’t found or an API call fails, return an appropriate error message.
Ultimately, I want to be able to request new podcast episodes directly from the ChatGPT interface, and have everything happen automatically in the background with Make.com, saving me a lot of time and effort.
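The branching logic above can be sketched in plain Python. This is a hypothetical outline, not actual Make.com modules — the functions `search_drive`, `fetch_transcript`, and `save_as_doc` are stand-ins for the Google Drive, HTTP (Podscribe), and Google Docs modules:

```python
# Hypothetical sketch of the scenario's branching logic. The three
# callables stand in for Make.com modules and are passed in so the
# routing logic itself stays testable.

def handle_request(request_type, query, search_drive, fetch_transcript, save_as_doc):
    """Route a GPT request of type 'latest', 'keyword', or 'specific'."""
    if request_type not in ("latest", "keyword", "specific"):
        return {"status": "error", "message": f"Unknown request type: {request_type}"}

    # Check Google Drive first to avoid duplicates.
    if search_drive(query):
        return {"status": "exists", "message": "This episode is already downloaded."}

    transcript = fetch_transcript(request_type, query)
    if transcript is None:  # episode not found, or the API call failed
        return {"status": "error", "message": f"Episode not found: {query}"}

    doc_id = save_as_doc(query, transcript)  # convert to Google Docs + store in Drive
    return {"status": "saved", "doc_id": doc_id}
```

In the Make scenario, the same structure maps to a Router with three routes plus an error-handler path.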

Steps taken so far

I’ve set up the webhook and added modules for Google Drive search, HTTP requests to Podscribe, and Google Docs creation, but I’m not sure if I’ve configured them correctly.
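For the webhook step, the Custom GPT action would POST a small JSON body that the scenario can branch on. A sketch of what that payload might look like — the field names (`request_type`, `query`, `folder`) are my own illustration, not something Make or Podscribe prescribes:

```python
import json

# Hypothetical webhook payload a Custom GPT action could POST to the
# Make webhook. Field names are illustrative only.
payload = json.dumps({
    "request_type": "keyword",          # one of: "latest", "keyword", "specific"
    "query": "sleep toolkit",           # ignored when request_type is "latest"
    "folder": "Huberman Transcripts",   # target Google Drive folder
})
```

Make’s custom webhook can parse this body automatically, and the Router can then branch on `request_type`.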
Below is a screenshot of my current flow. I’m not sure if it’s structured properly, so I’d appreciate any guidance or advice.

Screenshots: scenario setup, module configuration, errors


Welcome, @jpmeniconi, to the Make Community.

I’ve not yet worked with OpenAI’s Custom GPT actions. However, I’m curious about your use case because of the interactivity aspects. I’m happy to help, provided you have the patience to discuss things here as we build it up step by step.

One thing that’s on my mind is that I don’t get the big-picture context. Is the CGPT meant to read the transcripts and then produce summaries, or something else?

I ask because CGPT knowledge isn’t changeable via the OpenAI API. However, using OpenAI’s Assistants, we could manage the Vector Store, which provides the Assistant with a knowledge base that can be queried.

Then, using Make to monitor the podcasts, it automatically pulls the transcripts into an OpenAI vector store, which the Assistant can read.
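A rough sketch of that sync step, assuming the OpenAI Python SDK. The method path `beta.vector_stores.files.upload_and_poll` matches the v1 SDK at the time of writing; newer SDK versions may drop the `beta` prefix, so treat the exact path as an assumption:

```python
# Hedged sketch: push one transcript file into an OpenAI vector store so
# an Assistant can search it. The client is passed in rather than
# constructed here, so this works with any SDK version exposing this path.

def sync_transcript(client, vector_store_id, transcript_path):
    """Upload one transcript file and wait until indexing finishes."""
    with open(transcript_path, "rb") as fh:
        result = client.beta.vector_stores.files.upload_and_poll(
            vector_store_id=vector_store_id,
            file=fh,
        )
    return result.status  # "completed" once the file is searchable
```

Make would call something like this (via an HTTP module or a custom app) whenever a new transcript lands.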

The bummer is that the Assistant requires a different interface than ChatGPT typically provides. In the past, I’ve used Slack. However, I wonder about using CGPT Actions to connect to Make, which connects to the OpenAI Assistant for the interactive bit.

I think that’d be cool, since we wouldn’t have to build an interface: the CGPT would offer a flexible front end, even though it’s the Assistant doing the actual answering.

Okay, that’s my conceptual approach at the moment. Do you want to take the journey together?

Hi Michael!

Thanks so much for your thoughtful and quick reply! I’ve been reflecting on your feedback, and I really appreciate the insights you’ve shared. After thinking it through, I’ve realized that adding an API or using tokens to automate the entire process isn’t the best fit for me. It would mean additional costs for every request I make, and I’d like to avoid that. I’d prefer to keep things as “free” as possible, so I’ve been thinking about a hybrid approach instead.

To clarify my use case, I’ve built a Custom GPT based on Andrew Huberman’s content to allow me to ask questions about health, wellness, neuroscience, and other related topics. Dr. Huberman brings top experts and researchers to his podcast, and they discuss very specific health-related topics. My goal is to have my GPT always ready to respond with not only general internet knowledge but also the wealth of information from the Huberman Lab Podcast. So far, I’ve been manually uploading PDFs of podcast transcripts into the GPT, and it’s been working wonderfully—the GPT can answer health-related questions with great clarity. But now I’d love to automate the process so my GPT stays updated week by week as new episodes are released, without me having to manually process the transcripts.

Perhaps a hybrid solution is the best fit for my needs. I’d like Make.com to automate the downloading and conversion of podcast transcriptions from Podscribe (or YouTube) into Google Docs, and then save them in Google Drive with the correct file names. Once a file is ready, I’d receive an email notification through Gmail letting me know that the latest transcription is saved and ready. From there, I’ll manually upload it to my Custom GPT in ChatGPT.
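For the "correct file names" part, a small normalization step makes the duplicate check in Drive match reliably. A sketch — the `date - slug` naming pattern is my suggestion, not anything Podscribe emits:

```python
import re
from datetime import date

def drive_filename(episode_title, release_date):
    """Build a stable Drive name like '2024-01-15 - how-to-sleep-better'."""
    # Lowercase, replace runs of non-alphanumerics with single hyphens.
    slug = re.sub(r"[^a-z0-9]+", "-", episode_title.lower()).strip("-")
    return f"{release_date.isoformat()} - {slug}"
```

In Make, the same effect can be had with the built-in `lower()` and `replace()` string functions in the Google Docs module’s name field.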

This hybrid approach keeps things simple and cost-effective. I want to avoid extra complexity and costs, such as using APIs or services like OpenAI Assistants or Vector Stores. I’m already using ChatGPT Plus ($20/month), and I’d like to stick with that without adding extra charges for API credits or other paid services. I feel like this solution strikes a good balance between automating the more tedious parts (like downloading and saving) while still giving me control over the final step.

This solution would help me keep my Custom GPT updated with the latest podcast episodes without me needing to manually download and process the files each time. I’d receive a notification when the transcription is ready, and I can upload it to ChatGPT at my convenience.

I’d love your help setting this up, especially making sure the notifications are configured correctly and that the files are named properly. Thanks again for your guidance, and I’m looking forward to moving forward with your support!

Best regards,
JP

Good day, JP;

Much appreciated for the further background information. Knowing that you’re attempting to build a knowledge base via a Custom GPT is helpful.

For the hybrid solution you mention, it seems that a weekly scheduled Make scenario to operate as you describe should work fine.

Where specifically are you challenged now?