Why are my projects always so complicated? Issues with triggers, routers and connections

What are you trying to achieve?

I am trying to run a scenario where I evaluate the sales of various products across different store locations.
I have so many issues with it, I don’t even know where to start.

My inputs are:
A) the product code. Product codes are listed in a Google Sheet.
B) one .docx file per store location. These are files already saved and are static.
C) one .docx file and two .csv files. These contain dynamic information about each product.
A, B and C should be fed into ChatGPT for analysis.

My questions:

  1. How do I feed the multiple files of A, B and C into ChatGPT? It seems that a router cannot have multiple incoming connections.

  2. The files in C: the .docx is an output from ChatGPT and the two .csv files are outputs of Python scripts I run. How do I automate producing these files for each of the product codes in A? Is there a trigger I can put in place to make sure the files in C are produced for every product code in A each time I run the scenario?

  3. After all that is done, how do I run the whole scenario for every store location (in B above)? I would like to avoid creating a new scenario per location. Does it make sense to create a Google Sheet with two columns: one listing all product codes and a second with the name of the first location, then all product codes again with the name of the second location, and so on until every location is cross-referenced with every product code? Would the scenario be able to pull A and B automatically based on this Google Sheet?
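For what it's worth, the sheet described in question 3 is just the Cartesian product of product codes and locations. A quick Python sketch (using made-up example values — the real product codes and location names would come from your actual Google Sheet) could generate those rows for pasting in:

```python
# Hypothetical example data -- replace with the real product codes
# and store locations from your own sheets.
product_codes = ["P-001", "P-002", "P-003"]
locations = ["Downtown", "Riverside"]

# One row per (product code, location) pair, matching the layout
# described above: all codes under the first location, then all
# codes under the second, and so on.
rows = [(code, loc) for loc in locations for code in product_codes]

for code, loc in rows:
    print(f"{code}\t{loc}")
```

This prints tab-separated rows, so the output can be pasted straight into a two-column Google Sheet.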

Anyone who made it through this whole post is a hero!

Steps taken so far

Went through explanation videos and read through community posts.

Hello @Maria4 and welcome to the Make Community!

Let me just break this down…

  • You have multiple stores, each sells multiple products.
  • Each store has a .docx file, and you have a reference mapping each store to its file.
  • Each product has a docx file and 2 csv files.
    ** The DOCX is produced by ChatGPT. Does this need to happen within Make, or is it done outside of Make with the file stored somewhere and just needing to be retrieved?
    ** The CSV files come from Python scripts. Do you need Make to produce these, or is that done outside of Make and you just need Make to retrieve them?
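If the answer is that the CSVs do need to be regenerated per product code, here's a rough sketch of how the Python side might loop over all codes and write the two files per product described in input C. Everything here is a placeholder (the code list, the file names, the columns) — the real content is whatever Maria4's existing scripts already produce:

```python
import csv
from pathlib import Path

# Hypothetical product codes -- in practice these would be read
# from the Google Sheet referred to as input A.
product_codes = ["P-001", "P-002"]

out_dir = Path("product_files")
out_dir.mkdir(exist_ok=True)

for code in product_codes:
    # Two CSVs per product, as described in input C. "sales" and
    # "inventory" are invented names; the real scripts define these.
    for suffix in ("sales", "inventory"):
        path = out_dir / f"{code}_{suffix}.csv"
        with path.open("w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["product_code", "metric"])  # placeholder columns
            writer.writerow([code, suffix])
```

The point is just that one loop over the product-code list can produce every file in one run, which Make could then trigger (or retrieve the results of) instead of handling each product separately.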

However this process develops, does it need to run once per store or once per product?

I am slightly confused as to what your inputs will be.
Do you want to analyze all sales of all products at a specific store or analyze all sales of a single product across multiple stores?

You might consider making a Loom video explaining your inputs and goals…just use dummy data if you can. This might be worthwhile for you anyway, so you end up with something of a testing environment.

My initial thoughts are that you need a database, one that is easy for both Python and Make to access and update. There are many options: SmartSuite, Airtable, Notion, etc…