CSV headers are gone after parsing, even with headers ON?

I’m grabbing a CSV off an FTP server, then parsing it with the Parse CSV module, then creating multiple CSVs with the Create CSV module. My CSV file from the FTP server has headers in the first row. In the Parse CSV module I have ‘CSV contains headers’ set to Yes, yet the headers are gone from its output. If I set it to No, I do get my headers, but they are recognized as a regular row and not a header row, so I lose my headers down the pipeline anyway. What could I be doing wrong here? It seems ridiculously straightforward, and yet I can’t get my desired output. Any help is much appreciated. Thank you all.

Howdy @jaj_vr, welcome to the Make Community!

Can you take a screenshot of your Make scenario for me, along with the relevant module configurations, and share the images here?


OK, thank you. I’ve added screenshots to the OP. Any help is much appreciated. Thanks.

The headers get used in the output bundle structure, yes? Isn’t that the desired output? Or am I missing exactly what you’d like to see in the output of the Parse CSV call?

Can you tell us what the desired output is?

No. The headers are nowhere to be found in the output bundle. I assume my headers should be used in place of ‘Column 1’, ‘Column 2’, etc. Does the Parse CSV module with ‘CSV contains headers’ ON not pass the headers down the pipeline? I assumed it would.

Further down the pipeline I create multiple CSV files based on the CSV passed into this parse module, and I need those headers.


Those are the headers in the output bundle, no?

Not sure what you’re referring to. The SFTP module only shows that it picked up a CSV file. The header row is inside the CSV file, as seen in the Parse CSV input bundle that I highlighted with a red box. For example, the first header field is PrimaryEnrollmentNumber.

I see. Have you tried a simpler two-line file with just a few fields?

The file is quite simple and a valid CSV file as it is.

I’m having the same problem.


Maybe you’re referring to HTTP headers or something like that, but I am referring to the first row of a CSV file as a header row.

@jaj_vr, the header toggle’s purpose is to tell Make whether you need the first row or not. If the first row is a header, you can generally ignore it, because you will not map it downstream.

The CSV module will always return the data by column position rather than by header name. This is just the way it is designed.

How is the fact that Make gives you a column number instead of the actual header name a problem?
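For readers coming from code, the distinction is the same as Python’s positional `csv.reader` versus the header-keyed `csv.DictReader`. A minimal sketch with made-up sample data (this only illustrates the two addressing styles; it is not how Make works internally):

```python
import csv
import io

# Hypothetical sample standing in for the file fetched from the FTP server.
raw = "PrimaryEnrollmentNumber,Name\n12345,Alice\n67890,Bob\n"

# Positional parsing: values are addressed by column index, which is
# analogous to Make's Parse CSV output (Column 1, Column 2, ...).
rows = list(csv.reader(io.StringIO(raw)))
header, data = rows[0], rows[1:]
print(data[0][0])  # first column of first data row -> "12345"

# Name-based parsing: values are addressed by header name,
# which survives columns being reordered.
for record in csv.DictReader(io.StringIO(raw)):
    print(record["PrimaryEnrollmentNumber"])
```

Positional access breaks silently if columns move; name-based access only breaks if a header is renamed.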


It’s a problem if the number of columns and their positions in the CSV vary, because then you never know which information you will find at which column number.

You could read the header row separately and make an array?
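In code terms, that workaround looks roughly like the following Python sketch (with hypothetical column names and data; in Make you would do the equivalent with an array built from the header row):

```python
import csv
import io

# Hypothetical CSV whose column order may vary between runs.
raw = "Name,PrimaryEnrollmentNumber\nAlice,12345\n"

rows = list(csv.reader(io.StringIO(raw)))

# Read the header row separately and build a name -> position map...
index = {name: pos for pos, name in enumerate(rows[0])}

# ...then look up values by header name instead of a hard-coded position.
for row in rows[1:]:
    enrollment = row[index["PrimaryEnrollmentNumber"]]
    print(enrollment)  # still correct even if the column moves
```

The lookup stays valid as long as the header names are stable, regardless of where each column ends up.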


Yeah, but that would be a problem regardless; it is hard to build automations if the structure of the CSV is not constant.

Even with Google Sheets the mapping is done using column ID.

It is doable though, just a bit complex.

Agreed. Why doesn’t the CSV module use the header row as a key reference? Seems like a miss.


This is how the data is returned by the Google Sheets module; it uses the raw column ID for mapping as well. The header name is displayed as a convenience, but if the columns are shuffled, it will break the mapping.

Back in the day, Zapier used to map by column name, but they changed it to raw column IDs as well. I think it’s just more robust this way.


My question: what software is used to create this .csv file?

It seems odd that the software doesn’t always output the same columns in the same order. This seems to be a bug in the software that outputs the .csv file.

Note: if this is actually multiple .csv files with separate information coming from multiple sources, then it is best to create multiple scenarios and merge that data in a separate Sheet or Data Store.

Yes, it is annoying that the column headers (titles) don’t show up in Make, but that shouldn’t make a difference overall; just open the .csv in Google Sheets as you map the column numbers in your scenario.

I definitely understand the annoyance, though… I am working with a very large .csv file in a huge scenario right now, and it’s definitely annoying going back and forth.