My workflow starts from a private Slack channel, which I use as my interface for recording information. The input is either a link or a screenshot. If it's a link, I add a short description of what the link is about. The description is converted to a vector embedding and stored in Pinecone, with the URL and the description kept in the metadata.
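Roughly, the link-ingestion step looks like this. This is a minimal sketch, not my exact code: the index name `bookmarks`, the embedding model, and the `store_link` helper are placeholders, and the Slack event handling that feeds it is omitted.

```python
import os
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
index = Pinecone(api_key=os.environ["PINECONE_API_KEY"]).Index("bookmarks")

def store_link(url: str, description: str) -> None:
    # Embed the human-written description of the link.
    emb = openai_client.embeddings.create(
        model="text-embedding-3-small",
        input=description,
    )
    # Upsert into Pinecone, keeping the URL and description in the
    # metadata so the original link can be recovered at query time.
    index.upsert(vectors=[{
        "id": url,  # the URL itself works as a stable id
        "values": emb.data[0].embedding,
        "metadata": {"url": url, "description": description},
    }])
```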
If it's an image, the image is first stored in my OneDrive. OpenAI's image recognition is then asked to describe the image, which gives me enough keywords to create the vector embedding. That embedding is stored in Pinecone as well, with the description and the OneDrive link to the image in the metadata.
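The image path could look something like the sketch below. It assumes the screenshot has already been uploaded to OneDrive and that the share link is publicly reachable so OpenAI can fetch it (otherwise the image bytes would have to be sent as a base64 data URL); the Microsoft Graph upload call itself is omitted, and the model names are again placeholders.

```python
import os
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()
index = Pinecone(api_key=os.environ["PINECONE_API_KEY"]).Index("bookmarks")

def store_image(onedrive_link: str) -> None:
    # Ask a vision-capable model for a keyword-rich description of the screenshot.
    chat = openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this screenshot with enough keywords for search."},
                {"type": "image_url", "image_url": {"url": onedrive_link}},
            ],
        }],
    )
    description = chat.choices[0].message.content
    # Embed the generated description and store it alongside the OneDrive link.
    emb = openai_client.embeddings.create(
        model="text-embedding-3-small", input=description)
    index.upsert(vectors=[{
        "id": onedrive_link,
        "values": emb.data[0].embedding,
        "metadata": {"image_url": onedrive_link, "description": description},
    }])
```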
Querying information is simple. Slack is my interface again: the natural-language query is converted to keywords by OpenAI, the keywords are matched against Pinecone, and the best results are returned to Slack with their similarity scores. I can then pick the result that suits me best.
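The query side, again as a rough sketch under the same assumptions (same placeholder index and models, Slack posting omitted), ties the two OpenAI calls and the Pinecone lookup together:

```python
import os
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()
index = Pinecone(api_key=os.environ["PINECONE_API_KEY"]).Index("bookmarks")

def search(nl_query: str, top_k: int = 3) -> str:
    # Distill the natural-language query into concise search keywords.
    keywords = openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Extract concise search keywords from: {nl_query}"}],
    ).choices[0].message.content
    # Embed the keywords and run a similarity search against Pinecone.
    emb = openai_client.embeddings.create(
        model="text-embedding-3-small", input=keywords)
    results = index.query(vector=emb.data[0].embedding,
                          top_k=top_k, include_metadata=True)
    # Format each match as one Slack-ready line: score, link, description.
    return "\n".join(
        f"{m.score:.2f}  {m.metadata.get('url') or m.metadata.get('image_url')}  "
        f"{m.metadata['description']}"
        for m in results.matches)
```

Returning the raw scores next to each hit is what lets me eyeball the results in Slack and pick the right one instead of trusting the top match blindly.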