I am using the HTTP request module to make an API call to my local AI model. I am running LM Studio and using ngrok to forward the request to my LLM. When I send a small prompt, the HTTP request module works fine, but when I send a large prompt and data to my AI, it fails. I also tried localtunnel to send the request from the HTTP module, but it gives me the same error. I have also attached my LM Studio settings; my token limit is 32,000.
My large prompt contains about 2,030 tokens. How can I send this large request from the HTTP module? I also tried using a Text Parser to remove all the whitespace, but that didn't work either.
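For reference, the request I'm making is roughly equivalent to this Python sketch (the ngrok URL, model name, and prompt below are placeholders, and I'm assuming LM Studio's standard OpenAI-compatible /v1/chat/completions endpoint):

```python
# Rough Python equivalent of what my HTTP request module sends.
# NGROK_URL, the model name, and the prompt text are placeholders.
import requests

NGROK_URL = "https://example-tunnel.ngrok-free.app"  # my ngrok forwarding URL

payload = {
    "model": "local-model",  # whatever model is loaded in LM Studio
    "messages": [
        # the large (~2,030 token) prompt goes in the user message content
        {"role": "user", "content": "...large prompt text..."},
    ],
    "max_tokens": 1024,
}

response = requests.post(
    f"{NGROK_URL}/v1/chat/completions",
    json=payload,
    timeout=300,  # large prompts take much longer to process locally
)
print(response.status_code)
print(response.json())
```

With a small prompt in `content` this kind of request succeeds, but with the large prompt it does not.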