Hey, I wasn't able to; I was working on other parts of the problem in the flow. But here are the input and output bundles. By the way, I really appreciate the help.
Input -
[
  {
    "topK": 15,
    "topP": 1,
    "model": "gemini-1.5-pro",
    "messages": [
      {
        "role": "user",
        "content": ""
      },
      {
        "role": "model",
        "content": "Sure I can help you with that."
      },
      {
        "role": "user",
        "content": ""
      }
    ],
    "projectId": "betterwithaihackathon",
    "temperature": 0.9,
    "maxOutputTokens": 5000,
    "serviceEndpointLocationId": "europe-west9"
  }
]
Output -
[
  {
    "textResponse": "",
    "predictions": [
      {
        "candidates": [
          {
            "content": {
              "role": "model",
              "parts": [
                {
                  "text": "{"
                }
              ]
            }
          }
        ]
      },
      {
        "candidates": [
          {
            "content": {
              "role": "model",
              "parts": [
                {
                  "text": "\n  \"headline\": \""
                }
              ]
            },
            "safetyRatings": [
              {
                "category": "HARM_CATEGORY_HATE_SPEECH",
                "probability": "NEGLIGIBLE",
                "probabilityScore": 0.15546274,
                "severity": "HARM_SEVERITY_NEGLIGIBLE",
                "severityScore": 0.13106197
              },
              {
                "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
                "probability": "NEGLIGIBLE",
                "probabilityScore": 0.15687835,
                "severity": "HARM_SEVERITY_NEGLIGIBLE",
                "severityScore": 0.049958523
              },
              {
                "category": "HARM_CATEGORY_HARASSMENT",
                "probability": "NEGLIGIBLE",
                "probabilityScore": 0.17384852,
                "severity": "HARM_SEVERITY_NEGLIGIBLE",
                "severityScore": 0.1046602
              },
              {
                "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
                "probability": "NEGLIGIBLE",
                "probabilityScore": 0.28866935,
                "severity": "HARM_SEVERITY_NEGLIGIBLE",
                "severityScore": 0.12940271
              }
            ]
          }
        ]
      },
      {
        "candidates": [
          {
            "content": {
              "role": "model",
              "parts": [
                {
                  "text": ""
                }
              ]
            },
            "finishReason": "STOP"
          }
        ],
        "usageMetadata": {
          "promptTokenCount": 1311,
          "candidatesTokenCount": 299,
          "totalTokenCount": 1610
        }
      }
    ]
  }
]
I have truncated the output bundle; there is a different number of candidates under predictions every time.
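Since the candidate count varies per run, one option is to flatten all predictions and concatenate the text parts in order to rebuild the full streamed response. Here is a minimal TypeScript sketch, assuming the output bundle above has been parsed into an array; joinStreamedText and the interface names are my own illustrations, not part of any API:

// Types for the slice of the response we care about (field names are
// taken from the bundle above; everything else is an assumption).
interface Part { text?: string }
interface Candidate { content?: { role?: string; parts?: Part[] } }
interface Prediction { candidates?: Candidate[] }
interface OutputItem { textResponse?: string; predictions?: Prediction[] }

// Walk every prediction chunk, every candidate, and every part, then
// join the text fragments in order to rebuild the full response text.
function joinStreamedText(bundle: OutputItem[]): string {
  return bundle
    .flatMap((item) => item.predictions ?? [])
    .flatMap((prediction) => prediction.candidates ?? [])
    .flatMap((candidate) => candidate.content?.parts ?? [])
    .map((part) => part.text ?? "")
    .join("");
}

// Usage (assuming the parsed bundle is in a variable named outputBundle):
// const fullText = joinStreamedText(outputBundle);
// console.log(fullText); // e.g. '{\n  "headline": "...'

Because each chunk's parts arrive in order, a plain in-order join is enough; no per-candidate bookkeeping is needed even when the number of candidates changes between runs.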