data: {
"id": "cmpl-26a1e10db8544bc3adb488d2d205288b",
"model": "meta-llama-3.1-8b-instruct",
"object": "text_completion",
"choices": [
{
"index": 0,
"text": " such",
"token": 1778,
"finish_reason": null,
"logprobs": null
}
],
"created": 1733382157
}
data: {
"id": "cmpl-26a1e10db8544bc3adb488d2d205288b",
"model": "meta-llama-3.1-8b-instruct",
"object": "text_completion",
"choices": [
{
"index": 0,
"text": " as",
"token": 439,
"finish_reason": null,
"logprobs": null
}
],
"created": 1733382157
}
...
data: {
"id": "cmpl-26a1e10db8544bc3adb488d2d205288b",
"model": "meta-llama-3.1-8b-instruct",
"object": "text_completion",
"choices": [
{
"index": 0,
"text": "",
"finish_reason": "length",
"logprobs": null
}
],
"created": 1733382157
}
data: {
"id": "cmpl-26a1e10db8544bc3adb488d2d205288b",
"model": "meta-llama-3.1-8b-instruct",
"object": "text_completion",
"choices": [],
"usage": {
"prompt_tokens": 5,
"completion_tokens": 10,
"total_tokens": 15
},
"created": 1733382157
}
data: [DONE]
Represents a streamed chunk of a completions response returned by the model, based on the provided input.
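A stream like the one above can be consumed by parsing each `data:` line as JSON and concatenating the `text` of each choice until the `[DONE]` sentinel. The sketch below is a minimal, self-contained parser; it operates on in-memory lines rather than a live HTTP connection, and the abbreviated `id` values are placeholders:

```python
import json

def parse_sse_stream(lines):
    """Yield the text of each streamed completion chunk.

    Skips non-data lines, stops at the [DONE] sentinel, and
    tolerates the final usage-only chunk (empty choices list).
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # blank keep-alives or comment lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        for choice in chunk.get("choices", []):
            yield choice["text"]

# Reassemble the completion from two chunks like those shown above:
stream = [
    'data: {"id": "cmpl-1", "object": "text_completion", "choices":'
    ' [{"index": 0, "text": " such", "token": 1778,'
    ' "finish_reason": null, "logprobs": null}], "created": 1733382157}',
    'data: {"id": "cmpl-1", "object": "text_completion", "choices":'
    ' [{"index": 0, "text": " as", "token": 439,'
    ' "finish_reason": null, "logprobs": null}], "created": 1733382157}',
    'data: [DONE]',
]
completion = "".join(parse_sse_stream(stream))
print(completion)  # → " such as"
```

In a real client the same function can be fed the response body line by line as it arrives, so partial output is available before generation finishes.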
object
The object type, which is always text_completion.

finish_reason
stop means the API returned the full completion generated by the model without running into any limits. length means the generation exceeded max_tokens or the conversation exceeded the max context length.
Available options: stop, length

logprobs
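A client typically inspects finish_reason to decide whether the output is complete or was cut off. The helper below is a hypothetical sketch (the function name and return labels are not part of the API); it applies the semantics described above to a single chunk:

```python
import json

def check_finish(chunk_json: str) -> str:
    """Classify a completion chunk by its finish_reason.

    'stop'   -> the model finished on its own ("complete")
    'length' -> truncated by max_tokens or the context limit ("truncated")
    null     -> generation is still in progress ("in_progress")
    """
    chunk = json.loads(chunk_json)
    for choice in chunk.get("choices", []):
        reason = choice.get("finish_reason")
        if reason == "length":
            return "truncated"
        if reason == "stop":
            return "complete"
    # No choices (usage-only chunk) or finish_reason still null.
    return "in_progress"

# The final content chunk in the example above ends with "length":
final_chunk = ('{"choices": [{"index": 0, "text": "",'
               ' "finish_reason": "length", "logprobs": null}]}')
print(check_finish(final_chunk))  # → "truncated"
```

A "truncated" result usually means the request should be retried with a larger max_tokens, or the remaining text requested in a follow-up call.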