# evaluation_run_prompt_results

Creates, updates, deletes, gets or lists an `evaluation_run_prompt_results` resource.
## Overview

| | |
|---|---|
| Name | `evaluation_run_prompt_results` |
| Type | Resource |
| Id | `digitalocean.genai.evaluation_run_prompt_results` |
## Fields

The following fields are returned by `SELECT` queries:

### genai_get_evaluation_run_prompt_results

A successful response.

| Name | Datatype | Description |
|---|---|---|
| prompt_id | integer (int64) | Prompt ID |
| ground_truth | string | The ground truth for the prompt. (example: example string) |
| input | string | (example: example string) |
| input_tokens | string (uint64) | The number of input tokens used in the prompt. (example: 12345) |
| output | string | (example: example string) |
| output_tokens | string (uint64) | The number of output tokens used in the prompt. (example: 12345) |
| prompt_chunks | array | The list of prompt chunks. |
| prompt_level_metric_results | array | The metric results for the prompt. |
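Note that `input_tokens` and `output_tokens` are returned as strings (uint64), so they must be parsed before any arithmetic. A minimal sketch, assuming a hypothetical result row shaped like the fields above:

```python
# Hypothetical result row mirroring the field table; values are the
# documented examples, not real API output.
row = {
    "prompt_id": 1,
    "ground_truth": "example string",
    "input": "example string",
    "input_tokens": "12345",
    "output": "example string",
    "output_tokens": "12345",
    "prompt_chunks": [],
    "prompt_level_metric_results": [],
}

def total_tokens(result: dict) -> int:
    """Sum input and output token counts, parsing the string-typed fields."""
    return int(result["input_tokens"]) + int(result["output_tokens"])

print(total_tokens(row))  # 24690
```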
## Methods

The following methods are available for this resource:

| Name | Accessible by | Required Params | Optional Params | Description |
|---|---|---|---|---|
| genai_get_evaluation_run_prompt_results | select | evaluation_run_uuid, prompt_id | | To retrieve results of an evaluation run, send a GET request to `/v2/gen-ai/evaluation_runs/{evaluation_run_uuid}/results/{prompt_id}`. |
## Parameters

Parameters can be passed in the `WHERE` clause of a query. Check the Methods section to see which parameters are required or optional for each operation.

| Name | Datatype | Description |
|---|---|---|
| evaluation_run_uuid | string | Evaluation run UUID. (example: "123e4567-e89b-12d3-a456-426614174000") |
| prompt_id | integer | Prompt ID to get results for. (example: 1) |
## `SELECT` examples

### genai_get_evaluation_run_prompt_results

To retrieve results of an evaluation run, send a GET request to `/v2/gen-ai/evaluation_runs/{evaluation_run_uuid}/results/{prompt_id}`.

```sql
SELECT
prompt_id,
ground_truth,
input,
input_tokens,
output,
output_tokens,
prompt_chunks,
prompt_level_metric_results
FROM digitalocean.genai.evaluation_run_prompt_results
WHERE evaluation_run_uuid = '{{ evaluation_run_uuid }}' -- required
AND prompt_id = '{{ prompt_id }}'; -- required
```
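The query above maps onto the REST endpoint named in the method description. A minimal stdlib sketch of building that request; the API host matches DigitalOcean's public API, but the token environment variable name is an assumption:

```python
import os
import urllib.request

API_BASE = "https://api.digitalocean.com"  # DigitalOcean public API host

def build_results_url(evaluation_run_uuid: str, prompt_id: int) -> str:
    """Fill the documented path template for a prompt's evaluation results."""
    return (f"{API_BASE}/v2/gen-ai/evaluation_runs/"
            f"{evaluation_run_uuid}/results/{prompt_id}")

url = build_results_url("123e4567-e89b-12d3-a456-426614174000", 1)
print(url)

# To actually fetch, attach a bearer token (env var name is an assumption):
# req = urllib.request.Request(
#     url,
#     headers={"Authorization": f"Bearer {os.environ['DIGITALOCEAN_TOKEN']}"},
# )
# with urllib.request.urlopen(req) as resp:
#     body = resp.read()
```

The fetch itself is left commented out because it needs a valid API token and network access.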