
evaluation_run_prompt_results

Gets or lists an evaluation_run_prompt_results resource. Only read (SELECT) access is available for this resource.

Overview

| Name | evaluation_run_prompt_results |
|------|-------------------------------|
| Type | Resource |
| Id | digitalocean.genai.evaluation_run_prompt_results |

Fields

The following fields are returned by SELECT queries:

A successful response.

| Name | Datatype | Description |
|------|----------|-------------|
| prompt_id | integer (int64) | Prompt ID |
| ground_truth | string | The ground truth for the prompt. (example: example string) |
| input | string | (example: example string) |
| input_tokens | string (uint64) | The number of input tokens used in the prompt. (example: 12345) |
| output | string | (example: example string) |
| output_tokens | string (uint64) | The number of output tokens used in the prompt. (example: 12345) |
| prompt_chunks | array | The list of prompt chunks. |
| prompt_level_metric_results | array | The metric results for the prompt. |

Methods

The following methods are available for this resource:

| Name | Accessible by | Required Params | Optional Params | Description |
|------|---------------|-----------------|-----------------|-------------|
| genai_get_evaluation_run_prompt_results | select | evaluation_run_uuid, prompt_id | | To retrieve results of an evaluation run, send a GET request to /v2/gen-ai/evaluation_runs/{evaluation_run_uuid}/results/{prompt_id}. |
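
Under the hood, the select method above maps to a single GET request against the DigitalOcean API. A minimal Python sketch of that call, assuming a `DIGITALOCEAN_TOKEN` environment variable holds an API token (the base URL and endpoint path are taken from the description above; the helper names are illustrative, not part of any SDK):

```python
import os
import urllib.request

API_BASE = "https://api.digitalocean.com"  # DigitalOcean public API base URL


def prompt_results_url(evaluation_run_uuid: str, prompt_id: int) -> str:
    """Build the endpoint URL documented for this resource."""
    return (
        f"{API_BASE}/v2/gen-ai/evaluation_runs/"
        f"{evaluation_run_uuid}/results/{prompt_id}"
    )


def get_prompt_results(evaluation_run_uuid: str, prompt_id: int) -> bytes:
    """Send the GET request with a bearer token from the environment."""
    req = urllib.request.Request(
        prompt_results_url(evaluation_run_uuid, prompt_id),
        headers={"Authorization": f"Bearer {os.environ['DIGITALOCEAN_TOKEN']}"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

The response body is the JSON object whose fields are listed in the Fields section above.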

Parameters

Parameters can be passed in the WHERE clause of a query. Check the Methods section to see which parameters are required or optional for each operation.

| Name | Datatype | Description |
|------|----------|-------------|
| evaluation_run_uuid | string | Evaluation run UUID. (example: "123e4567-e89b-12d3-a456-426614174000") |
| prompt_id | integer | Prompt ID to get results for. (example: 1) |

SELECT examples

To retrieve results of an evaluation run, send a GET request to /v2/gen-ai/evaluation_runs/{evaluation_run_uuid}/results/{prompt_id}.

```sql
SELECT
  prompt_id,
  ground_truth,
  input,
  input_tokens,
  output,
  output_tokens,
  prompt_chunks,
  prompt_level_metric_results
FROM digitalocean.genai.evaluation_run_prompt_results
WHERE evaluation_run_uuid = '{{ evaluation_run_uuid }}' -- required
AND prompt_id = '{{ prompt_id }}' -- required
;
```