Optional customEvaluators
Custom evaluators to apply to a dataset run. Each evaluator is provided with a run trace containing the model outputs, as well as an "example" object representing a record in the dataset.
Deprecated: use evaluators instead. This feature will be removed in a future release.
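For reference, a custom evaluator is a plain function that receives the run and the dataset example; the same function shape can be passed through evaluators. A minimal sketch (the Run and Example type imports and the result shape are assumptions based on the langsmith package; verify against your installed version):

import type { Run, Example } from "langsmith/schemas";

// Sketch of a custom evaluator (assumed shape). The run carries the model
// outputs; the example is the dataset record being evaluated.
const exactMatch = async (run: Run, example?: Example) => {
  const prediction = run.outputs?.output;
  const reference = example?.outputs?.output;
  return {
    key: "exact_match", // feedback key recorded for the run
    score: prediction === reference ? 1 : 0,
  };
};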
Optional evalLlm
The language model specification for evaluators that require one.
Optional evaluators: (T | EvalConfig | U)[]
LangChain evaluators to apply to a dataset run. These can be specified by name or configured with an EvalConfig object, as in the sketch below.
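A sketch mixing the two styles (the evaluator names and EvalConfig fields shown are illustrative; check the EvaluatorType and EvalConfig definitions in your installed version):

const evalConfig = {
  evaluators: [
    // By name: load a built-in evaluator with its default settings.
    "criteria",
    // By EvalConfig object: pick the evaluator type and configure it.
    { evaluatorType: "labeled_criteria", criteria: "correctness" },
  ],
};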
Optional formatEvaluatorInputs
Converts the evaluation data into a format the evaluator can consume, most commonly a string. The parameters are the raw input from the run, the raw output, the raw reference output, and the raw run itself.
// Chain input: { input: "some string" }
// Chain output: { output: "some output" }
// Reference example output format: { output: "some reference output" }
const formatEvaluatorInputs = ({
  rawInput,
  rawPrediction,
  rawReferenceOutput,
}) => {
  return {
    input: rawInput.input,
    prediction: rawPrediction.output,
    reference: rawReferenceOutput.output,
  };
};
Returns: the prepared data.
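Putting it together, a sketch of passing the formatter alongside evaluators when running over a dataset (runOnDataset and its option names are assumptions based on the langchain/smith module, and "target" and "my-dataset" are hypothetical; verify against your installed version):

import { runOnDataset } from "langchain/smith";

// "target" stands in for the chain or runnable under test, and
// "my-dataset" for an existing LangSmith dataset; both are hypothetical.
declare const target: (input: { input: string }) => Promise<{ output: string }>;

const evaluationConfig = {
  evaluators: ["criteria"],
  // Map the raw run data onto the input/prediction/reference fields
  // that string evaluators expect.
  formatEvaluatorInputs: ({ rawInput, rawPrediction, rawReferenceOutput }) => ({
    input: rawInput.input,
    prediction: rawPrediction.output,
    reference: rawReferenceOutput.output,
  }),
};

await runOnDataset(target, "my-dataset", { evaluationConfig });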
RunEvalConfig is a configuration class for running evaluations on datasets in LangSmith. It defines the parameters and evaluators applied during a dataset evaluation, including built-in evaluators, custom evaluators, and the keys used for inputs, predictions, and references.
Type parameters
T - The type of evaluators.
U - The type of custom evaluators.