trustyai:
  devFlags:
    manifests:
      - contextDir: config
        sourcePath: ''
        uri: https://github.com/trustyai-explainability/trustyai-service-operator/tarball/main
  managementState: Managed
Evaluating AI systems
Overview of evaluating AI systems
Evaluate your AI systems to generate an analysis of your model’s ability by using the following TrustyAI tools:
-
LM-Eval: You can use TrustyAI to monitor your LLM against a range of different evaluation tasks and to ensure the accuracy and quality of its output. Features such as summarization, language toxicity, and question-answering accuracy are assessed to inform and improve your model parameters.
-
RAGAS: Use Retrieval-Augmented Generation Assessment (RAGAS) with TrustyAI to measure and improve the quality of your RAG systems in Open Data Hub. RAGAS provides objective metrics that assess retrieval quality, answer relevance, and factual consistency.
-
Llama Stack: Use Llama Stack components and providers with TrustyAI to evaluate and work with LLMs.
Evaluating large language models
A large language model (LLM) is a type of artificial intelligence (AI) program that is designed for natural language processing tasks, such as recognizing and generating text.
As a data scientist, you might want to monitor your large language models against a range of metrics to ensure the accuracy and quality of their output. Features such as summarization, language toxicity, and question-answering accuracy can be assessed to inform and improve your model parameters.
Open Data Hub now offers Language Model Evaluation as a Service (LM-Eval-aaS), in a feature called LM-Eval. LM-Eval provides a unified framework to test generative language models on a vast range of different evaluation tasks.
The following sections show you how to create an LMEvalJob custom resource (CR) which allows you to activate an evaluation job and generate an analysis of your model’s ability.
Setting up LM-Eval
LM-Eval is a service designed for evaluating large language models that has been integrated into the TrustyAI Operator.
The service is built on top of two open-source projects:
-
LM Evaluation Harness, developed by EleutherAI, which provides a comprehensive framework for evaluating language models
-
Unitxt, a tool that enhances the evaluation process with additional functionalities
The following information explains how to create an LMEvalJob custom resource (CR) to initiate an evaluation job and get the results.
Note
LM-Eval is only available in the latest community builds. To use LM-Eval on Open Data Hub, ensure that you use ODH 2.20 or a later version and add the devFlags manifest, such as the trustyai snippet shown at the start of this document, to your DataScienceCluster resource.
Configurable global settings for LM-Eval services are stored in the TrustyAI operator global ConfigMap, named trustyai-service-operator-config, which is located in the same namespace as the operator.
You can configure the following properties for LM-Eval:
| Property | Default | Description |
|---|---|---|
|  |  | Detect whether GPUs are available and assign a value for the device setting accordingly. |
|  |  | The image for the LM-Eval job. The image contains the Python packages for LM Evaluation Harness and Unitxt. |
|  |  | The image for the LM-Eval driver. |
|  |  | The image-pulling policy when running the evaluation job. |
|  | 8 | The default batch size when invoking the model inference API. The default batch size is only available for local models. |
|  | 24 | The maximum batch size that users can specify in an evaluation job. |
|  | 10s | The interval at which the job pod is checked for an evaluation job. |
After updating the settings in the ConfigMap, restart the operator to apply the new values.
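For example, a cluster administrator could adjust one of these settings and restart the operator from the CLI. This is only a sketch: the namespace (opendatahub), the key name (lmes-max-batch-size), and the operator deployment name used below are assumptions, so check your cluster for the actual values before running it.

# Inspect the current global LM-Eval settings (namespace is an assumption).
oc get configmap trustyai-service-operator-config -n opendatahub -o yaml

# Patch a single setting; the key name shown here is illustrative only.
oc patch configmap trustyai-service-operator-config -n opendatahub \
  --type merge -p '{"data":{"lmes-max-batch-size":"32"}}'

# Restart the TrustyAI operator so it picks up the new values
# (the deployment name may differ in your installation).
oc rollout restart deployment/trustyai-service-operator-controller-manager -n opendatahub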
Enabling external resource access for LMEval jobs
LMEval jobs do not allow internet access or remote code execution by default. An LMEvalJob might require access to external resources, such as task datasets and model tokenizers, which are usually hosted on Hugging Face. If you trust the source and have reviewed the content of these artifacts, you can configure an LMEvalJob to download them automatically.
Follow the steps below to enable online access and remote code execution for LMEval jobs. You can update these settings by using either the CLI or the web console. Enable one or both settings according to your needs.
Enabling online access and remote code execution for LMEval Jobs using the CLI
You can enable online access using the CLI for LMEval jobs by setting the allowOnline specification to true in the LMEvalJob custom resource (CR). You can also enable remote code execution by setting the allowCodeExecution specification to true. Both modes can be used at the same time.
Important
Enabling online access or code execution involves a security risk. Only use these configurations if you trust the sources.
-
You have cluster administrator privileges for your OpenShift Container Platform cluster.
-
Get the current DataScienceCluster resource, which is located in the redhat-ods-operator namespace:

$ oc get datasciencecluster -n redhat-ods-operator

Example output:

NAME          AGE
default-dsc   10d
-
Enable online access and code execution for the cluster in the DataScienceCluster resource with the permitOnline and permitCodeExecution specifications. For example, create a file named allow-online-code-exec-dsc.yaml with the following contents:

Example allow-online-code-exec-dsc.yaml resource enabling online access and remote code execution

apiVersion: datasciencecluster.opendatahub.io/v2
kind: DataScienceCluster
metadata:
  name: default-dsc
spec:
  # ...
  components:
    trustyai:
      managementState: Managed
      eval:
        lmeval:
          permitOnline: allow
          permitCodeExecution: allow
  # ...

The permitCodeExecution and permitOnline settings are disabled by default with a value of deny. You must explicitly enable these settings in the DataScienceCluster resource for the LMEvalJob instance to have internet access or permission to run any externally downloaded code.
-
Apply the updated DataScienceCluster:

$ oc apply -f allow-online-code-exec-dsc.yaml -n redhat-ods-operator
-
Optional: Run the following command to check that the DataScienceCluster is in a healthy state:

$ oc get datasciencecluster default-dsc

Example output:

NAME          READY   REASON
default-dsc   True
-
For new LMEval jobs, define the job in a YAML file as shown in the following example. This configuration requests both internet access, with allowOnline: true, and permission for remote code execution, with allowCodeExecution: true:

Example lmevaljob-with-online-code-exec.yaml

apiVersion: trustyai.opendatahub.io/v1alpha1
kind: LMEvalJob
metadata:
  name: lmevaljob-with-online-code-exec
  namespace: <your_namespace>
spec:
  # ...
  allowOnline: true
  allowCodeExecution: true
  # ...

The allowOnline and allowCodeExecution settings are disabled by default with a value of false in the LMEvalJob CR.
-
Deploy the LMEvalJob:

$ oc apply -f lmevaljob-with-online-code-exec.yaml -n <your_namespace>
Important
If you upgrade to version 2.25, some TrustyAI …
-
Run the following command to verify that the DataScienceCluster has the updated fields:

$ oc get datasciencecluster default-dsc -n redhat-ods-operator -o "jsonpath={.spec.components.trustyai.eval.lmeval}"
-
Run the following command to verify that the trustyai-dsc-config ConfigMap has the same flag values set in the DataScienceCluster:

$ oc get configmaps trustyai-dsc-config -n redhat-ods-applications -o "jsonpath={.data}"

Example output:

{"eval.lmeval.permitCodeExecution":"true","eval.lmeval.permitOnline":"true"}
Updating LMEval job configuration using the web console
Follow these steps to enable online access (allowOnline) and remote code execution (allowCodeExecution) modes through the Open Data Hub web console for LMEval jobs.
Important
Enabling online access or code execution involves a security risk. Only use these configurations if you trust the sources.
-
You have cluster administrator privileges for your Open Data Hub cluster.
-
In the OpenShift Container Platform console, click Operators → Installed Operators.
-
Search for the Open Data Hub Operator, and then click the Operator name to open the Operator details page.
-
Click the Data Science Cluster tab.
-
Click the default instance name (for example, default-dsc) to open the instance details page.
-
Click the YAML tab to show the instance specifications.
-
In the spec:components:trustyai:eval:lmeval section, set the permitCodeExecution and permitOnline fields to a value of allow:

spec:
  components:
    trustyai:
      managementState: Managed
      eval:
        lmeval:
          permitOnline: allow
          permitCodeExecution: allow
-
Click Save.
-
From the Project drop-down list, select the project that contains the LMEval job you are working with.
-
From the Resources drop-down list, select the LMEvalJob instance that you are working with.
-
Click Actions → Edit YAML
-
Ensure that allowOnline and allowCodeExecution are set to true to enable online access and code execution for this job when writing your LMEvalJob custom resource:

apiVersion: trustyai.opendatahub.io/v1alpha1
kind: LMEvalJob
metadata:
  name: example-lmeval
spec:
  allowOnline: true
  allowCodeExecution: true
-
Click Save.
| Field | Default | Description |
|---|---|---|
| allowOnline | false | Enables this job to access the internet (for example, to download datasets or tokenizers). |
| allowCodeExecution | false | Allows this job to run code included with downloaded resources. |
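As a quick check, you can read these fields back from an existing job with a jsonpath query; the job name and namespace below are placeholders.

oc get lmevaljob <job_name> -n <your_namespace> \
  -o jsonpath='allowOnline={.spec.allowOnline}{"\n"}allowCodeExecution={.spec.allowCodeExecution}{"\n"}'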
LM-Eval evaluation job
The LM-Eval service defines a new Custom Resource Definition (CRD) called LMEvalJob. An LMEvalJob object represents an evaluation job. LMEvalJob objects are monitored by the TrustyAI Kubernetes operator.
To run an evaluation job, create an LMEvalJob object with the following information: model, model arguments, task, and secret.
Note
For a list of TrustyAI-supported tasks, see LMEval task support.
After the LMEvalJob is created, the LM-Eval service runs the evaluation job. The status and results of the LMEvalJob object update when the information is available.
Note
Other TrustyAI features (such as bias and drift metrics) cannot be used with non-tabular models (including LLMs). Deploying the …
The sample LMEvalJob object contains the following features:
-
The google/flan-t5-base model from Hugging Face.
-
The dataset from the wnli card, a subset of the GLUE (General Language Understanding Evaluation) benchmark evaluation framework from Hugging Face. For more information about the wnli Unitxt card, see the Unitxt website.
-
The following default parameters for the multi_class.relation Unitxt task: f1_micro, f1_macro, and accuracy. This template can be found on the Unitxt website: click Catalog, then click Tasks and select Classification from the menu.
The following is an example of an LMEvalJob object:
apiVersion: trustyai.opendatahub.io/v1alpha1
kind: LMEvalJob
metadata:
name: evaljob-sample
spec:
model: hf
modelArgs:
- name: pretrained
value: google/flan-t5-base
taskList:
taskRecipes:
- card:
name: "cards.wnli"
template: "templates.classification.multi_class.relation.default"
logSamples: true
After you apply the sample LMEvalJob, check its state by using the following command:
oc get lmevaljob evaljob-sample
Output similar to the following appears:
NAME: evaljob-sample
STATE: Running
Evaluation results are available when the state of the object changes to Complete. Both the model and dataset in this example are small. The evaluation job should finish within 10 minutes on a CPU-only node.
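Rather than re-running oc get manually, you can watch the job until it finishes. The -w flag is standard oc behavior; the oc wait variant assumes the state is reported in the .status.state field, so verify that field name on your cluster before relying on it.

# Stream state changes until you interrupt with Ctrl+C.
oc get lmevaljob evaljob-sample -w

# Alternatively, block until the job reports Complete (field name is an assumption).
oc wait lmevaljob/evaljob-sample --for=jsonpath='{.status.state}'=Complete --timeout=30m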
Use the following command to get the results:
oc get lmevaljobs.trustyai.opendatahub.io evaljob-sample \
-o template --template={{.status.results}} | jq '.results'
The command returns results similar to the following example:
{
"tr_0": {
"alias": "tr_0",
"f1_micro,none": 0.5633802816901409,
"f1_micro_stderr,none": "N/A",
"accuracy,none": 0.5633802816901409,
"accuracy_stderr,none": "N/A",
"f1_macro,none": 0.36036036036036034,
"f1_macro_stderr,none": "N/A"
}
}
-
The f1_micro, f1_macro, and accuracy scores are 0.56, 0.36, and 0.56.
-
The full results are stored in the .status.results field of the LMEvalJob object as a JSON document.
-
The command above only retrieves the results field of the JSON document.
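If you also want to keep the per-prompt samples enabled by logSamples, you can save the full JSON document to a file instead of filtering it with jq, for example:

oc get lmevaljobs.trustyai.opendatahub.io evaljob-sample \
  -o template --template={{.status.results}} > evaljob-sample-results.json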
LM-Eval evaluation job properties
The LMEvalJob object contains the following features:
-
The google/flan-t5-base model.
-
The dataset from the wnli card, from the GLUE (General Language Understanding Evaluation) benchmark evaluation framework.
-
The multi_class.relation Unitxt task default parameters.
The following table lists each property in the LMEvalJob and its usage:
| Parameter | Description |
|---|---|
| model | Specifies which model type or provider is evaluated. This field maps directly to the model argument of the LM Evaluation Harness. |
| modelArgs | A list of paired name and value arguments for the model type. Arguments vary by model provider. You can find further details in the models section of the LM Evaluation Harness library on GitHub. |
| taskList.taskNames | Specifies a list of tasks supported by the LM Evaluation Harness. |
| taskList.taskRecipes | Specifies the task using the Unitxt recipe format. |
|  | Sets the number of few-shot examples to place in context. If you are using a task from Unitxt, do not use this field. |
|  | Sets a limit to run the tasks instead of running the entire dataset. Accepts either an integer or a float between 0.0 and 1.0. |
|  | Maps to the corresponding generation argument of the LM Evaluation Harness. |
| logSamples | If this flag is passed, the model outputs and the text fed into the model are saved at the per-prompt level. |
| batchSize | Specifies the batch size for the evaluation in integer format. The default and maximum batch sizes are controlled by the operator global settings. |
| pod | Specifies extra information for the evaluation job pod, such as container environment variables. |
| outputs | Defines a custom output location to store the evaluation results. Only Persistent Volume Claims (PVC) are supported. |
| outputs.pvcManaged | Creates an operator-managed PVC to store the job results. The PVC is named <job-name>-pvc. |
| outputs.pvcName | Binds an existing PVC to a job by specifying its name. The PVC must be created separately and must already exist when creating the job. |
| allowOnline | If this parameter is set to true, the evaluation job can access the internet, for example to download datasets or tokenizers. |
| allowCodeExecution | If this parameter is set to true, the evaluation job can run code included with downloaded resources. |
| offline | Mounts a PVC as the local storage for models and datasets. |
|  | (Optional) Sets the system instruction for all prompts passed to the evaluated model. |
|  | Applies the specified chat template to prompts. |
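The following sketch combines several of the fields above in one job. Treat it as illustrative rather than a schema reference: the exact spelling and placement of fields such as limit and systemInstruction are not shown in this document, so confirm them against the LMEvalJob CRD on your cluster before using them.

cat <<EOF | oc apply -n <your_namespace> -f -
apiVersion: trustyai.opendatahub.io/v1alpha1
kind: LMEvalJob
metadata:
  name: evaljob-params-sketch
spec:
  model: hf
  modelArgs:
    - name: pretrained
      value: google/flan-t5-base
  taskList:
    taskNames:
      - arc_easy
  logSamples: true
  batchSize: 4                        # integer batch size
  limit: "0.1"                        # evaluate a fraction of the dataset (field name and type assumed)
  systemInstruction: "You are a careful assistant."   # optional system prompt (field name assumed)
  outputs:
    pvcName: "my-pvc"                 # store results on an existing PVC
EOF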
Properties for setting up custom Unitxt cards, templates, or system prompts
You can choose to set up custom Unitxt cards, templates, or system prompts. Use the parameters set out in the Custom Unitxt parameters table in addition to the preceding table parameters to set customized Unitxt items:
| Parameter | Description |
|---|---|
| custom | Defines one or more custom resources that are referenced in a task recipe. Custom cards, templates, and system prompts are supported. |
Performing model evaluations in the dashboard
LM-Eval is a Language Model Evaluation as a Service (LM-Eval-aaS) feature integrated into the TrustyAI Operator. It offers a unified framework for testing generative language models across a wide variety of evaluation tasks.
You can use LM-Eval through the Open Data Hub dashboard or the OpenShift CLI (oc).
These instructions are for using the dashboard.
-
You have logged in to Open Data Hub with administrator privileges.
-
You have enabled the TrustyAI component, as described in Enabling the TrustyAI component.
-
You have created a project in Open Data Hub.
-
You have deployed an LLM model in your project.
Note
By default, the Develop & train → Evaluations page is hidden from the dashboard navigation menu. To show the Develop & train → Evaluations page in the dashboard, go to the …
-
In the dashboard, click Develop & train → Evaluations. The Evaluations page opens. It contains:
-
A Start evaluation run button. If you have not run any previous evaluations, only this button is displayed.
-
A list of evaluations you have previously run, if any exist.
-
A Project dropdown option you can click to show the evaluations relating to one project instead of all projects.
-
A filter to sort your evaluations by model or evaluation name.
The following table outlines the elements and functions of the evaluations list:
| Property | Function |
|---|---|
| Evaluation | The name of the evaluation. |
| Model | The model that was used in the evaluation. |
| Evaluated | The date and time when the evaluation was created. |
| Status | The status of your evaluation: running, completed, or failed. |
| More options icon | Click this icon to access the options to delete the evaluation or download the evaluation log in JSON format. |
-
From the Project dropdown menu, select the namespace of the project where you want to evaluate the model.
-
Click the Start evaluation run button. The Model evaluation form is displayed.
-
Fill in the details of the form. The model argument summary is displayed after you complete the form details:
-
Model name: Select a model from all the deployed LLMs in your project.
-
Evaluation name: Give your evaluation a unique name.
-
Tasks: Choose one or more evaluation tasks against which to measure your LLM. The 100 most common evaluation tasks are supported.
-
Model type: Choose the type of model based on the type of prompt-formatting you use:
-
Local-completion: You assemble the entire prompt chain yourself. Use this when you want to evaluate models that take a plain text prompt and return a continuation.
-
Local-chat-completion: The framework injects roles or templates automatically. Use this for models that simulate a conversation by taking a list of chat messages with roles such as user and assistant and replying appropriately.
-
-
Security settings:
-
Available online: Choose enable to allow your model to access the internet to download datasets.
-
Trust remote code: Choose enable to allow your model to trust code from outside of the project namespace.
Note: The Security settings section is grayed out if the security option in global settings is set to active.
-
-
-
Observe that a model argument summary is displayed as soon as you fill in the form details.
-
Complete the tokenizer settings:
-
Tokenized requests: If set to true, the evaluation requests are broken down into tokens. If set to false, the evaluation dataset remains as raw text.
-
Tokenizer: Type the model’s tokenizer URL that is required for the evaluations.
-
-
Click Evaluate. The screen returns to the model evaluation page of your project and your job is displayed in the evaluations list.
Note
-
It can take time for your evaluation to complete, depending on factors including hardware support, model size, and the type of evaluation task(s). The status column reports the current status of the evaluation: completed, running, or failed.
-
If your evaluation fails, the evaluation pod logs in your cluster provide more information.
-
LM-Eval scenarios
The following procedures outline example scenarios that can be useful for an LM-Eval setup.
Accessing Hugging Face models with an environment variable token
If the LMEvalJob needs to access a model on Hugging Face with an access token, you can set up HF_TOKEN as one of the environment variables for the lm-eval container.
-
You have logged in to Open Data Hub.
-
Your cluster administrator has installed Open Data Hub and enabled the TrustyAI service for the project where the models are deployed.
-
To start an evaluation job for a Hugging Face model, apply the following YAML file to your project through the CLI:

apiVersion: trustyai.opendatahub.io/v1alpha1
kind: LMEvalJob
metadata:
  name: evaljob-sample
spec:
  model: hf
  modelArgs:
    - name: pretrained
      value: huggingfacespace/model
  taskList:
    taskNames:
      - unfair_tos
  logSamples: true
  pod:
    container:
      env:
        - name: HF_TOKEN
          value: "My HuggingFace token"

For example:

$ oc apply -f <yaml_file> -n <project_name>
-
(Optional) You can also create a secret to store the token, then refer to the key from the secretKeyRef object by using the following reference syntax:

env:
  - name: HF_TOKEN
    valueFrom:
      secretKeyRef:
        name: my-secret
        key: hf-token
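A secret like the one referenced above can be created directly from the CLI; the secret and key names here simply match the example:

oc create secret generic my-secret \
  --from-literal=hf-token=<your_hugging_face_token> \
  -n <project_name>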
Using a custom Unitxt card
You can run evaluations using custom Unitxt cards. To do this, include the custom Unitxt card in JSON format within the LMEvalJob YAML.
-
You have logged in to Open Data Hub.
-
Your cluster administrator has installed Open Data Hub and enabled the TrustyAI service for the project where the models are deployed.
-
Pass a custom Unitxt Card in JSON format:
apiVersion: trustyai.opendatahub.io/v1alpha1
kind: LMEvalJob
metadata:
  name: evaljob-sample
spec:
  model: hf
  modelArgs:
    - name: pretrained
      value: google/flan-t5-base
  taskList:
    taskRecipes:
      - template: "templates.classification.multi_class.relation.default"
        card:
          custom: |
            {
              "__type__": "task_card",
              "loader": { "__type__": "load_hf", "path": "glue", "name": "wnli" },
              "preprocess_steps": [
                { "__type__": "split_random_mix", "mix": { "train": "train[95%]", "validation": "train[5%]", "test": "validation" } },
                { "__type__": "rename", "field": "sentence1", "to_field": "text_a" },
                { "__type__": "rename", "field": "sentence2", "to_field": "text_b" },
                { "__type__": "map_instance_values", "mappers": { "label": { "0": "entailment", "1": "not entailment" } } },
                { "__type__": "set", "fields": { "classes": ["entailment", "not entailment"] } },
                { "__type__": "set", "fields": { "type_of_relation": "entailment" } },
                { "__type__": "set", "fields": { "text_a_type": "premise" } },
                { "__type__": "set", "fields": { "text_b_type": "hypothesis" } }
              ],
              "task": "tasks.classification.multi_class.relation",
              "templates": "templates.classification.multi_class.relation.all"
            }
  logSamples: true
-
Inside the custom card, specify the Hugging Face dataset loader:

"loader": {
  "__type__": "load_hf",
  "path": "glue",
  "name": "wnli"
},
-
(Optional) You can use other Unitxt loaders (found on the Unitxt website) with the volumes and volumeMounts parameters to mount the dataset from persistent volumes. For example, if you use the LoadCSV Unitxt loader, mount the files to the container to make the dataset accessible for the evaluation process.
Note
The provided scenario example does not work on …
Using PVCs as storage
To use a PVC as storage for the LMEvalJob results, you can use either managed PVCs or existing PVCs. Managed PVCs are managed by the TrustyAI operator. Existing PVCs are created by the end-user before the LMEvalJob is created.
Note
If both managed and existing PVCs are referenced in outputs, the TrustyAI operator defaults to the managed PVC.
-
You have logged in to Open Data Hub.
-
Your cluster administrator has installed Open Data Hub and enabled the TrustyAI service for the project where the models are deployed.
Managed PVCs
To create a managed PVC, specify its size. The managed PVC is named <job-name>-pvc and is available after the job finishes. When the LMEvalJob is deleted, the managed PVC is also deleted.
-
Enter the following code:
apiVersion: trustyai.opendatahub.io/v1alpha1
kind: LMEvalJob
metadata:
  name: evaljob-sample
spec:
  # other fields omitted ...
  outputs:
    pvcManaged:
      size: 5Gi
-
outputs is the section for specifying custom storage locations.
-
pvcManaged creates an operator-managed PVC.
-
size (compatible with standard PVC size syntax) is the only supported value.
Existing PVCs
To use an existing PVC, pass its name as a reference. The PVC must exist when you create the LMEvalJob.
The PVC is not managed by the TrustyAI operator, so it is available after deleting the LMEvalJob.
-
Create a PVC. An example is the following:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "my-pvc"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
-
Reference the new PVC from the LMEvalJob:

apiVersion: trustyai.opendatahub.io/v1alpha1
kind: LMEvalJob
metadata:
  name: evaljob-sample
spec:
  # other fields omitted ...
  outputs:
    pvcName: "my-pvc"
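After the job finishes, one way to inspect the results stored on the PVC is to mount it into a short-lived pod. This is only a sketch: the pod name, the mount path, and the directory layout that the evaluation job writes are assumptions, so list the volume contents first. The image matches the busybox image used elsewhere in this document.

cat <<EOF | oc apply -n <your_namespace> -f -
apiVersion: v1
kind: Pod
metadata:
  name: pvc-browser
spec:
  containers:
    - name: browser
      image: 'quay.io/prometheus/busybox:latest'   # any small image with a shell works
      command: ["/bin/sh", "-c", "sleep 3600"]
      volumeMounts:
        - mountPath: /results
          name: results
  volumes:
    - name: results
      persistentVolumeClaim:
        claimName: my-pvc
EOF

# List whatever the evaluation job wrote to the PVC.
oc exec -n <your_namespace> pvc-browser -- ls -R /results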
Using a KServe Inference Service
To run an evaluation job on an InferenceService that is already deployed and running in your namespace, define your LMEvalJob CR and then apply it in the same namespace as your model.
NOTE
The following example only works with Hugging Face or vLLM-based model-serving runtimes.
-
You have logged in to Open Data Hub.
-
Your cluster administrator has installed Open Data Hub and enabled the TrustyAI service for the project where the models are deployed.
-
You have a namespace that contains an InferenceService with a vLLM model. This example assumes that a vLLM model is already deployed in your cluster.
-
Your cluster has Domain Name System (DNS) configured.
-
Define your LMEvalJob CR:

apiVersion: trustyai.opendatahub.io/v1alpha1
kind: LMEvalJob
metadata:
  name: evaljob
spec:
  model: local-completions
  taskList:
    taskNames:
      - mmlu
  logSamples: true
  batchSize: 1
  modelArgs:
    - name: model
      value: granite
    - name: base_url
      value: $ROUTE_TO_MODEL/v1/completions
    - name: num_concurrent
      value: "1"
    - name: max_retries
      value: "3"
    - name: tokenized_requests
      value: false
    - name: tokenizer
      value: huggingfacespace/model
  env:
    - name: OPENAI_TOKEN
      valueFrom:
        secretKeyRef:
          name: <secret-name>
          key: token
-
Apply this CR into the same namespace as your model.
A pod named evaljob spins up in your model namespace. In the pod terminal, you can see the output by running tail -f output/stderr.log.
-
base_url should be set to the route/service URL of your model. Make sure to include the /v1/completions endpoint in the URL.
-
env.valueFrom.secretKeyRef.name should point to a secret that contains a token that can authenticate to your model. secretRef.name should be the secret’s name in the namespace, while secretRef.key should point at the token’s key within the secret.
-
secretKeyRef.name can equal the output of:

oc get secrets -o custom-columns=SECRET:.metadata.name --no-headers | grep user-one-token
-
secretKeyRef.key is set to token.
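One way to fill in the $ROUTE_TO_MODEL placeholder is to look up the predictor route for your model, similar to the route lookup used later in this document. The awk filter assumes the route name contains predictor, so adjust it to match your deployment.

MODEL_ROUTE=$(oc get route -n <model_namespace> | awk '/predictor/{print $2; exit}')
export ROUTE_TO_MODEL="https://${MODEL_ROUTE}"
echo "${ROUTE_TO_MODEL}/v1/completions"   # value to use for base_url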
Setting up LM-Eval S3 Support
Learn how to set up S3 support for your LM-Eval service.
-
You have logged in to Open Data Hub.
-
Your cluster administrator has installed Open Data Hub and enabled the TrustyAI service for the project where the models are deployed.
-
You have a namespace that contains an S3-compatible storage service and bucket.
-
You have created an LMEvalJob that references the S3 bucket containing your model and dataset.
-
You have an S3 bucket that contains the model files and the dataset(s) to be evaluated.
-
Create a Kubernetes Secret containing your S3 connection details:
apiVersion: v1
kind: Secret
metadata:
  name: "s3-secret"
  namespace: test
  labels:
    opendatahub.io/dashboard: "true"
    opendatahub.io/managed: "true"
  annotations:
    opendatahub.io/connection-type: s3
    openshift.io/display-name: "S3 Data Connection - LMEval"
data:
  AWS_ACCESS_KEY_ID: BASE64_ENCODED_ACCESS_KEY      # Replace with your key
  AWS_SECRET_ACCESS_KEY: BASE64_ENCODED_SECRET_KEY  # Replace with your key
  AWS_S3_BUCKET: BASE64_ENCODED_BUCKET_NAME         # Replace with your bucket name
  AWS_S3_ENDPOINT: BASE64_ENCODED_ENDPOINT          # Replace with your endpoint URL (for example, https://s3.amazonaws.com)
  AWS_DEFAULT_REGION: BASE64_ENCODED_REGION         # Replace with your region
type: Opaque

Note: All values must be base64 encoded. For example:

echo -n "my-bucket" | base64
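Optionally, before creating the job, confirm that the values stored in the secret decode back to what you expect; the bucket key is used as an example here.

oc get secret s3-secret -n test -o jsonpath='{.data.AWS_S3_BUCKET}' | base64 -d; echo
-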
Deploy the LMEvalJob CR that references the S3 bucket containing your model and dataset:

apiVersion: trustyai.opendatahub.io/v1alpha1
kind: LMEvalJob
metadata:
  name: evaljob-sample
spec:
  allowOnline: false
  model: hf                                    # Model type (Hugging Face in this example)
  modelArgs:
    - name: pretrained
      value: /opt/app-root/src/hf_home/flan    # Path where the model is mounted in the container
  taskList:
    taskNames:
      - arc_easy                               # The evaluation task to run
  logSamples: true
  offline:
    storage:
      s3:
        accessKeyId:
          name: s3-secret
          key: AWS_ACCESS_KEY_ID
        secretAccessKey:
          name: s3-secret
          key: AWS_SECRET_ACCESS_KEY
        bucket:
          name: s3-secret
          key: AWS_S3_BUCKET
        endpoint:
          name: s3-secret
          key: AWS_S3_ENDPOINT
        region:
          name: s3-secret
          key: AWS_DEFAULT_REGION
        path: ""                               # Optional subfolder within the bucket
        verifySSL: false

Important: The LMEvalJob copies all the files from the specified bucket and path. If your bucket contains many files and you only want to use a subset, set the path field to the specific subfolder that contains the files you require. For example, use path: "my-models/".
-
Set up a secure connection using SSL.
-
Create a ConfigMap object with your CA certificate:

apiVersion: v1
kind: ConfigMap
metadata:
  name: s3-ca-cert
  namespace: test
  annotations:
    service.beta.openshift.io/inject-cabundle: "true"  # For injection
data: {}  # OpenShift will inject the service CA bundle
# Or add your custom CA:
# data:
#   ca.crt: |-
#     -----BEGIN CERTIFICATE-----
#     ...your CA certificate content...
#     -----END CERTIFICATE-----
-
Update the LMEvalJob to use SSL verification:

apiVersion: trustyai.opendatahub.io/v1alpha1
kind: LMEvalJob
metadata:
  name: evaljob-sample
spec:
  # ... same as above ...
  offline:
    storage:
      s3:
        # ... same as above ...
        verifySSL: true        # Enable SSL verification
        caBundle:
          name: s3-ca-cert     # ConfigMap name containing your CA
          key: service-ca.crt  # Key in ConfigMap containing the certificate
-
-
After deploying the LMEvalJob, open the kubectl command line and enter this command to check its status:

kubectl logs -n test job/evaljob-sample
-
View the logs with the kubectl command kubectl logs -n test job/<job-name> to make sure the job has functioned correctly.
-
The results are displayed in the logs after the evaluation is completed.
Using LLM-as-a-Judge metrics with LM-Eval
You can use a large language model (LLM) to assess the quality of outputs from another LLM, known as LLM-as-a-Judge (LLMaaJ).
You can use LLMaaJ to:
-
Assess work with no clearly correct answer, such as creative writing.
-
Judge quality characteristics such as helpfulness, safety, and depth.
-
Augment traditional quantitative measures that are used to evaluate a model’s performance (for example,
ROUGE metrics).
-
Test specific quality aspects of your model output.
Follow the custom quality assessment example below to learn more about using your own metrics criteria with LM-Eval to evaluate model responses.
This example uses Unitxt to define custom metrics and to see how the model (flan-t5-small) answers questions from MT-Bench, a standard benchmark. Custom evaluation criteria and instructions from the Mistral-7B model are used to rate the answers from 1-10, based on helpfulness, accuracy, and detail.
-
You have logged in to Open Data Hub.
-
You have installed the OpenShift CLI (
oc) as described in the appropriate documentation for your cluster:-
Installing the OpenShift CLI for OpenShift Container Platform
-
Installing the OpenShift CLI for Red Hat OpenShift Service on AWS
-
-
Your cluster administrator has installed Open Data Hub and enabled the TrustyAI service for the project where the models are deployed.
-
You are familiar with how to use Unitxt.
-
You have set the following parameters:
Table 6. Parameters

| Parameter | Description |
|---|---|
| Custom template | Tells the judge to assign a score between 1 and 10 in a standardized format, based on specific criteria. |
| processors.extract_mt_bench_rating_judgment | Pulls the numerical rating from the judge’s response. |
| formats.models.mistral.instruction | Formats the prompts for the Mistral model. |
| Custom LLM-as-judge metric | Uses Mistral-7B with your custom instructions. |
-
In a terminal window, if you are not already logged in to your OpenShift cluster as a cluster administrator, log in to the OpenShift CLI (oc) as shown in the following example:

$ oc login <openshift_cluster_url> -u <admin_username> -p <password>
-
Apply the following manifest by using the oc apply -f - command. The YAML content defines a custom evaluation job (LMEvalJob), the namespace, and the location of the model you want to evaluate. The YAML contains the following instructions:
-
Which model to evaluate.
-
What data to use.
-
How to format inputs and outputs.
-
Which judge model to use.
-
How to extract and log results.
Note: You can also put the YAML manifest into a file using a text editor and then apply it by using the oc apply -f file.yaml command.
-
apiVersion: trustyai.opendatahub.io/v1alpha1
kind: LMEvalJob
metadata:
name: custom-eval
namespace: test
spec:
allowOnline: true
allowCodeExecution: true
model: hf
modelArgs:
- name: pretrained
value: google/flan-t5-small
taskList:
taskRecipes:
- card:
custom: |
{
"__type__": "task_card",
"loader": {
"__type__": "load_hf",
"path": "OfirArviv/mt_bench_single_score_gpt4_judgement",
"split": "train"
},
"preprocess_steps": [
{
"__type__": "rename_splits",
"mapper": {
"train": "test"
}
},
{
"__type__": "filter_by_condition",
"values": {
"turn": 1
},
"condition": "eq"
},
{
"__type__": "filter_by_condition",
"values": {
"reference": "[]"
},
"condition": "eq"
},
{
"__type__": "rename",
"field_to_field": {
"model_input": "question",
"score": "rating",
"category": "group",
"model_output": "answer"
}
},
{
"__type__": "literal_eval",
"field": "question"
},
{
"__type__": "copy",
"field": "question/0",
"to_field": "question"
},
{
"__type__": "literal_eval",
"field": "answer"
},
{
"__type__": "copy",
"field": "answer/0",
"to_field": "answer"
}
],
"task": "tasks.response_assessment.rating.single_turn",
"templates": [
"templates.response_assessment.rating.mt_bench_single_turn"
]
}
template:
ref: response_assessment.rating.mt_bench_single_turn
format: formats.models.mistral.instruction
metrics:
- ref: llmaaj_metric
custom:
templates:
- name: response_assessment.rating.mt_bench_single_turn
value: |
{
"__type__": "input_output_template",
"instruction": "Please act as an impartial judge and evaluate the quality of the response provided by an AI assistant to the user question displayed below. Your evaluation should consider factors such as the helpfulness, relevance, accuracy, depth, creativity, and level of detail of the response. Begin your evaluation by providing a short explanation. Be as objective as possible. After providing your explanation, you must rate the response on a scale of 1 to 10 by strictly following this format: \"[[rating]]\", for example: \"Rating: [[5]]\".\n\n",
"input_format": "[Question]\n{question}\n\n[The Start of Assistant's Answer]\n{answer}\n[The End of Assistant's Answer]",
"output_format": "[[{rating}]]",
"postprocessors": [
"processors.extract_mt_bench_rating_judgment"
]
}
tasks:
- name: response_assessment.rating.single_turn
value: |
{
"__type__": "task",
"input_fields": {
"question": "str",
"answer": "str"
},
"outputs": {
"rating": "float"
},
"metrics": [
"metrics.spearman"
]
}
metrics:
- name: llmaaj_metric
value: |
{
"__type__": "llm_as_judge",
"inference_model": {
"__type__": "hf_pipeline_based_inference_engine",
"model_name": "mistralai/Mistral-7B-Instruct-v0.2",
"max_new_tokens": 256,
"use_fp16": true
},
"template": "templates.response_assessment.rating.mt_bench_single_turn",
"task": "rating.single_turn",
"format": "formats.models.mistral.instruction",
"main_score": "mistral_7b_instruct_v0_2_huggingface_template_mt_bench_single_turn"
}
logSamples: true
pod:
container:
env:
- name: HF_TOKEN
valueFrom:
secretKeyRef:
name: hf-token-secret
key: token
resources:
limits:
cpu: '2'
memory: 16Gi
A processor extracts the numeric rating from the judge’s natural language response. The final result is available as part of the LMEval Job Custom Resource (CR).
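When the job reaches the Complete state, you can pull the scores out of the CR with the same pattern used earlier in this document for the sample job:

oc get lmevaljobs.trustyai.opendatahub.io custom-eval -n test \
  -o template --template={{.status.results}} | jq '.results'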
Note
The provided scenario example does not work for …
Using Llama Stack with TrustyAI
This section contains tutorials for working with Llama Stack in TrustyAI. These tutorials demonstrate how to use various Llama Stack components and providers to evaluate and work with language models.
The following sections describe how to work with Llama Stack and provide example use cases:
-
Using the Llama Stack external evaluation provider with lm-evaluation-harness in TrustyAI
-
Running custom evaluations with LM-Eval Llama Stack external evaluation provider
-
Using the trustyai-fms Guardrails Orchestrator with Llama Stack
Using Llama Stack external evaluation provider with lm-evaluation-harness in TrustyAI
This example demonstrates how to evaluate a language model in Open Data Hub using the LMEval Llama Stack external eval provider in a Python workbench. To do this, configure a Llama Stack server to use the LMEval eval provider, register a benchmark dataset, and run a benchmark evaluation job on a language model.
-
You have installed Open Data Hub, version 2.29 or later.
-
You have cluster administrator privileges for your Open Data Hub cluster.
-
You have installed the OpenShift CLI (
oc) as described in the appropriate documentation for your cluster:-
Installing the OpenShift CLI for OpenShift Container Platform
-
Installing the OpenShift CLI for Red Hat OpenShift Service on AWS
-
-
You have a large language model (LLM) for chat generation or text classification, or both, deployed in your namespace.
-
You have installed TrustyAI Operator in your Open Data Hub cluster.
-
You have set KServe to Raw Deployment mode in your cluster.
-
Create and activate a Python virtual environment for this tutorial on your local machine:

python3 -m venv .venv
source .venv/bin/activate
-
Install the required packages from the Python Package Index (PyPI):

pip install \
  llama-stack \
  llama-stack-client \
  llama-stack-provider-lmeval
-
Create the model route:
oc create route edge vllm --service=<VLLM_SERVICE> --port=<VLLM_PORT> -n <MODEL_NAMESPACE> -
Configure the Llama Stack server. Set the variables to configure the runtime endpoint and namespace. The VLLM_URL value should be the v1/completions endpoint of your model route, and TRUSTYAI_LM_EVAL_NAMESPACE should be the namespace where your model is deployed. For example:

export TRUSTYAI_LM_EVAL_NAMESPACE=<MODEL_NAMESPACE>
export MODEL_ROUTE=$(oc get route -n "$TRUSTYAI_LM_EVAL_NAMESPACE" | awk '/predictor/{print $2; exit}')
export VLLM_URL="https://${MODEL_ROUTE}/v1/completions"
-
Download the providers.d provider configuration directory and the run.yaml execution file:

curl --create-dirs --output providers.d/remote/eval/trustyai_lmeval.yaml https://raw.githubusercontent.com/trustyai-explainability/llama-stack-provider-lmeval/refs/heads/main/providers.d/remote/eval/trustyai_lmeval.yaml

curl --create-dirs --output run.yaml https://raw.githubusercontent.com/trustyai-explainability/llama-stack-provider-lmeval/refs/heads/main/run.yaml
-
Start the Llama Stack server in a virtual environment. The server uses port 8321 by default:

llama stack run run.yaml --image-type venv
-
Create a Python script in a Jupyter workbench and import the following libraries and modules to interact with the server and run an evaluation:

import os
import subprocess
import logging
import time
import pprint
-
Start the Llama Stack Python client to interact with the running Llama Stack server:

BASE_URL = "http://localhost:8321"

def create_http_client():
    from llama_stack_client import LlamaStackClient
    return LlamaStackClient(base_url=BASE_URL)

client = create_http_client()
-
Print a list of the current available benchmarks:

benchmarks = client.benchmarks.list()
pprint.pprint(f"Available benchmarks: {benchmarks}")
-
LMEval provides access to over 100 preconfigured evaluation datasets. Register the ARC-Easy benchmark, a dataset of grade-school level, multiple-choice science questions:

client.benchmarks.register(
    benchmark_id="trustyai_lmeval::arc_easy",
    dataset_id="trustyai_lmeval::arc_easy",
    scoring_functions=["string"],
    provider_benchmark_id="string",
    provider_id="trustyai_lmeval",
    metadata={
        "tokenizer": "google/flan-t5-small",
        "tokenized_requests": False,
    }
)
-
Verify that the benchmark has been registered successfully:

benchmarks = client.benchmarks.list()
pprint.pprint(f"Available benchmarks: {benchmarks}")
-
Run a benchmark evaluation job on your deployed model using the following input. Replace phi-3 with the name of your deployed model:

job = client.eval.run_eval(
    benchmark_id="trustyai_lmeval::arc_easy",
    benchmark_config={
        "eval_candidate": {
            "type": "model",
            "model": "phi-3",
            "provider_id": "trustyai_lmeval",
            "sampling_params": {
                "temperature": 0.7,
                "top_p": 0.9,
                "max_tokens": 256
            },
        },
        "num_examples": 1000,
    },
)
print(f"Starting job '{job.job_id}'")
-
Monitor the status of the evaluation job using the following code. The job will run asynchronously, so you can check its status periodically:
def get_job_status(job_id, benchmark_id):
return client.eval.jobs.status(job_id=job_id, benchmark_id=benchmark_id)
while True:
job = get_job_status(job_id=job.job_id, benchmark_id="trustyai_lmeval::arc_easy")
print(job)
if job.status in ['failed', 'completed']:
print(f"Job ended with status: {job.status}")
break
time.sleep(20)
-
Retrieve the evaluation job results once the job status reports back as completed:

pprint.pprint(client.eval.jobs.retrieve(job_id=job.job_id, benchmark_id="trustyai_lmeval::arc_easy").scores)
Running custom evaluations with LM-Eval and Llama Stack
This example demonstrates how to use the LM-Eval Llama Stack external eval provider to evaluate a language model with a custom benchmark. Creating a custom benchmark is useful for evaluating specific model knowledge and behavior.
The process involves three steps:
-
Uploading the task dataset to your Open Data Hub cluster
-
Registering it as a custom benchmark dataset with Llama Stack
-
Running a benchmark evaluation job on a language model
-
You have installed Open Data Hub, version 2.29 or later.
-
You have cluster administrator privileges for your Open Data Hub cluster.
-
You have installed the OpenShift CLI (
oc) as described in the appropriate documentation for your cluster:-
Installing the OpenShift CLI for OpenShift Container Platform
-
Installing the OpenShift CLI for Red Hat OpenShift Service on AWS
-
-
You have a large language model (LLM) for chat generation or text classification, or both, deployed on vLLM Serving Runtime in your Open Data Hub cluster.
-
You have installed TrustyAI Operator in your Open Data Hub cluster.
-
You have set KServe to Raw Deployment mode in your cluster.
-
Upload your custom benchmark dataset to your OpenShift cluster by using a PersistentVolumeClaim (PVC) and a temporary pod. Create a PVC named my-pvc to store your dataset. Run the following command in your CLI, replacing <MODEL_NAMESPACE> with the namespace of your language model:

oc apply -n <MODEL_NAMESPACE> -f - << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
EOF
-
Create a pod object named dataset-storage-pod to download the task dataset into the PVC. This pod is used to copy your dataset from your local machine to the Open Data Hub cluster:

oc apply -n <MODEL_NAMESPACE> -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: dataset-storage-pod
spec:
  containers:
    - name: dataset-container
      image: 'quay.io/prometheus/busybox:latest'
      command: ["/bin/sh", "-c", "sleep 3600"]
      volumeMounts:
        - mountPath: "/data/upload_files"
          name: dataset-storage
  volumes:
    - name: dataset-storage
      persistentVolumeClaim:
        claimName: my-pvc
EOF
-
Copy your locally stored task dataset to the pod to place it within the PVC. In this example, the dataset is named example-dk-bench-input-bmo.jsonl locally and it is copied to the dataset-storage-pod under the path /data/upload_files/:

oc cp example-dk-bench-input-bmo.jsonl dataset-storage-pod:/data/upload_files/example-dk-bench-input-bmo.jsonl -n <MODEL_NAMESPACE>
-
Once the custom dataset is uploaded to the PVC, register it as a benchmark for evaluations. At a minimum, provide the following metadata, and replace the DK_BENCH_DATASET_PATH and any other metadata fields to match your specific configuration:
-
The TrustyAI LM-Eval Tasks GitHub web address
-
Your branch
-
The commit hash and path of the custom task.
client.benchmarks.register(
    benchmark_id="trustyai_lmeval::dk-bench",
    dataset_id="trustyai_lmeval::dk-bench",
    scoring_functions=["accuracy"],
    provider_benchmark_id="dk-bench",
    provider_id="trustyai_lmeval",
    metadata={
        "custom_task": {
            "git": {
                "url": "https://github.com/trustyai-explainability/lm-eval-tasks.git",
                "branch": "main",
                "commit": "8220e2d73c187471acbe71659c98bccecfe77958",
                "path": "tasks/",
            }
        },
        "env": {
            # Path of the dataset inside the PVC
            "DK_BENCH_DATASET_PATH": "/opt/app-root/src/hf_home/example-dk-bench-input-bmo.jsonl",
            "JUDGE_MODEL_URL": "http://phi-3-predictor:8080/v1/chat/completions",
            # For simplicity, we use the same model as the one being evaluated
            "JUDGE_MODEL_NAME": "phi-3",
            "JUDGE_API_KEY": "",
        },
        "tokenized_requests": False,
        "tokenizer": "google/flan-t5-small",
        "input": {"storage": {"pvc": "my-pvc"}}
    },
)
-
-
Run a benchmark evaluation on your model:

job = client.eval.run_eval(
    benchmark_id="trustyai_lmeval::dk-bench",
    benchmark_config={
        "eval_candidate": {
            "type": "model",
            "model": "phi-3",
            "provider_id": "trustyai_lmeval",
            "sampling_params": {
                "temperature": 0.7,
                "top_p": 0.9,
                "max_tokens": 256
            },
        },
        "num_examples": 1000,
    },
)
print(f"Starting job '{job.job_id}'")
-
Monitor the status of the evaluation job. The job runs asynchronously, so you can check its status periodically:

import time

def get_job_status(job_id, benchmark_id):
    return client.eval.jobs.status(job_id=job_id, benchmark_id=benchmark_id)

while True:
    job = get_job_status(job_id=job.job_id, benchmark_id="trustyai_lmeval::dk-bench")
    print(job)

    if job.status in ['failed', 'completed']:
        print(f"Job ended with status: {job.status}")
        break

    time.sleep(20)
Detecting personally identifiable information (PII) by using Guardrails with Llama Stack
The trustyai_fms Orchestrator server is an external provider for Llama Stack that allows you to configure and use the Guardrails Orchestrator and compatible detection models through the Llama Stack API.
This implementation of Llama Stack combines Guardrails Orchestrator with a suite of community-developed detectors to provide robust content filtering and safety monitoring.
This example demonstrates how to use the built-in Guardrails Regex Detector to detect personally identifiable information (PII) with Guardrails Orchestrator as Llama Stack safety guardrails, using the LlamaStack Operator to deploy a distribution in your Open Data Hub namespace.
Note
Guardrails Orchestrator with Llama Stack is not supported on …
-
You have cluster administrator privileges for your OpenShift Container Platform cluster.
-
You have installed the OpenShift CLI (
oc) as described in the appropriate documentation for your cluster:-
Installing the OpenShift CLI for OpenShift Container Platform
-
Installing the OpenShift CLI for Red Hat OpenShift Service on AWS
-
-
You have installed Open Data Hub, version 2.29 or later.
-
You have a large language model (LLM) for chat generation or text classification, or both, deployed in your namespace.
-
A cluster administrator has installed the following Operators in OpenShift Container Platform:
-
Red Hat Authorino Operator, version 1.2.1 or later
-
Red Hat OpenShift Service Mesh, version 2.6.7-0 or later
-
-
Configure your Open Data Hub environment with the following configurations in the DataScienceCluster. Note that you must manually update the spec.llamastack.managementState field to Managed:

spec:
  trustyai:
    managementState: Managed
  llamastack:
    managementState: Managed
  kserve:
    defaultDeploymentMode: RawDeployment
    managementState: Managed
    nim:
      managementState: Managed
    rawDeploymentServiceConfig: Headless
    serving:
      ingressGateway:
        certificate:
          type: OpenshiftDefaultIngress
      managementState: Removed
      name: knative-serving
  serviceMesh:
    managementState: Removed
-
Create a project in your Open Data Hub namespace:

PROJECT_NAME="lls-minimal-example"
oc new-project $PROJECT_NAME
-
Deploy the Guardrails Orchestrator with regex detectors by applying the Orchestrator configuration for regex-based PII detection:

cat <<EOF | oc apply -f -
kind: ConfigMap
apiVersion: v1
metadata:
  name: fms-orchestr8-config-nlp
data:
  config.yaml: |
    detectors:
      regex:
        type: text_contents
        service:
          hostname: "127.0.0.1"
          port: 8080
        chunker_id: whole_doc_chunker
        default_threshold: 0.5
---
apiVersion: trustyai.opendatahub.io/v1alpha1
kind: GuardrailsOrchestrator
metadata:
  name: guardrails-orchestrator
spec:
  orchestratorConfig: "fms-orchestr8-config-nlp"
  enableBuiltInDetectors: true
  enableGuardrailsGateway: false
  replicas: 1
EOF
-
In the same namespace, create a Llama Stack distribution:

apiVersion: llamastack.io/v1alpha1
kind: LlamaStackDistribution
metadata:
  name: llamastackdistribution-sample
  namespace: <PROJECT_NAMESPACE>
spec:
  replicas: 1
  server:
    containerSpec:
      env:
        - name: VLLM_URL
          value: '${VLLM_URL}'
        - name: INFERENCE_MODEL
          value: '${INFERENCE_MODEL}'
        - name: MILVUS_DB_PATH
          value: '~/.llama/milvus.db'
        - name: VLLM_TLS_VERIFY
          value: 'false'
        - name: FMS_ORCHESTRATOR_URL
          value: '${FMS_ORCHESTRATOR_URL}'
      name: llama-stack
      port: 8321
    distribution:
      name: rh-dev
    storage:
      size: 20Gi
Note
After deploying the LlamaStackDistribution CR, a new pod is created in the same namespace. This pod runs the LlamaStack server for your distribution.
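Before registering shields, you can confirm that the distribution and its pod are up. These are generic queries; if the resource name does not resolve on your cluster, checking the pods alone is sufficient.

oc get llamastackdistributions -n $PROJECT_NAME
oc get pods -n $PROJECT_NAME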
-
Once the Llama Stack server is running, use the /v1/shields endpoint to dynamically register a shield. For example, register a shield that uses regex patterns to detect personally identifiable information (PII).
-
Open a port-forward to access it locally:
oc -n $PROJECT_NAME port-forward svc/llama-stack 8321:8321 -
Use the /v1/shields endpoint to dynamically register a shield. For example, register a shield that uses regex patterns to detect personally identifiable information (PII):

curl -X POST http://localhost:8321/v1/shields \
  -H 'Content-Type: application/json' \
  -d '{
    "shield_id": "regex_detector",
    "provider_shield_id": "regex_detector",
    "provider_id": "trustyai_fms",
    "params": {
      "type": "content",
      "confidence_threshold": 0.5,
      "message_types": ["system", "user"],
      "detectors": {
        "regex": {
          "detector_params": {
            "regex": ["email", "us-social-security-number", "credit-card"]
          }
        }
      }
    }
  }'
-
Verify that the shield was registered:
curl -s http://localhost:8321/v1/shields | jq '.' -
The following output indicates that the shield has been registered successfully:

{
  "data": [
    {
      "identifier": "regex_detector",
      "provider_resource_id": "regex_detector",
      "provider_id": "trustyai_fms",
      "type": "shield",
      "params": {
        "type": "content",
        "confidence_threshold": 0.5,
        "message_types": ["system", "user"],
        "detectors": {
          "regex": {
            "detector_params": {
              "regex": ["email", "us-social-security-number", "credit-card"]
            }
          }
        }
      }
    }
  ]
}
-
Once the shield has been registered, verify that it is working by sending a message containing PII to the /v1/safety/run-shield endpoint:
-
Email detection example:
curl -X POST http://localhost:8321/v1/safety/run-shield \
  -H "Content-Type: application/json" \
  -d '{
    "shield_id": "regex_detector",
    "messages": [
      {
        "content": "My email is test@example.com",
        "role": "user"
      }
    ]
  }' | jq '.'

This should return a response indicating that the email was detected:

{
  "violation": {
    "violation_level": "error",
    "user_message": "Content violation detected by shield regex_detector (confidence: 1.00, 1/1 processed messages violated)",
    "metadata": {
      "status": "violation",
      "shield_id": "regex_detector",
      "confidence_threshold": 0.5,
      "summary": {
        "total_messages": 1, "processed_messages": 1, "skipped_messages": 0,
        "messages_with_violations": 1, "messages_passed": 0,
        "message_fail_rate": 1.0, "message_pass_rate": 0.0, "total_detections": 1,
        "detector_breakdown": { "active_detectors": 1, "total_checks_performed": 1, "total_violations_found": 1, "violations_per_message": 1.0 }
      },
      "results": [
        {
          "message_index": 0,
          "text": "My email is test@example.com",
          "status": "violation",
          "score": 1.0,
          "detection_type": "pii",
          "individual_detector_results": [ { "detector_id": "regex", "status": "violation", "score": 1.0, "detection_type": "pii" } ]
        }
      ]
    }
  }
}
-
Social security number (SSN) detection example:
curl -X POST http://localhost:8321/v1/safety/run-shield \
  -H "Content-Type: application/json" \
  -d '{
    "shield_id": "regex_detector",
    "messages": [
      {
        "content": "My SSN is 123-45-6789",
        "role": "user"
      }
    ]
  }' | jq '.'

This should return a response indicating that the SSN was detected:

{
  "violation": {
    "violation_level": "error",
    "user_message": "Content violation detected by shield regex_detector (confidence: 1.00, 1/1 processed messages violated)",
    "metadata": {
      "status": "violation",
      "shield_id": "regex_detector",
      "confidence_threshold": 0.5,
      "summary": {
        "total_messages": 1, "processed_messages": 1, "skipped_messages": 0,
        "messages_with_violations": 1, "messages_passed": 0,
        "message_fail_rate": 1.0, "message_pass_rate": 0.0, "total_detections": 1,
        "detector_breakdown": { "active_detectors": 1, "total_checks_performed": 1, "total_violations_found": 1, "violations_per_message": 1.0 }
      },
      "results": [
        {
          "message_index": 0,
          "text": "My SSN is 123-45-6789",
          "status": "violation",
          "score": 1.0,
          "detection_type": "pii",
          "individual_detector_results": [ { "detector_id": "regex", "status": "violation", "score": 1.0, "detection_type": "pii" } ]
        }
      ]
    }
  }
}
-
Credit card detection example:
curl -X POST http://localhost:8321/v1/safety/run-shield \
  -H "Content-Type: application/json" \
  -d '{
    "shield_id": "regex_detector",
    "messages": [
      {
        "content": "My credit card number is 4111-1111-1111-1111",
        "role": "user"
      }
    ]
  }' | jq '.'

This should return a response indicating that the credit card number was detected:

{
  "violation": {
    "violation_level": "error",
    "user_message": "Content violation detected by shield regex_detector (confidence: 1.00, 1/1 processed messages violated)",
    "metadata": {
      "status": "violation",
      "shield_id": "regex_detector",
      "confidence_threshold": 0.5,
      "summary": {
        "total_messages": 1, "processed_messages": 1, "skipped_messages": 0,
        "messages_with_violations": 1, "messages_passed": 0,
        "message_fail_rate": 1.0, "message_pass_rate": 0.0, "total_detections": 1,
        "detector_breakdown": { "active_detectors": 1, "total_checks_performed": 1, "total_violations_found": 1, "violations_per_message": 1.0 }
      },
      "results": [
        {
          "message_index": 0,
          "text": "My credit card number is 4111-1111-1111-1111",
          "status": "violation",
          "score": 1.0,
          "detection_type": "pii",
          "individual_detector_results": [ { "detector_id": "regex", "status": "violation", "score": 1.0, "detection_type": "pii" } ]
        }
      ]
    }
  }
}
-
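As a negative control, you can also send a message without any PII to the same endpoint. The request below is identical in shape to the earlier examples; you should not see a violation block in the response, although the exact pass response format is not shown in this document.

curl -X POST http://localhost:8321/v1/safety/run-shield \
  -H "Content-Type: application/json" \
  -d '{
    "shield_id": "regex_detector",
    "messages": [
      {
        "content": "What is the weather like today?",
        "role": "user"
      }
    ]
  }' | jq '.'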
Evaluating RAG systems with Ragas
As an AI engineer, you can use Retrieval-Augmented Generation Assessment (Ragas) to measure and improve the quality of your RAG systems in Open Data Hub. Ragas provides objective metrics that assess retrieval quality, answer relevance, and factual consistency, enabling you to identify issues, optimize configurations, and establish automated quality gates in your development workflows.
Ragas is integrated with Open Data Hub through the Llama Stack evaluation API and supports two deployment modes: an inline provider for development and testing, and a remote provider for production-scale evaluations using Open Data Hub pipelines.
About Ragas evaluation
Ragas addresses the unique challenges of evaluating RAG systems by providing metrics that assess both the retrieval and generation components of your application. Unlike traditional language model evaluation that focuses solely on output quality, Ragas evaluates how well your system retrieves relevant context and generates responses grounded in that context.
Key Ragas metrics
Ragas provides multiple metrics for evaluating RAG systems. Here are some of the metrics:
- Faithfulness
-
Measures the generated answer to determine whether it is consistent with the retrieved context. A high faithfulness score indicates that the answer is well-grounded in the source documents, reducing the risk of hallucinations. This is critical for enterprise and regulated environments where accuracy and trustworthiness are paramount.
- Answer Relevancy
-
Evaluates whether the generated answer directly addresses the input question. This metric ensures that your RAG system provides pertinent responses rather than generic or off-topic information.
- Context Precision
-
Measures the precision of the retrieval component by evaluating whether the retrieved context chunks contain information relevant to answering the question. High precision indicates that your retrieval system is returning focused, relevant documents rather than irrelevant noise.
- Context Recall
-
Measures the recall of the retrieval component by evaluating whether all necessary information for answering the question is present in the retrieved contexts. High recall ensures that your retrieval system is not missing important information.
- Answer Correctness
-
Compares the generated answer with a ground truth reference answer to measure accuracy. This metric is useful when you have labeled evaluation datasets with known correct answers.
- Answer Similarity
-
Measures the semantic similarity between the generated answer and a reference answer, providing a more nuanced assessment than exact string matching.
Use cases for Ragas in AI engineering workflows
Ragas enables AI engineers to accomplish the following tasks:
- Automate quality checks
-
Create reproducible, objective evaluation jobs that can be automatically triggered after every code commit or model update. Automatic quality checks establish quality gates to prevent regressions and ensure that you deploy only high-quality RAG configurations to production.
- Enable evaluation-driven development (EDD)
-
Use Ragas metrics to guide iterative optimization. Test different chunking strategies, embedding models, or retrieval algorithms against a defined benchmark to discover the RAG configuration that best balances your performance metrics, for example, maximizing faithfulness while minimizing computational cost.
- Ensure factual consistency and trustworthiness
-
Measure the reliability of your RAG system by setting thresholds on metrics like faithfulness. Metrics thresholds ensure that responses are consistently grounded in source documents, which is critical for enterprise applications where hallucinations or factual errors are unacceptable.
- Achieve production scalability
-
Use the remote provider pattern with Open Data Hub pipelines to run evaluations as distributed jobs. This pattern allows you to run large-scale benchmarks across thousands of data points without blocking development or consuming excessive local resources.
- Compare model and configuration variants
-
Run comparative evaluations across different models, retrieval strategies, or system configurations to make data-driven decisions about your RAG architecture. For example, compare the impact of different chunk sizes (512 vs 1024 tokens) or different embedding models on retrieval quality metrics.
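As an illustration of an automated quality gate, the following minimal Python sketch compares aggregated Ragas scores against configured thresholds and exits with a non-zero status when any metric falls below its threshold, which fails the CI/CD stage. The metric names and threshold values are assumptions for this example; adapt them to the scores that your evaluation jobs actually report.

import sys

# Hypothetical aggregated scores from a Ragas evaluation run; in a real pipeline,
# load these from the evaluation job output instead of hard-coding them.
scores = {"faithfulness": 0.91, "answer_relevancy": 0.88, "context_precision": 0.74}

# Assumed thresholds for the quality gate.
thresholds = {"faithfulness": 0.85, "answer_relevancy": 0.80, "context_precision": 0.80}

failures = {
    metric: (scores.get(metric, 0.0), threshold)
    for metric, threshold in thresholds.items()
    if scores.get(metric, 0.0) < threshold
}

if failures:
    for metric, (score, threshold) in failures.items():
        print(f"FAIL: {metric}={score:.2f} is below the threshold {threshold:.2f}")
    sys.exit(1)  # a non-zero exit code fails the CI/CD stage
print("All Ragas quality gates passed.")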
Ragas provider deployment modes
Open Data Hub supports the following two deployment modes for Ragas evaluation; a client-side Python sketch for listing the registered evaluation providers follows the list:
- Inline provider
-
The inline provider mode runs Ragas evaluation in the same process as the Llama Stack server. Use the inline provider for development and rapid prototyping. It offers the following advantages:
-
Fast processing with in-memory operations
-
Minimal configuration overhead
-
Local development and testing
-
Evaluation of small to medium-sized datasets
-
- Remote provider
-
The remote provider mode runs Ragas evaluation as distributed jobs using Open Data Hub pipelines (powered by Kubeflow Pipelines). Use the remote provider for production environments. It offers the following capabilities:
-
Running evaluations in parallel across thousands of data points
-
Providing resource isolation and management
-
Integrating with CI/CD pipelines for automated quality gates
-
Storing results in S3-compatible object storage
-
Tracking evaluation history and metrics over time
-
Supporting large-scale batch evaluations without impacting the Llama Stack server
-
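Both modes expose evaluation through the same Llama Stack endpoint, so client code typically does not change between them. The following minimal sketch assumes that the llama-stack-client Python package is installed and uses a placeholder service URL; the attribute names on the returned provider objects can vary between client versions.

from llama_stack_client import LlamaStackClient

# Placeholder URL: replace it with your LlamaStackDistribution service and port.
client = LlamaStackClient(
    base_url="http://<distribution_name>-service.<project_name>.svc.cluster.local:8321"
)

# List the providers registered with the server and print the eval providers,
# which is where the Ragas inline or remote provider appears.
for provider in client.providers.list():
    if provider.api == "eval":
        print(provider.provider_id, provider.provider_type)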
Setting up the Ragas inline provider for development
You can set up the Ragas inline provider to run evaluations directly within the Llama Stack server process. The inline provider is ideal for development environments, rapid prototyping, and lightweight evaluation workloads where simplicity and quick iteration are priorities.
-
You have cluster administrator privileges for your OpenShift Container Platform cluster.
-
You have installed the OpenShift CLI (
oc) as described in the appropriate documentation for your cluster:-
Installing the OpenShift CLI for OpenShift Container Platform
-
Installing the OpenShift CLI for Red Hat OpenShift Service on AWS
-
-
You have activated the Llama Stack Operator in Open Data Hub. For more information, see Installing the Llama Stack Operator.
-
You have deployed a Llama model with KServe. For more information, see Deploying a Llama model with KServe.
-
You have created a project.
-
In a terminal window, if you are not already logged in to your OpenShift cluster, log in to the OpenShift CLI (
oc) as shown in the following example:$ oc login <openshift_cluster_url> -u <username> -p <password> -
Navigate to your project:
$ oc project <project_name> -
Create a ConfigMap for the Ragas inline provider configuration. For example, create a
ragas-inline-config.yaml file as follows:
Example ragas-inline-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ragas-inline-config
  namespace: <project_name>
data:
  EMBEDDING_MODEL: "all-MiniLM-L6-v2"
-
EMBEDDING_MODEL: Used by Ragas for semantic similarity calculations. The all-MiniLM-L6-v2 model is a lightweight, efficient option that is suitable for most use cases; a standalone sketch of this model in use follows this callout.
-
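To illustrate what this embedding model does, the following standalone Python sketch computes the semantic similarity of two sentences with all-MiniLM-L6-v2 by using the sentence-transformers package. It demonstrates the model itself, not how Ragas calls it internally, and assumes that sentence-transformers is installed in your environment.

from sentence_transformers import SentenceTransformer, util

# Load the same lightweight embedding model that EMBEDDING_MODEL configures.
model = SentenceTransformer("all-MiniLM-L6-v2")

answer = "The Eiffel Tower was completed in 1889."
reference = "Construction of the Eiffel Tower finished in 1889."

# Encode both sentences and compute cosine similarity, the building block behind
# semantic-similarity style metrics such as answer similarity.
embeddings = model.encode([answer, reference], convert_to_tensor=True)
print(float(util.cos_sim(embeddings[0], embeddings[1])))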
-
Apply the ConfigMap:
$ oc apply -f ragas-inline-config.yaml -
Create a Llama Stack distribution configuration file with the Ragas inline provider. For example, create a
llama-stack-ragas-inline.yaml file as follows:
Example llama-stack-ragas-inline.yaml
apiVersion: llamastack.trustyai.opendatahub.io/v1alpha1
kind: LlamaStackDistribution
metadata:
  name: llama-stack-ragas-inline
  namespace: <project_name>
spec:
  replicas: 1
  server:
    containerSpec:
      env:
        # ...
        - name: VLLM_URL
          value: <model_url>
        - name: VLLM_API_TOKEN
          value: <model_api_token (if necessary)>
        - name: INFERENCE_MODEL
          value: <model_name>
        - name: MILVUS_DB_PATH
          value: ~/.llama/milvus.db
        - name: VLLM_TLS_VERIFY
          value: "false"
        - name: FMS_ORCHESTRATOR_URL
          value: http://localhost:123
        - name: EMBEDDING_MODEL
          value: granite-embedding-125m
        # ...
-
Deploy the Llama Stack distribution:
$ oc apply -f llama-stack-ragas-inline.yaml -
Wait for the deployment to complete:
$ oc get pods -w
Wait until the llama-stack-ragas-inline pod status shows Running.
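As an optional sanity check before you run evaluations, confirm that the inline distribution is reachable from a workbench or pod in the cluster. The following minimal Python sketch assumes that the llama-stack-client package is installed; the service URL follows the <distribution_name>-service naming pattern and is a placeholder that you must adjust to your environment.

from llama_stack_client import LlamaStackClient

# Placeholder URL: the in-cluster service for the llama-stack-ragas-inline distribution.
client = LlamaStackClient(
    base_url="http://llama-stack-ragas-inline-service.<project_name>.svc.cluster.local:8321"
)

# Listing models confirms that the server is up and the inference backend is registered.
for model in client.models.list():
    print(model.identifier)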
Configuring the Ragas remote provider for production
You can configure the Ragas remote provider to run evaluations as distributed jobs using Open Data Hub pipelines. The remote provider enables production-scale evaluations by running Ragas in a separate Kubeflow Pipelines environment, providing resource isolation, improved scalability, and integration with CI/CD workflows.
-
You have cluster administrator privileges for your OpenShift Container Platform cluster.
-
You have installed the Open Data Hub Operator.
-
You have a
DataScienceCluster custom resource in your environment, and in its spec.components section, llamastackoperator.managementState is set to Managed.
You have installed the OpenShift CLI (
oc) as described in the appropriate documentation for your cluster:-
Installing the OpenShift CLI for OpenShift Container Platform
-
Installing the OpenShift CLI for Red Hat OpenShift Service on AWS
-
-
You have configured a pipeline server in your project. For more information, see Configuring a pipeline server.
-
You have activated the Llama Stack Operator in Open Data Hub. For more information, see Installing the Llama Stack Operator.
-
You have deployed a Large Language Model with KServe. For more information, see Deploying a Llama model with KServe.
-
You have configured S3-compatible object storage for storing evaluation results and you know your S3 credentials: AWS access key, AWS secret access key, and AWS default region. For more information, see Adding a connection to your project.
-
You have created a project.
-
In a terminal window, if you are not already logged in to your OpenShift cluster, log in to the OpenShift CLI (
oc) as shown in the following example:$ oc login <openshift_cluster_url> -u <username> -p <password> -
Navigate to your project:
$ oc project <project_name> -
Create a secret for storing S3 credentials:
$ oc create secret generic "<ragas_s3_credentials>" \
  --from-literal=AWS_ACCESS_KEY_ID=<your_access_key> \
  --from-literal=AWS_SECRET_ACCESS_KEY=<your_secret_key> \
  --from-literal=AWS_DEFAULT_REGION=<your_region>
Important
Replace the placeholder values with your actual S3 credentials. These AWS credentials are required in two locations:
-
In the Llama Stack server pod (as environment variables) - to access S3 when creating pipeline runs.
-
In the Kubeflow Pipeline pods (via the secret) - to store evaluation results to S3 during pipeline execution.
The LlamaStackDistribution configuration loads these credentials from the
"<ragas_s3_credentials>" secret and makes them available to both locations.
-
Create a secret for the Kubeflow Pipelines API token:
-
Get your token by running the following command:
$ export KUBEFLOW_PIPELINES_TOKEN=$(oc whoami -t) -
Create the secret by running the following command:
$ oc create secret generic kubeflow-pipelines-token \
  --from-literal=KUBEFLOW_PIPELINES_TOKEN="$KUBEFLOW_PIPELINES_TOKEN"
Important
The Llama Stack distribution service account does not have privileges to create pipeline runs. This secret provides the necessary authentication token for creating and managing pipeline runs.
-
-
Verify that the Kubeflow Pipelines endpoint is accessible:
$ curl -k -H "Authorization: Bearer $KUBEFLOW_PIPELINES_TOKEN" \
  https://$KUBEFLOW_PIPELINES_ENDPOINT/apis/v1beta1/healthz
Create a secret for storing your inference model information:
$ export INFERENCE_MODEL="llama-3-2-3b"
$ export VLLM_URL="https://llama-32-3b-instruct-predictor:8443/v1"
$ export VLLM_TLS_VERIFY="false" # Use "true" in production
$ export VLLM_API_TOKEN="<token_identifier>"
$ oc create secret generic llama-stack-inference-model-secret \
  --from-literal INFERENCE_MODEL="$INFERENCE_MODEL" \
  --from-literal VLLM_URL="$VLLM_URL" \
  --from-literal VLLM_TLS_VERIFY="$VLLM_TLS_VERIFY" \
  --from-literal VLLM_API_TOKEN="$VLLM_API_TOKEN"
Get the Kubeflow Pipelines endpoint by running the following command and searching for "pipeline" in the routes. This is used in a later step for creating a ConfigMap for the Ragas remote provider configuration:
$ oc get routes -A | grep -i pipeline
The output shows the routes in the namespace that you specified for KUBEFLOW_NAMESPACE, including the pipeline server route and the associated metadata route. The route to use is ds-pipeline-dspa.
Create a ConfigMap for the Ragas remote provider configuration. For example, create a
kubeflow-ragas-config.yaml file as follows:
Example kubeflow-ragas-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubeflow-ragas-config
  namespace: <project_name>
data:
  EMBEDDING_MODEL: "all-MiniLM-L6-v2"
  KUBEFLOW_LLAMA_STACK_URL: "http://$<distribution_name>-service.$<your_namespace>.svc.cluster.local:$<port>"
  KUBEFLOW_PIPELINES_ENDPOINT: "https://<kfp_endpoint>"
  KUBEFLOW_NAMESPACE: "<project_name>"
  KUBEFLOW_BASE_IMAGE: "quay.io/rhoai/odh-trustyai-ragas-lls-provider-dsp-rhel9:rhoai-3.0"
  KUBEFLOW_RESULTS_S3_PREFIX: "s3://<bucket_name>/ragas-results"
  KUBEFLOW_S3_CREDENTIALS_SECRET_NAME: "<ragas_s3_credentials>"
-
EMBEDDING_MODEL: Used by Ragas for semantic similarity calculations. -
KUBEFLOW_LLAMA_STACK_URL: The URL for the Llama Stack server. This URL must be accessible from the Kubeflow Pipeline pods. Replace <distribution_name>, <your_namespace>, and <port> with the name of the LlamaStackDistribution that you are creating, the namespace where you create it, and the server port; all three values appear in the LlamaStackDistribution YAML. The small sketch after these callouts shows how the URL is composed.
KUBEFLOW_PIPELINES_ENDPOINT: The Kubeflow Pipelines API endpoint URL. -
KUBEFLOW_NAMESPACE: The namespace where pipeline runs are executed. This should match your current project namespace. -
KUBEFLOW_BASE_IMAGE: The container image used for Ragas evaluation pipeline components. This image contains the Ragas provider package installed via pip. -
KUBEFLOW_RESULTS_S3_PREFIX: The S3 path prefix where evaluation results are stored. For example:s3://my-bucket/ragas-evaluation-results. -
KUBEFLOW_S3_CREDENTIALS_SECRET_NAME: The name of the secret containing S3 credentials.
-
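To make the placeholder substitution concrete, the following small Python sketch composes the in-cluster URL from the three values called out above. The specific distribution name, namespace, and port are placeholders taken from this procedure.

# Compose KUBEFLOW_LLAMA_STACK_URL from the LlamaStackDistribution name,
# namespace, and server port (placeholder values shown).
distribution_name = "llama-stack-pod"
namespace = "<project_name>"
port = 8321

kubeflow_llama_stack_url = (
    f"http://{distribution_name}-service.{namespace}.svc.cluster.local:{port}"
)
print(kubeflow_llama_stack_url)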
-
Apply the ConfigMap:
$ oc apply -f kubeflow-ragas-config.yaml -
Create a Llama Stack distribution configuration file with the Ragas remote provider. For example, create a llama-stack-ragas-remote.yaml file as follows:
Example llama-stack-ragas-remote.yaml
apiVersion: llamastack.io/v1alpha1
kind: LlamaStackDistribution
metadata:
  name: llama-stack-pod
spec:
  replicas: 1
  server:
    containerSpec:
      resources:
        requests:
          cpu: 4
          memory: "12Gi"
        limits:
          cpu: 6
          memory: "14Gi"
      env:
        - name: INFERENCE_MODEL
          valueFrom:
            secretKeyRef:
              key: INFERENCE_MODEL
              name: llama-stack-inference-model-secret
              optional: true
        - name: VLLM_MAX_TOKENS
          value: "4096"
        - name: VLLM_URL
          valueFrom:
            secretKeyRef:
              key: VLLM_URL
              name: llama-stack-inference-model-secret
              optional: true
        - name: VLLM_TLS_VERIFY
          valueFrom:
            secretKeyRef:
              key: VLLM_TLS_VERIFY
              name: llama-stack-inference-model-secret
              optional: true
        - name: VLLM_API_TOKEN
          valueFrom:
            secretKeyRef:
              key: VLLM_API_TOKEN
              name: llama-stack-inference-model-secret
              optional: true
        - name: MILVUS_DB_PATH
          value: ~/milvus.db
        - name: FMS_ORCHESTRATOR_URL
          value: "http://localhost"
        - name: KUBEFLOW_PIPELINES_ENDPOINT
          valueFrom:
            configMapKeyRef:
              key: KUBEFLOW_PIPELINES_ENDPOINT
              name: kubeflow-ragas-config
              optional: true
        - name: KUBEFLOW_NAMESPACE
          valueFrom:
            configMapKeyRef:
              key: KUBEFLOW_NAMESPACE
              name: kubeflow-ragas-config
              optional: true
        - name: KUBEFLOW_BASE_IMAGE
          valueFrom:
            configMapKeyRef:
              key: KUBEFLOW_BASE_IMAGE
              name: kubeflow-ragas-config
              optional: true
        - name: KUBEFLOW_LLAMA_STACK_URL
          valueFrom:
            configMapKeyRef:
              key: KUBEFLOW_LLAMA_STACK_URL
              name: kubeflow-ragas-config
              optional: true
        - name: KUBEFLOW_RESULTS_S3_PREFIX
          valueFrom:
            configMapKeyRef:
              key: KUBEFLOW_RESULTS_S3_PREFIX
              name: kubeflow-ragas-config
              optional: true
        - name: KUBEFLOW_S3_CREDENTIALS_SECRET_NAME
          valueFrom:
            configMapKeyRef:
              key: KUBEFLOW_S3_CREDENTIALS_SECRET_NAME
              name: kubeflow-ragas-config
              optional: true
        - name: EMBEDDING_MODEL
          valueFrom:
            configMapKeyRef:
              key: EMBEDDING_MODEL
              name: kubeflow-ragas-config
              optional: true
        - name: KUBEFLOW_PIPELINES_TOKEN
          valueFrom:
            secretKeyRef:
              key: KUBEFLOW_PIPELINES_TOKEN
              name: kubeflow-pipelines-token
              optional: true
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              key: AWS_ACCESS_KEY_ID
              name: "<ragas_s3_credentials>"
              optional: true
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              key: AWS_SECRET_ACCESS_KEY
              name: "<ragas_s3_credentials>"
              optional: true
        - name: AWS_DEFAULT_REGION
          valueFrom:
            secretKeyRef:
              key: AWS_DEFAULT_REGION
              name: "<ragas_s3_credentials>"
              optional: true
      name: llama-stack
      port: 8321
    distribution:
      name: rh-dev
Deploy the Llama Stack distribution:
$ oc apply -f llama-stack-ragas-remote.yaml -
Wait for the deployment to complete:
$ oc get pods -w
Wait until the llama-stack-pod pod status shows Running.
Evaluating RAG system quality with Ragas metrics
Evaluate the quality of your RAG system by testing your setup with the example provided in the demo notebook. The demo outlines the basic steps for evaluating a RAG system with Ragas by using the Python client. You can run the demo notebook steps from a Jupyter environment.
Alternatively, you can submit an evaluation directly by using the HTTP methods of the Llama Stack API.
|
Important
|
The Llama Stack pod must be accessible from the Jupyter environment in the cluster, which might not be the case by default. To configure this setup, see Ingesting content into a Llama model. |
-
You have logged in to Open Data Hub.
-
You have created a project.
-
You have created a pipeline server.
-
You have created a secret for your AWS credentials in your project namespace.
-
You have deployed a Llama Stack distribution with the Ragas evaluation provider enabled (Inline or Remote). For more information, see Configuring the Ragas remote provider for production.
-
You have access to a workbench or notebook environment where you can run Python code.
-
From the Open Data Hub dashboard, click Projects.
-
Click the name of the project that contains the workbench.
-
Click the Workbenches tab.
-
If the status of the workbench is Running, skip to the next step.
If the status of the workbench is Stopped, in the Status column for the workbench, click Start.
The Status column changes from Stopped to Starting when the workbench server is starting, and then to Running when the workbench has successfully started.
-
Click the open icon next to the workbench.
Your Jupyter environment window opens.
-
On the toolbar, click the Git Clone icon and then select Clone a Repository.
-
In the Clone a repo dialog, enter the following URL:
https://github.com/trustyai-explainability/llama-stack-provider-ragas.git -
In the file browser, select the newly created
/llama-stack-provider-ragas/demos folder.
You see a Jupyter notebook named
basic_demo.ipynb. -
Double-click the
basic_demo.ipynb file to launch the Jupyter notebook.
The Jupyter notebook opens. You see code examples for the following tasks (an illustrative sketch of the benchmark registration and evaluation calls follows this list):
-
Run your Llama Stack distribution
-
Setup and Imports
-
Llama Stack Client Setup
-
Dataset Preparation
-
Dataset Registration
-
Benchmark Registration
-
Evaluation Execution
-
Inline vs Remote Side-by-side
-
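To give a sense of what the registration and execution cells do, the following is a minimal sketch that uses the llama-stack-client Python package. The service URL, benchmark and dataset IDs, provider ID, and configuration fields are illustrative assumptions; follow the demo notebook for the exact calls and values that the Ragas provider expects.

from llama_stack_client import LlamaStackClient

# Placeholder URL and identifiers for illustration only.
client = LlamaStackClient(
    base_url="http://<distribution_name>-service.<project_name>.svc.cluster.local:8321"
)

# Register a benchmark that points at a previously registered dataset (assumed IDs).
client.benchmarks.register(
    benchmark_id="ragas::basic_demo",
    dataset_id="ragas::basic_demo_dataset",
    scoring_functions=["faithfulness", "answer_relevancy"],
    provider_id="trustyai_ragas",  # assumed Ragas evaluation provider ID
)

# Start the evaluation; with the remote provider, this creates a pipeline run.
job = client.eval.run_eval(
    benchmark_id="ragas::basic_demo",
    benchmark_config={
        "eval_candidate": {
            "type": "model",
            "model": "<model_name>",
            "sampling_params": {"max_tokens": 256},
        },
    },
)
print(job.job_id)  # track this job on the pipeline Runs page for remote evaluations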
-
In the Jupyter notebook, run the code cells sequentially, up to and including the Evaluation Execution section.
-
Return to the Open Data Hub dashboard.
-
Click Develop & train → Pipelines → Runs. You might need to refresh the page to see the new evaluation job running.
-
Wait for the job to show Successful.
-
Return to the workbench and run the Results Display cell.
-
Inspect the displayed results. For remote evaluations, the results are also written to the S3 prefix that you configured; a retrieval sketch follows.
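For remote evaluations, the pipeline writes results under the S3 prefix that you configured in KUBEFLOW_RESULTS_S3_PREFIX. The following minimal sketch lists those result objects for inspection; it assumes that the boto3 package is installed, that the same AWS credentials stored in the S3 secret are available as environment variables, and that the bucket name and prefix shown are placeholders.

import os
import boto3

# Credentials are read from the standard AWS environment variables, the same values
# stored in the "<ragas_s3_credentials>" secret.
s3 = boto3.client("s3", region_name=os.environ.get("AWS_DEFAULT_REGION"))

bucket = "<bucket_name>"      # placeholder: your results bucket
prefix = "ragas-results/"     # placeholder: matches KUBEFLOW_RESULTS_S3_PREFIX

# List the evaluation result objects that the pipeline run wrote.
response = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
for obj in response.get("Contents", []):
    print(obj["Key"], obj["LastModified"])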