Upload and Evaluate your Model

How to Structure your Data Points

The input has to be a JSON file (maximum file size is 4GB) with a list of data points. Each data point is a dictionary with two keys. The first key, prediction, holds the model's output for this data point, i.e., the predicted probability for each class. The second key, label, holds the ground truth as the index of the correct class in the prediction vector.

To download a set of example predictions from a ResNet-18 trained on CIFAR-10 and evaluated on CIFAR-10 and CIFAR-100, click here.

Example

[
  {"prediction": [0.1, 0.2, 0.7], "label": 2},
  {"prediction": [0.91, 0.04, 0.05], "label": 0},
  {"prediction": [0.433, 0.522, 0.045], "label": 1}
]
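Such a file can be written directly from Python with the standard json module. The following minimal sketch reproduces the example above; the file name is illustrative:

import json

datapoints = [
    {"prediction": [0.1, 0.2, 0.7], "label": 2},
    {"prediction": [0.91, 0.04, 0.05], "label": 0},
    {"prediction": [0.433, 0.522, 0.045], "label": 1},
]

# Write the list of data points as a single JSON file for upload.
with open("upload.json", "w") as f:
    json.dump(datapoints, f)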

Additionally, the predictions for multiple datasets can be uploaded at once. In this case, additional metadata can be provided: the class_names for the model's output classes, the class_names for the individual datasets, and the type of each dataset. If the class names for a dataset are not specified, they are assumed to match the class names of the model, unless the dataset type is 'out-of-distribution', in which case only the label indices are used in the report.

Three options are currently available for the dataset type: 'in-distribution' for standard test sets (the default), 'out-of-distribution' for novel data that the model is not designed to process, and 'corrupted' for corrupted or perturbed data.

Example

{
  "CIFAR-3": [
    {"prediction": [0.1, 0.2, 0.7], "label": 2},
    {"prediction": [0.91, 0.04, 0.05], "label": 0},
    {"prediction": [0.433, 0.522, 0.045], "label": 1}
  ],
  "CIFAR-4": [
     {"prediction": [0.3, 0.2, 0.4, 0.1], "label": 3},
     {"prediction": [0.31, 0.04, 0.05, 0.6], "label": 1},
     {"prediction": [0.233, 0.522, 0.023, 0.222], "label": 2}
  ]},
  "meta_information": {
      "class_names_model": ["cat", "dog", "elephant"],
      "class_names_dataset": {
          "CIFAR-4": ["car", "truck", "bus", "bike"]
      },
      "dataset_type": {
          "CIFAR-4": "out-of-distribution"
      }
  }
}
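Before uploading, it can help to sanity-check the file locally. The sketch below follows the format described above; the tolerance on the probability sum and the file name are our assumptions, not Robuscope requirements:

import json
import math

def validate_datapoints(datapoints):
    # Check each data point against the documented structure.
    for i, dp in enumerate(datapoints):
        probs, label = dp["prediction"], dp["label"]
        assert all(p >= 0 for p in probs), f"point {i}: negative probability"
        # Assumed tolerance; the docs only state that the vector holds class probabilities.
        assert math.isclose(sum(probs), 1.0, abs_tol=1e-3), f"point {i}: probabilities do not sum to 1"
        assert 0 <= label < len(probs), f"point {i}: label index out of range"

with open("upload.json") as f:
    data = json.load(f)

# A single-dataset upload is a plain list; a multi-dataset upload maps
# dataset names to lists, plus an optional "meta_information" entry.
if isinstance(data, list):
    validate_datapoints(data)
else:
    for name, points in data.items():
        if name != "meta_information":
            validate_datapoints(points)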

Tutorial

Step 1: Create an upload file

To analyze your AI model in terms of its robustness, you first need to create an upload file in JSON format, which contains result data generated by your AI or neural network. While there is no required minimum, we recommend including at least 30 data points per class in the dataset. The more data points your file contains and the more representative your data is overall, the more meaningful the result of the analysis will be.

Please note: you do not need to provide sensitive data, such as your AI algorithm or the underlying data points themselves. All Robuscope needs is a set of result data, i.e., the predictions of the AI model. Instead of real data, this set can also contain sample data or anonymized data. For further information on how to structure your data points, please see the example on the upload page.
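As an illustration, the following sketch collects softmax predictions from a PyTorch classifier and writes them in the format described above; the model, data loader, and file name are placeholders you would replace with your own:

import json
import torch
import torch.nn.functional as F

def write_upload_file(model, loader, path="upload.json"):
    # Collect one data point per test sample: the predicted class
    # probabilities plus the ground-truth label index.
    model.eval()
    datapoints = []
    with torch.no_grad():
        for inputs, labels in loader:
            probs = F.softmax(model(inputs), dim=1)
            for p, y in zip(probs.tolist(), labels.tolist()):
                datapoints.append({"prediction": p, "label": int(y)})
    with open(path, "w") as f:
        json.dump(datapoints, f)

# Usage (with your own model and test loader):
# write_upload_file(my_model, test_loader)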

Step 2: Upload your file

Once you have created a file with the result data from your AI model, upload it via the Robuscope interface. Browse to the upload page and click on »Test your model« at the top right of the page. The upload page of Robuscope opens. In the section »Your json file«, click on »Browse« to select the file you created on your computer. The name of the selected file is shown in the white box next to the »Browse« button. If the file shown is your desired test file, click on »Evaluate« to start the analysis.

Step 3: Analysis and results

Robuscope now analyzes your data; this may take a few seconds. The application determines how reliable your results are by evaluating the model's predictions against different safety-related metrics. Based on this analysis, you will receive a report with suggestions on which common methods of uncertainty quantification might help you to improve your results, providing a more reliable basis for decisions made with your AI. If you wish to keep the report as a self-contained, interactive HTML file, click on »Download report« at the bottom of the page. If you would like assistance in interpreting the results, or are interested in a more detailed analysis, get in touch with us and we will arrange a non-binding meeting with you.

Get Help

If you need help with preparing your data or with using Robuscope in general, don't hesitate to contact us.