
Newspaper classification [PyTorch]

Giskard is an open-source framework for testing all ML models, from LLMs to tabular models. Don’t hesitate to give the project a star on GitHub ⭐️ if you find it useful!

In this notebook, you’ll learn how to create comprehensive test suites for your model in a few lines of code, thanks to Giskard’s open-source Python library.

Use-case:

  • Multinomial classification of a newspaper’s topic

  • Model: Custom PyTorch text classification model.

  • Dataset: AG News (news articles labeled with one of four topics: World, Sports, Business, Sci/Tech)

Outline:

  • Detect vulnerabilities automatically with Giskard’s scan

  • Automatically generate & curate a comprehensive test suite to test your model beyond accuracy-related metrics

Install dependencies

Make sure to install the giskard library:

[ ]:
%pip install giskard --upgrade

Import libraries

[1]:
import time

import torch
import numpy as np
import pandas as pd
from torch import nn
from torchtext.datasets import AG_NEWS
from torch.utils.data import DataLoader
from sklearn.metrics import accuracy_score
from torchtext.data.utils import get_tokenizer
from torch.utils.data.dataset import random_split
from torchtext.vocab import build_vocab_from_iterator
from torchtext.data.functional import to_map_style_dataset

from giskard import Model, Dataset, scan, testing

Define constants

[2]:
DEVICE = torch.device("cpu")

TARGET_MAP = {0: "World", 1: "Sports", 2: "Business", 3: "Sci/Tech"}
TARGET_COLUMN_NAME = "label"
FEATURE_COLUMN_NAME = "text"

LOADERS_BATCH_SIZE = 64

Dataset preparation

Load data

[3]:
train_data, test_data = AG_NEWS()
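
AG_NEWS yields (label, text) pairs whose integer labels run from 1 to 4, which is why the preprocessing below shifts them down by one. As a quick, purely illustrative peek at the raw data:

[ ]:
# Each example is a (label, text) pair; labels are integers in 1..4.
label, text = next(iter(test_data))
print(label, text[:80])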

Wrap dataset with Giskard

To prepare for the vulnerability scan, make sure to wrap your dataset using Giskard’s Dataset class. More details here.

[ ]:
raw_data = pd.DataFrame({TARGET_COLUMN_NAME: TARGET_MAP[label_id - 1], FEATURE_COLUMN_NAME: text}
                        for label_id, text in test_data)
giskard_dataset = Dataset(
    df=raw_data,  # A pandas.DataFrame that contains the raw data (before all the pre-processing steps) and the actual ground truth variable
    name="Test Dataset",  # Ground truth variable
    target=TARGET_COLUMN_NAME,  # Optional
)
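
As a quick sanity check (illustrative), you can peek at the wrapped dataframe to confirm that the integer labels were mapped to human-readable topic names:

[ ]:
# The label column should now contain readable topics, not integers.
print(giskard_dataset.df.head())
print(giskard_dataset.df[TARGET_COLUMN_NAME].unique())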

Prepare dataloaders for training and evaluation

[5]:
# Simple English tokenizer provided by torchtext.
tokenizer = get_tokenizer("basic_english")

# Build a vocabulary from all the tokens we can find in the train data.
vocab = build_vocab_from_iterator((tokenizer(text) for _, text in train_data), specials=["<unk>"])
vocab.set_default_index(vocab["<unk>"])


def preprocess_text(raw_text):
    return vocab(tokenizer(raw_text))


def preprocess_label(raw_label):
    return int(raw_label) - 1


def collate_fn(batch):
    label_list, text_list, offsets = [], [], [0]

    for _label, _text in batch:
        label_list.append(preprocess_label(_label))
        processed_text = torch.tensor(preprocess_text(_text), dtype=torch.int64)
        text_list.append(processed_text)
        offsets.append(processed_text.size(0))

    label_list = torch.tensor(label_list, dtype=torch.int64)
    # Turn per-document token counts into start offsets via a cumulative sum.
    offsets = torch.tensor(offsets[:-1]).cumsum(dim=0)
    # Concatenate all documents into one flat tensor of token ids.
    text_list = torch.cat(text_list)

    return label_list.to(DEVICE), text_list.to(DEVICE), offsets.to(DEVICE)


# Create the datasets
train_dataset = to_map_style_dataset(train_data)
test_dataset = to_map_style_dataset(test_data)

# We further divide the training data into a train and validation split.
train_split, valid_split = random_split(train_dataset, [0.95, 0.05])

# Prepare the data loaders
train_dataloader = DataLoader(train_split, batch_size=LOADERS_BATCH_SIZE, shuffle=True, collate_fn=collate_fn)
valid_dataloader = DataLoader(valid_split, batch_size=LOADERS_BATCH_SIZE, shuffle=True, collate_fn=collate_fn)
test_dataloader = DataLoader(test_dataset, batch_size=LOADERS_BATCH_SIZE, shuffle=True, collate_fn=collate_fn)
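
The collate function flattens every document in the batch into one long tensor of token ids plus per-document start offsets, which is the input format expected by nn.EmbeddingBag in the next section. A short batch inspection (illustrative sketch) makes this concrete:

[ ]:
# Inspect one batch produced by collate_fn (illustrative).
labels, texts, offsets = next(iter(train_dataloader))
print(labels.shape)   # (batch_size,) — class indices in 0..3
print(texts.shape)    # (total_tokens_in_batch,) — all token ids concatenated
print(offsets.shape)  # (batch_size,) — start position of each document in `texts`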

Model building

Define model

[7]:
class TextClassificationModel(nn.Module):
    def __init__(self, vocab_size, embed_dim, num_class):
        super(TextClassificationModel, self).__init__()
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim)
        self.fc = nn.Linear(embed_dim, num_class)
        self.init_weights()

    def init_weights(self):
        init_range = 0.5
        self.embedding.weight.data.uniform_(-init_range, init_range)
        self.fc.weight.data.uniform_(-init_range, init_range)
        self.fc.bias.data.zero_()

    def forward(self, text, offsets):
        embedded = self.embedding(text, offsets)
        # Softmax converts logits to class probabilities (Giskard's classification wrapper expects probabilities).
        return self.fc(embedded).softmax(dim=-1)


model = TextClassificationModel(vocab_size=len(vocab), embed_dim=64, num_class=4).to(DEVICE)
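
Before training, a quick smoke test (illustrative; the sample sentence is made up) confirms the shapes line up, with the model mapping one document to a probability distribution over the four topics:

[ ]:
# Run the untrained model on a single made-up document.
sample = torch.tensor(preprocess_text("stocks rally on wall street"), dtype=torch.int64)
probs = model(sample, torch.tensor([0]))
print(probs.shape)  # (1, 4) — probabilities over the 4 topics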

Train and evaluate model

[ ]:
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=5)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1, gamma=0.1)


def train_epoch(dataloader):
    model.train()

    train_accuracy = total_count = 0
    for label, text, offset in dataloader:
        optimizer.zero_grad()
        predicted_label = model(text, offset)
        loss = criterion(predicted_label, label)
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), 0.1)
        optimizer.step()
        train_accuracy += (predicted_label.argmax(1) == label).sum().item()
        total_count += label.size(0)

    return train_accuracy / total_count


def validation_epoch(dataloader):
    model.eval()

    validation_accuracy = total_count = 0
    with torch.no_grad():
        for label, text, offsets in dataloader:
            predicted_label = model(text, offsets)
            validation_accuracy += (predicted_label.argmax(1) == label).sum().item()
            total_count += label.size(0)

    return validation_accuracy / total_count


total_accuracy = None
for epoch in range(1, 3):
    start_time = time.perf_counter()

    train_epoch(train_dataloader)
    accu_val = validation_epoch(valid_dataloader)

    if total_accuracy is not None and total_accuracy > accu_val:
        scheduler.step()
    else:
        total_accuracy = accu_val

    print("-" * 65)
    print(f"| end of epoch {epoch: .3f} | time: {time.perf_counter() - start_time :5.2f}s | valid accuracy {accu_val:8.3f} ")
    print("-" * 65)


test_accuracy = validation_epoch(test_dataloader)
print('Test accuracy {:8.3f}'.format(test_accuracy))

Wrap model with Giskard

To prepare for the vulnerability scan, make sure to wrap your model using Giskard’s Model class. You can choose to either wrap the prediction function (preferred option) or the model object. More details here.

[ ]:
def infer_predictions(_model: torch.nn.Module, _dataloader: DataLoader) -> np.ndarray:
    _model.eval()
    pred = list()

    for _, text, offsets in _dataloader:
        with torch.no_grad():
            probs = _model(text, offsets).cpu().detach().numpy()

        pred.append(probs)

    pred = np.concatenate(pred, axis=0)
    return pred


def prediction_function(df) -> np.ndarray:
    # Placeholder for label.
    if df.shape[1] == 1:
        df.insert(0, TARGET_COLUMN_NAME, np.zeros(len(df)))

    data_iterator = df.itertuples(index=False)
    dataloader = DataLoader(to_map_style_dataset(data_iterator), batch_size=LOADERS_BATCH_SIZE, collate_fn=collate_fn)
    predictions = infer_predictions(model, dataloader)

    return predictions


giskard_model = Model(
    model=prediction_function,  # A prediction function that encapsulates all the data pre-processing steps and that could be executed with the dataset used by the scan.
    model_type="classification",  # Either regression, classification or text_generation.
    name="Simple News Classification Model",  # Optional.
    classification_labels=list(TARGET_MAP.values()),  # Their order MUST be identical to the prediction_function's output order.
    feature_names=["text"],  # Default: all columns of your dataset.
)

# Validate wrapped model.
wrapped_test_metric = accuracy_score(giskard_dataset.df[TARGET_COLUMN_NAME], giskard_model.predict(giskard_dataset).prediction)
print(f"Wrapped Test accuracy: {wrapped_test_metric:.3f}")

Detect vulnerabilities in your model

Scan your model for vulnerabilities with Giskard

Giskard’s scan allows you to detect vulnerabilities in your model automatically. These include performance biases, unrobustness, data leakage, stochasticity, underconfidence, ethical issues, and more. For detailed information about the scan feature, please refer to our scan documentation.

[ ]:
results = scan(giskard_model, giskard_dataset)
[11]:
display(results)
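
If you want to share the report outside the notebook, scan results can also be exported to a standalone HTML file (a minimal sketch; the filename is arbitrary):

[ ]:
# Save the scan report as a standalone HTML file.
results.to_html("scan_report.html")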

Generate comprehensive test suites automatically for your model

Generate test suites from the scan

The objects produced by the scan can be used as fixtures to generate a test suite that integrates all detected vulnerabilities. Test suites allow you to evaluate and validate your model’s performance, ensure that it behaves as expected on a set of predefined test cases, and identify any regressions or issues that might arise during development or updates.

[12]:
test_suite = results.generate_test_suite("My first test suite")
test_suite.run()
2024-05-29 14:00:13,219 pid:68530 MainThread giskard.datasets.base INFO     Casting dataframe columns from {'text': 'object'} to {'text': 'object'}
2024-05-29 14:00:13,225 pid:68530 MainThread giskard.utils.logging_utils INFO     Predicted dataset with shape (7600, 2) executed in 0:00:00.016051
2024-05-29 14:00:13,689 pid:68530 MainThread giskard.datasets.base INFO     Casting dataframe columns from {'text': 'object'} to {'text': 'object'}
2024-05-29 14:00:13,922 pid:68530 MainThread giskard.utils.logging_utils INFO     Predicted dataset with shape (7600, 2) executed in 0:00:00.253734
2024-05-29 14:00:13,928 pid:68530 MainThread giskard.utils.logging_utils INFO     Perturb and predict data executed in 0:00:00.729718
2024-05-29 14:00:13,930 pid:68530 MainThread giskard.utils.logging_utils INFO     Compare and predict the data executed in 0:00:00.000818
Executed 'Invariance to “Add typos”' with arguments {'model': <giskard.models.function.PredictionFunctionModel object at 0x32a0ec970>, 'dataset': <giskard.datasets.base.Dataset object at 0x30156d9f0>, 'transformation_function': <giskard.scanner.robustness.text_transformations.TextTypoTransformation object at 0x303729db0>, 'threshold': 0.95, 'output_sensitivity': 0.05}:
               Test failed
               Metric: 0.92
                - [INFO] 7591 rows were perturbed

2024-05-29 14:00:13,932 pid:68530 MainThread giskard.core.suite INFO     Executed test suite 'My first test suite'
2024-05-29 14:00:13,932 pid:68530 MainThread giskard.core.suite INFO     result: failed
2024-05-29 14:00:13,932 pid:68530 MainThread giskard.core.suite INFO     Invariance to “Add typos” ({'model': <giskard.models.function.PredictionFunctionModel object at 0x32a0ec970>, 'dataset': <giskard.datasets.base.Dataset object at 0x30156d9f0>, 'transformation_function': <giskard.scanner.robustness.text_transformations.TextTypoTransformation object at 0x303729db0>, 'threshold': 0.95, 'output_sensitivity': 0.05}): {failed, metric=0.917138716901594}
[12]:
Test suite failed.
Test: Invariance to “Add typos” (Failed)
Measured Metric = 0.91714
model: Simple News Classification Model
dataset: Test Dataset
transformation_function: Add typos
threshold: 0.95
output_sensitivity: 0.05
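
Since the generated suite exposes the model as an input, the same checks can be re-run against a retrained model later (a hedged sketch; improved_model stands in for any other wrapped giskard.Model):

[ ]:
# Re-run the same suite against another wrapped model (hypothetical `improved_model`).
test_suite.run(model=improved_model)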

Customize your suite by loading objects from the Giskard catalog

The Giskard open-source catalog enables you to load:

  • Tests such as metamorphic, performance, prediction & data drift, statistical tests, etc.

  • Slicing functions such as detectors of toxicity, hate, emotion, etc.

  • Transformation functions such as generators of typos, paraphrase, style tune, etc.

To create custom tests, refer to this page.
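
For instance, a custom test is just a function decorated with @giskard.test that returns a TestResult (a minimal sketch; the test name and threshold below are made up for illustration):

[ ]:
import giskard


@giskard.test(name="Minimum dataset size")
def test_min_rows(dataset: giskard.Dataset, threshold: int = 1000):
    # Hypothetical check: pass when the dataset holds at least `threshold` rows.
    n_rows = len(dataset.df)
    return giskard.TestResult(passed=n_rows >= threshold, metric=n_rows)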

For demo purposes, we will load a simple unit test (test_f1) that checks if the test F1 score is above the given threshold. For more examples of tests and functions, refer to the Giskard catalog.

[ ]:
test_suite.add_test(testing.test_f1(model=giskard_model, dataset=giskard_dataset, threshold=0.7)).run()