Newspaper classification [PyTorch]
Giskard is an open-source framework for testing all ML models, from LLMs to tabular models. Don't hesitate to give the project a star on GitHub ⭐️ if you find it useful!
In this notebook, you'll learn how to create comprehensive test suites for your model in a few lines of code, thanks to Giskard's open-source Python library.
Use-case:
Multinomial classification of a newspaper's topic
Model: Custom PyTorch text classification model.
Outline:
Detect vulnerabilities automatically with Giskard's scan
Automatically generate & curate a comprehensive test suite to test your model beyond accuracy-related metrics
Upload your model to the Giskard Hub to:
Debug failing tests & diagnose issues
Compare models & decide which one to promote
Share your results & collect feedback from non-technical team members
Install dependencies
Make sure to install the giskard library.
[1]:
%pip install giskard --upgrade
Import libraries
[1]:
import time
import torch
import numpy as np
import pandas as pd
from torch import nn
from torchtext.datasets import AG_NEWS
from torch.utils.data import DataLoader
from sklearn.metrics import accuracy_score
from torchtext.data.utils import get_tokenizer
from torch.utils.data.dataset import random_split
from torchtext.vocab import build_vocab_from_iterator
from torchtext.data.functional import to_map_style_dataset
from giskard import Model, Dataset, GiskardClient, scan, testing, Suite
Define constants
[2]:
DEVICE = torch.device("cpu")
TARGET_MAP = {0: "World", 1: "Sports", 2: "Business", 3: "Sci/Tech"}
TARGET_COLUMN_NAME = "label"
FEATURE_COLUMN_NAME = "text"
LOADERS_BATCH_SIZE = 64
Dataset preparation
Load data
[3]:
train_data, test_data = AG_NEWS()
Wrap dataset with Giskard
To prepare for the vulnerability scan, make sure to wrap your dataset using Giskard's Dataset class. More details here.
[4]:
raw_data = pd.DataFrame(
    {TARGET_COLUMN_NAME: TARGET_MAP[label_id - 1], FEATURE_COLUMN_NAME: text}
    for label_id, text in test_data
)
giskard_dataset = Dataset(
    df=raw_data,  # A pandas.DataFrame that contains the raw data (before all the pre-processing steps) and the actual ground truth variable.
    name="Test Dataset",  # Optional.
    target=TARGET_COLUMN_NAME,  # Ground truth variable.
)
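Note the `label_id - 1` shift above: the AG_NEWS iterator yields 1-based labels (1 to 4), while `TARGET_MAP` is 0-based. A quick standalone sanity check of the mapping:

```python
# AG_NEWS labels are 1-based (1..4); TARGET_MAP is 0-based, hence the `- 1` shift.
TARGET_MAP = {0: "World", 1: "Sports", 2: "Business", 3: "Sci/Tech"}

names = [TARGET_MAP[label_id - 1] for label_id in [1, 2, 3, 4]]
print(names)  # ['World', 'Sports', 'Business', 'Sci/Tech']
```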
Prepare dataloaders for training and evaluation
[5]:
# Simple English tokenizer provided by torchtext.
tokenizer = get_tokenizer("basic_english")
# Build a vocabulary from all the tokens we can find in the train data.
vocab = build_vocab_from_iterator((tokenizer(text) for _, text in train_data), specials=["<unk>"])
vocab.set_default_index(vocab["<unk>"])
def preprocess_text(raw_text):
    return vocab(tokenizer(raw_text))


def preprocess_label(raw_label):
    return int(raw_label) - 1


def collate_fn(batch):
    label_list, text_list, offsets = [], [], [0]

    for _label, _text in batch:
        label_list.append(preprocess_label(_label))
        processed_text = torch.tensor(preprocess_text(_text), dtype=torch.int64)
        text_list.append(processed_text)
        offsets.append(processed_text.size(0))

    label_list = torch.tensor(label_list, dtype=torch.int64)
    offsets = torch.tensor(offsets[:-1]).cumsum(dim=0)
    text_list = torch.cat(text_list)

    return label_list.to(DEVICE), text_list.to(DEVICE), offsets.to(DEVICE)
# Create the datasets
train_dataset = to_map_style_dataset(train_data)
test_dataset = to_map_style_dataset(test_data)
# We further divide the training data into a train and validation split.
train_split, valid_split = random_split(train_dataset, [0.95, 0.05])
# Prepare the data loaders
train_dataloader = DataLoader(train_split, batch_size=LOADERS_BATCH_SIZE, shuffle=True, collate_fn=collate_fn)
valid_dataloader = DataLoader(valid_split, batch_size=LOADERS_BATCH_SIZE, shuffle=True, collate_fn=collate_fn)
test_dataloader = DataLoader(test_dataset, batch_size=LOADERS_BATCH_SIZE, shuffle=True, collate_fn=collate_fn)
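The offsets logic in `collate_fn` can be tricky: the whole batch is concatenated into one flat tensor of token ids, and `offsets` records where each sequence starts. A minimal pure-Python sketch of the same computation (hypothetical token ids, no torch):

```python
# Hypothetical batch of already-tokenized sequences (token ids).
batch = [[4, 8, 15], [16, 23], [42, 7, 1, 9]]

# Mirror collate_fn: record each sequence length, then take the cumulative
# sum of all lengths *except the last* to get the start offset of each sequence.
lengths = [len(seq) for seq in batch]
offsets, running = [0], 0
for n in lengths[:-1]:
    running += n
    offsets.append(running)

# Flatten the batch into a single sequence of token ids.
flat = [tok for seq in batch for tok in seq]
print(offsets)     # [0, 3, 5]
print(len(flat))   # 9
```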
Model building
Define model
[6]:
class TextClassificationModel(nn.Module):
    def __init__(self, vocab_size, embed_dim, num_class):
        super(TextClassificationModel, self).__init__()
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim, sparse=False)
        self.fc = nn.Linear(embed_dim, num_class)
        self.init_weights()

    def init_weights(self):
        init_range = 0.5
        self.embedding.weight.data.uniform_(-init_range, init_range)
        self.fc.weight.data.uniform_(-init_range, init_range)
        self.fc.bias.data.zero_()

    def forward(self, text, offsets):
        embedded = self.embedding(text, offsets)
        return self.fc(embedded).softmax(dim=-1)


model = TextClassificationModel(vocab_size=len(vocab), embed_dim=64, num_class=4).to(DEVICE)
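`nn.EmbeddingBag` fuses the embedding lookup with pooling: with its default `mode="mean"`, it averages the embeddings of each sequence delimited by the offsets. A rough pure-Python equivalent, using a toy 2-dimensional embedding table with hypothetical values:

```python
# Toy embedding table: token id -> 2-d vector (hypothetical values).
table = {0: (1.0, 0.0), 1: (0.0, 1.0), 2: (1.0, 1.0)}

def embedding_bag_mean(flat_tokens, offsets):
    """Average the embeddings of each sequence, as EmbeddingBag(mode='mean') does."""
    bounds = list(offsets) + [len(flat_tokens)]
    pooled = []
    for start, end in zip(bounds, bounds[1:]):
        seq = flat_tokens[start:end]
        pooled.append(tuple(sum(table[t][d] for t in seq) / len(seq) for d in range(2)))
    return pooled

# Two sequences: [0, 1] starting at offset 0, and [2, 2] starting at offset 2.
print(embedding_bag_mean([0, 1, 2, 2], [0, 2]))  # [(0.5, 0.5), (1.0, 1.0)]
```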
Train and evaluate model
[ ]:
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=5)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1, gamma=0.1)
def train_epoch(dataloader):
    model.train()
    train_accuracy = total_count = 0

    for label, text, offsets in dataloader:
        optimizer.zero_grad()
        predicted_label = model(text, offsets)
        loss = criterion(predicted_label, label)
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), 0.1)
        optimizer.step()

        train_accuracy += (predicted_label.argmax(1) == label).sum().item()
        total_count += label.size(0)

    return train_accuracy / total_count


def validation_epoch(dataloader):
    model.eval()
    validation_accuracy = total_count = 0

    with torch.no_grad():
        for label, text, offsets in dataloader:
            predicted_label = model(text, offsets)
            validation_accuracy += (predicted_label.argmax(1) == label).sum().item()
            total_count += label.size(0)

    return validation_accuracy / total_count


total_accuracy = None
for epoch in range(1, 3):
    start_time = time.perf_counter()
    train_epoch(train_dataloader)
    accu_val = validation_epoch(valid_dataloader)

    # Decay the learning rate only if validation accuracy stopped improving.
    if total_accuracy is not None and total_accuracy > accu_val:
        scheduler.step()
    else:
        total_accuracy = accu_val

    print("-" * 65)
    print(f"| end of epoch {epoch:3d} | time: {time.perf_counter() - start_time:5.2f}s | valid accuracy {accu_val:8.3f} |")
    print("-" * 65)

test_accuracy = validation_epoch(test_dataloader)
print(f"Test accuracy: {test_accuracy:.3f}")
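The loop above decays the learning rate only when validation accuracy fails to beat the best value seen so far. That decision rule, isolated as a standalone sketch (hypothetical accuracy trace; `lr` and `gamma` mirror the `SGD`/`StepLR` settings above):

```python
def lr_after_schedule(accuracies, lr=5.0, gamma=0.1):
    """Multiply lr by gamma whenever validation accuracy drops below the best so far."""
    best = None
    for acc in accuracies:
        if best is not None and best > acc:
            lr *= gamma  # mirrors scheduler.step() with StepLR(step_size=1, gamma=0.1)
        else:
            best = acc
    return lr

# Accuracy improves, then dips: the dip triggers one decay of the learning rate.
print(lr_after_schedule([0.80, 0.85, 0.83]))  # 0.5
```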
Wrap model with Giskard
To prepare for the vulnerability scan, make sure to wrap your model using Giskard's Model class. You can choose to either wrap the prediction function (preferred option) or the model object. More details here.
[ ]:
def infer_predictions(_model: torch.nn.Module, _dataloader: DataLoader) -> np.ndarray:
    _model.eval()

    predictions = []
    for _, text, offsets in _dataloader:
        with torch.no_grad():
            probs = _model(text, offsets).cpu().detach().numpy()
        predictions.append(probs)

    return np.concatenate(predictions, axis=0)


def prediction_function(df) -> np.ndarray:
    # collate_fn expects (label, text) pairs, so insert a placeholder label
    # column when the input only contains the text feature.
    if df.shape[1] == 1:
        df.insert(0, TARGET_COLUMN_NAME, np.zeros(len(df)))

    data_iterator = df.itertuples(index=False)
    dataloader = DataLoader(to_map_style_dataset(data_iterator), batch_size=LOADERS_BATCH_SIZE, collate_fn=collate_fn)
    return infer_predictions(model, dataloader)
giskard_model = Model(
    model=prediction_function,  # A prediction function that encapsulates all the data pre-processing steps and that can be executed on the raw pandas.DataFrame.
    model_type="classification",  # Either regression, classification or text_generation.
    name="Simple News Classification Model",  # Optional.
    classification_labels=list(TARGET_MAP.values()),  # Their order MUST be identical to the prediction_function's output order.
    feature_names=["text"],  # Default: all columns of your dataset.
)
# Validate wrapped model.
wrapped_test_metric = accuracy_score(giskard_dataset.df[TARGET_COLUMN_NAME], giskard_model.predict(giskard_dataset).prediction)
print(f"Wrapped Test accuracy: {wrapped_test_metric:.3f}")
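Giskard expects a classification prediction function to return one row of probabilities per input, ordered exactly like `classification_labels`. A minimal, stdlib-only sketch of that contract with dummy logits (a real model produces the logits; the label names match this notebook):

```python
import math

LABELS = ["World", "Sports", "Business", "Sci/Tech"]

def softmax(logits):
    """Convert raw scores into a probability distribution (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Dummy logits for two inputs; a real model would produce these.
probs = [softmax(row) for row in [[2.0, 0.5, 0.1, 0.1], [0.0, 3.0, 0.0, 0.0]]]
for row in probs:
    assert abs(sum(row) - 1.0) < 1e-9  # each row must be a valid distribution

# The predicted label is the argmax over the (ordered) label list.
print(LABELS[max(range(4), key=probs[0].__getitem__)])  # World
```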
Detect vulnerabilities in your model
Scan your model for vulnerabilities with Giskard
Giskard's scan allows you to detect vulnerabilities in your model automatically. These include performance biases, unrobustness, data leakage, stochasticity, underconfidence, ethical issues, and more. For detailed information about the scan feature, please refer to our scan documentation.
[ ]:
results = scan(giskard_model, giskard_dataset)
[11]:
display(results)
Generate comprehensive test suites automatically for your model
Generate test suites from the scan
The objects produced by the scan can be used as fixtures to generate a test suite that integrates all detected vulnerabilities. Test suites allow you to evaluate and validate your model's performance, ensuring that it behaves as expected on a set of predefined test cases, and to identify any regressions or issues that might arise during development or updates.
[12]:
test_suite = results.generate_test_suite("My first test suite")
test_suite.run()
Executed 'Invariance to "Add typos"' with arguments {'model': <giskard.models.function.PredictionFunctionModel object at 0x151b384f0>, 'dataset': <giskard.datasets.base.Dataset object at 0x151ae6f20>, 'transformation_function': <giskard.scanner.robustness.text_transformations.TextTypoTransformation object at 0x14b462f80>, 'threshold': 0.95, 'output_sensitivity': 0.05}:
Test failed
Metric: 0.89
- [TestMessageLevel.INFO] 7587 rows were perturbed
[12]:
Customize your suite by loading objects from the Giskard catalog
The Giskard open-source catalog enables you to load:
Tests such as metamorphic, performance, prediction & data drift, statistical tests, etc
Slicing functions such as detectors of toxicity, hate, emotion, etc
Transformation functions such as generators of typos, paraphrase, style tune, etc
To create custom tests, refer to this page.
For demo purposes, we will load a simple unit test (test_f1) that checks if the test F1 score is above the given threshold. For more examples of tests and functions, refer to the Giskard catalog.
[ ]:
test_suite.add_test(testing.test_f1(model=giskard_model, dataset=giskard_dataset, threshold=0.7)).run()
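For intuition, F1 is the harmonic mean of precision and recall. The actual metric is computed by Giskard's `test_f1`; below is only an illustrative hand-rolled version for a single class treated as "positive":

```python
def f1_for_class(y_true, y_pred, positive):
    """One-vs-rest F1 for a single class (illustrative sketch, not Giskard's implementation)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

y_true = ["Sports", "World", "Sports", "Business"]
y_pred = ["Sports", "Sports", "Sports", "Business"]
print(round(f1_for_class(y_true, y_pred, "Sports"), 3))  # 0.8
```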
Debug and interact with your tests in the Giskard Hub
At this point, you've created a test suite that is highly specific to your domain & use-case. Failing tests can be a pain to debug, which is why we encourage you to head over to the Giskard Hub.
Play around with a demo of the Giskard Hub on HuggingFace Spaces using this link.
More than just debugging tests, the Giskard Hub allows you to:
Compare models to decide which model to promote
Automatically create additional domain-specific tests through our automated model insights feature
Share your test results with team members and decision makers
The Giskard Hub can be deployed easily on HuggingFace Spaces.
Here's a sneak peek of automated model insights on a credit scoring classification model.
Upload your test suite to the Giskard Hub
The entry point to the Giskard Hub is the upload of your test suite. Uploading the test suite will automatically save the model, dataset, tests, slicing & transformation functions to the Giskard Hub.
[ ]:
# Create a Giskard client after having installed the Giskard Hub (see documentation).
api_key = "<Giskard API key>"  # This can be found in the Settings tab of the Giskard Hub.
# hf_token = "<Your Giskard Space token>"  # If the Giskard Hub is installed on a HF Space, this can be found in the Settings tab of the Giskard Hub.
client = GiskardClient(
url="http://localhost:19000", # Option 1: Use URL of your local Giskard instance.
# url="<URL of your Giskard hub Space>", # Option 2: Use URL of your remote HuggingFace space.
key=api_key,
# hf_token=hf_token # Use this token to access a private HF space.
)
project_key = "my_project"
my_project = client.create_project(project_key, "PROJECT_NAME", "DESCRIPTION")
# Upload to the project you just created
test_suite.upload(client, project_key)
Download a test suite from the Giskard Hub
After curating your test suites with additional tests on the Giskard Hub, you can easily download them back into your environment. This allows you to:
Check for regressions after training a new model
Automate the test suite execution in a CI/CD pipeline
Compare several models during the prototyping phase
[ ]:
test_suite_downloaded = Suite.download(client, project_key, suite_id=...)
test_suite_downloaded.run()