Open In Colab View Notebook on GitHub

Movie Review Sentiment Classification with DISTILL-BERT [scikit-learn + torch preprocessing]¶

Giskard is an open-source framework for testing all ML models, from LLMs to tabular models. Don't hesitate to give the project a star on GitHub ⭐️ if you find it useful!

In this notebook, you'll learn how to create comprehensive test suites for your model in a few lines of code, thanks to Giskard's open-source Python library.

Use-case: binary sentiment classification of movie reviews, using DistilBERT sentence embeddings and a scikit-learn logistic regression classifier.

Outline:

  • Detect vulnerabilities automatically with Giskard's scan

  • Automatically generate & curate a comprehensive test suite to test your model beyond accuracy-related metrics

  • Upload your model to the Giskard Hub to:

    • Debug failing tests & diagnose issues

    • Compare models & decide which one to promote

    • Share your results & collect feedback from non-technical team members

Install dependencies¶

Make sure to install the giskard library.

[ ]:
%pip install giskard --upgrade

Import libraries¶

[1]:
from pathlib import Path
from urllib.request import urlretrieve

import numpy as np
import pandas as pd
import torch
import transformers as ppb
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

from giskard import Model, Dataset, scan, testing, Suite
from giskard.client.giskard_client import GiskardClient

Define constants¶

[2]:
# Constants.
TARGET_COLUMN = "label"
TEXT_COLUMN = "text"

PRETRAINED_WEIGHTS_NAME = "distilbert-base-uncased"

RANDOM_STATE = 0

# Paths.
DATA_URL = "ftp://sys.giskard.ai/pub/unit_test_resources/movie_review_sentiment_classification_dataset/train.jsonl"
DATA_PATH = Path.home() / ".giskard" / "movie_review_sentiment_classification_dataset" / "train.jsonl"

Dataset preparation¶

Load data¶

[ ]:
def fetch_from_ftp(url: str, file: Path) -> None:
    if not file.parent.exists():
        file.parent.mkdir(parents=True, exist_ok=True)

    if not file.exists():
        print(f"Downloading data from {url}")
        urlretrieve(url, file)

    print(f"Data was loaded!")


def load_data(**kwargs) -> pd.DataFrame:
    """Load data."""
    fetch_from_ftp(DATA_URL, DATA_PATH)

    df = pd.read_json(DATA_PATH, lines=True, **kwargs)
    df = df.drop(columns="label_text")

    return df


reviews_df = load_data(nrows=2000)

Train-Test split¶

[4]:
train_df, test_df = train_test_split(reviews_df, random_state=RANDOM_STATE)

Wrap dataset with Giskard¶

To prepare for the vulnerability scan, make sure to wrap your dataset using Giskard's Dataset class. More details here.

[5]:
giskard_dataset = Dataset(
    df=test_df,
    # A pandas.DataFrame that contains the raw data (before all the pre-processing steps) and the actual ground truth variable (target).
    target=TARGET_COLUMN,  # Ground truth variable.
    name="Movie reviews dataset"  # Optional.
)
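
Giskard infers the type of each feature column from the dataframe. If you want to be explicit, the Dataset wrapper also accepts an optional column_types argument; the variant below is a minimal sketch assuming the current Dataset signature (the column_types mapping is the only addition):

# Variant: declare the text column type explicitly instead of relying on inference.
giskard_dataset = Dataset(
    df=test_df,
    target=TARGET_COLUMN,
    name="Movie reviews dataset",
    column_types={TEXT_COLUMN: "text"},  # Optional: map feature columns to "numeric", "category" or "text".
)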

Model building¶

Define preprocessing steps¶

[ ]:
embedder = ppb.DistilBertModel.from_pretrained(PRETRAINED_WEIGHTS_NAME)
tokenizer = ppb.DistilBertTokenizer.from_pretrained(PRETRAINED_WEIGHTS_NAME)


def get_max_sequence_length(corpus: pd.Series) -> int:
    """Define a length of the longest tokenized document."""
    max_length = max(len(tokenizer.encode(document, add_special_tokens=True)) for document in corpus)
    return max_length


max_sequence_length = get_max_sequence_length(reviews_df[TEXT_COLUMN])


def tokenize_documents(corpus: pd.Series) -> torch.Tensor:
    """Tokenization step."""
    tokens_matrix = corpus.apply(lambda document: tokenizer.encode(document, add_special_tokens=True)).values
    tokens_matrix = torch.tensor(
        [tokens_row + [0] * (max_sequence_length - len(tokens_row)) for tokens_row in tokens_matrix])
    return tokens_matrix


def get_documents_embeddings(tokens_matrix: torch.Tensor) -> np.ndarray:
    """Calculate sentence embeddings using distill-BERT model."""
    attention_mask = torch.where(tokens_matrix != 0, 1, 0)

    embedder.eval()
    with torch.no_grad():
        tokens_representations = embedder(tokens_matrix, attention_mask=attention_mask)

    # Take only the [CLS] token embeddings, which represent the whole-sentence embedding.
    documents_embeddings = tokens_representations[0][:, 0, :].numpy()
    return documents_embeddings


def preprocess_text(df: pd.DataFrame) -> np.ndarray:
    """Preprocessing function to be also used in 'giskard.Model'."""
    return get_documents_embeddings(tokenize_documents(df[TEXT_COLUMN]))


X_train, Y_train = preprocess_text(train_df), train_df.label
X_test, Y_test = preprocess_text(test_df), test_df.label

Build estimator¶

[ ]:
classifier = LogisticRegression()
classifier.fit(X_train, Y_train)

# Validate model.
train_score = classifier.score(X_train, Y_train)
print(f"Train accuracy: {train_score: .2f}")

test_score = classifier.score(X_test, Y_test)
print(f"Test accuracy: {test_score: .2f}")

Wrap model with Giskard¶

To prepare for the vulnerability scan, make sure to wrap your model using Giskard's Model class. You can choose to either wrap the prediction function (preferred option) or the model object. More details here.

[ ]:
def prediction_function(df: pd.DataFrame) -> np.ndarray:
    x = preprocess_text(df)
    return classifier.predict_proba(x)


giskard_model = Model(
    model=prediction_function,
    # A prediction function that encapsulates all the data pre-processing steps and that could be executed with the dataset used by the scan.
    model_type="classification",  # Either regression, classification or text_generation.
    name="Movie reviews sentiment classifier",  # Optional.
    classification_labels=classifier.classes_.tolist(),
    # Their order MUST be identical to the prediction_function's output order.
    feature_names=[TEXT_COLUMN],  # Default: all columns of your dataset.
    # classification_threshold=0.5  # Default: 0.5.
)

Y_test_pred_wrapped = giskard_model.predict(giskard_dataset).prediction
wrapped_test_score = accuracy_score(Y_test, Y_test_pred_wrapped)
print(f"Wrapped test accuracy: {wrapped_test_score: .2f}")

Detect vulnerabilities in your model¶

Scan your model for vulnerabilities with Giskard¶

Giskard's scan allows you to detect vulnerabilities in your model automatically. These include performance biases, unrobustness, data leakage, stochasticity, underconfidence, ethical issues, and more. For detailed information about the scan feature, please refer to our scan documentation.

[ ]:
results = scan(giskard_model, giskard_dataset)
[10]:
display(results)
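
If you run the scan in a script or CI job rather than a notebook, you can persist the report instead of displaying it. A minimal sketch, assuming the HTML export available in recent Giskard versions:

# Save the scan report so it can be reviewed later or attached as a CI artifact.
results.to_html("scan_report.html")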

Generate comprehensive test suites automatically for your model¶

Generate test suites from the scan¶

The objects produced by the scan can be used as fixtures to generate a test suite that integrates all detected vulnerabilities. Test suites allow you to evaluate and validate your model's performance, ensuring that it behaves as expected on a set of predefined test cases, and to identify any regressions or issues that might arise during development or updates.

[11]:
test_suite = results.generate_test_suite("My first test suite")
test_suite.run()
Executed 'Overconfidence on data slice “`avg_whitespace(text)` >= 0.172”' with arguments {'model': <giskard.models.function.PredictionFunctionModel object at 0x14aaccee0>, 'dataset': <giskard.datasets.base.Dataset object at 0x14aaa4550>, 'slicing_function': <giskard.slicing.text_slicer.MetadataSliceFunction object at 0x14d3fd870>, 'threshold': 0.4033333333333333, 'p_threshold': 0.5}:
               Test failed
               Metric: 0.43


Executed 'Precision on data slice “`text` contains "movie"”' with arguments {'model': <giskard.models.function.PredictionFunctionModel object at 0x14aaccee0>, 'dataset': <giskard.datasets.base.Dataset object at 0x14aaa4550>, 'slicing_function': <giskard.slicing.slice.QueryBasedSliceFunction object at 0x150f8c1f0>, 'threshold': 0.808091286307054}:
               Test failed
               Metric: 0.64


[11]:
Test suite failed. To debug your failing test and diagnose the issue, please run the Giskard hub (see documentation)
Test Overconfidence on data slice “`avg_whitespace(text)` >= 0.172”
Measured Metric = 0.43103 Failed
model 47a0c059-70ff-456b-802d-ac8faf08081f
dataset Movie reviews dataset
slicing_function `avg_whitespace(text)` >= 0.172
threshold 0.4033333333333333
p_threshold 0.5
Test Precision on data slice “`text` contains "movie"”
Measured Metric = 0.63636 Failed
model 47a0c059-70ff-456b-802d-ac8faf08081f
dataset Movie reviews dataset
slicing_function `text` contains "movie"
threshold 0.808091286307054

Customize your suite by loading objects from the Giskard catalog¶

The Giskard open-source catalog enables you to load:

  • Tests such as metamorphic, performance, prediction & data drift, statistical tests, etc.

  • Slicing functions such as detectors of toxicity, hate, emotion, etc.

  • Transformation functions such as generators of typos, paraphrases, style tuning, etc.

To create custom tests, refer to this page.
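
For illustration only, a custom test might look like the sketch below. It uses the @test decorator with a recall metric and an arbitrary threshold; the test name, metric choice, and boolean return value are assumptions based on recent Giskard versions (a TestResult object can also be returned), not part of this notebook.

from sklearn.metrics import recall_score

from giskard import test


@test(name="Recall above threshold")
def test_recall(model: Model, dataset: Dataset, threshold: float = 0.7):
    """Hypothetical custom test: passes if recall on the wrapped dataset meets the threshold."""
    predictions = model.predict(dataset).prediction
    recall = recall_score(dataset.df[TARGET_COLUMN], predictions)
    return recall >= threshold


# A custom test is added to a suite just like a catalog test.
test_suite.add_test(test_recall(model=giskard_model, dataset=giskard_dataset, threshold=0.7))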

For demo purposes, we will load a simple unit test (test_f1) that checks if the test F1 score is above the given threshold. For more examples of tests and functions, refer to the Giskard catalog.

[ ]:
test_suite.add_test(testing.test_f1(model=giskard_model, dataset=giskard_dataset, threshold=0.7)).run()
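
You can also execute a single test on its own, outside of a suite. Assuming the standalone execution API of recent Giskard versions, this could look like:

# Run the F1 test by itself; the result carries a passed flag and the measured metric.
f1_result = testing.test_f1(model=giskard_model, dataset=giskard_dataset, threshold=0.7).execute()
print(f1_result.passed, f1_result.metric)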

Debug and interact with your tests in the Giskard Hub¶

At this point, you've created a test suite that is highly specific to your domain & use-case. Failing tests can be a pain to debug, which is why we encourage you to head over to the Giskard Hub.

Play around with a demo of the Giskard Hub on HuggingFace Spaces using this link.

More than just debugging tests, the Giskard Hub allows you to:

  • Compare models to decide which model to promote

  • Automatically create additional domain-specific tests through our automated model insights feature

  • Share your test results with team members and decision makers

The Giskard Hub can be deployed easily on HuggingFace Spaces.

Here's a sneak peek of automated model insights on a credit scoring classification model.

(Screenshots: automated model insights in the Giskard Hub.)

Upload your test suite to the Giskard Hub¶

The entry point to the Giskard Hub is the upload of your test suite. Uploading the test suite will automatically save the model, dataset, tests, slicing & transformation functions to the Giskard Hub.

[ ]:
# Create a Giskard client after having installed the Giskard Hub (see documentation).
api_key = "<Giskard API key>"  # This can be found in the Settings tab of the Giskard Hub.
# hf_token = "<Your Giskard Space token>"  # If the Giskard Hub is installed on a HF Space, this can be found in the Settings tab of the Giskard Hub.

client = GiskardClient(
    url="http://localhost:19000",  # Option 1: Use URL of your local Giskard instance.
    # url="<URL of your Giskard hub Space>",  # Option 2: Use URL of your remote HuggingFace space.
    key=api_key,
    # hf_token=hf_token  # Use this token to access a private HF space.
)

project_key = "my_project"
my_project = client.create_project(project_key, "PROJECT_NAME", "DESCRIPTION")

# Upload to the project you just created
test_suite.upload(client, project_key)

Download a test suite from the Giskard Hub¶

After curating your test suites with additional tests on the Giskard Hub, you can easily download them back into your environment. This allows you to:

  • Check for regressions after training a new model

  • Automate the test suite execution in a CI/CD pipeline

  • Compare several models during the prototyping phase

[ ]:
test_suite_downloaded = Suite.download(client, project_key, suite_id=...)
test_suite_downloaded.run()
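
For example, to gate a CI/CD pipeline on the downloaded suite, you could fail the job whenever the suite does not pass. This is a sketch assuming the suite result exposes a passed flag, as in recent Giskard versions:

# In a CI script, capture the result and fail the job if any test in the suite fails.
suite_result = test_suite_downloaded.run()
assert suite_result.passed, "Test suite failed - inspect the run output for the failing tests."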