
German credit scoring [scikit-learn]

Giskard is an open-source framework for testing all ML models, from LLMs to tabular models. Don’t hesitate to give the project a star on GitHub ⭐️ if you find it useful!

In this notebook, you’ll learn how to create comprehensive test suites for your model in a few lines of code, thanks to Giskard’s open-source Python library.

Use-case:

  • Binary classification: whether to grant a customer credit or not.

  • Model: LogisticRegression

  • Dataset: German credit scoring dataset

Outline:

  • Detect vulnerabilities automatically with Giskard’s scan

  • Automatically generate & curate a comprehensive test suite to test your model beyond accuracy-related metrics

  • Upload your model to the Giskard Hub to:

    • Debug failing tests & diagnose issues

    • Compare models & decide which one to promote

    • Share your results & collect feedback from non-technical team members

Install dependencies

Make sure to install the giskard library.

[22]:
%pip install giskard --upgrade

Import libraries

[1]:
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

from giskard import Model, Dataset, scan, testing, GiskardClient, Suite

Define constants

[2]:
# Constants.
COLUMN_TYPES = {
    "account_check_status": "category",
    "duration_in_month": "numeric",
    "credit_history": "category",
    "purpose": "category",
    "credit_amount": "numeric",
    "savings": "category",
    "present_employment_since": "category",
    "installment_as_income_perc": "numeric",
    "sex": "category",
    "personal_status": "category",
    "other_debtors": "category",
    "present_residence_since": "numeric",
    "property": "category",
    "age": "category",
    "other_installment_plans": "category",
    "housing": "category",
    "credits_this_bank": "numeric",
    "job": "category",
    "people_under_maintenance": "numeric",
    "telephone": "category",
    "foreign_worker": "category",
}

TARGET_COLUMN_NAME = "default"

COLUMNS_TO_SCALE = [key for key, value in COLUMN_TYPES.items() if value == "numeric"]
COLUMNS_TO_ENCODE = [key for key, value in COLUMN_TYPES.items() if value == "category"]

# Paths.
DATA_URL = "https://raw.githubusercontent.com/Giskard-AI/giskard-examples/main/datasets/credit_scoring_classification_model_dataset/german_credit_prepared.csv"

Dataset preparation

Load data

[3]:
df = pd.read_csv(DATA_URL, keep_default_na=False, na_values=["_GSK_NA_"])

Train-test split

[4]:
X_train, X_test, Y_train, Y_test = train_test_split(df.drop(columns=TARGET_COLUMN_NAME), df[TARGET_COLUMN_NAME],
                                                    test_size=0.2, random_state=0, stratify=df[TARGET_COLUMN_NAME])

Wrap dataset with Giskard

To prepare for the vulnerability scan, make sure to wrap your dataset using Giskard’s Dataset class. More details here.

[5]:
raw_data = pd.concat([X_test, Y_test], axis=1)
giskard_dataset = Dataset(
    df=raw_data,
    # A pandas.DataFrame that contains the raw data (before all the pre-processing steps) and the actual ground truth variable (target).
    target=TARGET_COLUMN_NAME,  # Ground truth variable.
    name='German credit scoring dataset',  # Optional.
    cat_columns=COLUMNS_TO_ENCODE
    # List of categorical columns. Optional, but strongly recommended if available; inferred automatically if not provided.
)

Model building

Define preprocessing pipeline

[6]:
numeric_transformer = Pipeline(steps=[
    ("imputer", SimpleImputer(strategy="median")),
    ("scaler", StandardScaler())
])

categorical_transformer = Pipeline([
    ("imputer", SimpleImputer(strategy="constant", fill_value="missing")),
    ("onehot", OneHotEncoder(handle_unknown="ignore", sparse_output=False)),
])

preprocessor = ColumnTransformer(transformers=[
    ("num", numeric_transformer, COLUMNS_TO_SCALE),
    ("cat", categorical_transformer, COLUMNS_TO_ENCODE),
])

Build estimator

[ ]:
pipeline = Pipeline(steps=[
    ("preprocessor", preprocessor),
    ("classifier", LogisticRegression(max_iter=100))
])

pipeline.fit(X_train, Y_train)

pred_train = pipeline.predict(X_train)
pred_test = pipeline.predict(X_test)

print(classification_report(Y_test, pred_test))

Wrap model with Giskard

To prepare for the vulnerability scan, make sure to wrap your model using Giskard’s Model class. You can choose to either wrap the prediction function (preferred option) or the model object. More details here.

[ ]:
giskard_model = Model(
    model=pipeline,
    # The model object (here, a scikit-learn pipeline) that encapsulates all the data pre-processing steps and can be executed on the dataset used by the scan.
    model_type="classification",  # Either regression, classification or text_generation.
    name="Credit scoring classifier",  # Optional.
    classification_labels=pipeline.classes_.tolist(),
    # Their order MUST be identical to the order of the model's output probabilities.
    feature_names=list(COLUMN_TYPES.keys()),  # Default: all columns of your dataset.
)

# Validate wrapped model.
print(classification_report(Y_test, pipeline.classes_[giskard_model.predict(giskard_dataset).raw_prediction]))
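
For comparison, wrapping a prediction function (the option mentioned above) could look roughly like the sketch below. The names prediction_function and giskard_model_fn are illustrative and not used in the rest of this notebook; for classification, the function is expected to return one probability per class, in the same order as classification_labels.

[ ]:
import numpy as np


def prediction_function(df: pd.DataFrame) -> np.ndarray:
    # Return one probability per class, in the same order as `classification_labels`.
    return pipeline.predict_proba(df)


giskard_model_fn = Model(
    model=prediction_function,
    model_type="classification",
    classification_labels=pipeline.classes_.tolist(),
    feature_names=list(COLUMN_TYPES.keys()),
)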

Detect vulnerabilities in your model

Scan your model for vulnerabilities with Giskard

Giskard’s scan allows you to detect vulnerabilities in your model automatically. These include performance biases, unrobustness, data leakage, stochasticity, underconfidence, ethical issues, and more. For detailed information about the scan feature, please refer to our scan documentation.

[ ]:
results = scan(giskard_model, giskard_dataset)
[10]:
display(results)
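
If you want to keep or share the scan results outside this notebook, recent versions of giskard let you export the report to a standalone HTML file (to the best of our knowledge); the file name below is just an example.

[ ]:
# Optionally save the scan report as a standalone HTML file (the path is an example).
results.to_html("scan_report.html")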

Generate comprehensive test suites automatically for your model

Generate test suites from the scan

The objects produced by the scan can be used as fixtures to generate a test suite that integrates all detected vulnerabilities. Test suites allow you to evaluate and validate your model's performance, ensure that it behaves as expected on a set of predefined test cases, and identify any regressions or issues that arise during development or updates.

[11]:
test_suite = results.generate_test_suite("My first test suite")
test_suite.run()
Executed 'Precision on data slice “`other_installment_plans` == "bank"”' with arguments {'model': <giskard.models.sklearn.SKLearnModel object at 0x12dc2e860>, 'dataset': <giskard.datasets.base.Dataset object at 0x12daddb10>, 'slicing_function': <giskard.slicing.slice.QueryBasedSliceFunction object at 0x143e15ea0>, 'threshold': 0.7540625}:
               Test failed
               Metric: 0.6


Executed 'Precision on data slice “`account_check_status` == "0 <= ... < 200 DM"”' with arguments {'model': <giskard.models.sklearn.SKLearnModel object at 0x12dc2e860>, 'dataset': <giskard.datasets.base.Dataset object at 0x12daddb10>, 'slicing_function': <giskard.slicing.slice.QueryBasedSliceFunction object at 0x143d7d600>, 'threshold': 0.7540625}:
               Test failed
               Metric: 0.6


Executed 'Precision on data slice “`present_employment_since` == "... < 1 year"”' with arguments {'model': <giskard.models.sklearn.SKLearnModel object at 0x12dc2e860>, 'dataset': <giskard.datasets.base.Dataset object at 0x12daddb10>, 'slicing_function': <giskard.slicing.slice.QueryBasedSliceFunction object at 0x1440253c0>, 'threshold': 0.7540625}:
               Test failed
               Metric: 0.65


Executed 'Recall on data slice “`personal_status` == "divorced"”' with arguments {'model': <giskard.models.sklearn.SKLearnModel object at 0x12dc2e860>, 'dataset': <giskard.datasets.base.Dataset object at 0x12daddb10>, 'slicing_function': <giskard.slicing.slice.QueryBasedSliceFunction object at 0x143d7f910>, 'threshold': 0.8617857142857143}:
               Test failed
               Metric: 0.8


Executed 'Precision on data slice “`duration_in_month` >= 16.500”' with arguments {'model': <giskard.models.sklearn.SKLearnModel object at 0x12dc2e860>, 'dataset': <giskard.datasets.base.Dataset object at 0x12daddb10>, 'slicing_function': <giskard.slicing.slice.QueryBasedSliceFunction object at 0x143d7fd60>, 'threshold': 0.7540625}:
               Test failed
               Metric: 0.71


Executed 'Precision on data slice “`property` == "if not A121/A122 : car or other, not in attribute 6"”' with arguments {'model': <giskard.models.sklearn.SKLearnModel object at 0x12dc2e860>, 'dataset': <giskard.datasets.base.Dataset object at 0x12daddb10>, 'slicing_function': <giskard.slicing.slice.QueryBasedSliceFunction object at 0x143d7d0f0>, 'threshold': 0.7540625}:
               Test failed
               Metric: 0.72


Executed 'Precision on data slice “`sex` == "female"”' with arguments {'model': <giskard.models.sklearn.SKLearnModel object at 0x12dc2e860>, 'dataset': <giskard.datasets.base.Dataset object at 0x12daddb10>, 'slicing_function': <giskard.slicing.slice.QueryBasedSliceFunction object at 0x143b84700>, 'threshold': 0.7540625}:
               Test failed
               Metric: 0.74


[11]:
Test suite failed. To debug your failing test and diagnose the issue, please run the Giskard Hub (see documentation).
All tests below were run with model a4d8f299-351e-4722-8176-9b37fa35a9e8 on the German credit scoring dataset.

Test: Precision on data slice “`other_installment_plans` == "bank"”
    Measured metric = 0.6, threshold = 0.7540625: Failed

Test: Precision on data slice “`account_check_status` == "0 <= ... < 200 DM"”
    Measured metric = 0.60465, threshold = 0.7540625: Failed

Test: Precision on data slice “`present_employment_since` == "... < 1 year"”
    Measured metric = 0.65217, threshold = 0.7540625: Failed

Test: Recall on data slice “`personal_status` == "divorced"”
    Measured metric = 0.80488, threshold = 0.8617857142857143: Failed

Test: Precision on data slice “`duration_in_month` >= 16.500”
    Measured metric = 0.7125, threshold = 0.7540625: Failed

Test: Precision on data slice “`property` == "if not A121/A122 : car or other, not in attribute 6"”
    Measured metric = 0.7193, threshold = 0.7540625: Failed

Test: Precision on data slice “`sex` == "female"”
    Measured metric = 0.73684, threshold = 0.7540625: Failed

Customize your suite by loading objects from the Giskard catalog

The Giskard open-source catalog enables you to load:

  • Tests such as metamorphic, performance, prediction & data drift, statistical tests, etc

  • Slicing functions such as detectors of toxicity, hate, emotion, etc

  • Transformation functions such as generators of typos, paraphrase, style tune, etc

To create custom tests, refer to this page.

For demo purposes, we will load a simple unit test (test_f1) that checks if the test F1 score is above the given threshold. For more examples of tests and functions, refer to the Giskard catalog.

[ ]:
test_suite.add_test(testing.test_f1(model=giskard_model, dataset=giskard_dataset, threshold=0.7)).run()
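
If the catalog does not cover your needs, a custom test is just a decorated function. The sketch below is illustrative and assumes the @test decorator and TestResult exported by giskard, as described in the custom-tests documentation; the test name, its balanced-accuracy metric (computed with scikit-learn), and the 0.7 threshold are examples rather than part of this notebook.

[ ]:
from sklearn.metrics import balanced_accuracy_score

from giskard import test, TestResult


@test(name="Balanced accuracy above threshold")
def test_balanced_accuracy(model: Model, dataset: Dataset, threshold: float = 0.7):
    # Predicted labels on the wrapped dataset, compared against the ground truth column.
    predictions = model.predict(dataset).prediction
    metric = balanced_accuracy_score(dataset.df[dataset.target], predictions)
    return TestResult(metric=metric, passed=metric >= threshold)


test_suite.add_test(test_balanced_accuracy(model=giskard_model, dataset=giskard_dataset)).run()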

Debug and interact with your tests in the Giskard Hub

At this point, you’ve created a test suite that is highly specific to your domain & use-case. Failing tests can be a pain to debug, which is why we encourage you to head over to the Giskard Hub.

Play around with a demo of the Giskard Hub on HuggingFace Spaces using this link.

More than just debugging tests, the Giskard Hub allows you to:

  • Compare models to decide which model to promote

  • Automatically create additional domain-specific tests through our automated model insights feature

  • Share your test results with team members and decision makers

The Giskard Hub can be deployed easily on HuggingFace Spaces.

Here’s a sneak peek of automated model insights on a credit scoring classification model.


Upload your test suite to the Giskard Hub

The entry point to the Giskard Hub is the upload of your test suite. Uploading the test suite will automatically save the model, dataset, tests, slicing & transformation functions to the Giskard Hub.

[ ]:
# Create a Giskard client after having installed the Giskard Hub (see documentation)
api_key = "<Giskard API key>"  # This can be found in the Settings tab of the Giskard Hub
# hf_token = "<Your Giskard Space token>"  # If the Giskard Hub is installed on a HF Space, this can be found in the Settings tab of the Giskard Hub

client = GiskardClient(
    url="http://localhost:19000",  # Option 1: Use URL of your local Giskard instance.
    # url="<URL of your Giskard hub Space>",  # Option 2: Use URL of your remote HuggingFace space.
    key=api_key,
    # hf_token=hf_token  # Use this token to access a private HF space.
)

project_key = "my_project"
my_project = client.create_project(project_key, "PROJECT_NAME", "DESCRIPTION")

# Upload to the project you just created
test_suite.upload(client, project_key)

Download a test suite from the Giskard Hub

After curating your test suites with additional tests on the Giskard Hub, you can easily download them back into your environment. This allows you to:

  • Check for regressions after training a new model

  • Automate the test suite execution in a CI/CD pipeline (see the sketch after the next code cell)

  • Compare several models during the prototyping phase

[ ]:
test_suite_downloaded = Suite.download(client, project_key, suite_id=...)
test_suite_downloaded.run()
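
Picking up the CI/CD bullet above, a minimal sketch of gating a pipeline on the suite outcome could look like the following, assuming the object returned by run() exposes a passed flag as in recent giskard versions:

[ ]:
import sys

# Fail the CI job if any test in the downloaded suite fails.
suite_result = test_suite_downloaded.run()
if not suite_result.passed:
    sys.exit(1)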