Training a classification model with ResNet

In this tutorial we will download a public model from the marketplace, run inference with it, and train it on custom data locally.
Here we will use a ResNet50 model.

Start by installing the following packages if you don't have them installed already. The model adapter will use them later.
torch
torchvision
imgaug
scikit-image<0.18

Then, import the modules required for the scripts in this tutorial.

# !pip install torch torchvision imgaug "scikit-image<0.18"
import matplotlib.pyplot as plt
from PIL import Image
import numpy as np
import json
import dtlpy as dl
from dtlpy.ml import train_utils

Clone the Public Model Into Your Project

First, we'll clone the Model entity to our project. The public models are hosted in the public Dataloop GitHub repository, and you can list all of them programmatically using a Filter. Here we will use a ResNet50 model pretrained on the ImageNet dataset.

filters = dl.Filters(resource=dl.FiltersResource.MODEL, use_defaults=False)
filters.add(field='scope', values='public')
dl.models.list(filters=filters).print()
# get the public model
public_model = dl.models.get(model_name='pretrained-resnet50')
# clone to your project
project = dl.projects.get(project_name='<My project>')
model = public_model.clone(model_name='my-model',
                           project_id=project.id)
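
To confirm the clone, you can list the models that now exist in your project. This is an optional check; the project-level models repository is assumed to be available in your dtlpy version.

# Optional: verify that 'my-model' was cloned into the project
project.models.list().print()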

Run a pretrained model

We will then "build" a model adapter to get the package code locally and create an instance of the ModelAdapter class. Then we will load the pretrained model and weights into the model adapter.

package = dl.packages.get(package_id=model.package_id)
adapter = package.build(module_name='model-adapter')
# call the wrapper function
adapter.load_from_model(model_entity=model)

Predict on an item

Now we can get an item, run inference on it with the predict method, and upload the resulting annotations. If you would like to see the item and its predictions, you can plot them locally, or open the item on the platform and edit the annotations directly there.

item = dl.items.get(item_id='611e174e4c09acc3c5bb81d3')
annotations = adapter.predict_items([item], with_upload=True)
image = np.asarray(Image.open(item.download()))
plt.imshow(item.annotations.show(image,
                                 thickness=5))
print('Classification: {}'.format(annotations[0][0].label))
item.open_in_web()

Train on a new dataset

Here we will use a public dataset of sheep faces. We create a project and a dataset, upload the data, and add four sheep-breed labels.
NOTE: You might need to change the local path of the items, which currently points to the root of the documentation repository. If you cloned the dtlpy documentation repo locally, it should work as is.

project = dl.projects.create('Sheep Face - Model Mgmt')
dataset = project.datasets.create('Sheep Face')
dataset.to_df()
_ = dataset.items.upload(local_path='../../../../assets/sample_datasets/SheepFace/items/*',
                         local_annotations_path='../../../../assets/sample_datasets/SheepFace/json')
dataset.add_labels(label_list=['Merino', 'Poll Dorset', 'Suffolk', 'White Suffolk'])
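
As an optional sanity check, assuming the paths above match your local setup, you can verify that the items were uploaded and the labels were added:

# Verify the upload and the labels
print('items in dataset: {}'.format(dataset.items.list().items_count))
print('labels: {}'.format(list(dataset.instance_map.keys())))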

Now we'll run the "prepare_dataset" method. This will clone and freeze the dataset so that we'll be able to reproduce the training with the same copy of the data. The cloned dataset will be split into subsets, either filtered using DQL or as percentages. In this example, we'll use an 80/20 train validation split.

pages = dataset.items.list()
num_items = pages.items_count
train_proportion = 0.8
val_proportion = 0.2
train_partitions = [0] * round(train_proportion * num_items)
val_partitions = [1] * round(val_proportion * num_items)
partitions = train_partitions + val_partitions
np.random.shuffle(partitions)
dataset.items.make_dir(directory='/train')
dataset.items.make_dir(directory='/val')
item_count = 0
for item in pages.all():
    if partitions[item_count] == 0:
        item.move(new_path='/train')
    elif partitions[item_count] == 1:
        item.move(new_path='/val')
    item_count += 1
subsets = {'train': dl.Filters(field='dir', values='/train'),
           'validation': dl.Filters(field='dir', values='/val')}
dataset.metadata['system']['subsets'] = {
    'train': json.dumps(dl.Filters(field='dir', values='/train').prepare()),
    'validation': json.dumps(dl.Filters(field='dir', values='/val').prepare()),
}
dataset.update()
cloned_dataset = train_utils.prepare_dataset(dataset=dataset,
                                             filters=None,
                                             subsets=subsets)
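
If you want to double-check the split, you can count the items that each subset filter matches. This is just an optional sanity check that reuses the directory filters defined above:

# Optional: verify the 80/20 split by counting items under each directory
train_count = dataset.items.list(filters=dl.Filters(field='dir', values='/train')).items_count
val_count = dataset.items.list(filters=dl.Filters(field='dir', values='/val')).items_count
print('train: {}, validation: {}'.format(train_count, val_count))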

After partitioning and cloning the data, we'll clone the pretrained model to serve as a starting point for fine-tuning. The clone is connected to the frozen dataset and its labels, and the trained weights will later be uploaded to the model's Artifact item. The model's configuration determines runtime settings such as the batch size and the number of epochs; in this tutorial we will train for only 2 epochs.

new_model = model.clone(model_name='sheep-soft-augmentations',
                        dataset_id=cloned_dataset.id,
                        project_id=project.id,
                        labels=list(dataset.instance_map.keys()),
                        configuration={'batch_size': 16,
                                       'start_epoch': 0,
                                       'num_epochs': 2,
                                       'input_size': 256,
                                       'id_to_label_map': {(v - 1): k for k, v in
                                                           dataset.instance_map.items()}
                                       })
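
These configuration values can also be inspected or adjusted on the Model entity before training; for example, a minimal sketch using the entity's configuration dictionary and update():

# Optional: inspect or tweak the training configuration before running
print(new_model.configuration)
new_model.configuration['num_epochs'] = 2
new_model.update()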

We'll load the new, untrained model into the adapter and set the local paths where the data and the training outputs will be stored.

adapter.load_from_model(model_entity=new_model)
data_path, output_path = '<local_path_to_store_data_locally>', '<local_path_to_store_outputs_locally>'

Start the training

The package, model, and data are now prepared. We are ready to train!

print("Training {!r} with model {!r} on data {!r}".format(package.name, new_model.id, data_path))
adapter.train(data_path=data_path,
              output_path=output_path)

Save the Model

We will save the locally-trained model and upload the trained weights to the Artifact Item. This ensures that everything is on the Dataloop platform and allows other developers to use our trained model.

adapter.save_to_model(local_path=output_path,
                      replace=True)

We can also list all the Artifacts associated with this model, and add any extra files needed to load or run it.

adapter.model.artifacts.list_content()
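
To add extra files, you can upload them to the same Artifacts repository. The snippet below is only a sketch: the file path is a placeholder, and artifacts.upload is assumed to be supported for model artifacts in your dtlpy version.

# Sketch: upload an extra file (e.g. a class-mapping JSON) as a model artifact
adapter.model.artifacts.upload(filepath='/path/to/extra_file.json')
adapter.model.artifacts.list_content()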

Predict with our newly trained model

With everything in place, we can run the trained model on a new item and view its prediction.

item = dl.items.get(item_id='62b327f0da0d04bc7201e48a')
annotations = adapter.predict_items([item], with_upload=True)
image = np.asarray(Image.open(item.download()))
plt.imshow(item.annotations.show(image,
                                 thickness=5))
print('Classification: {}'.format(annotations[0][0].label))