Deep Learning for Coders, Lesson 3
Questions
Provide an example of where the bear classification model might work poorly in production, due to structural or style differences in the training data. Hand-drawn bears or black-and-white images, since the model was trained on colour photographs.
Where do text models currently have a major deficiency? Current text models can generate compelling, context-appropriate text, but they cannot be relied on to produce factually correct responses.
What are possible negative societal implications of text generation models? They could be used on social media to generate convincing disinformation at scale.
In situations where a model might make mistakes, and those mistakes could be harmful, what is a good alternative to automating a process? Have the model and a human user interact closely, with the human reviewing the model's outputs.
What kind of tabular data is deep learning particularly good at? Columns containing natural language (e.g. book titles, reviews) and high-cardinality categorical columns (e.g. zip code, product ID).
What’s a key downside of directly using a deep learning model for recommendation systems? It only tells us which products a user might like, rather than which recommendations would actually be helpful; for example, it might recommend other books by an author the user has almost certainly already heard of.
What are the steps of the Drivetrain Approach?
- Defined objective: what outcome am I trying to achieve?
- Levers: what inputs can we control?
- Data: what data can we collect?
- Models: how do the levers influence the objective?
How do the steps of the Drivetrain Approach map to a recommendation system? The objective of a recommendation engine is to drive additional sales by surprising and delighting the customer with recommendations of items they would not have purchased without the recommendation. The lever is the ranking of the recommendations. New data must be collected to generate recommendations that will cause new sales. This will require conducting many randomised experiments in order to collect data about a wide range of recommendations for a wide range of customers. This is a step that few organisations take; but without it, you don’t have the information you need to actually optimise recommendations based on your true objective (more sales!).
What is DataLoaders? Having downloaded some data, we need to assemble it in a format suitable for training by creating an object called DataLoaders. This is a fastai class that stores the multiple DataLoader objects you pass to it, normally one for training and one for validation. The key functionality is provided by these four lines of code:
class DataLoaders(GetAttr):
    def __init__(self, *loaders): self.loaders = loaders
    def __getitem__(self, i): return self.loaders[i]
    train, valid = add_props(lambda i, self: self[i])
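A toy usage sketch (not from the book), wrapping two plain PyTorch DataLoaders just to show the indexing behaviour:

import torch
from torch.utils.data import DataLoader, TensorDataset
from fastai.data.core import DataLoaders

# Build a DataLoaders from two ordinary DataLoaders over toy tensors.
ds = TensorDataset(torch.randn(8, 2), torch.randint(0, 2, (8,)))
dls = DataLoaders(DataLoader(ds, batch_size=4), DataLoader(ds, batch_size=4))
assert dls.train is dls[0] and dls.valid is dls[1]  # .train/.valid come from add_props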
What four things do we need to tell fastai to create DataLoaders?
- What kinds of data we are working with
- How to get the list of items
- How to label these items
- How to create the validation set
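Each of those four maps onto a DataBlock argument. A minimal sketch in the spirit of the book's bear classifier, assuming the images live in a bears/ folder with one sub-folder per class:

from fastai.vision.all import *

path = Path('bears')  # assumed: downloaded images, one sub-folder per class
bears = DataBlock(
    blocks=(ImageBlock, CategoryBlock),               # what kinds of data: images in, categories out
    get_items=get_image_files,                        # how to get the list of items
    get_y=parent_label,                               # how to label them: by parent folder name
    splitter=RandomSplitter(valid_pct=0.2, seed=42),  # how to create the validation set
    item_tfms=Resize(128))                            # resize so items can be collated into batches
dls = bears.dataloaders(path)
dls.show_batch(max_n=4)  # sanity-check a few images and their labels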
What does the splitter parameter to DataBlock do? It determines how the data is split into training and validation sets; for example, RandomSplitter splits randomly, and you can set its seed so that the same split is used each time.
How do we ensure a random split always gives the same validation set? Pass a fixed seed as an argument to the splitter, e.g. seed=42.
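A small check that a fixed seed reproduces the split:

from fastai.data.transforms import RandomSplitter

items = list(range(100))
split_a = RandomSplitter(valid_pct=0.2, seed=42)(items)
split_b = RandomSplitter(valid_pct=0.2, seed=42)(items)
assert list(split_a[1]) == list(split_b[1])  # identical validation indices every run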
What letters are often used to signify the independent and dependent variables? The independent variable is often referred to as x and the dependent variable as y.
What’s the difference between the crop, pad, and squish resize approaches? When might you choose one over the others? Crop cuts out a square of the requested size, using the full width or height, but can remove features we need for recognition. Pad fills the image out with zeros (black) to reach a square, which leaves empty space, wasting computation and lowering effective resolution. Squish compacts or stretches the image into the square, distorting it into an unrealistic shape, which can lower accuracy. All three are problematic, so the choice depends on whether your task suffers least from lost detail (crop), wasted resolution (pad), or distortion (squish).
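These approaches correspond to the method argument of fastai's Resize transform; a sketch of each, as it would be passed to item_tfms:

from fastai.vision.all import Resize, ResizeMethod

crop   = Resize(128, method=ResizeMethod.Crop)                   # cut a square region out
pad    = Resize(128, method=ResizeMethod.Pad, pad_mode='zeros')  # fill out with black borders
squish = Resize(128, method=ResizeMethod.Squish)                 # distort the image to fit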
What is data augmentation? Why is it needed? Data augmentation refers to creating random variations of the input data, such that they appear different but do not actually change the meaning of the data. Examples include rotation, flipping, perspective warping, and contrast changes. It is needed because it gives the model more varied examples to learn from, helping it generalise. And because the images are all the same size after resizing, these augmentations can be applied to whole batches at once on the GPU, saving time.
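In fastai, a standard set of these augmentations is bundled in aug_transforms, normally passed via batch_tfms (see the next question); a sketch:

from fastai.vision.all import aug_transforms

# A list of batch transforms: random flips, rotations, zooms, perspective
# warps, and brightness/contrast changes; mult scales their strength.
tfms = aug_transforms(mult=2)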
What is the difference between item_tfms and batch_tfms? item_tfms run transformations on each individual item (on the CPU, before batching), for example resizing every image to the same size; batch_tfms run transformations on a whole batch at once (on the GPU), which is how fastai applies augmentations efficiently.
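A sketch of where each parameter goes, reusing the hypothetical bears DataBlock and path from the earlier sketch:

# item_tfms run per item on the CPU; batch_tfms run per batch on the GPU.
bears = bears.new(
    item_tfms=RandomResizedCrop(224, min_scale=0.5),  # resize each image individually
    batch_tfms=aug_transforms())                      # augment whole batches at once
dls = bears.dataloaders(path)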
What is a confusion matrix? A table comparing actual labels with predicted labels: the diagonal shows the images that were classified correctly, and the off-diagonal cells show those that were classified incorrectly. It is calculated using the validation set.
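With fastai, the matrix can be plotted from a trained Learner (assumed here to be called learn):

from fastai.vision.all import ClassificationInterpretation

interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()  # rows are actual classes, columns are predicted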