You’re walking down the street, passing more and more people wearing face masks, and you’re starting to wonder: What do they all look like under the mask? At least we on the STRV Data Science team had that thought. And being the Machine Learning (ML) enthusiasts we are, we quickly realized that the answer to our question is closer than it may seem.

Want to know how we crafted an ML tool that virtually removes the mask from a person’s face? This article will guide you through the whole process of crafting the deep learning ML model - from the initial setup, data gathering and proper model selection to the training and fine-tuning.

Before we dive in, let’s define the nature of our task. The problem we are trying to solve can be viewed as image inpainting, which is generally considered to be the process of restoring damaged images or filling in their missing parts. You can see examples of image inpainting below; the input images had white gaps which were restored.

Examples of image inpainting using Partial Convolutions.

First of all, if you’d like to follow all of the steps described in this article on your computer, you can clone this project from our GitHub. With that settled, one more note before we start: apart from this article, we’ve also prepared a GitHub repository with everything you need already implemented, as well as the Jupyter Notebook `mask2face.ipynb` - where you can run everything mentioned here in just a few simple clicks, training your own neural network!

Let’s start with preparing the virtual environment for our Python project. You can of course use any virtual environment you like; just make sure to install all necessary dependencies from environment.yml and requirements.txt. If you are not familiar with virtual environments or Conda, check out this nice article for some info. And if you are familiar with Conda, you can go ahead and initialize the Conda environment by running the following command from the directory where you cloned the GitHub project:

conda env create -f environment.yml

Now that you have an environment with all necessary dependencies, let’s define our goals and objectives. For this particular project, we want to create an ML model that can show us what a person wearing a face mask looks like without that mask. Our ML model has one input - an image of a person with a face mask on - and one output - an image of that person without the mask.

The whole project is nicely represented as a high-level pipeline in the following image. We start with a dataset of faces with precomputed face landmarks that are processed through the mask generator. It uses the landmarks to position the mask on the face. Once we have the dataset with pairs of images (with and without the mask), we can continue with defining the architecture of the ML model. Finally, the last part of the pipeline is to find the best loss function and all necessary scripts to put everything together, so we can train and evaluate the model.

To train the deep learning model, we need a lot of data - in this case, a lot of pairs of input and output images. It is of course not practical to gather both input and output images of the same person with and without a mask. However, there are many datasets of human face images, used mainly for the training of face detection algorithms. We can take a dataset like this, paint a mask onto the faces and voilà - we have our image pairs.

We experimented with two different datasets. One database we used is Labeled Faces in the Wild from the University of Massachusetts. Here is the 104MB gzipped tar file with the whole dataset of over 5,000 images. This dataset suits our case well, since it contains images with a focus on human faces.