First Advisor

Dr. Daniel Kilbride


In 2006, Geoffrey Hinton published a paper showing how to train a neural network capable of recognizing handwritten digits with greater than 98% accuracy, a technique he branded “deep learning.” Until that paper appeared, training a deep neural network was widely considered impossible, and most attempts had been abandoned during the 1990s (Hinton, 2007). The paper revived the scientific community’s interest, and before long hundreds more papers demonstrated that deep learning was not only possible but capable of achievements that no other machine learning technique could even attempt. Since then, neural networks have become part of everyday life: we have built models that drive cars, recognize our faces and voices, read and write, and perform countless other tasks we now take for granted.

Perhaps the most impressive branch of neural networks, however, is the convolutional neural network, or CNN. These networks use a special component known as a convolutional layer to accomplish impressive feats of image recognition. They power the face-unlock feature of iPhones, the lane-detection and autopilot features of modern cars, weather detection and prediction from satellite imagery, and many other applications. The growing availability of these models and their impressive capabilities opens a new research perspective on the interaction between neural networks and practically any visual medium. A plethora of models have been trained to watch and write movies, identify characters and actors in television and film, and create and classify artworks. In that spirit, the goal of this project is to use CNNs to classify and identify various art genres.
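As a rough illustration of the convolutional layer mentioned above (a sketch for intuition, not part of the project's implementation), the core operation slides a small kernel of learned weights across an image and records a weighted sum at each position. The `conv2d` function and the edge-detecting kernel below are hypothetical examples written for this explanation:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: the core operation of a
    convolutional layer (bias terms and nonlinearity omitted)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    oh, ow = ih - kh + 1, iw - kw + 1  # output shrinks by kernel size - 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Weighted sum of the patch under the kernel at this position
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny image with a single bright vertical stripe in the middle column
img = np.array([[0, 0, 1, 0, 0]] * 3, dtype=float)

# A simple vertical-edge kernel: responds where brightness changes left-to-right
edge_kernel = np.array([[1, 0, -1]] * 3, dtype=float)

response = conv2d(img, edge_kernel)
print(response)  # strong opposite-signed responses on either side of the stripe
```

In a trained CNN the kernel weights are not hand-chosen like this; they are learned from data, and many such kernels are stacked into layers so that early layers detect edges and textures while deeper layers respond to faces, lane markings, or brushstroke patterns.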