Deep Convolutional Networks for Image Classification (Open Access)
- Other title
- Type of item
- Degree grantor
University of Alberta
- Author or creator
- Supervisor and department
Dale Schuurmans (Computing Science)
- Examining committee member and department
Russell Greiner (Computing Science)
Nilanjan Ray (Computing Science)
Department of Computing Science
- Date accepted
- Graduation date
- Degree level
Master of Science
Image classification is an important problem in machine learning. Deep neural networks, particularly deep convolutional networks, have recently contributed great improvements to end-to-end learning quality for this problem. Such networks significantly reduce the need for human-designed features in the image recognition process. In this thesis I address two questions: first, how best to design the architecture of a convolutional neural network for image classification; and second, how to improve the activation functions used in convolutional neural networks. I review the history of convolutional network architectures, then propose an efficient network structure named "TinyNet" that reduces network size while preserving state-of-the-art image classification performance. For the second question I propose a new activation function, the "Randomized Leaky Rectified Linear Unit", which improves the empirical generalization performance of the now widely used Rectified Linear Unit. I also offer an explanation for the difficulty of training deep sigmoid networks. The thesis culminates in a demonstration of the TinyNet architecture with Randomized Leaky Rectified Linear Units, which obtains state-of-the-art results on the CIFAR-10 image classification data set without any preprocessing. To further demonstrate the generality of the results, I apply the same convolutional neural network structure to a different image classification problem, with completely different textures and shapes, and again achieve state-of-the-art results on a data set from the National Data Science Bowl competition.
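The Randomized Leaky Rectified Linear Unit described in the abstract can be sketched as a small NumPy function. The formulation follows the cited Xu et al. (2015) preprint: for negative inputs the unit outputs x / a, where a is drawn uniformly from [lower, upper] during training and fixed to the mean (lower + upper) / 2 at test time; the default bounds 3 and 8 are the values reported there for the Kaggle NDSB experiments. The function name and signature here are illustrative, not from the thesis.

```python
import numpy as np

def rrelu(x, lower=3.0, upper=8.0, training=True, rng=None):
    """Randomized Leaky ReLU, as described in Xu et al. (2015).

    Positive inputs pass through unchanged. Negative inputs are
    scaled by 1/a, where a ~ Uniform(lower, upper) per element
    during training, and a = (lower + upper) / 2 at test time.
    """
    x = np.asarray(x, dtype=float)
    if training:
        rng = rng or np.random.default_rng()
        # one random divisor per element of x
        a = rng.uniform(lower, upper, size=x.shape)
    else:
        # deterministic test-time behaviour: use the mean divisor
        a = (lower + upper) / 2.0
    return np.where(x >= 0.0, x, x / a)
```

At test time the unit behaves like an ordinary leaky ReLU with slope 2 / (lower + upper); the randomness during training acts as a regularizer, which is the source of the generalization improvement the abstract refers to.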
- This thesis is made available by the University of Alberta Libraries with permission of the copyright owner solely for the purpose of private, scholarly or scientific research. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.
- Citation for previous publication
Xu, B., Wang, N., Chen, T. and Li, M., 2015. Empirical evaluation of rectified activations in convolutional network. arXiv preprint arXiv:1505.00853.
Chen, T., Li, M., Li, Y., Lin, M., Wang, N., Wang, M., Xiao, T., Xu, B., Zhang, C. and Zhang, Z., 2015. MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems. arXiv preprint arXiv:1512.01274.
- Date Uploaded
- Date Modified
File format: pdf (PDF/A)
Mime type: application/pdf
File size: 3,440,782 bytes (≈3.3 MiB)
Last modified: 2016-06-16 16:58:26-06:00
Original checksum: 5743096342261b98bec781589b5ef071