Abstract
We consider the problem of learning deep representations when target labels are available. In this paper, we show that there exists an intrinsic relationship between target coding and feature representation learning in deep networks. Specifically, we find that a distributed binary code with error-correcting capability encourages more discriminative features than the 1-of-K coding typically used in supervised deep learning. This new finding reveals an additional benefit of using error-correcting codes for deep model learning, beyond their well-known error-correcting property. Extensive experiments are conducted on popular visual benchmark datasets.
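The abstract contrasts 1-of-K (one-hot) target coding with a distributed binary code that has error-correcting capability. The sketch below is an illustration only, not the paper's exact construction: it assumes a Hadamard-style code as one concrete instance of such a distributed binary code and compares the minimum Hamming distance between class codewords against one-hot targets; all function names and the Sylvester-Hadamard construction are assumptions made here for illustration.

```python
import numpy as np

def one_hot_targets(labels, num_classes):
    """1-of-K coding: each class is a length-K vector with a single 1."""
    codes = np.eye(num_classes)
    return codes[labels]

def hadamard_matrix(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def hadamard_targets(labels, num_classes):
    """Distributed binary coding: map each class to a row of a Hadamard matrix
    (dropping the all-ones first row and constant first column), giving
    codewords with large pairwise Hamming distance -- one possible
    error-correcting target code (assumption, for illustration)."""
    n = 1
    while n < num_classes + 1:
        n *= 2
    H = hadamard_matrix(n)
    codes = (H[1:num_classes + 1, 1:] + 1) // 2  # map {-1, 1} -> {0, 1}
    return codes[labels]

def min_hamming_distance(codes):
    """Smallest pairwise Hamming distance among distinct codewords."""
    K = codes.shape[0]
    return min(int(np.sum(codes[i] != codes[j]))
               for i in range(K) for j in range(i + 1, K))

if __name__ == "__main__":
    K = 10  # e.g. a 10-class visual benchmark
    labels = np.arange(K)
    onehot = one_hot_targets(labels, K)
    hadamard = hadamard_targets(labels, K)
    # 1-of-K codewords differ in only 2 positions, whereas the Hadamard-style
    # codewords are separated by a much larger Hamming distance.
    print("one-hot  length:", onehot.shape[1],
          "min distance:", min_hamming_distance(onehot))
    print("hadamard length:", hadamard.shape[1],
          "min distance:", min_hamming_distance(hadamard))
```

Under these assumptions, the larger minimum Hamming distance of the distributed code is what gives it error-correcting capability, and the abstract's claim is that regressing onto such codes also yields more discriminative learned features than one-hot targets.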
Original language | English |
---|---|
Title of host publication | Proceedings of the 29th AAAI Conference on Artificial Intelligence, AAAI 2015 and the 27th Innovative Applications of Artificial Intelligence Conference, IAAI 2015 |
Publisher | AI Access Foundation |
Pages | 3848-3854 |
Number of pages | 7 |
ISBN (Electronic) | 9781577357032 |
Publication status | Published - Jun 1 2015 |
Externally published | Yes |
Event | 29th AAAI Conference on Artificial Intelligence, AAAI 2015 and the 27th Innovative Applications of Artificial Intelligence Conference, IAAI 2015 - Austin, United States. Duration: Jan 25 2015 → Jan 30 2015 |
Publication series
Name | Proceedings of the National Conference on Artificial Intelligence |
---|---|
Volume | 5 |
Conference
Conference | 29th AAAI Conference on Artificial Intelligence, AAAI 2015 and the 27th Innovative Applications of Artificial Intelligence Conference, IAAI 2015 |
---|---|
Country/Territory | United States |
City | Austin |
Period | 1/25/15 → 1/30/15 |
Bibliographical note
Publisher Copyright: © Copyright 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
ASJC Scopus Subject Areas
- Software
- Artificial Intelligence