Publications

Filtered by keyword: Deep Learning
B
Y. Bengio, Practical recommendations for gradient-based training of deep architectures, in Neural networks: Tricks of the trade, Springer, 2012, pp. 437–478.
C
T. Chen, Z. Du, N. Sun, J. Wang, C. Wu, Y. Chen, and O. Temam, DianNao: A small-footprint high-throughput accelerator for ubiquitous machine-learning, ACM SIGPLAN Notices, vol. 49. ACM, pp. 269–284, 2014.
Y. Chen, T. Luo, S. Liu, S. Zhang, L. He, J. Wang, L. Li, T. Chen, Z. Xu, N. Sun, et al., DaDianNao: A machine-learning supercomputer, Proceedings of the 47th Annual IEEE/ACM International Symposium on Microarchitecture. IEEE Computer Society, pp. 609–622, 2014.
M. Courbariaux, Y. Bengio, and J.-P. David, BinaryConnect: Training deep neural networks with binary weights during propagations, Advances in Neural Information Processing Systems. pp. 3123–3131, 2015.
M. Courbariaux, I. Hubara, D. Soudry, R. El-Yaniv, and Y. Bengio, Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1, arXiv preprint arXiv:1602.02830, 2016.
F
J. Friedman, T. Hastie, and R. Tibshirani, The elements of statistical learning, vol. 1. Springer Series in Statistics, New York: Springer, 2001.
G
S. Gupta, A. Agrawal, K. Gopalakrishnan, and P. Narayanan, Deep learning with limited numerical precision, Proceedings of the 32nd International Conference on Machine Learning (ICML-15). pp. 1737–1746, 2015.
H
I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio, Binarized neural networks, Advances in Neural Information Processing Systems. pp. 4107–4115, 2016.
J
G. James, D. Witten, T. Hastie, and R. Tibshirani, An introduction to statistical learning, vol. 112. Springer, 2013.
K
M. Kim and P. Smaragdis, Bitwise neural networks, arXiv preprint arXiv:1601.06071, 2016.
A. Krizhevsky, I. Sutskever, and G. E. Hinton, ImageNet classification with deep convolutional neural networks, Advances in Neural Information Processing Systems. pp. 1097–1105, 2012.
L
Y. LeCun, Y. Bengio, and G. Hinton, Deep learning, Nature, vol. 521, pp. 436–444, 2015.
O
J. Ouyang, S. Lin, W. Qi, Y. Wang, B. Yu, and S. Jiang, SDA: Software-defined accelerator for large-scale DNN systems, 2014 IEEE Hot Chips 26 Symposium (HCS). IEEE, pp. 1–23, 2014.
P
B. Pérez-Sánchez, O. Fontenla-Romero, and B. Guijarro-Berdiñas, A supervised learning method for neural networks based on sensitivity analysis with automatic regularization, International Work-Conference on Artificial Neural Networks. Springer, pp. 157–164, 2009.
A. Putnam, A. M. Caulfield, E. S. Chung, D. Chiou, K. Constantinides, J. Demme, H. Esmaeilzadeh, J. Fowers, G. Prashanth Gopal, J. Gray, et al., A reconfigurable fabric for accelerating large-scale datacenter services, IEEE Micro, vol. 35, pp. 10–22, 2015.
R
M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi, XNOR-Net: ImageNet classification using binary convolutional neural networks, European Conference on Computer Vision. Springer, pp. 525–542, 2016.
S
J. Schmidhuber, Deep learning in neural networks: An overview, Neural Networks, vol. 61, pp. 85–117, 2015.
C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, Going deeper with convolutions, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1–9, 2015.
Z
C. Zhang, P. Li, G. Sun, Y. Guan, B. Xiao, and J. Cong, Optimizing FPGA-based accelerator design for deep convolutional neural networks, Proceedings of the 2015 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays. ACM, pp. 161–170, 2015.
M. Zinkevich, M. Weimer, L. Li, and A. J. Smola, Parallelized stochastic gradient descent, Advances in Neural Information Processing Systems. pp. 2595–2603, 2010.