Prof. Jacek M. Zurada
Towards Better Understanding of Data: Sparse Coding, Additive Features, Perceptrons and Constrained Autoencoders


Abstract: The convoluted mappings and cancellations of terms performed within feedforward neural networks (NN) make the resulting input-output mappings less than transparent. However, NN learning under meaningful constraints, and especially for sparse coding, allows extraction of sparser and more meaningful discriminative features. These features can be understood as parts of the original objects and are generated through sparse basis vectors. Further, such sparse basis functions (also called receptive fields or filters) prove most useful when the objects can be decomposed into them and then superimposed and reconstructed with as low a reconstruction error as possible.

Three techniques that meet these criteria are discussed. (1) Nonnegative Matrix Factorization (NMF) is an efficient technique that reduces the number of basis vectors and extracts latent features that are additive and hence interpretable to humans. (2) A classic error back-propagation (EBP) architecture can also be trained under non-negativity and sparseness constraints. The resulting classifiers allow identification of parts of the objects, encoded as receptive fields developed by the weights of hidden neurons; the results are illustrated with MNIST handwritten digit classifiers and Reuters-21578 text categorization. (3) Constrained learning of sparse non-negative weights in autoencoders likewise allows discovery of additive latent factors. Our experiments with the MNIST, ORL face, and NORB object datasets compare auto-encoding accuracy under various training conditions. They indicate enhanced interpretability and insight through identification of parts of complex input objects, traded off against a small reduction in recognition accuracy or increase in classification error. The models discussed are shallow networks, yet their training strategies parallel those used in multi-layer deep learning.
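As a rough, hypothetical illustration of the first technique (not code from the talk), the sketch below implements the classic Lee-Seung multiplicative updates for NMF in NumPy on a random nonnegative matrix; the matrix sizes, the number of basis vectors r, and the variable names V, W, H are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy nonnegative data matrix V (samples x features); in the talk this role is
# played by, e.g., MNIST digit images or Reuters-21578 term counts.
V = np.abs(rng.normal(size=(100, 64)))

r = 10                                   # number of basis vectors (latent "parts")
W = np.abs(rng.normal(size=(100, r)))    # activations (how much of each part per sample)
H = np.abs(rng.normal(size=(r, 64)))     # nonnegative basis vectors

eps = 1e-9
for _ in range(200):
    # Lee-Seung multiplicative updates minimizing ||V - W H||_F^2;
    # the multiplicative form keeps W and H nonnegative throughout.
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

print("reconstruction error:", np.linalg.norm(V - W @ H))
```

Because the factors stay nonnegative, every sample is reconstructed as a purely additive combination of the rows of H, which is what makes the learned basis vectors behave like interpretable parts.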

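The third technique can be sketched in a similarly minimal way, assuming tied weights, an L1 sparsity penalty, and a projection step that clamps the weights to be nonnegative after each gradient update; the toy "parts" data and all hyperparameters below are illustrative, not those used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy nonnegative data: each sample is the sum of two additive "parts"
# (a hypothetical stand-in for digit or face images).
parts = np.array([[1, 1, 0, 0, 0, 0],
                  [0, 0, 1, 1, 0, 0],
                  [0, 0, 0, 0, 1, 1]], dtype=float)
X = parts[rng.integers(0, 3, 200)] + parts[rng.integers(0, 3, 200)]

n_in, n_hid = X.shape[1], 3
W = rng.uniform(0, 0.1, (n_in, n_hid))   # tied encoder/decoder weights, kept nonnegative
b = np.zeros(n_hid)                      # hidden bias
c = np.zeros(n_in)                       # reconstruction bias

lr, l1 = 0.05, 1e-3
for epoch in range(500):
    H = sigmoid(X @ W + b)               # hidden code
    Xr = H @ W.T + c                     # tied-weight linear reconstruction
    err = Xr - X                         # reconstruction error

    # Gradients of 0.5*||Xr - X||^2 for the tied weights and biases
    dH = (err @ W) * H * (1 - H)
    gW = X.T @ dH + err.T @ H            # encoder + decoder contributions
    gb = dH.sum(axis=0)
    gc = err.sum(axis=0)

    W -= lr * (gW / len(X) + l1 * np.sign(W))   # L1 penalty promotes sparsity
    b -= lr * gb / len(X)
    c -= lr * gc / len(X)
    W = np.maximum(W, 0.0)               # projection step: enforce nonnegative weights

print("learned receptive fields (columns of W):")
print(np.round(W, 2).T)
```

The projection step carries the constraint: after training, each column of W tends to align with one additive part of the input, so the hidden units act as interpretable receptive fields rather than arbitrary mixtures of positive and negative terms.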

Biographical note: Jacek M. Zurada (IEEE Life Fellow) is a Professor in the Electrical and Computer Engineering Department at the University of Louisville, Kentucky. He has published 390 journal and conference papers in the areas of neural networks, computational intelligence, data mining, image processing, and VLSI circuits, has authored or co-authored three books, and has co-edited a number of volumes. His work has been cited over 10,600 times.

From 1998 to 2003 he was the Editor-in-Chief of the IEEE Transactions on Neural Networks. He is now an Associate Editor of Neurocomputing, Schedae Informaticae, and the International Journal of Applied Mathematics and Computer Science, an Advisory Editor of the International Journal of Information Technology and Intelligent Computing, and Editor of the Springer Natural Computing Book Series.

He has served the profession in various capacities, including as IEEE VP-Technical Activities (TAB Chair, 2013-15), President of the IEEE Computational Intelligence Society (2004-05), and a member of its AdCom (2009-14, 2016). He chaired the IEEE TAB Periodicals Review and Advisory Committee and the IEEE TAB Periodicals Committee in 2010-13, was Vice-Chair of PSPB in 2011, and was a member of the PSPB Strategic Planning Committee in 2010. He is a candidate for 2018 IEEE President-Elect (2019 President).

Dr. Zurada has received a number of awards for distinction in research, teaching, and service from IEEE and other organizations. He holds the title of National Professor of Poland, has received four honorary professorships from Chinese universities, and is a member of the Polish Academy of Sciences. He is a Distinguished Speaker of IEEE SMC.
