Deep randomized neural networks
Abstract:
Randomized Neural Networks explore the behavior of neural processing
systems where the majority of connections are fixed, either in a
stochastic or a deterministic fashion. Typical examples of such systems
consist of multi-layered neural network architectures where the
connections to the hidden layer(s) are left untrained after
initialization. Limiting the training algorithms to operate on a reduced
set of weights inherently characterizes the class of Randomized Neural
Networks with a number of intriguing features. Among them, the extreme
efficiency of the resulting learning processes is undoubtedly a striking
advantage over fully trained architectures. Moreover, despite
these simplifications, randomized neural systems possess
remarkable properties both in practice, achieving state-of-the-art
results in multiple domains, and in theory, enabling the analysis of
intrinsic properties of neural architectures (e.g., before the
hidden-layer connections are trained). In recent years, the study of
Randomized Neural Networks has been extended towards deep architectures,
opening new research directions to the design of effective yet extremely
efficient deep learning models in vectorial as well as in more complex
data domains.
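The core idea above, fixing the hidden connections at initialization and training only a linear readout, can be illustrated with a minimal NumPy sketch. This is an illustrative toy example, not code from the tutorial: the data, hidden-layer size, activation, and regularization strength are all assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: learn y = sin(x) from noisy samples.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# Random, untrained hidden layer: weights and biases are
# drawn once and then kept fixed.
n_hidden = 100
W = rng.normal(size=(1, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)  # hidden representations of the inputs

# Only the linear readout is trained, here in closed form
# via ridge regression -- no backpropagation is needed.
lam = 1e-3
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)

y_hat = H @ beta
mse = np.mean((y - y_hat) ** 2)
```

Because the only trained parameters solve a linear least-squares problem, learning reduces to a single closed-form step, which is the source of the efficiency advantage discussed above.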
This tutorial will cover all the major aspects regarding the design and
analysis of Randomized Neural Networks, and some of the key results with
respect to their approximation capabilities. In particular, the tutorial
will first introduce the fundamentals of randomized neural models in the
context of feedforward networks (i.e., Random Vector Functional Link and
equivalent models), convolutional filters, and recurrent systems (i.e.,
Reservoir Computing networks). Then, it will focus specifically on
recent results in the domain of deep randomized systems, and their
application to structured domains.
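For the recurrent case, the Reservoir Computing networks mentioned above apply the same principle over time: a fixed random recurrent layer (the reservoir) encodes the input history, and only a linear readout is trained. The sketch below is a simplified Echo State Network-style example under assumed settings (a sine-wave one-step-ahead prediction task, a 50-unit reservoir, spectral radius 0.9, and an arbitrary washout length); it is not a reference implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy task: one-step-ahead prediction of a sine wave.
T = 500
u = np.sin(0.2 * np.arange(T))

# Fixed random reservoir, rescaled so its spectral radius is
# below 1 (a common recipe related to the echo state property).
n_res = 50
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=n_res)

# Drive the reservoir with the input; its weights stay untrained.
states = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W_in * u[t] + W @ x)
    states[t] = x

# Discard an initial washout, then train only the linear readout
# by ridge regression on the collected reservoir states.
washout = 100
H, target = states[washout:-1], u[washout + 1:]
lam = 1e-6
w_out = np.linalg.solve(H.T @ H + lam * np.eye(n_res), H.T @ target)
mse = np.mean((H @ w_out - target) ** 2)
```

As in the feedforward case, training touches only the readout weights, so the recurrent dynamics never need to be differentiated through.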
Biographical notes:
Claudio Gallicchio is Assistant Professor at the Department of Computer Science, University of Pisa. He is Chair of the IEEE CIS Task Force on Reservoir Computing, and a member of the IEEE CIS Data Mining and Big Data Analytics Technical Committee and of the IEEE CIS Task Force on Deep Learning. Claudio Gallicchio has organized several events (special sessions and workshops) at major international conferences (including IJCNN/WCCI, ESANN, ICANN) on themes related to Randomized Neural Networks. He serves on several program committees of conferences and workshops in Machine Learning and Artificial Intelligence, and has been an invited speaker at several national and international conferences. His research interests include Machine Learning, Deep Learning, Randomized Neural Networks, Reservoir Computing, Recurrent and Recursive Neural Networks, and Graph Neural Networks.
Simone Scardapane is Assistant Professor at the "Sapienza" University of Rome. He is active as a co-organizer of special sessions and special issues on themes related to Randomized Neural Networks and randomized Machine Learning approaches. His research interests include Machine Learning, Neural Networks, Reservoir Computing and Randomized Neural Networks, Distributed and Semi-supervised Learning, Kernel Methods, and Audio Classification. Simone Scardapane is an Honorary Research Fellow with the CogBID Laboratory, University of Stirling, Stirling, U.K. He is a co-organizer of the Rome Machine Learning & Data Science Meetup, which holds monthly events in Rome, and a member of the advisory board for Codemotion Italy. He is also a co-founder of the Italian Association for Machine Learning, a not-for-profit organization with the aim of promoting machine learning concepts among the public. In 2017 he was certified as a Google Developer Expert for machine learning. Currently, he is the track director for the CNR-sponsored "Advanced School of AI" (https://as-ai.org/governance/).