On December 19th, 2018, there was an interesting workshop related to Artificial Intelligence.
One interesting topic concerned deep versus shallow neural networks. Using the Tensor Train (TT) decomposition, Dr. Oseledets proved that deep neural networks can outperform shallow ones. For the class of Multiplicative RNNs (Y. Wu, S. Zhang, Y. Zhang, Y. Bengio, R. Salakhutdinov, "On Multiplicative Integration with Recurrent Neural Networks", 2016), he showed that Multiplicative RNNs are equivalent to the Tensor Train decomposition, while shallow nets correspond to the canonical polyadic (CP) decomposition. Using a two-sided reduction, he proved the theorem that "a random d-dimensional TT-tensor with probability 1.0 has exponentially large CP-rank"; from the neural network point of view, this means an RNN (of the form discussed earlier) with random weights can be exactly mimicked only by a shallow net of exponentially larger width. Moreover, they announced an analytical method for compressing neural networks using low-rank approximation.
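To make the Tensor Train idea concrete, below is a minimal sketch (not the method presented at the workshop) of the classical TT-SVD procedure in NumPy: it splits a d-dimensional tensor into a chain of TT cores via sequential truncated SVDs, the same low-rank truncation that underlies compression of network weights. The function names, the toy tensor, and the chosen max_rank are all illustrative assumptions.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Minimal TT-SVD sketch: decompose a d-dimensional tensor into TT cores
    via sequential truncated SVDs (max_rank caps every TT-rank)."""
    dims = tensor.shape
    cores, rank = [], 1
    mat = tensor.reshape(dims[0], -1)
    for n in dims[:-1]:
        mat = mat.reshape(rank * n, -1)              # unfold: merge current rank with mode n
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        new_rank = min(max_rank, len(s))             # truncate to the target TT-rank
        cores.append(U[:, :new_rank].reshape(rank, n, new_rank))
        mat = s[:new_rank, None] * Vt[:new_rank, :]  # carry the remainder forward
        rank = new_rank
    cores.append(mat.reshape(rank, dims[-1], 1))     # last core absorbs the remainder
    return cores

def tt_to_full(cores):
    """Contract the TT cores back into a full tensor (for an error check)."""
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.squeeze(axis=(0, -1))

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4, 4, 4))                # a toy 4-dimensional tensor
cores = tt_svd(x, max_rank=3)
approx = tt_to_full(cores)
print("relative error:", np.linalg.norm(x - approx) / np.linalg.norm(x))
```

The same truncation step (keeping only the leading singular values) is what makes low-rank approximation attractive for compression: a dense layer factored this way stores far fewer parameters at the cost of a controlled approximation error.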
Extended version: https://goo.gl/vNBxcv