Using Recurrent Networks for Dimensionality Reduction
Author(s)
Jones, Michael J.
Download: AITR-1396.ps (2.066 MB)
Abstract
This report explores how recurrent neural networks can be used to learn high-dimensional mappings. Since recurrent networks are as powerful as Turing machines, an interesting question is how they can be used to simplify the problem of learning from examples. The main obstacle to learning high-dimensional functions is the curse of dimensionality, which roughly states that the number of examples needed to learn a function grows exponentially with the input dimension. This report proposes a way around this problem: use a recurrent network to decompose a high-dimensional function into many lower-dimensional functions connected in a feedback loop.
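The decomposition the abstract describes can be pictured with a small sketch. The Python fragment below is an illustrative assumption, not the report's actual construction: the ring wiring, the tanh subfunctions, and all names are invented here. It builds an n-dimensional mapping out of n scalar subfunctions, each of which sees only one input dimension plus one fed-back value, so no individual function that must be learned is itself high-dimensional.

import numpy as np

def make_unit(w_in, w_fb, b):
    # A scalar unit that sees only one input dimension and one feedback value
    # (a toy stand-in for a low-dimensional learned function).
    return lambda x_i, fb: np.tanh(w_in * x_i + w_fb * fb + b)

rng = np.random.default_rng(0)
n = 4
units = [make_unit(*rng.normal(size=3)) for _ in range(n)]

def recurrent_decomposition(x, steps=20):
    # Iterate the feedback loop: unit i consumes input dimension x[i] and the
    # previous output of its neighbor; the state after `steps` iterations is
    # taken as the network's n-dimensional output.
    state = np.zeros(n)
    for _ in range(steps):
        state = np.array([units[i](x[i], state[(i - 1) % n])
                          for i in range(n)])
    return state

x = rng.normal(size=n)
print(recurrent_decomposition(x))

The point of the sketch is that each learnable piece is low-dimensional; the coupling between dimensions is recovered by iterating the feedback loop rather than by fitting a single high-dimensional map, which is how the decomposition sidesteps the exponential sample requirement.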
Date issued
1992-09-01
Other identifiers
AITR-1396
Series/Report no.
AITR-1396