Last semester's Neuroinformatik (neuroinformatics) course had a chapter on Hopfield networks.
A Hopfield net is a neural network with feedback, i.e. the output of the net at time t becomes the input of the net at time t + 1. For example, the output of neuron j at time t + 1 is given by

$$ x_j(t+1) = \operatorname{sgn}\Big( \sum_{i=1}^{N} w_{ji}\, x_i(t) - \theta_j \Big) $$

where \theta_j is the threshold of neuron j and N is the number of neurons in the Hopfield net.
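The update rule above can be sketched in a few lines of NumPy. This is a minimal illustration, not code from the course; the names `update`, `W`, and `theta` are my own choices, and states are assumed to be ±1 vectors with sgn(0) treated as +1:

```python
import numpy as np

def update(x, W, theta):
    """One synchronous update step of a Hopfield net.
    x: state vector with entries +1/-1, W: weight matrix, theta: thresholds.
    Implements x_j(t+1) = sgn(sum_i w_ji * x_i(t) - theta_j)."""
    h = W @ x - theta          # local field of each neuron
    return np.where(h >= 0, 1, -1)   # sgn, with sgn(0) taken as +1
```

Repeatedly applying `update` drives the state toward a fixed point (a stored pattern), which is the recall dynamics described next.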
If the weights are initialized suitably, the Hopfield net can be used as an autoassociative memory that recognizes a certain number of patterns. When presented with an initial input, the net will converge to the learned pattern that most closely resembles that input. To achieve this, the weights need to be initialized as follows:

$$ w_{ij} = \frac{1}{N} \sum_{\mu=1}^{p} x_i^{\mu} x_j^{\mu}, \qquad w_{ii} = 0 $$

where the vectors x^\mu (\mu = 1, \dots, p) are the patterns to be learned.
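The Hebbian initialization and the recall process can be sketched together. This is a hedged toy example, not the course's code: `train_hebbian` and `recall` are names I chose, thresholds are set to zero, and the demo patterns are made up for illustration:

```python
import numpy as np

def train_hebbian(patterns):
    """Hebbian weight initialization: w_ij = (1/N) * sum_mu x_i^mu x_j^mu,
    with a zero diagonal (no self-connections)."""
    X = np.asarray(patterns, dtype=float)   # shape (p, N), entries +1/-1
    N = X.shape[1]
    W = (X.T @ X) / N                       # sum of outer products, scaled
    np.fill_diagonal(W, 0)                  # remove self-connections
    return W

def recall(x, W, steps=10):
    """Iterate the synchronous update (theta = 0) until the state is stable."""
    x = np.asarray(x)
    for _ in range(steps):
        nxt = np.where(W @ x >= 0, 1, -1)
        if np.array_equal(nxt, x):
            break
        x = nxt
    return x
```

Feeding in a corrupted version of a stored pattern, the dynamics flip the wrong bits back, which is exactly the autoassociative recall described above.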
Hopfield nets have a scalar value associated with each state of the network, referred to as the "energy" E of the network, where:

$$ E = -\frac{1}{2} \sum_{i,j} w_{ij}\, x_i x_j + \sum_{j} \theta_j x_j $$

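The energy function is straightforward to compute; each update step never increases it, which is why the dynamics settle into a fixed point. A minimal sketch, assuming a symmetric zero-diagonal W as produced by the Hebbian rule (the function name `energy` is my own):

```python
import numpy as np

def energy(x, W, theta):
    """Energy of state x: E = -1/2 * sum_ij w_ij x_i x_j + sum_j theta_j x_j."""
    return -0.5 * (x @ W @ x) + theta @ x
```

A stored pattern sits at a local minimum of E, so a noisy probe has strictly higher energy than the pattern it converges to.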
Simply put, the learned "knowledge" is not stored inside the neurons themselves but in the connections between them, and a neuron's output is determined by the inputs from the neurons connected to it. This is quite similar to how human memory works.
As an aside, an analogy just occurred to me: science is not the global optimum; God is. Science is only the current local optimum, the best solution that can be found in limited time and limited space. As long as we are not God, the answers science provides are like this local optimum: a reality we can only accept. And to escape that limitation, we need to randomly reset the starting point of the search from time to time.