Neural Networks: Representation

History:

  • Origins: algorithms that try to mimic the brain
  • Widely used in the 80s and early 90s
  • Recent resurgence: state-of-the-art technique for many applications

Neuron Model:

logistic unit

  1. Sigmoid (logistic) activation function

  2. x is the input and h_θ(x) is the output

  3. θ is the "weights"

  4. The graph of g(z) = 1 / (1 + e^(−z)) looks like:

    (Screenshot 2017-04-19 2.35.27 PM)

    (Screenshot 2017-04-19 12.06.24 AM)

  5. x_0 and a_0^(2) are the bias units.

  6. Every layer except the input layer and the output layer is a hidden layer.

    (Screenshot 2017-04-19 12.09.56 AM)

  7. Pay close attention to the notation, especially the meaning of a_i^(j) (the activation of unit i in layer j) and Θ^(j) (the matrix of weights mapping layer j to layer j+1).

    (Screenshot 2017-04-19 12.19.35 AM)

  8. Θ^(j) has dimension s_{j+1} × (s_j + 1), where s_j is the number of units in layer j.
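A single logistic unit from point 1–3 can be sketched in NumPy as follows. The weight and input values here are illustrative, not from the notes; x[0] = 1 plays the role of the bias unit.

```python
import numpy as np

def sigmoid(z):
    """Sigmoid (logistic) activation: g(z) = 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative weights theta and input x; x[0] = 1 is the bias unit.
theta = np.array([-1.0, 2.0, 0.5])
x = np.array([1.0, 0.3, 0.8])

# The unit's output: h_theta(x) = g(theta^T x).
h = sigmoid(theta @ x)
```

Note that g maps any real z into (0, 1), with g(0) = 0.5, which is why the output can be read as a probability.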

Forward propagation: Vectorized implementation

(Screenshot 2017-04-19 2.06.44 PM)

  1. a^(1) = x
  2. We must add the bias unit a_0 = 1 before computing z^(j+1) in every layer
  3. In the picture above: a^(1) = x is 4×1 (with the bias unit), Θ^(1) is 3×4, z^(2) = Θ^(1) a^(1) is 3×1, and a^(2) = g(z^(2)) is 3×1
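The steps above can be sketched as a vectorized forward pass. The weight values are random placeholders (not from the notes); only the shapes follow the example: 3 input features plus a bias, a 3-unit hidden layer, and a single output unit.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
Theta1 = rng.standard_normal((3, 4))   # s_2 = 3 units, s_1 + 1 = 4 inputs incl. bias
Theta2 = rng.standard_normal((1, 4))   # maps layer 2 (plus bias) to the output

x = np.array([0.5, -1.2, 2.0])         # 3 raw input features

a1 = np.concatenate(([1.0], x))        # add bias a_0 = 1  -> shape (4,)
z2 = Theta1 @ a1                       # shape (3,)
a2 = sigmoid(z2)                       # shape (3,)
a2 = np.concatenate(([1.0], a2))       # add bias again before the next layer -> (4,)
z3 = Theta2 @ a2                       # shape (1,)
h = sigmoid(z3)                        # network output h_theta(x)
```

The key point from step 2 is visible in the code: the bias unit is prepended before *every* matrix multiplication, which is what makes Θ^(j) have s_j + 1 columns.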

Examples and intuitions

(Screenshot 2017-04-19 2.41.59 PM)

(Screenshot 2017-04-19 2.42.05 PM)

(Screenshot 2017-04-19 2.42.19 PM)

  1. Θ determines the function the unit computes
  2. Pay attention to g(z)
  3. We can combine units together to compute more complex functions
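A concrete sketch of points 1 and 3, assuming the weight values commonly used in this course for the logic-gate examples (e.g. θ = [−30, 20, 20] for AND): each choice of Θ turns the same logistic unit into a different logical function, and chaining the units yields XNOR, which a single unit cannot compute.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def unit(theta0, theta1, theta2):
    """A logistic unit with fixed weights: g(theta0 + theta1*x1 + theta2*x2)."""
    return lambda x1, x2: sigmoid(theta0 + theta1 * x1 + theta2 * x2)

# Theta determines the function (assumed classic weight choices):
AND = unit(-30, 20, 20)    # ~1 only when x1 = x2 = 1
OR  = unit(-10, 20, 20)    # ~1 when either input is 1
NOR = unit(10, -20, -20)   # (NOT x1) AND (NOT x2)

def XNOR(x1, x2):
    # Combining units: x1 XNOR x2 = (x1 AND x2) OR (NOR of x1, x2).
    a1 = AND(x1, x2)       # hidden unit 1
    a2 = NOR(x1, x2)       # hidden unit 2
    return OR(a1, a2)      # output unit
```

Because g saturates quickly, each unit's output is within about 10⁻⁴ of 0 or 1, so the hidden activations can safely be fed into the next logical unit.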