
The initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing `initialize_parameters_deep`, you should make sure that the dimensions match between adjacent layers. Recall that $n^{[l]}$ is the number of units in layer $l$. In general, $W^{[l]}$ has shape $(n^{[l]}, n^{[l-1]})$ and $b^{[l]}$ has shape $(n^{[l]}, 1)$. For example, if the size of our input $X$ is $(12288, 209)$ (with $m = 209$ examples), then $W^{[1]}$ has shape $(n^{[1]}, 12288)$ and $b^{[1]}$ has shape $(n^{[1]}, 1)$.
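The shape rule above can be sketched as a loop over the layers. This is a minimal sketch, not the graded solution; the scaling factor `0.01` and the random seed are illustrative choices:

```python
import numpy as np

def initialize_parameters_deep(layer_dims):
    """Initialize W[l] with small random values and b[l] with zeros.

    layer_dims -- list of layer sizes [n_x, n_h1, ..., n_y],
                  where layer_dims[0] is the input size.
    """
    np.random.seed(3)
    parameters = {}
    L = len(layer_dims)  # number of entries, including the input layer
    for l in range(1, L):
        # W[l] has shape (n[l], n[l-1]); b[l] has shape (n[l], 1)
        parameters["W" + str(l)] = np.random.randn(layer_dims[l], layer_dims[l - 1]) * 0.01
        parameters["b" + str(l)] = np.zeros((layer_dims[l], 1))
    return parameters

params = initialize_parameters_deep([12288, 20, 7, 1])
print(params["W1"].shape)  # (20, 12288)
print(params["b1"].shape)  # (20, 1)
```

Checking `params["W1"].shape` against $(n^{[1]}, 12288)$ before moving on is an easy way to catch dimension mismatches early.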
Remember that when we compute $WX + b$ in Python, NumPy carries out broadcasting: the column vector $b$ is added to every column of $WX$. For example, if:
$$ W = \left[\begin{matrix}
j & k & l \\
m & n & o \\
p & q & r
\end{matrix}\right]\;\;\; X = \left[\begin{matrix}
a & b & c \\
d & e & f \\
g & h & i
\end{matrix}\right]\;\;\; b = \left[\begin{matrix}
s \\
t \\
u
\end{matrix}\right]$$
Then $WX + b$ will be:
$$ WX + b =
\left[\begin{matrix}
(ja + kd + lg) + s & (jb + ke + lh) + s & (jc + kf + li) + s \\
(ma + nd + og) + t & (mb + ne + oh) + t & (mc + nf + oi) + t \\
(pa + qd + rg) + u & (pb + qe + rh) + u & (pc + qf + ri) + u
\end{matrix}\right] $$
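The broadcasting described above can be verified numerically. The concrete values below are illustrative; using the identity matrix for $X$ makes the effect of adding $b$ easy to read off:

```python
import numpy as np

# 3x3 weight matrix W, 3x3 input X, and a (3, 1) bias column b
W = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
X = np.eye(3)           # identity, so WX == W
b = np.array([[10.], [20.], [30.]])

# np.dot(W, X) has shape (3, 3); adding the (3, 1) vector b broadcasts
# b across the columns, so b[i] is added to every entry of row i.
Z = np.dot(W, X) + b
print(Z)
# [[11. 12. 13.]
#  [24. 25. 26.]
#  [37. 38. 39.]]
```

Note that broadcasting requires $b$ to be a column vector of shape $(n^{[l]}, 1)$; a flat array of shape $(3,)$ would broadcast across rows instead and give a different (wrong) result.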



