Erfan Hamdi 1401
erfan.hamdi@gmail.com
| Deep Learning | PINN |
|---|---|
| DL with Python, Chollet | Maziar Raissi Blog |
| DL with PyTorch, Stevens | PINN Part I |
| PyTorch Docs | PINN Part II |
Stratify: for an imbalanced dataset, stratified splitting keeps the same class distribution in the training and test sets.
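A minimal sketch of stratified splitting with scikit-learn's `train_test_split` (the 90/10 toy dataset is made up for illustration):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Imbalanced toy dataset: 90 samples of class 0, 10 of class 1.
X = np.arange(100).reshape(-1, 1)
y = np.array([0] * 90 + [1] * 10)

# stratify=y preserves the 90/10 class ratio in both splits.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)
print(np.bincount(y_train))  # [72  8]
print(np.bincount(y_test))   # [18  2]
```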
\[\begin{aligned} e & = (a+b)(b+1) \\ c & = a + b\\ d &= b + 1 \\ e &= c \cdot d \end{aligned} \]
\[\begin{aligned} \frac{\partial e}{\partial a} &= \frac{\partial e}{\partial c} \cdot \frac{\partial c}{\partial a} = d \cdot 1 = 2 \quad \text{at } a = 2,\ b = 1\end{aligned}\]
Everything is a Tensor
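A quick autograd check of the chain-rule example above; the values a = 2, b = 1 are assumed only to reproduce the gradient of 2, and every node in the graph is a `torch.Tensor`:

```python
import torch

# The computational graph from above: e = (a + b) * (b + 1),
# evaluated at a = 2, b = 1 (so c = 3, d = 2, e = 6).
a = torch.tensor(2.0, requires_grad=True)
b = torch.tensor(1.0, requires_grad=True)
c = a + b
d = b + 1
e = c * d
e.backward()  # traverse the graph, accumulating gradients into leaves

print(a.grad)  # tensor(2.) : de/da = de/dc * dc/da = d * 1 = 2
print(b.grad)  # tensor(5.) : de/db = d * 1 + c * 1 = 2 + 3 = 5
```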
The statistics of the input to each layer change from one training step to the next as the earlier layers update, so we have to normalize the input to each layer just as we normalized the raw input to the first layer (batch normalization).
If this is not done properly, training won't even converge!
Paper: "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift".
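A minimal sketch of where batch norm sits in a network; the layer sizes here are arbitrary:

```python
import torch
import torch.nn as nn

# BatchNorm1d renormalizes the 64 hidden activations over each
# mini-batch, so the next layer always sees inputs with stable statistics.
model = nn.Sequential(
    nn.Linear(20, 64),
    nn.BatchNorm1d(64),
    nn.ReLU(),
    nn.Linear(64, 2),
)

x = torch.randn(32, 20)  # mini-batch of 32 samples, 20 features each
out = model(x)           # train mode: normalize with batch statistics

model.eval()             # eval mode: use the running mean/variance instead
out = model(x)
```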
To avoid overfitting, we can randomly drop out some of the neurons in a layer during training (dropout).
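A minimal sketch of `nn.Dropout`; p = 0.5 is just an illustrative rate:

```python
import torch
import torch.nn as nn

# nn.Dropout zeroes each activation with probability p during training
# and rescales the survivors by 1/(1-p); in eval mode it is the identity.
drop = nn.Dropout(p=0.5)
x = torch.ones(8)

drop.train()
print(drop(x))  # roughly half zeros, the rest scaled to 2.0

drop.eval()
print(drop(x))  # unchanged: dropout is disabled at evaluation time
```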
Most commonly used loss functions: