
= A-Team =

== Neural Network ==
After the initial profile it is obvious that the dot product function consumes 97.94% of our run time. The transpose function consumes another 1.45%, which seems measly; however, transpose is also called during back propagation, along with two rectifiers (activation functions), relu and reluPrime, where reluPrime is a binary activation function.
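For reference, the kind of naive matrix dot product that tends to dominate a profile like this is sketched below. The <code>Matrix</code> type and the signature are assumptions for illustration, not our actual code.

<source lang="cpp">
#include <cstddef>
#include <vector>

// Hypothetical matrix representation, assumed for this sketch.
using Matrix = std::vector<std::vector<float>>;

// Naive O(n^3) dot product (matrix multiply); a triple loop like this
// is the usual reason the dot product dwarfs everything else in a profile.
Matrix dot(const Matrix& a, const Matrix& b) {
    std::size_t rows = a.size(), inner = b.size(), cols = b[0].size();
    Matrix c(rows, std::vector<float>(cols, 0.0f));
    for (std::size_t i = 0; i < rows; ++i)
        for (std::size_t j = 0; j < cols; ++j)
            for (std::size_t k = 0; k < inner; ++k)
                c[i][j] += a[i][k] * b[k][j];
    return c;
}
</source>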
Relu: <math>f(x) = \begin{cases} 0 & x < 0 \\ x & \text{otherwise} \end{cases}</math>

ReluPrime: <math>f'(x) = \begin{cases} 0 & x < 0 \\ 1 & \text{otherwise} \end{cases}</math>
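As a sketch, both rectifiers can be applied element-wise over the same hypothetical <code>Matrix</code> type; the signatures are assumptions, not our actual interface.

<source lang="cpp">
#include <vector>

using Matrix = std::vector<std::vector<float>>;

// Element-wise rectifier: f(x) = 0 for x < 0, x otherwise.
Matrix relu(Matrix m) {
    for (auto& row : m)
        for (auto& x : row)
            if (x < 0.0f) x = 0.0f;
    return m;
}

// Its derivative, a binary function: f'(x) = 0 for x < 0, 1 otherwise.
Matrix reluPrime(Matrix m) {
    for (auto& row : m)
        for (auto& x : row)
            x = (x < 0.0f) ? 0.0f : 1.0f;
    return m;
}
</source>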
// Back propagation
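During training, these routines are combined in the back propagation step, which is why transpose and reluPrime reappear in the profile. Below is a minimal sketch of one layer of that step, assuming the hypothetical <code>Matrix</code>, <code>dot</code>, <code>transpose</code>, <code>hadamard</code>, and <code>reluPrime</code> names used above; it is not our actual implementation.

<source lang="cpp">
#include <cstddef>
#include <vector>

using Matrix = std::vector<std::vector<float>>;

Matrix dot(const Matrix&, const Matrix&);  // as sketched above
Matrix reluPrime(Matrix);                  // as sketched above

// Swap rows and columns; the 1.45% entry in the profile.
Matrix transpose(const Matrix& m) {
    Matrix t(m[0].size(), std::vector<float>(m.size()));
    for (std::size_t i = 0; i < m.size(); ++i)
        for (std::size_t j = 0; j < m[0].size(); ++j)
            t[j][i] = m[i][j];
    return t;
}

// Element-wise product used to gate the error by the rectifier's derivative.
Matrix hadamard(Matrix a, const Matrix& b) {
    for (std::size_t i = 0; i < a.size(); ++i)
        for (std::size_t j = 0; j < a[i].size(); ++j)
            a[i][j] *= b[i][j];
    return a;
}

// One layer of back propagation: the error is pushed back through the
// weights (a dot with a transpose) and gated by reluPrime, so all three
// hot functions from the profile are called again here.
Matrix backPropagateLayer(const Matrix& error, const Matrix& weights,
                          const Matrix& preActivation) {
    return hadamard(dot(error, transpose(weights)), reluPrime(preActivation));
}
</source>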