A-Team

Initial Profile
[[File:neuralnet_chart.jpg]]
After the initial profile it is obvious that the dot product function consumes 97.94% of our run time. Additionally, the transpose function consumes 1.45%, which seems measly; however, transpose is also called during back propagation, along with two rectifiers (activation functions), relu and reluPrime, where reluPrime is a binary activation function:

<math>\text{relu}(x) = \begin{cases} 0 & \text{for } x < 0 \\ x & \text{otherwise} \end{cases}</math>

<math>\text{reluPrime}(x) = \begin{cases} 0 & \text{for } x < 0 \\ 1 & \text{otherwise} \end{cases}</math>
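The profile names dotProduct, relu, and reluPrime but not their implementations. Below is a minimal sketch of what these hot functions likely look like; the row-major std::vector&lt;float&gt; layout and the exact signatures are assumptions. The naive triple loop in dotProduct is O(n·m·p), which is consistent with it dominating the run time.

<source lang="cpp">
#include <cstddef>
#include <vector>

// Naive O(n*m*p) matrix product -- the hotspot identified by the profile.
// A is n x m, B is m x p, result is n x p (row-major; signature assumed).
std::vector<float> dotProduct(const std::vector<float>& A,
                              const std::vector<float>& B,
                              std::size_t n, std::size_t m, std::size_t p) {
    std::vector<float> C(n * p, 0.0f);
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < p; ++j)
            for (std::size_t k = 0; k < m; ++k)
                C[i * p + j] += A[i * m + k] * B[k * p + j];
    return C;
}

// relu: f(x) = 0 for x < 0, x otherwise
std::vector<float> relu(const std::vector<float>& z) {
    std::vector<float> out(z.size());
    for (std::size_t i = 0; i < z.size(); ++i)
        out[i] = z[i] < 0.0f ? 0.0f : z[i];
    return out;
}

// reluPrime: f(x) = 0 for x < 0, 1 otherwise (binary activation)
std::vector<float> reluPrime(const std::vector<float>& z) {
    std::vector<float> out(z.size());
    for (std::size_t i = 0; i < z.size(); ++i)
        out[i] = z[i] < 0.0f ? 0.0f : 1.0f;
    return out;
}
</source>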
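To see why transpose and reluPrime reappear in the backward pass, here is a minimal sketch of one hidden-layer back propagation step, continuing the sketch above (it reuses dotProduct and reluPrime). The transpose signature, the two-layer shapes (W2, z1), and the backpropHidden helper are assumptions for illustration; the profile names the functions but not their interfaces.

<source lang="cpp">
// Back propagation

// Transpose of an n x m row-major matrix (assumed signature).
std::vector<float> transpose(const std::vector<float>& A,
                             std::size_t n, std::size_t m) {
    std::vector<float> T(m * n);
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < m; ++j)
            T[j * n + i] = A[i * m + j];
    return T;
}

// Hidden-layer delta: dHidden = (dOutput . W2^T) gated elementwise by
// reluPrime(z1). This is why transpose and reluPrime show up in the
// profile alongside dotProduct during the backward pass.
std::vector<float> backpropHidden(const std::vector<float>& dOutput, // batch x outputs
                                  const std::vector<float>& W2,      // hidden x outputs
                                  const std::vector<float>& z1,      // batch x hidden
                                  std::size_t batch, std::size_t hidden,
                                  std::size_t outputs) {
    std::vector<float> W2T = transpose(W2, hidden, outputs);              // outputs x hidden
    std::vector<float> dHidden = dotProduct(dOutput, W2T, batch, outputs, hidden);
    std::vector<float> gate = reluPrime(z1);
    for (std::size_t i = 0; i < dHidden.size(); ++i)
        dHidden[i] *= gate[i];                                            // elementwise product
    return dHidden;
}
</source>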
