Mayuri Kale

ReLU Activation Function





Introduction


The Rectified Linear Unit (ReLU) is the most widely used activation function in deep learning models. The function returns 0 for any negative input, but for any positive value x it returns that value back, so it can be written as f(x) = max(0, x).
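As a quick illustration, here is a minimal sketch of the function in plain Python with NumPy (the input values below are arbitrary examples):

```python
import numpy as np

def relu(x):
    # f(x) = max(0, x): negative inputs become 0, positive inputs pass through unchanged
    return np.maximum(0, x)

print(relu(np.array([-3.0, -0.5, 0.0, 2.0, 7.0])))
# -> [0. 0. 0. 2. 7.]
```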



In a neural network, the activation function is responsible for transforming the summed weighted input of a node into the activation of that node, i.e. its output for that input.
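To make that concrete, here is a small sketch of a single node; the inputs, weights, and bias below are made-up numbers, not taken from any real model:

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

# Made-up inputs, weights, and bias for one node
inputs = np.array([0.5, -1.2, 3.0])
weights = np.array([0.8, 0.1, -0.4])
bias = 0.2

# The node first sums the weighted inputs...
z = np.dot(weights, inputs) + bias   # roughly -0.72
# ...then the activation function turns that sum into the node's output
output = relu(z)                     # 0.0, since z is negative
```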


The rectified linear activation function, or ReLU for short, is a piecewise linear function that outputs the input directly if it is positive and outputs zero otherwise. It has become the default activation function for many types of neural networks because a model that uses it is cheaper to train and often achieves better performance.
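For instance, using a framework such as PyTorch (just one possible choice; the layer sizes here are arbitrary), ReLU is typically placed between the linear layers of a model:

```python
import torch
import torch.nn as nn

# A small feed-forward network with ReLU after each hidden layer
model = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),        # applies max(0, x) element-wise
    nn.Linear(32, 16),
    nn.ReLU(),
    nn.Linear(16, 1),
)

x = torch.randn(4, 10)    # a batch of 4 example inputs
print(model(x).shape)     # torch.Size([4, 1])
```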





It is remarkable that such a simple function (one made up of just two linear pieces) can allow your model to account for non-linearities and interactions so well. But the ReLU activation function works well in most applications, and it is very widely used as a result.


Why It Works


Introducing Interactions and Non-linearities


Activation functions serve two primary purposes: 1) Help a model account for interaction effects.

What is an interaction effect? It is when one variable A affects a prediction differently depending on the value of another variable B. For example, if my model wanted to know whether a certain body weight meant an increased risk of diabetes, it would have to know a person's height. Some body weights indicate elevated risk for short people, while indicating good health for tall people. So the effect of body weight on diabetes risk depends on height, and we would say that weight and height have an interaction effect.
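Here is a rough sketch of how a single ReLU node can produce that kind of interaction; the coefficients are invented purely for illustration, not fitted to any data:

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def risk_score(weight_kg, height_cm):
    # One ReLU node with made-up coefficients
    return relu(0.1 * weight_kg - 0.04 * height_cm)

# The same 10 kg increase in body weight changes the score by about 1.0
# for a 150 cm person, but only by about 0.4 for a 190 cm person,
# because height determines how active the node is.
for height in (150, 190):
    print(height, risk_score(80, height) - risk_score(70, height))
```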


2) Help a model account for non-linear effects. This just means that if I graph a variable on the horizontal axis and my predictions on the vertical axis, the result is not a straight line. Said another way, the effect of increasing the predictor by one is different at different values of that predictor.
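As a rough sketch of that idea (again with made-up numbers), a prediction built out of ReLU units is piecewise linear rather than a single straight line, so a one-unit increase in the predictor has a different effect depending on where you start:

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def predict(x):
    # A toy prediction combining two ReLU units with invented weights
    return 1.0 * relu(x - 2.0) - 3.0 * relu(x - 5.0)

for x in (0.0, 3.0, 6.0):
    # Effect of increasing the predictor by one, at different starting points
    print(x, predict(x + 1.0) - predict(x))
# slope is 0 near x = 0, +1 near x = 3, and -2 near x = 6
```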


In this article, we explained the rectified linear activation function (ReLU) and how it works. Also, learn more about Artificial Intelligence.
