Linear vs nonlinear neural network?

余生分开走 2021-01-31 17:26

I'm new to machine learning and neural networks. I know how to build a nonlinear classification model, but my current problem has a continuous output. I've been searching for information on neural network regression, but so my question is: what makes a neural network non-linear, and can it be used to predict a continuous output?

7 Answers
  • 2021-01-31 17:32

    I'm sorry, but the current answers are either incorrect or incomplete. The activation function is NOT necessarily what makes a neural network non-linear.

    To understand this, we need to realize that we are talking about non-linearity in the parameters. For example, the predicted values of the following regression are considered linear predictions, despite non-linear transformations of the inputs, because the parameters enter linearly:

    $\hat{y} = \beta_0 + \beta_1 x + \beta_2 x^2 + \beta_3 \log(x)$ (1)
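
    To make (1) concrete, here is a minimal numpy sketch (my own illustration, not part of the original answer): ordinary least squares recovers the $\beta$'s because the model is linear in its parameters, even though $x^2$ and $\log(x)$ are non-linear in the input.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(1, 10, size=200)                  # positive so log(x) is defined
    y = 2.0 + 0.5 * x + 0.3 * x**2 - 1.5 * np.log(x)  # "true" coefficients

    # Design matrix with non-linear transformations of the input.
    # The model is still linear in the parameters beta.
    X = np.column_stack([np.ones_like(x), x, x**2, np.log(x)])

    beta, *_ = np.linalg.lstsq(X, y, rcond=None)      # closed-form linear fit
    print(beta)  # recovers approximately [2.0, 0.5, 0.3, -1.5]
    ```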

    Now for simplicity, let us consider a single neuron, single layer neural network:

    $\hat{y} = \sigma(wx + b)$ (2)

    If the transfer function is linear then:

    $\hat{y} = wx + b$ (3)

    As you have probably already noticed, this is basically a linear regression. Even if we were to add multiple inputs and neurons, each with a linear activation function, we would only have an ensemble of regressions (all linear in their parameters, and therefore this simple neural network is linear):

    $\hat{y}_j = \sum_{i=1}^{n} w_{ji} x_i + b_j, \quad j = 1, \ldots, m$ (4)

    Now going back to (3), let's add a layer, so that we have a neural network with 2 layers, one neuron each (both with linear activation functions):

    $z = w_1 x + b_1$ (first layer)

    $\hat{y} = w_2 z + b_2$ (second layer)

    Now notice:

    $\hat{y} = w_2 (w_1 x + b_1) + b_2$

    reduces to:

    $\hat{y} = \theta_1 x + \theta_2$

    where $\theta_1 = w_2 w_1$ and $\theta_2 = w_2 b_1 + b_2$

    Which means that our two-layer network (each layer with a single neuron) is not linear in its parameters, despite every activation function in the network being linear.
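
    A minimal numpy sketch of this collapse (my own illustration; the variable names follow the equations above): the two-layer linear network and the single collapsed linear model produce identical outputs.

    ```python
    import numpy as np

    w1, b1 = 0.7, -1.2   # first layer (single neuron, linear activation)
    w2, b2 = 2.5, 0.4    # second layer (single neuron, linear activation)

    x = np.linspace(-5, 5, 11)

    two_layer = w2 * (w1 * x + b1) + b2        # run x through both layers
    theta1, theta2 = w2 * w1, w2 * b1 + b2     # collapsed parameters
    one_layer = theta1 * x + theta2            # equivalent single linear model

    print(np.allclose(two_layer, one_layer))   # True: depth alone added no expressive power
    ```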

    Thus the answer to your question, "what makes a neural network non-linear" is: non-linearity in the parameters.

    This non-linearity in the parameters comes about in two ways: 1) having more than one layer with neurons in your network (as exhibited above), and 2) having non-linear activation functions.

    For an example of non-linearity coming about through non-linear activation functions, suppose our input space, weights, and biases are all constrained such that they are all strictly positive (for simplicity). Now using (2) (single layer, single neuron) and the activation function $\sigma(z) = z^2$, we have the following:

    $\hat{y} = (wx + b)^2$

    which reduces to:

    $\hat{y} = \theta_1 x^2 + \theta_2 x + \theta_3$

    where $\theta_1 = w^2$, $\theta_2 = 2wb$, and $\theta_3 = b^2$

    Now, ignoring whatever issues this neural network has, it should be clear that, at the very least, it is non-linear in the parameters, and that the non-linearity has been introduced solely by the choice of activation function.
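
    A quick numeric check of that reduction (my own sketch, using the $\sigma(z) = z^2$ activation assumed above):

    ```python
    import numpy as np

    w, b = 1.3, 0.8                      # strictly positive, as assumed above
    x = np.linspace(0.1, 5, 50)          # strictly positive inputs

    activated = (w * x + b) ** 2                    # sigma(wx + b) with sigma(z) = z^2
    theta1, theta2, theta3 = w**2, 2 * w * b, b**2  # reduced parameters
    reduced = theta1 * x**2 + theta2 * x + theta3

    print(np.allclose(activated, reduced))  # True: same model, non-linear in w and b
    ```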

    Finally, yes, neural networks can model complex relationships that cannot be modeled using linear models (e.g. simple regression). For an example of this, see the classic XOR problem: https://medium.com/@jayeshbahire/the-xor-problem-in-neural-networks-50006411840b
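
    To illustrate the XOR point (a minimal scikit-learn sketch, not from the original answer): a linear classifier cannot separate XOR, while a tiny MLP with a non-linear activation can.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier

    # The XOR truth table: no single line separates the two classes.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 1, 1, 0])

    linear = LogisticRegression().fit(X, y)
    mlp = MLPClassifier(hidden_layer_sizes=(4,), activation='tanh',
                        solver='lbfgs', random_state=0, max_iter=2000).fit(X, y)

    print("linear accuracy:", linear.score(X, y))  # stuck around 0.5
    print("MLP accuracy:   ", mlp.score(X, y))     # typically 1.0
    ```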

  • 2021-01-31 17:38

    For starters, a neural network can model essentially any function, not just linear ones. Have a look at this: http://neuralnetworksanddeeplearning.com/chap4.html.

    A neural network has non-linear activation layers, which are what give it its non-linear element.

    The function relating the input and the output is determined by the neural network and the amount of training it gets. If you supply two variables having a linear relationship, then your network will learn this, as long as you don't overfit. Similarly, a complex enough neural network can learn any function.

  • 2021-01-31 17:38

    I had the same struggle; most online courses use ANNs for classification, but you never actually solve a regression problem with them in the courses.

    What does make an ANN non-linear? The activation function.

    Even if you have an ANN with thousands of perceptrons and hidden units, if all the activations are linear (or not activated at all), you are just training a plain linear regression.

    But be careful: some activation functions (like sigmoid) have a range of values that acts like a linear function, so you may end up with a nearly linear model even with non-linear activations.

    How to predict a continuous output with an ANN? The same way as when you classify.

    It is the same problem: you just backpropagate the error (label - prediction) and update the weights. But don't forget to CHANGE THE ACTIVATION FUNCTION of the output layer to a continuous function (maybe ReLU if all labels are positive, or no activation on the output at all); the intermediate hidden layers can be activated however you wish.

    For small regression problems with ANNs you may need to start with a very small learning rate, since there will be a lot of variance at first while the error is still "unbounded".
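
    Putting that advice together, here is a minimal scikit-learn sketch (the data, architecture, and settings are my own illustration): non-linear hidden activations, a linear (identity) output, which MLPRegressor always uses for regression, and a small initial learning rate.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(500, 1))
    y = np.sin(X).ravel() + 0.1 * rng.normal(size=500)   # continuous target

    # Non-linear hidden activations; the output layer is left un-activated
    # (identity), which is what you want for regression. Start with a small
    # learning rate, as suggested above.
    model = MLPRegressor(hidden_layer_sizes=(32, 32), activation='relu',
                         solver='adam', learning_rate_init=1e-3,
                         max_iter=3000, random_state=0)
    model.fit(X, y)
    print("R^2:", model.score(X, y))   # should be close to 1 for this easy task
    ```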

    Hope this helps :)

  • 2021-01-31 17:39

    Any non-linearity from the input to output makes the network non-linear. In the way we usually think about and implement neural networks, those non-linearities come from activation functions.

    If we are trying to fit non-linear data and only have linear activation functions, our best approximation to the non-linear data will be linear since that's all we can compute. You can see an example of a neural network trying to fit non-linear data with only linear activation functions here.

    However, if we change the linear activation function to something non-linear like ReLU, then we can see a better non-linear fitting of the data. You can see that here.
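
    A minimal scikit-learn sketch of the same comparison (my own illustration, not the linked demos): two identical networks fit the same non-linear data, one restricted to identity activations and one using ReLU.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, size=(400, 1))
    y = (X ** 2).ravel()                      # clearly non-linear target

    def fit_score(act):
        m = MLPRegressor(hidden_layer_sizes=(16, 16), activation=act,
                         solver='lbfgs', max_iter=5000, random_state=0)
        return m.fit(X, y).score(X, y)

    print("identity R^2:", fit_score('identity'))  # best *linear* fit, poor here
    print("relu R^2:    ", fit_score('relu'))      # much better non-linear fit
    ```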

  • 2021-01-31 17:43

    When it comes to nonlinear regression, this is referring to how the weights affect the output. If a function is not linear with respect to the weights, then your problem is a nonlinear regression problem. So for example, let's look at a feedforward neural network with one hidden layer, where the activation function in the hidden layer is some function $\sigma$ and the output layer has linear activation functions. Given this, the mathematical representation can be:

    $\hat{y} = A^{T} \sigma(B^{T} x + b) + a$

    where we assume $\sigma$ can operate on scalars and vectors with this notation to make it easy. $A$, $B$, $b$, and $a$ are the weights you are aiming to estimate with the regression. If this were linear regression, $\sigma(z)$ would equal $z$, because that would make $\hat{y}$ linearly dependent on $A$ and $B$. But if $\sigma$ is nonlinear, say like $\tanh(z)$, then $\hat{y}$ is now nonlinearly dependent on the weights $B$.
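
    A minimal numpy sketch of that representation (shapes and values are my own illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hidden = 3, 5

    x = rng.normal(size=n_in)               # input vector
    B = rng.normal(size=(n_in, n_hidden))   # input-to-hidden weights
    b = rng.normal(size=n_hidden)           # hidden-layer biases
    A = rng.normal(size=n_hidden)           # hidden-to-output weights
    a = rng.normal()                        # output bias

    # y = A^T sigma(B^T x + b) + a, with a nonlinear sigma such as tanh:
    y_nonlinear = A @ np.tanh(B.T @ x + b) + a

    # With sigma(z) = z, the same expression collapses to an affine map of x:
    y_linear = A @ (B.T @ x + b) + a

    print(y_nonlinear, y_linear)
    ```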

    Now, provided you understand all that, I am surprised you haven't seen discussion of the nonlinear case, because that's pretty much all people talk about in textbooks and research. Methods like stochastic gradient descent, nonlinear conjugate gradient, RProp, and others are used to help find local minima (and hopefully good local minima) for these nonlinear regression problems, even though a global optimum is not typically guaranteed.

  • 2021-01-31 17:44

    Because the pre-activation w*x + b is a linear operation, you need extra elements (non-linear activation functions) to make the network non-linear.
