Can someone explain to me the difference between a cost function and the gradient descent equation in logistic regression?

再見小時候 2021-01-29 19:28

I'm going through the ML Class on Coursera on Logistic Regression and also the Manning book Machine Learning in Action. I'm trying to learn by implementing everything in Python.

5 Answers
  • 2021-01-29 19:58

    A cost function is something you want to minimize. For example, your cost function might be the sum of squared errors over your training set. Gradient descent is a method for finding the minimum of a function of multiple variables. So you can use gradient descent to minimize your cost function. If your cost is a function of K variables, then the gradient is the length-K vector that defines the direction in which the cost is increasing most rapidly. So in gradient descent, you follow the negative of the gradient to the point where the cost is a minimum. If someone is talking about gradient descent in a machine learning context, the cost function is probably implied (it is the function to which you are applying the gradient descent algorithm).
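
    As a rough illustration (a minimal sketch assuming NumPy arrays: a feature matrix X with a bias column and 0/1 labels y; the function names, learning rate, and iteration count are just illustrative), the cost, its gradient, and the descent loop for logistic regression could look like:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def cost(theta, X, y):
        # The quantity we want to minimize: cross-entropy over the training set.
        h = sigmoid(X @ theta)
        return -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))

    def gradient(theta, X, y):
        # Length-K vector pointing in the direction in which the cost increases fastest.
        h = sigmoid(X @ theta)
        return X.T @ (h - y) / len(y)

    def gradient_descent(X, y, alpha=0.1, iters=1000):
        # Repeatedly step against the gradient to drive the cost down.
        theta = np.zeros(X.shape[1])
        for _ in range(iters):
            theta -= alpha * gradient(theta, X, y)
        return theta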

  • 2021-01-29 20:00

    A cost function measures, roughly, at what cost (how much error) you are building your model; for a good model that cost should be as small as possible. To find the minimum of the cost function we use the gradient descent method, which gives the coefficient values at which the cost is minimal.

  • 2021-01-29 20:12

    Let's take a logistic regression model for binary classification as an example. During training, the model's output (predicted value) for any given input will deviate from the actual output (expected value). So the model needs to be trained to have minimal error (loss), so that it performs well with high accuracy.

    The function used to find the parameter values (m and c in the case of the linear equation y = mx + c) at which this error (loss) is minimal is called the cost function / loss function. "Loss function" usually refers to the loss for a single row/record of the training sample, while "cost function" refers to the loss over the entire training dataset.

    Now, how do we find the parameter values (m and c in our case) at which the minimum loss occurs? By using the gradient descent algorithm, which iteratively moves toward the point where the loss is minimal; the parameter values at that point are the ones used to build the model (say y = 0.5x + 2, where m = 0.5 and c = 2 are the values at which the loss is minimal).
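
    To make that concrete, here is a minimal sketch (the toy data, learning rate, and iteration count are assumed, purely for illustration) of gradient descent finding m and c by minimizing a mean-squared-error cost:

    import numpy as np

    # Toy data lying roughly on y = 0.5*x + 2 (values chosen only for illustration).
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = 0.5 * x + 2 + np.array([0.05, -0.03, 0.02, -0.04, 0.01])

    m, c = 0.0, 0.0      # arbitrary starting parameters
    alpha = 0.05         # learning rate

    for _ in range(5000):
        error = (m * x + c) - y
        # Partial derivatives of the mean squared error with respect to m and c.
        m -= alpha * 2 * np.mean(error * x)
        c -= alpha * 2 * np.mean(error)

    print(m, c)          # should converge near m = 0.5, c = 2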

  • 2021-01-29 20:19

    It's strange to think about it, but there is more than one measure for how "accurately" a line fits a set of data points.

    To assess how accurately a line fits the data, we have a "cost" function which can compare predicted vs. actual values and provide a "penalty" for how wrong it is.

    penalty = cost_function(predicted, actual)

    A naive cost function might just take the difference between the predicted and actual.

    More sophisticated functions will square the value, since we'd rather have many small errors than one large error.
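
    For instance (a tiny sketch with made-up numbers), a naive absolute-difference cost can rate "many small errors" and "one large error" the same, while a squared cost penalizes the single large error more:

    import numpy as np

    def naive_cost(predicted, actual):
        # Naive penalty: average absolute difference.
        return np.mean(np.abs(predicted - actual))

    def squared_cost(predicted, actual):
        # Squared penalty: one large error hurts more than many small ones.
        return np.mean((predicted - actual) ** 2)

    actual = np.array([1.0, 1.0, 1.0, 1.0])
    many_small = np.array([1.1, 0.9, 1.1, 0.9])   # four errors of 0.1
    one_large = np.array([1.4, 1.0, 1.0, 1.0])    # one error of 0.4

    print(naive_cost(many_small, actual), naive_cost(one_large, actual))      # 0.1 vs 0.1
    print(squared_cost(many_small, actual), squared_cost(one_large, actual))  # 0.01 vs 0.04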

    Additionally, each point has a different "sensitivity" to moving the line. Some points react very strongly to movement. Others react less strongly.

    Often, you can make a tradeoff, and move TOWARD a point that is sensitive, and AWAY from a point that is NOT sensitive. In that scenario, you get more than you give up.

    The "gradient" is a way of measuring how sensitive each point is to moving the line.

    This article does a good job of describing WHY there is more than one measure, and WHY some points are more sensitive than others:

    https://towardsdatascience.com/wrapping-your-head-around-gradient-descent-with-pictures-3fbd810235f5?source=friends_link&sk=7117e5de8c66bd4a4c2bb2a87a928773

  • 2021-01-29 20:24

    Whenever you train a model with your data, you are actually producing some new (predicted) values for a specific feature. However, that feature already has real values in the dataset. We know that the closer the predicted values are to their corresponding real values, the better the model.

    Now, we use a cost function to measure how close the predicted values are to their corresponding real values.

    We should also keep in mind that the weights of the trained model are what determine how accurately it predicts new values. Imagine that our model is y = 0.9*X + 0.1; the predicted value is nothing but (0.9*X + 0.1) for different values of X. [0.9 and 0.1 in the equation are just arbitrary values used for illustration.]

    So, taking Y as the real value corresponding to this X, the cost function measures how close (0.9*X + 0.1) is to Y.

    We are responsible for finding better weights (the 0.9 and 0.1) for our model so that it reaches the lowest cost (i.e., predicted values as close as possible to the real ones).

    Gradient descent is an optimization algorithm (there are others) whose job is to find the minimum cost by trying the model with different weights, that is, by repeatedly updating the weights.

    We first run the model with some initial weights; gradient descent then updates the weights over thousands of iterations, evaluating the cost of the model with those weights each time, in order to find the minimum cost.

    One point to note: gradient descent does not minimize the weights, it only updates them. What the algorithm is looking for is the minimum cost.
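
    As a small sketch of that idea (a simple linear model with a mean-squared-error cost; the learning rate and iteration count are assumed for illustration), the loop updates the weights each step, while it is the cost that shrinks:

    import numpy as np

    def run_gradient_descent(X, y, lr=0.01, iters=3000):
        w, b = 0.0, 0.0                # initial weights; they get updated, not minimized
        cost_history = []
        for _ in range(iters):
            error = (w * X + b) - y
            cost_history.append(np.mean(error ** 2))    # the quantity being driven down
            w -= lr * 2 * np.mean(error * X)             # weight updates
            b -= lr * 2 * np.mean(error)
        return w, b, cost_history

    # After training, cost_history[-1] should be far smaller than cost_history[0]:
    # the weights have simply moved to wherever the cost is (near) its minimum.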
