MXNet gradient descent for linear regression, variable types error

攒了一身酷 2021-01-28 17:43

I'm trying to implement a simple gradient descent for linear regression.

It works normally if I compute the gradient manually (by using the analytical expression), but when I switch to MXNet's automatic differentiation I get a variable types error.

1 Answer
  • 2021-01-28 18:07

    MXNet doesn't use NumPy's ndarray, but its own mxnet NDArray, which has very similar functionality and API but a different backend; mxnet NDArray is written in C++, uses asynchronous execution, is GPU-compatible, and supports automatic differentiation. It also works on CPU, where it is usually faster than default (OpenBLAS-backed) NumPy.

    So to fix your error, I recommend making sure you don't use NumPy anywhere in your code, only mxnet NDArray. The change is actually very easy to make because the API is very similar to NumPy's. And if need be, you can convert to and from NumPy, for example:

    from mxnet import nd
    
    # Assuming A is a NumPy ndarray and B an mxnet NDArray
    
    # from numpy to mxnet
    mxnet_array = nd.array(A)
    
    
    # from mxnet to numpy
    np_array = B.asnumpy()
    

    Regarding your specific interest in linear regression, see these two MXNet demos in Python:

    • Linear regression in MXNet from scratch
    • Linear regression in MXNet with gluon (gluon is the name of the Python imperative frontend, a bit like what Keras is to TensorFlow)
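
    As a minimal sketch of what the from-scratch version looks like (the toy data and hyperparameters below are my own assumptions, not taken from the linked demos), gradient descent with mxnet NDArray and autograd can be written like this:

    ```python
    from mxnet import nd, autograd

    # Toy data for y = 2x + 1 (assumed example data)
    X = nd.arange(0, 10).reshape((10, 1))
    y = 2 * X + 1

    # Parameters live as mxnet NDArrays, with gradient buffers attached
    w = nd.zeros((1, 1))
    b = nd.zeros(1)
    w.attach_grad()
    b.attach_grad()

    lr = 0.02
    for epoch in range(1000):
        with autograd.record():            # tape the forward pass
            y_hat = nd.dot(X, w) + b
            loss = ((y_hat - y) ** 2).mean()
        loss.backward()                    # autograd fills w.grad and b.grad
        # update in place (w[:] = ...) so the attached grad buffers are kept
        w[:] = w - lr * w.grad
        b[:] = b - lr * b.grad

    print(w.asscalar(), b.asscalar())      # should approach 2.0 and 1.0
    ```

    Note that everything stays an mxnet NDArray end to end; mixing a NumPy array into the recorded computation is exactly what produces a type error.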

    Using those NDArrays is one of the reasons MXNet is so fast: it makes your code fully asynchronous and lets the engine find optimizations. Those NDArrays are one of the things that make MXNet so awesome; try them and you'll love them :)
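
    You can see this asynchrony in a small illustrative script (array size chosen arbitrarily): the call into the engine returns almost immediately, and you only block when you ask for the result:

    ```python
    import time
    from mxnet import nd

    a = nd.ones((2000, 2000))

    start = time.time()
    c = nd.dot(a, a)          # returns right away: the matmul is only queued
    queued = time.time() - start

    c.wait_to_read()          # blocks until the engine has actually computed c
    finished = time.time() - start

    print("queued after %.4fs, result ready after %.4fs" % (queued, finished))
    # each entry of c is the dot product of two all-ones rows of length 2000
    print(c[0, 0])
    ```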
