MXNet

How to do a weighted SoftmaxOutput custom op in MXNet?

余生长醉 submitted on 2019-12-21 18:30:21

Question: I want to replace mx.symbol.SoftmaxOutput with a weighted version (assigning each class a weight according to the label's frequency in the whole dataset). The original function works well, like below:

    cls_prob = mx.symbol.SoftmaxOutput(data=data, label=label, multi_output=True,
                                       normalization='valid', use_ignore=True,
                                       ignore_label=-1, name='cls_prob')

The current code I wrote is shown below. It runs without errors, but the loss quickly explodes to NaN. I am dealing with a detection problem, RCNNL1 loss …
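Since the asker's custom-op code is truncated, here is a minimal sketch of one way to build a weighted softmax cross-entropy from basic symbols plus MakeLoss; freq_weights (one weight per class, e.g. inverse label frequency) is a hypothetical placeholder, and ignore_label handling is omitted:

    import mxnet as mx

    data = mx.sym.Variable('data')                  # (batch, num_classes) logits
    label = mx.sym.Variable('label')                # (batch,) integer class ids
    freq_weights = mx.sym.Variable('freq_weights')  # (num_classes,) constant weights

    prob = mx.sym.softmax(data, axis=1)
    p_true = mx.sym.pick(prob, label, axis=1)       # probability of the true class
    ce = -mx.sym.log(p_true + 1e-12)                # epsilon guards against log(0) -> NaN
    w = mx.sym.take(freq_weights, label)            # per-sample weight from class id
    cls_loss = mx.sym.MakeLoss(w * ce, name='weighted_cls_loss')

The epsilon inside the log is worth checking first whenever a hand-rolled softmax loss explodes to NaN.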

Why is MXNet reporting incorrect validation accuracy?

那年仲夏 submitted on 2019-12-11 17:36:47

Question: I am new to MXNet and want to solve a simple example that uses a one-layer network to solve the digit classification problem. My program goes as follows:

    import math
    import numpy as np
    import mxnet as mx
    import matplotlib.pyplot as plt
    import logging
    logging.getLogger().setLevel(logging.DEBUG)

    with np.load("notMNIST.npz") as data:
        images, labels = data["images"], data["labels"]

    # Reshape the images from 28x28 into 784 1D-array and …
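The rest of the program is cut off; as a point of comparison, here is a hedged sketch (dataset shapes and the 15000-sample split are assumptions, not from the question) of computing validation accuracy with the Module API and cross-checking it by hand:

    import numpy as np
    import mxnet as mx

    X = images.reshape(-1, 784).astype(np.float32) / 255.0
    train_iter = mx.io.NDArrayIter(X[:15000], labels[:15000], batch_size=100, shuffle=True)
    val_iter = mx.io.NDArrayIter(X[15000:], labels[15000:], batch_size=100)

    net = mx.sym.SoftmaxOutput(
        mx.sym.FullyConnected(mx.sym.Variable('data'), num_hidden=10),
        name='softmax')
    mod = mx.mod.Module(net)
    mod.fit(train_iter, eval_data=val_iter, eval_metric='acc', num_epoch=10,
            optimizer_params={'learning_rate': 0.1})

    # The manual accuracy should match what mod.score reports; if it does not,
    # labels and predictions are probably misaligned somewhere.
    probs = mod.predict(val_iter).asnumpy()
    print((probs.argmax(axis=1) == labels[15000:]).mean())
    print(mod.score(val_iter, 'acc'))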

Unable to get AWS SageMaker to read RecordIO files

不想你离开。 submitted on 2019-12-11 17:33:51

Question: I'm trying to convert an object detection .lst file to a .rec file and train with it in SageMaker. My list looks something like this:

    10  2  5  9.0000  1008.0000  1774.0000  1324.0000  1953.0000  3.0000  2697.0000  3340.0000  948.0000  1559.0000  0.0000  0.0000  0.0000  0.0000  0.0000  IMG_1091.JPG
    58  2  5  11.0000  1735.0000  2065.0000  1047.0000  1300.0000  6.0000  2444.0000  2806.0000  1194.0000  1482.0000  1.0000  2975.0000  3417.0000  1739.0000  2139.0000  IMG_7000.JPG
    60  2  5  12.0000  1243.0000  1861.0000  1222.0000  1710.0000 …
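As I read this format, each line is: index, header width (2), label width per object (5), then class, xmin, ymin, xmax, ymax for each box, and finally the image path. The corner coordinates are normally expected normalized to [0, 1], so raw pixel values like those above are a common reason the resulting .rec file is unreadable. A hedged helper sketch (the function name and normalization step are my own, not from the question):

    def make_lst_line(index, boxes, img_w, img_h, path):
        """boxes: list of (class_id, xmin, ymin, xmax, ymax) in pixels."""
        fields = [str(index), '2', '5']
        for cls, x0, y0, x1, y1 in boxes:
            fields += ['%.4f' % cls,
                       '%.4f' % (x0 / img_w), '%.4f' % (y0 / img_h),
                       '%.4f' % (x1 / img_w), '%.4f' % (y1 / img_h)]
        fields.append(path)
        return '\t'.join(fields)

The .rec itself is then built with MXNet's im2rec.py tool, passing --pack-label so the variable-length detection labels are packed into each record.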

How to use a customized loss function with MXNet?

不问归期 submitted on 2019-12-11 06:00:40

Question: I am trying to learn how to use a customized loss function with MXNet. Below is a minimal (not) working example of linear regression. When I set use_custom = False everything works fine, but the custom loss won't work. What am I doing wrong?

    import mxnet as mx
    import logging
    logging.basicConfig(level='DEBUG')

    use_custom = False
    mx.random.seed(1)
    A = mx.nd.random.uniform(-1, 1, (5, 1))
    B = mx.nd.random.uniform(-1, 1)
    X = mx.nd.random.uniform(-1, 1, (100, 5))
    y = mx.nd.dot(X, A) + B
    iter = mx.io…
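The custom-loss branch of the example is cut off above; for reference, a minimal sketch of a hand-rolled L2 loss via mx.sym.MakeLoss (the variable names and Module wiring are assumptions, not the asker's code):

    import mxnet as mx

    data = mx.sym.Variable('data')
    label = mx.sym.Variable('label')
    pred = mx.sym.FullyConnected(data, num_hidden=1, name='pred')
    # MakeLoss turns the expression itself into the training objective ...
    loss = mx.sym.MakeLoss(mx.sym.square(pred - mx.sym.reshape(label, (-1, 1))))
    # ... so the prediction must be exposed separately for evaluation.
    net = mx.sym.Group([mx.sym.BlockGrad(pred), loss])

    mod = mx.mod.Module(net, data_names=['data'], label_names=['label'])

One classic pitfall: a bare MakeLoss symbol outputs the loss rather than the prediction, which is why the BlockGrad/Group step above is needed and why built-in metrics like MSE can silently score the wrong tensor without it.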

RuntimeError: Cannot find the MXNet library

↘锁芯ラ submitted on 2019-12-11 05:57:36

Question: I want to create an executable with PyInstaller for my code, where I'm using mxnet. I got this error:

    File "mxnet/libinfo.py", line 74, in find_lib_path
    RuntimeError: Cannot find the MXNet library.
    List of candidates:
    /home/rit/test/exe/dist/test/libmxnet.so
    /home/rit/test/exe/dist/test/libmxnet.so
    /home/rit/test/exe/dist/test/mxnet/libmxnet.so
    /home/rit/test/exe/dist/test/mxnet/../../lib/libmxnet.so
    /home/rit/test/exe/dist/test/mxnet/../../build/libmxnet.so

I added libmxnet.so through the spec file, but …
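Since the asker's spec file isn't shown, here is a hedged sketch of the relevant part of a PyInstaller .spec file: shipping libmxnet.so into the bundle's mxnet/ subdirectory, which is one of the candidate paths libinfo.py searches. The source path is a placeholder for wherever the mxnet package is actually installed:

    # excerpt from test.spec; other Analysis arguments omitted
    a = Analysis(
        ['test.py'],
        binaries=[('/usr/local/lib/python3.6/site-packages/mxnet/libmxnet.so',
                   'mxnet')],
    )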

Issues installing the mxnet GPU R package on the Amazon Deep Learning AMI

给你一囗甜甜゛ submitted on 2019-12-11 04:37:00

Question: I am having trouble installing the mxnet GPU package for R on the Amazon Deep Learning Linux AMI. The environment variables are such a mess that it's a nightmare for any non-expert sysadmin to figure out.

Step 1: install the ridiculous number of missing/broken programs and R packages:

    sudo yum install R
    sudo yum install libxml2-devel
    sudo yum install cairo-devel
    sudo yum install giflib-devel
    sudo yum install libXt-devel
    sudo R
    install.packages("devtools")
    library(devtools)
    install_github("igraph/rigraph")
    …

MXNet: nn.Activation vs nd.relu?

喜夏-厌秋 submitted on 2019-12-10 18:28:22

Question: I am new to MXNet (I am using it in Python 3). Their tutorial series encourages you to define your own Gluon blocks. So let's say this is your block (a common convolution structure):

    class CNN1D(mx.gluon.Block):
        def __init__(self, **kwargs):
            super(CNN1D, self).__init__(**kwargs)
            with self.name_scope():
                self.cnn = mx.gluon.nn.Conv1D(10, 1)
                self.bn = mx.gluon.nn.BatchNorm()
                self.ramp = mx.gluon.nn.Activation(activation='relu')

        def forward(self, x):
            x = mx.nd.relu(self.cnn(x))
            x = mx.nd.relu(self.bn(x)) …
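As for the two forms, a quick check (my own sketch, not from the question) shows nn.Activation('relu') and nd.relu compute the same thing; the layer form mainly lets the choice of nonlinearity live in __init__ and survive hybridization:

    import mxnet as mx

    x = mx.nd.random.uniform(-1, 1, (2, 3))
    ramp = mx.gluon.nn.Activation('relu')  # parameter-free, callable as-is
    print((ramp(x) - mx.nd.relu(x)).abs().sum())  # prints 0: identical outputs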

How to save a model when using MXNet

旧巷老猫 submitted on 2019-12-10 15:32:04

Question: I am using MXNet to train a CNN (in R), and I can train the model without any error with the following code:

    model <- mx.model.FeedForward.create(symbol=network, X=train.iter, ctx=mx.gpu(0),
                                         num.round=20, array.batch.size=batch.size,
                                         learning.rate=0.1, momentum=0.1,
                                         eval.metric=mx.metric.accuracy, wd=0.001,
                                         batch.end.callback=mx.callback.log.speedometer(batch.size, frequency=100))

But as this process is time-consuming, I run it on a server during the night and I want to save the model …
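The question uses the R API, which (to my knowledge) exposes mx.model.save and a checkpoint callback for exactly this; as a sketch of the same idea in the Python Module API, with the prefix and period as placeholders:

    import mxnet as mx

    # 'network' and 'train_iter' are assumed to exist as in the question
    mod = mx.mod.Module(symbol=network)
    mod.fit(train_iter, num_epoch=20,
            epoch_end_callback=mx.callback.do_checkpoint('my_cnn', period=1))

    # later, reload the weights saved after epoch 20
    sym, arg_params, aux_params = mx.model.load_checkpoint('my_cnn', 20)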

In deep learning with MXNet, restrict the number of CPU cores

时间秒杀一切 submitted on 2019-12-08 12:56:28

Question: The command ctx=mx.cpu() is taking all available CPUs. How can I restrict it to use only a certain number, say 6 out of 8 cores?

Answer 1: Unfortunately, no. Even though the cpu context takes an int as an input argument, according to the official documentation:

    def cpu(device_id=0):
        """Returns a CPU context.

        Parameters
        ----------
        device_id : int, optional
            The device id of the device. `device_id` is not needed for CPU.
            This is included to make interface compatible with GPU.

However, in theory, it might be …
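Independent of that answer, the thread count can usually be capped with MXNet's engine environment variables, set before mxnet is imported; a short sketch:

    import os
    os.environ['MXNET_CPU_WORKER_NTHREADS'] = '6'  # operator worker threads
    os.environ['OMP_NUM_THREADS'] = '6'            # OpenMP threads used by CPU kernels
    import mxnet as mx

    ctx = mx.cpu()  # computation should now stay on roughly 6 cores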

Why does normalizing labels in MXNet make accuracy close to 100%?

泄露秘密 submitted on 2019-12-08 11:37:50

Question: I am training a model using multi-label logistic regression on MXNet (Gluon API), as described here: multi-label logit in gluon. My custom dataset has 13 features and one label of shape [,6]. My features are normalized from their original values to [0,1]. I use a simple dense neural net with 2 hidden layers. I noticed that when I don't normalize the labels (which take the discrete values 1, 2, 3, 4, 5, 6, purely my own choice for mapping categorical values to numbers), my training process slowly converges to some …
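A hedged guess at the mechanism, with made-up tensors: the sigmoid cross-entropy normally used for multi-label logits assumes targets in [0, 1], so feeding raw categories 1..6 produces a distorted loss, while scaling them into [0, 1] restores a valid target:

    import mxnet as mx
    from mxnet import nd, gluon

    loss_fn = gluon.loss.SigmoidBinaryCrossEntropyLoss()
    logits = nd.array([[2.0, -1.0, 0.5, 0.0, 1.0, -2.0]])

    raw = nd.array([[1, 2, 3, 4, 5, 6]])  # categories fed as-is: invalid targets
    scaled = raw / 6.0                    # mapped into [0, 1]: valid targets
    print(loss_fn(logits, raw), loss_fn(logits, scaled))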