pycaffe

How to modify batch normalization layers (DeconvNet) to be able to run with caffe?

Posted by 江枫思渺然 on 2019-12-20 04:52:13
Question: I wanted to run the DeconvNet on my data, however it seems it has been written for another version of Caffe. Does anyone know how to change batch_params? The one that is in DeconvNet:

    layers {
      bottom: 'conv1_1'
      top: 'conv1_1'
      name: 'bn1_1'
      type: BN
      bn_param {
        scale_filler { type: 'constant' value: 1 }
        shift_filler { type: 'constant' value: 0.001 }
        bn_mode: INFERENCE
      }
    }

And the one that Caffe provides for the cifar10 example:

    layer {
      name: "bn1"
      type: "BatchNorm"
      bottom: "pool1"
      top: "bn1"
      batch…
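For reference, the old BN layer is commonly rewritten for mainline Caffe as a "BatchNorm" layer (statistics only, with use_global_stats mirroring bn_mode: INFERENCE) followed by a "Scale" layer with bias_term carrying the scale/shift fillers. A minimal pycaffe NetSpec sketch, with illustrative layer names and input shape:

    import caffe
    from caffe import layers as L

    n = caffe.NetSpec()
    n.data = L.Input(shape=dict(dim=[1, 3, 224, 224]))          # placeholder input
    n.conv1_1 = L.Convolution(n.data, num_output=64, kernel_size=3, pad=1)
    # BatchNorm with use_global_stats=True plays the role of bn_mode: INFERENCE
    n.bn1_1 = L.BatchNorm(n.conv1_1, use_global_stats=True, in_place=True)
    # Scale supplies the learnable scale (gamma) and shift (beta) the old BN layer had
    n.scale1_1 = L.Scale(n.bn1_1, bias_term=True,
                         filler=dict(type='constant', value=1),
                         bias_filler=dict(type='constant', value=0.001),
                         in_place=True)
    print(str(n.to_proto()))   # emits the replacement prototxt snippet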

How to classify images using Spark and Caffe

Posted by 烈酒焚心 on 2019-12-19 10:38:10
Question: I am using Caffe to do image classification, and I am using Mac OS X and Python. Right now I know how to classify a list of images using Caffe with Python, but I want to make it faster by using Spark. Therefore, I tried to apply the image classification to each element of an RDD, the RDD being created from a list of image paths. However, Spark does not allow me to do so. Here is my code. This is the code for image classification:

    # display image name, class number, predicted label
    def …
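A common pattern for this (a sketch with placeholder paths and output-blob name, not the asker's code) is to use mapPartitions so each worker loads the Caffe net once per partition, since a caffe.Net object cannot be pickled and shipped from the driver:

    from pyspark import SparkContext

    def classify_partition(image_paths):
        # import and build the net on the worker, once per partition
        import caffe
        net = caffe.Net('/path/to/deploy.prototxt',
                        '/path/to/weights.caffemodel', caffe.TEST)
        transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
        transformer.set_transpose('data', (2, 0, 1))   # HWC -> CHW
        for path in image_paths:
            img = caffe.io.load_image(path)
            net.blobs['data'].data[...] = transformer.preprocess('data', img)
            out = net.forward()
            # 'prob' is an assumed output blob name; adjust to your deploy net
            yield (path, int(out['prob'][0].argmax()))

    sc = SparkContext(appName='caffe-classify')
    rdd = sc.parallelize(['/path/to/img1.jpg', '/path/to/img2.jpg'])
    print(rdd.mapPartitions(classify_partition).collect())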

Caffe: how to get the phase of a Python layer?

Posted by 我们两清 on 2019-12-18 04:06:28
Question: I created a "Python" layer "myLayer" in Caffe and use it in the net's train_val.prototxt. I insert the layer like this:

    layer {
      name: "my_py_layer"
      type: "Python"
      bottom: "in"
      top: "out"
      python_param {
        module: "my_module_name"
        layer: "myLayer"
      }
      include { phase: TRAIN }  # THIS IS THE TRICKY PART!
    }

Now, my layer only participates in the TRAIN phase of the net. How can I know that in my layer's setup function?

    class myLayer(caffe.Layer):
        def setup(self, bottom, top):
            # I want to know here…
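One workaround (a sketch, not necessarily the thread's accepted answer) is to pass the phase to the layer yourself through python_param's param_str and read it back in setup(); the value "TRAIN" below is an illustrative choice:

    # In the prototxt, one would add, e.g.:
    #   python_param { module: "my_module_name" layer: "myLayer" param_str: "TRAIN" }
    import caffe

    class myLayer(caffe.Layer):
        def setup(self, bottom, top):
            # self.param_str holds whatever string param_str carried in the prototxt
            self.my_phase = self.param_str.strip()
            print('layer configured for phase: ' + self.my_phase)

        def reshape(self, bottom, top):
            pass

        def forward(self, bottom, top):
            pass

        def backward(self, top, propagate_down, bottom):
            pass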

Multiple category classification in Caffe

Posted by 雨燕双飞 on 2019-12-17 15:59:06
Question: I thought we might be able to compile a Caffeinated description of some methods of performing multiple-category classification. By multi-category classification I mean: the input data contains representations of multiple model output categories and/or is simply classifiable under multiple model output categories. E.g., an image containing a cat and a dog would (ideally) output ~1 for both the cat and dog prediction categories and ~0 for all others. Based on this paper, this stale and closed PR…
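One commonly suggested route for this kind of multi-label setup is to replace SoftmaxWithLoss with SigmoidCrossEntropyLoss and feed a multi-hot 0/1 label vector per image. The sketch below, built with pycaffe's NetSpec, assumes an HDF5 data source and 20 categories purely for illustration:

    import caffe
    from caffe import layers as L

    n = caffe.NetSpec()
    # each HDF5 'label' row is a multi-hot vector, one 0/1 entry per category
    n.data, n.label = L.HDF5Data(source='/path/to/train_h5_list.txt',
                                 batch_size=32, ntop=2)
    n.fc = L.InnerProduct(n.data, num_output=20)        # 20 independent categories
    n.loss = L.SigmoidCrossEntropyLoss(n.fc, n.label)   # per-category sigmoid targets
    n.prob = L.Sigmoid(n.fc)                            # ~1 for each present category at test time
    print(str(n.to_proto()))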

Caffe HDF5 not learning

Posted by 谁都会走 on 2019-12-13 18:05:38
Question: I'm fine-tuning the GoogleNet network with Caffe on my own dataset. If I use IMAGE_DATA layers as input, learning takes place. However, I need to switch to an HDF5 layer for further extensions that I require. When I use HDF5 layers, no learning takes place. I am using the exact same input images, and the labels match as well. I have also checked to ensure that the data in the .h5 files can be loaded correctly. It does, and Caffe is also able to find the number of examples I feed it as well as the…
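A frequent culprit in this situation (an assumption here, not a confirmed diagnosis) is that the HDF5Data layer ignores transform_param, so the scaling and mean subtraction the IMAGE_DATA layer performed must be baked into the .h5 files themselves. A minimal h5py sketch with placeholder data and mean values:

    import h5py
    import numpy as np

    images = np.random.rand(10, 3, 224, 224).astype(np.float32)   # placeholder data, NCHW
    labels = np.random.randint(0, 1000, size=(10,)).astype(np.float32)
    mean_bgr = np.array([104.0, 117.0, 123.0], dtype=np.float32).reshape(1, 3, 1, 1)

    # apply the same preprocessing the IMAGE_DATA layer used to do
    images = images * 255.0 - mean_bgr

    with h5py.File('train.h5', 'w') as f:
        f.create_dataset('data', data=images)
        f.create_dataset('label', data=labels)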

Implement a Bhattacharyya loss function using a Python layer in Caffe

Posted by 爷,独闯天下 on 2019-12-12 22:25:28
Question: I am trying to implement my custom loss layer using a Python layer in Caffe. I've used this example as the guide and have written the forward function as follows:

    def forward(self, bottom, top):
        score = 0
        self.mult[...] = np.multiply(bottom[0].data, bottom[1].data)
        self.multAndsqrt[...] = np.sqrt(self.mult)
        top[0].data[...] = -math.log(np.sum(self.multAndsqrt))

However, the second task, implementing the backward function, is much more difficult for me, as I'm totally unfamiliar with Python. So please…
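Below is a minimal sketch of a matching backward(), based on my own differentiation of the forward() above rather than the thread's answer: with BC = sum_i sqrt(p_i * q_i) and loss = -log(BC), d loss / d p_i = -0.5 * sqrt(q_i / p_i) / BC, and symmetrically for q. The eps guard and the loop over both bottoms are my additions; the method belongs in the same caffe.Layer subclass as forward():

    import numpy as np

    def backward(self, top, propagate_down, bottom):
        eps = 1e-12                                    # guard against division by zero
        bc = np.sum(self.multAndsqrt) + eps            # Bhattacharyya coefficient from forward()
        for i in range(2):
            if not propagate_down[i]:
                continue
            this = bottom[i].data
            other = bottom[1 - i].data
            grad = -0.5 * np.sqrt(other / (this + eps)) / bc
            bottom[i].diff[...] = grad * top[0].diff[0]   # chain rule with upstream loss weight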

Caffe's transformer.preprocessing takes too long to complete

Posted by 大兔子大兔子 on 2019-12-12 19:22:46
Question: I wrote a simple script to test a model using PyCaffe, but I noticed it is extremely slow, even on a GPU! My test set has 82K samples of size 256x256, and when I run the code given below, it takes hours to complete. I even used batches of images instead of individual ones, yet nothing changed. Currently, it has been running for the past 5 hours, and only 50K samples have been processed! What should I do to make it faster? Can I completely avoid using transformer.preprocessing? If so, how?
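One way people speed this up (a sketch under the assumption that the bottleneck really is the per-image transformer.preprocess call) is to do the resize, mean subtraction, and channel reordering directly with OpenCV and NumPy for a whole batch, then copy the batch into the input blob in one assignment; the mean values and blob name below are placeholders:

    import cv2
    import numpy as np

    mean_bgr = np.array([104.0, 117.0, 123.0], dtype=np.float32)

    def preprocess_batch(paths, size=256):
        batch = np.empty((len(paths), 3, size, size), dtype=np.float32)
        for i, p in enumerate(paths):
            img = cv2.imread(p).astype(np.float32)     # OpenCV already returns BGR
            img = cv2.resize(img, (size, size))
            img -= mean_bgr                            # mean subtraction
            batch[i] = img.transpose(2, 0, 1)          # HWC -> CHW
        return batch

    # usage (blob name 'data' is an assumption):
    # net.blobs['data'].reshape(len(paths), 3, 256, 256)
    # net.blobs['data'].data[...] = preprocess_batch(paths)
    # out = net.forward()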

How to run py-faster-rcnn with X11 forwarding

Posted by 元气小坏坏 on 2019-12-12 19:21:54
Question: I'm running py-faster-rcnn with cuDNN enabled on a g2.8xlarge EC2 instance with the Ubuntu 14.04 operating system. Everything compiles and seems to be working fine. I log in to the remote instance via:

    ssh -X -i "<key.pem>" ubuntu@<IP address>

I also enter the command:

    export DISPLAY=:0

Running ./tools/demo.py, the output looks good:

    Loaded network /home/ubuntu/py-faster-rcnn/data/faster_rcnn_models/VGG16_faster_rcnn_final.caffemodel
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Demo for data/demo/000456…
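Since the excerpt is cut off before the actual failure, the following is only the usual workaround sketch: ssh -X normally sets DISPLAY itself, so overriding it with export DISPLAY=:0 tends to defeat the forwarding, and demo.py's plt.show() needs a working display in any case. Forcing matplotlib's non-interactive Agg backend and saving the figure avoids X11 entirely:

    import matplotlib
    matplotlib.use('Agg')             # must run before pyplot is imported
    import matplotlib.pyplot as plt

    # ... after the detections are drawn on the current figure in demo.py ...
    plt.savefig('demo_000456.png')    # replaces plt.show(), which needs a display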

How can I get layer type in pycaffe?

Posted by a 夏天 on 2019-12-12 19:03:27
Question: Is it possible at all to get each layer's type (e.g. Convolution, Data, etc.) in pycaffe? I searched the examples provided, but I couldn't find anything. Currently I'm using layer names to do the job, which is extremely bad and limiting.

Answer 1: It's easy!

    import caffe
    net = caffe.Net('/path/to/net.prototxt', '/path/to/weights.caffemodel', caffe.TEST)
    # get type of 5-th layer
    print "type of 5-th layer is ", net.layers[5].type

To map between layer names and indices you can use this simple trick:
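The answer's "simple trick" is cut off above; below is a sketch of one common way to build a name-to-index (and type) mapping, assuming pycaffe exposes net._layer_names as in recent versions, with 'conv1' as a placeholder layer name:

    import caffe

    net = caffe.Net('/path/to/net.prototxt', '/path/to/weights.caffemodel', caffe.TEST)

    # name -> (index, type) for every layer in the net
    layer_info = {name: (i, net.layers[i].type)
                  for i, name in enumerate(net._layer_names)}
    print(layer_info.get('conv1'))   # e.g. (1, 'Convolution')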

Fixing a subset of weights in a neural network during training

Posted by 浪子不回头ぞ on 2019-12-12 16:49:46
Question: Recently, I have been considering creating a customized neural network. The basic structure is the same as usual, but I want to truncate the connections between layers. For example, if I construct a network with two hidden layers, I would like to delete some weights and keep the others, as in the picture below: "Structure of customized neural networks" (sorry, I cannot embed pictures here, only links). This is not dropout to avoid overfitting. Actually, the remaining weights (connections) are…
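One workaround people use for this (a sketch, not a confirmed answer to the truncated question) is to keep a fixed binary mask per layer and re-apply it to the weight blob after every solver step, so the removed connections stay at zero throughout training; the layer name 'fc1' and the random mask below are placeholders:

    import caffe
    import numpy as np

    solver = caffe.SGDSolver('/path/to/solver.prototxt')

    w = solver.net.params['fc1'][0].data               # weight blob of the layer to prune
    mask = (np.random.rand(*w.shape) > 0.5)            # placeholder mask; use your own pattern
    w *= mask                                          # zero out the removed connections once

    for it in range(1000):
        solver.step(1)
        solver.net.params['fc1'][0].data[...] *= mask  # keep pruned weights at zero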