lmdb

Write numpy arrays to lmdb

守給你的承諾、 submitted on 2019-12-02 02:36:48
I'm trying to write some numpy arrays in Python to lmdb:

```python
import numpy as np
import lmdb

def write_lmdb(filename):
    lmdb_env = lmdb.open(filename, map_size=int(1e9))
    lmdb_txn = lmdb_env.begin(write=True)

    X = np.array([[1.0, 0.0], [0.1, 2.0]])
    y = np.array([1.4, 2.1])

    # Put first pair of arrays
    lmdb_txn.put('X', X)
    lmdb_txn.put('y', y)

    # Put second pair of arrays
    lmdb_txn.put('X', X + 1.6)
    lmdb_txn.put('y', y + 1.2)

def read_lmdb(filename):
    lmdb_env = lmdb.open(filename)
    lmdb_txn = lmdb_env.begin()
    lmdb_cursor = lmdb_txn.cursor()
    for key, value in lmdb_cursor:
        print type(key)
        print type(value)
        print key
```
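Three things trip this code up: py-lmdb stores only byte strings, the write transaction is never committed, and the second `put` to the same key overwrites the first. A minimal sketch of the serialization half, assuming `np.save`/`np.load` for byte round-tripping (the lmdb calls themselves are shown only in comments):

```python
import io
import numpy as np

def arr_to_bytes(arr):
    # Serialize dtype + shape + data so the array can be restored exactly.
    buf = io.BytesIO()
    np.save(buf, arr)
    return buf.getvalue()

def bytes_to_arr(raw):
    return np.load(io.BytesIO(raw))

X = np.array([[1.0, 0.0], [0.1, 2.0]])
# With py-lmdb this value would be written inside a committed transaction,
# under a distinct key per array:
#   with lmdb.open(filename, map_size=int(1e9)).begin(write=True) as txn:
#       txn.put(b'X_0', arr_to_bytes(X))
# (using the transaction as a context manager commits it on exit.)
restored = bytes_to_arr(arr_to_bytes(X))
print(np.array_equal(X, restored))  # True
```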

Check failed: mdb_status == 0 (2 vs. 0) No such file or directory

瘦欲@ submitted on 2019-12-01 23:18:38
I received the following error while training. I have tried all the solutions given on the internet and nothing seems to work for me. I have checked the paths, and the sizes of the lmdb files are non-zero, but the problem still exists. I have no idea how to solve this issue.

```
I0411 12:42:53.114141 21769 layer_factory.hpp:77] Creating layer data
I0411 12:42:53.114586 21769 net.cpp:91] Creating Layer data
I0411 12:42:53.114604 21769 net.cpp:399] data -> data
I0411 12:42:53.114645 21769 net.cpp:399] data -> label
F0411 12:42:53.114650 21772 db_lmdb.hpp:14] Check failed: mdb_status ==
```
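"No such file or directory" from `db_lmdb.hpp` usually means the data layer's `source:` path in the prototxt does not point at a directory containing `data.mdb` (a relative path resolved from the wrong working directory is a common cause). A stdlib sketch for sanity-checking a candidate path before training; the path name is illustrative:

```python
import os

def check_lmdb_dir(path):
    """Return a list of problems found with an LMDB directory path."""
    problems = []
    if not os.path.isdir(path):
        problems.append("not a directory: %s" % path)
        return problems
    data_file = os.path.join(path, "data.mdb")
    if not os.path.isfile(data_file):
        problems.append("missing data.mdb")
    elif os.path.getsize(data_file) == 0:
        problems.append("data.mdb is empty")
    return problems

print(check_lmdb_dir("/nonexistent/train_lmdb"))
```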

How to read image from numpy array into PIL Image?

痴心易碎 submitted on 2019-12-01 17:28:10
I am trying to read an image from a numpy array using PIL, by doing the following:

```python
from PIL import Image
import numpy as np

# img is a np array with shape (3, 256, 256)
Image.fromarray(img)
```

and am getting the following error:

```
File "...Image.py", line 2155, in fromarray
    raise TypeError("Cannot handle this data type")
```

I think this is because fromarray expects the shape to be (height, width, num_channels), but the array I have is in the shape (num_channels, height, width), as it was stored that way in an lmdb database. How can I rearrange the array so that it is compatible with Image.fromarray?
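Moving the channel axis last is a transpose, not a reshape (a reshape would scramble the pixels). A sketch with numpy; `Image.fromarray` also expects `uint8` for RGB data, so the cast is shown, and the PIL call itself is left as a comment:

```python
import numpy as np

# CHW array as stored in the lmdb database (values assumed in [0, 255]).
img_chw = np.random.randint(0, 256, size=(3, 256, 256)).astype(np.uint8)

# Transpose, don't reshape: axis 0 (channels) moves to the end.
img_hwc = np.transpose(img_chw, (1, 2, 0))
print(img_hwc.shape)  # (256, 256, 3)
# Image.fromarray(img_hwc) now accepts the array as an RGB image.
```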

Writing data to LMDB with Python very slow

半城伤御伤魂 submitted on 2019-12-01 15:11:36
Question: Creating datasets for training with Caffe, I tried both HDF5 and LMDB. However, creating an LMDB is very slow, even slower than HDF5. I am trying to write ~20,000 images. Am I doing something terribly wrong? Is there something I am not aware of? This is my code for LMDB creation:

```python
DB_KEY_FORMAT = "{:0>10d}"
db = lmdb.open(path, map_size=int(1e12))
curr_idx = 0
commit_size = 1000
for curr_commit_idx in range(0, num_data, commit_size):
    with in_db_data.begin(write=True) as in_txn:
        for i in
```
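Committing in batches of ~1000 puts per transaction, as the code above attempts, is the right pattern; one transaction per image is the usual cause of slow LMDB writes. A sketch of the batching bookkeeping itself, shown without the lmdb dependency so only the key/batch logic is exercised (names follow the question; the lmdb calls are comments):

```python
DB_KEY_FORMAT = "{:0>10d}"

def batched_keys(num_data, commit_size):
    """Yield one list of zero-padded keys per transaction commit."""
    for start in range(0, num_data, commit_size):
        end = min(start + commit_size, num_data)
        # With py-lmdb, each batch would go into a single transaction:
        #   with db.begin(write=True) as txn:
        #       for key in batch: txn.put(key.encode(), image_bytes)
        yield [DB_KEY_FORMAT.format(i) for i in range(start, end)]

batches = list(batched_keys(2500, 1000))
print(len(batches), batches[0][0], batches[-1][-1])  # 3 0000000000 0000002499
```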

Error in creating LMDB database file in Python for Caffe

不羁岁月 submitted on 2019-11-29 07:25:40
I'm trying to create an LMDB database file in Python to be used with Caffe according to this tutorial. The commands `import numpy as np` and `import caffe` run perfectly fine. However, when I try to run `import lmdb` and `import deepdish as dd`, I'm getting the following errors:

```
>>> import lmdb
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named lmdb
>>> import deepdish as dd
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named deepdish
```

I'm running Python 2.7.9 through Anaconda 2.2.0 (64-bit) on Ubuntu 14
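A frequent cause is that pip installed the packages into the system Python rather than the Anaconda environment. A stdlib sketch that reports which interpreter is actually running and which of the needed modules it can see (module names taken from the question; the printed output depends on your environment):

```python
import importlib
import sys

def missing_modules(names):
    """Return the subset of names this interpreter cannot import."""
    missing = []
    for name in names:
        try:
            importlib.import_module(name)
        except ImportError:
            missing.append(name)
    return missing

print(sys.executable)  # should point inside the Anaconda install
print(missing_modules(["numpy", "lmdb", "deepdish"]))
```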

Is it possible to run caffe models on the data-set which is not stored in data-source like LMDB?

雨燕双飞 submitted on 2019-11-28 14:22:35
I have two sets of image-patch data, i.e. training and testing sets. Both of these have been written to LMDB files. I am running a convolutional neural network on this data using Caffe. The problem is that the data stored on hard disk occupies a considerable amount of space and is hampering my efforts to introduce more training data with deliberate noise added to make my model more robust. Is there a way I can send image patches from my program directly to the CNN (in Caffe) without storing them in LMDB? I am currently using Python to generate patches from the images for the training
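Caffe's MemoryData layer accepts in-memory batches from pycaffe via `net.set_input_arrays(data, labels)`, so patches can be generated on the fly instead of staged in LMDB. A sketch of the patch-generation half with numpy; the pycaffe call is left as a comment since it assumes a MemoryData layer in your prototxt, and the patch/stride values are illustrative:

```python
import numpy as np

def extract_patches(image, patch, stride):
    """Slide a patch x patch window over an HxW image; stack as NxCxHxW."""
    h, w = image.shape
    out = []
    for top in range(0, h - patch + 1, stride):
        for left in range(0, w - patch + 1, stride):
            out.append(image[top:top + patch, left:left + patch][np.newaxis])
    return np.ascontiguousarray(np.stack(out), dtype=np.float32)

img = np.arange(64 * 64, dtype=np.float32).reshape(64, 64)
batch = extract_patches(img, patch=16, stride=16)
print(batch.shape)  # (16, 1, 16, 16)
# With a MemoryData layer, pycaffe can take this batch directly:
#   labels = np.zeros(len(batch), dtype=np.float32)
#   net.set_input_arrays(batch, labels)
```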

Caffe Learning Series (12): Training and testing with your own images

☆樱花仙子☆ submitted on 2019-11-28 02:06:31
The point of learning Caffe is not just to do a few exercises; ultimately you want to apply it to your own projects or research. This post therefore walks through the whole pipeline from your own raw images, to lmdb data, to training and testing a model.

1. Preparing the data

If you are able to, go to the ImageNet site at http://www.image-net.org/download-images and download ImageNet images for training. I did not, partly because the registration CAPTCHA never loaded for me (reportedly it is served by Google, which I cannot reach), and partly because the dataset is simply too large. Instead I collected 500 images from the web, split into five classes of 100 images each: buses, dinosaurs, elephants, flowers, and horses. If you need them, they are available from my network drive: http://pan.baidu.com/s/1nuqlTnN The file names start with 3, 4, 5, 6, and 7, one leading digit per class. From each class I picked 20 images for testing and kept the remaining 80 for training, giving 400 training images and 100 test images across 5 classes. I put the images under the data folder in the Caffe root, so the training image directory is data/re/train/ and the test image directory is data/re/test/.

2. Converting to lmdb format

For the details of the conversion, see my previous post: Caffe Learning Series (11): Converting image data into db (leveldb/lmdb) files. First, create a folder named myfile under examples
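The 20/80 split described above, keyed on the leading digit of each file name, can be sketched with the stdlib (the per-class test count is a parameter; file names follow the post's numbering scheme):

```python
from collections import defaultdict

def split_by_class(filenames, test_per_class=20):
    """Group files by leading digit; the first N of each class go to test."""
    by_class = defaultdict(list)
    for name in sorted(filenames):
        by_class[name[0]].append(name)
    train, test = [], []
    for names in by_class.values():
        test.extend(names[:test_per_class])
        train.extend(names[test_per_class:])
    return train, test

# Five classes, 100 images each, numbered 300..399, 400..499, ... 700..799.
files = ["%d%02d.jpg" % (c, i) for c in range(3, 8) for i in range(100)]
train, test = split_by_class(files)
print(len(train), len(test))  # 400 100
```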

Caffe: Reading LMDB from Python

南楼画角 submitted on 2019-11-27 19:08:19
I've extracted features using Caffe, which generates a .mdb file. Then I'm trying to read it using Python and display it as readable numbers.

```python
import lmdb

lmdb_env = lmdb.open('caffefeat')
lmdb_txn = lmdb_env.begin()
lmdb_cursor = lmdb_txn.cursor()

for key, value in lmdb_cursor:
    print str(value)
```

This prints out a very long line of unreadable, broken characters. Then I tried printing int(value), which returns the following:

```
ValueError: invalid literal for int() with base 10: '\x08\x80 \x10\x01\x18\x015\x8d\x80\xad?5'
```

float(value) gives the following: ValueError: could not convert string to
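Each value here is a serialized caffe Datum protobuf, not text, which is why `int()`/`float()` fail. With pycaffe available it would be decoded roughly as `datum = caffe_pb2.Datum(); datum.ParseFromString(value); arr = caffe.io.datum_to_array(datum)` (names from caffe's Python bindings, not verified against your version). For a first look at such binary values, a stdlib sketch that renders raw bytes as hex instead of casting them:

```python
import binascii

def hex_preview(raw, limit=16):
    """Show the first `limit` bytes of a binary value as spaced hex pairs."""
    pairs = binascii.hexlify(raw[:limit]).decode("ascii")
    return " ".join(pairs[i:i + 2] for i in range(0, len(pairs), 2))

# First bytes of the value from the question (a protobuf wire header).
value = b"\x08\x80 \x10\x01\x18\x015\x8d\x80\xad?5"
print(hex_preview(value))  # 08 80 20 10 01 18 01 35 8d 80 ad 3f 35
```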
