sparse

Getting Started with the Linux Kernel (7): Essential Compilation Knowledge

丶灬走出姿态 submitted on 2019-12-04 17:50:02
http://blog.csdn.net/yunsongice/article/details/5538416

Almost all kernel source files end up including include/linux/compiler.h, so it is foundational: it covers the range of compilation knowledge you need in order to read the kernel. This post walks through the code in that file:

    #ifndef __LINUX_COMPILER_H
    #define __LINUX_COMPILER_H

    #ifndef __ASSEMBLY__

The first thing we see is a test on the __ASSEMBLY__ macro. This macro is passed in with a -D option when assembly sources are compiled, and gcc defines it as 1. It is tested here because assembly code never uses attributes such as __user (what __user actually means is covered later in this post); those attributes are attached to function parameter declarations, so guarding on __ASSEMBLY__ keeps unnecessary macros out of assembly builds.

    #ifdef __CHECKER__

Next comes a test on the __CHECKER__ macro. There is a lot to say here, and it is the focus of this post. When the kernel is built with make C=1 or C=2, a tool called Sparse is invoked to check the kernel code. How does it check? By examining kernel functions and variables that carry annotations Sparse can recognize. When the Sparse tool is invoked, it defines the __CHECKER__ macro, so that these annotations take effect…

fdisk, mkfs.ext4, make_ext4fs, img2simg, simg2img

坚强是说给别人听的谎言 submitted on 2019-12-03 23:48:40
A typical embedded system consists of uboot + kernel + rootfs. uboot and the kernel are plain binaries, while the rootfs lives in a filesystem. Flashing a binary is simple: write the binary data to a fixed address on the storage device. Because the rootfs lives in a filesystem, the storage device has to be partitioned and a filesystem created on the partition.

The storage medium can be partitioned and mounted directly; writing the rootfs contents into the mounted partition completes the rootfs update. Alternatively, create an ordinary image file, treat that file as a partition, and build the rootfs inside it. The image file can be either a raw image or a sparse image.

1. Flashing binary files

Updating a binary is straightforward; dd is enough:

    dd if=spl.bin of=/dev/sdc bs=1024 seek=33

2. Creating partitions with fdisk

fdisk can create a partition table on a physical device, and it can equally create partitions inside an image file. fdisk -l device prints the partition table of the given device; with no device it lists the partitions of every device on the system.

    NAME
        fdisk - manipulate disk partition table
    SYNOPSIS
        fdisk [options] device
        fdisk -l [device...]

After running fdisk device, enter m for help, n to add a partition, p to print the partition table…

Sparse Matrix: ValueError: matrix type must be 'f', 'd', 'F', or 'D'

Anonymous (unverified) submitted on 2019-12-03 08:54:24
Question: I want to do SVD on a sparse matrix by using scipy:

    from svd import compute_svd

    print("The size of raw matrix: " + str(len(raw_matrix)) + " * " + str(len(raw_matrix[0])))

    from scipy.sparse import dok_matrix
    dok = dok_matrix(raw_matrix)
    matrix = compute_svd(dok)

The function compute_svd is my customized module, like this:

    def compute_svd(matrix):
        from scipy.sparse import linalg
        from scipy import dot, mat
        # e.g., matrix = [[2,1,0,0], [4,3,0,0]]
        # matrix = mat( matrix );
        # print "Original matrix:"
        # print matrix
        U, s, V = linalg.svds(matrix)
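The error in the title is typically raised by ARPACK: scipy.sparse.linalg.svds only accepts single- or double-precision float matrices ('f', 'd', 'F', 'D'), so a sparse matrix built from integer data fails. A minimal sketch of the usual fix, with made-up data standing in for raw_matrix:

    # Sketch: upcast integer data to float before calling svds (ARPACK).
    import numpy as np
    from scipy.sparse import dok_matrix
    from scipy.sparse import linalg

    raw_matrix = [[2, 1, 0, 0],
                  [4, 3, 0, 0],
                  [0, 0, 3, 1]]

    dok = dok_matrix(np.array(raw_matrix, dtype=np.float64))  # float dtype up front
    # alternatively: dok = dok_matrix(raw_matrix).asfptype()  # upcast an int matrix
    U, s, Vt = linalg.svds(dok, k=2)  # k must be smaller than min(matrix.shape)
    print(s)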

Converting python sparse matrix dict to scipy sparse matrix

Anonymous (unverified) submitted on 2019-12-03 08:46:08
Question: I am using python scikit-learn for document clustering and I have a sparse matrix stored in a dict object. For example:

    doc_term_dict = { ('d1','t1'): 12,
                      ('d2','t3'): 10,
                      ('d3','t2'): 5 }   # from mysql data table, <type 'dict'>

I want to use scikit-learn to do the clustering, where the input matrix type is scipy.sparse.csr.csr_matrix. Example:

    (0, 2164)  0.245793088885
    (0, 2076)  0.205702177467
    (0, 2037)  0.193810934784
    (0, 2005)  0.14547028437
    (0, 1953)  0.153720023365
    ...
    <class 'scipy.sparse.csr.csr_matrix'>

I can't find a way to convert…
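One common recipe (a sketch; it assumes every distinct document/term label should get its own row/column index) is to map the string labels to integers and build a coo_matrix, which converts cheaply to CSR:

    import numpy as np
    from scipy.sparse import coo_matrix

    doc_term_dict = {('d1', 't1'): 12,
                     ('d2', 't3'): 10,
                     ('d3', 't2'): 5}

    # Assign a stable integer index to each document and term label.
    docs = sorted({d for d, t in doc_term_dict})
    terms = sorted({t for d, t in doc_term_dict})
    doc_idx = {d: i for i, d in enumerate(docs)}
    term_idx = {t: j for j, t in enumerate(terms)}

    rows, cols, vals = zip(*((doc_idx[d], term_idx[t], v)
                             for (d, t), v in doc_term_dict.items()))

    X = coo_matrix((vals, (rows, cols)),
                   shape=(len(docs), len(terms))).tocsr()
    print(X.toarray())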

Iterating through a scipy.sparse vector (or matrix)

Anonymous (unverified) submitted on 2019-12-03 08:30:34
Question: I'm wondering what the best way is to iterate over the nonzero entries of sparse matrices with scipy.sparse. For example, if I do the following:

    from scipy.sparse import lil_matrix

    x = lil_matrix((20, 1))
    x[13, 0] = 1
    x[15, 0] = 2

    c = 0
    for i in x:
        print c, i
        c = c + 1

the output is

    0
    1
    2
    3
    4
    5
    6
    7
    8
    9
    10
    11
    12
    13 (0, 0) 1.0
    14
    15 (0, 0) 2.0
    16
    17
    18
    19

so it appears the iterator is touching every element, not just the nonzero entries. I've had a look at the API http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.lil_matrix.html and…
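A widely used idiom for visiting only the stored entries (a sketch, not the only option) is to convert to COO format and walk its parallel row/col/data arrays:

    from scipy.sparse import lil_matrix

    x = lil_matrix((20, 1))
    x[13, 0] = 1
    x[15, 0] = 2

    cx = x.tocoo()  # COO stores nonzeros as parallel row/col/data arrays
    for i, j, v in zip(cx.row, cx.col, cx.data):
        print(i, j, v)  # visits only (13, 0) 1.0 and (15, 0) 2.0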

Memory error in Python while converting to array

Anonymous (unverified) submitted on 2019-12-03 03:04:01
Question: My code is shown below:

    from sklearn.datasets import load_svmlight_files
    import numpy as np

    perm1 = np.random.permutation(25000)
    perm2 = np.random.permutation(25000)
    X_tr, y_tr, X_te, y_te = load_svmlight_files(("dir/file.feat", "dir/file.feat"))

    # randomly shuffle data
    X_train = X_tr[perm1, :].toarray()[:, 0:2000]
    y_train = y_tr[perm1] > 5   # turn into binary problem

The code works fine until here, but when I try to convert one more object to an array, my program returns a memory error. Code:

    X_test = X_te[perm2, :].toarray()[:, 0:2000]

Error:

    ------…
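The order of operations is the likely culprit: .toarray() materializes the full-width dense array before the column slice discards most of it. A sketch of the standard workaround (slice while still sparse, densify last); the shapes here are scaled-down stand-ins for the question's data:

    import numpy as np
    import scipy.sparse as sp

    # Scaled-down stand-in for the SVMlight data in the question.
    X_te = sp.random(5000, 10000, density=0.01, format="csr")
    perm2 = np.random.permutation(5000)

    # X_te[perm2, :].toarray()[:, 0:2000] would densify all 10000 columns
    # first; slicing the sparse matrix keeps the dense block small.
    X_test = X_te[perm2, :][:, 0:2000].toarray()
    print(X_test.shape)  # (5000, 2000)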

scipy.sparse.coo_matrix: how to quickly find all-zero columns, fill them with 1, and normalize

Anonymous (unverified) submitted on 2019-12-03 02:42:02
Question: For a matrix, I want to find the columns that are all zeros, fill them with 1s, and then normalize the matrix by column. I know how to do that with np.arrays:

    [[0 0 0 0 0]
     [0 0 1 0 0]
     [1 0 0 1 0]
     [0 0 0 0 1]
     [1 0 0 0 0]]
         |
         V
    [[0 1 0 0 0]
     [0 1 1 0 0]
     [1 1 0 1 0]
     [0 1 0 0 1]
     [1 1 0 0 0]]
         |
         V
    [[0   0.2 0 0 0]
     [0   0.2 1 0 0]
     [0.5 0.2 0 1 0]
     [0   0.2 0 0 1]
     [0.5 0.2 0 0 0]]

But how can I achieve the same thing when the matrix is in scipy.sparse.coo.coo_matrix form, without converting it back to np.arrays?

Answer 1: This will be a lot…
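A sketch of one way to do it without densifying (it assumes LIL and CSR intermediates are acceptable, since only the final conversion back to np.arrays is being avoided):

    import numpy as np
    from scipy.sparse import coo_matrix

    A = coo_matrix(np.array([[0, 0, 0, 0, 0],
                             [0, 0, 1, 0, 0],
                             [1, 0, 0, 1, 0],
                             [0, 0, 0, 0, 1],
                             [1, 0, 0, 0, 0]], dtype=float))

    col_sums = np.asarray(A.sum(axis=0)).ravel()
    zero_cols = np.where(col_sums == 0)[0]   # indices of all-zero columns

    B = A.tolil()                            # LIL allows cheap element writes
    for j in zero_cols:
        B[:, j] = 1.0                        # fill empty columns with ones

    B = B.tocsr()
    col_sums = np.asarray(B.sum(axis=0)).ravel()
    B = B.multiply(1.0 / col_sums.reshape(1, -1)).tocsr()  # column-normalize
    print(B.toarray())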

Sklearn - Cannot use encoded data in Random forest classifier

Anonymous (unverified) submitted on 2019-12-03 02:38:01
Question: I'm new to scikit-learn. I'm trying to use preprocessing.OneHotEncoder to encode my training and test data. After encoding, I tried to train a random forest classifier using that data, but I get the following error when fitting (here is the error trace):

     99  model.fit(X_train, y_train)
    100  preds = model.predict_proba(X_cv)[:, 1]
    101

    C:\Python27\lib\site-packages\sklearn\ensemble\forest.pyc in fit(self, X, y, sample_weight)
        288
        289     # Precompute some data
    --> 290     X, y = check_arrays(X, y, sparse_format="dense")
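The traceback's check_arrays(X, y, sparse_format="dense") hints at the cause: scikit-learn forests of that era required dense input, while OneHotEncoder returns a scipy sparse matrix by default. A sketch of the usual remedy with made-up toy data (newer scikit-learn forests accept sparse input directly, so densifying may no longer be necessary):

    import numpy as np
    from sklearn.preprocessing import OneHotEncoder
    from sklearn.ensemble import RandomForestClassifier

    # Made-up categorical features and labels.
    X_raw = np.array([[0, 1], [1, 2], [0, 0], [1, 1], [0, 2], [1, 0]])
    y = np.array([0, 1, 0, 1, 0, 1])

    enc = OneHotEncoder()
    X_train = enc.fit_transform(X_raw).toarray()  # densify the sparse output

    model = RandomForestClassifier(n_estimators=10)
    model.fit(X_train, y)
    preds = model.predict_proba(X_train)[:, 1]
    print(preds)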

Block operations on sparse matrices - Eigen toolbox - C++

Anonymous (unverified) submitted on 2019-12-03 01:58:03
Question: Block operations for sparse matrices - Eigen toolbox - C++

    #include <iostream>
    #include "Eigen/Dense"
    #include "Eigen/Sparse"

    using namespace std;
    using namespace Eigen;

    int main()
    {
        MatrixXd silly(6, 3);
        SparseMatrix<double> sparse_silly, temp;
        sparse_silly = Eigen::SparseMatrix<double>(6, 3);
        temp = Eigen::SparseMatrix<double>(6, 3);
        sparse_silly = silly.sparseView();
        std::cout …

In the above code, the block operations for sparse matrices are not working using the Eigen toolbox. I want to assign a block from sparse_silly to a block in the temp matrix. The output printed is zero for the…

Concatenate sparse matrices in Python using SciPy/Numpy

Anonymous (unverified) submitted on 2019-12-03 01:57:01
Question: What would be the most efficient way to concatenate sparse matrices in Python using SciPy/Numpy? Here I used the following:

    >>> np.hstack((X, X2))
    array([ <… sparse matrix of type '…' with 1135520 stored elements in Compressed Sparse Row format>,
            <… sparse matrix of type '…' with 1135520 stored elements in Compressed Sparse Row format>],
          dtype=object)

I would like to use both predictors in a regression, but the current format is obviously not what I'm looking for. Would it be possible to get the following:

    <… sparse matrix of type '…' with 2271040 stored elements in Compressed Sparse Row format>

It is too large to be…
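np.hstack treats the two sparse matrices as opaque Python objects and just builds a 2-element object array; the sparse-aware function is scipy.sparse.hstack. A minimal sketch with small stand-in matrices:

    import scipy.sparse as sp

    # Small stand-ins for the question's X and X2 (same row count required).
    X = sp.random(5, 3, density=0.5, format="csr")
    X2 = sp.random(5, 4, density=0.5, format="csr")

    combined = sp.hstack([X, X2], format="csr")  # one 5x7 CSR matrix
    print(repr(combined))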