CNN

Error in element-wise weighted averaging between 2 layers in a Keras CNN

Submitted by 半世苍凉 on 2020-08-10 19:35:09
Question: I am getting an error in element-wise weighted averaging between two layers in a CNN. My base model is:

```python
model_base = Sequential()
# Conv Layer 1
model_base.add(layers.SeparableConv2D(32, (9, 9), activation='relu', input_shape=input_shape))
model_base.add(layers.MaxPooling2D(2, 2))
# model.add(layers.Dropout(0.25))
# Conv Layer 2
model_base.add(layers.SeparableConv2D(64, (9, 9), activation='relu'))
model_base.add(layers.MaxPooling2D(2, 2))
# model.add(layers.Dropout(0.25))
# Conv Layer 3
model_base.add
```
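An element-wise weighted average of two same-shaped feature maps is just `w*a + (1-w)*b`. A minimal NumPy sketch of the arithmetic (the weight `w` and the shapes are illustrative assumptions, not taken from the question):

```python
import numpy as np

# Two same-shaped "feature maps" (batch, height, width, channels) -- illustrative shapes
a = np.full((1, 4, 4, 8), 2.0)
b = np.full((1, 4, 4, 8), 6.0)

w = 0.25  # assumed scalar mixing weight; in Keras this could be a trainable scalar

# Element-wise weighted average; the shapes must match exactly, otherwise
# broadcasting errors like the one in the question appear.
avg = w * a + (1.0 - w) * b

print(avg[0, 0, 0, 0])  # 0.25*2 + 0.75*6 = 5.0
```

In Keras the same computation is typically wrapped as `layers.Lambda(lambda t: w * t[0] + (1 - w) * t[1])([x1, x2])`, and it only works when both tensors have identical shapes.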

2D convolution with padding=same via Toeplitz matrix multiplication

Submitted by 倾然丶 夕夏残阳落幕 on 2020-08-06 06:06:54
Question: I'm trying to build the block-Toeplitz matrix for a 2D convolution with padding=same (similar to Keras). I have seen, read, and searched a lot of information, but I cannot find an implementation of it. Some references I have used (I'm also reading papers, but they all discuss convolution with full or valid padding, never same): McLawrence's answer: answer. He says literally: "this is for padding = 0 but can easily be adjusted by changing h_blocks and w_blocks and W_conv[i+j, :, j, :]." But I don't know how to implement
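As a sketch of the idea (not the h_blocks/w_blocks adjustment from the cited answer): with padding=same, each output pixel (i, j) reads input pixels (i+a-kh//2, j+b-kw//2), and out-of-range taps are simply dropped, which is exactly what zero padding amounts to. A dense matrix built that way, assuming cross-correlation (as Keras computes it) and an odd-sized kernel:

```python
import numpy as np

def conv2d_same_matrix(kernel, H, W):
    """Dense (H*W, H*W) matrix M such that M @ x.ravel() equals a
    padding='same' cross-correlation of an H x W image x with `kernel`."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2  # 'same' padding offsets for odd kernels
    M = np.zeros((H * W, H * W))
    for i in range(H):
        for j in range(W):
            for a in range(kh):
                for b in range(kw):
                    ii, jj = i + a - ph, j + b - pw
                    if 0 <= ii < H and 0 <= jj < W:  # zero padding: drop out-of-range taps
                        M[i * W + j, ii * W + jj] = kernel[a, b]
    return M

# Sanity check: a centered identity kernel must give the identity matrix.
eye_kernel = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], dtype=float)
M = conv2d_same_matrix(eye_kernel, 4, 4)
print(np.allclose(M, np.eye(16)))  # True
```

Away from the image borders the rows form the doubly block-Toeplitz pattern; the boundary rows are the truncated ones that distinguish same padding from full or valid.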

Modify some values in the weight file (.h5) of VGG-16

Submitted by 让人想犯罪 __ on 2020-07-23 06:17:04
Question: I have the weight and bias values for each layer of the VGG model saved as a .h5 file. I got the file from: https://github.com/fchollet/deep-learning-models/releases/tag/v0.1 Now let's say I want to change a few values in that file. With help from How to overwrite array inside h5 file using h5py, I am trying to do the same as follows:

```python
import h5py

file_name = "vgg.h5"
f = h5py.File(file_name, 'r+')
# List all groups
print("Keys: %s" % f.keys())
# Get the data
data = (f['block2_conv1']['block2
```
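The key h5py detail is that assigning through `dataset[...] = new_values` writes into the file in place, while rebinding a Python variable to a new array does not. A self-contained sketch using a throwaway file (the group and dataset names here are made up for illustration, not the actual VGG-16 layout):

```python
import h5py
import numpy as np

# Create a small throwaway file standing in for vgg.h5 (names are illustrative).
with h5py.File("demo_weights.h5", "w") as f:
    f.create_group("block2_conv1").create_dataset("kernel", data=np.zeros((3, 3)))

# Overwrite values in place: open in 'r+' and assign through [...] so the
# change goes to the file rather than to a local NumPy copy.
with h5py.File("demo_weights.h5", "r+") as f:
    dset = f["block2_conv1"]["kernel"]
    dset[...] = np.ones((3, 3))  # dset[0, 0] = 5.0 also works for single entries

# Re-open read-only to confirm the change persisted.
with h5py.File("demo_weights.h5", "r") as f:
    reread = f["block2_conv1"]["kernel"][()]

print(reread.sum())  # 9.0
```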


With ResNet50 the validation accuracy and loss are not changing

Submitted by 泪湿孤枕 on 2020-06-12 05:51:04
Question: I am trying to do image recognition with ResNet50 in Python (Keras). I tried the same task with VGG16, and I got results like these (which seem okay to me): resultsVGG16. The training and validation accuracy/loss improve with each step, so the network must be learning. However, with ResNet50 the training curves keep getting better, while the validation curves do not change: resultsResNet. I used the same code and data both times, only the

Trying to add an input layer to a CNN model in Keras

Submitted by 删除回忆录丶 on 2020-06-01 07:41:27
Question: I tried to add the input to a parallel-path CNN to make a residual architecture, but I am getting a dimension mismatch.

```python
from keras import layers, Model

input_shape = (128, 128, 3)  # Change this accordingly
my_input = layers.Input(shape=input_shape)  # one input

def parallel_layers(my_input, parallel_id=1):
    x = layers.SeparableConv2D(32, (9, 9), activation='relu', name='conv_1_'+str(parallel_id))(my_input)
    x = layers.MaxPooling2D(2, 2)(x)
    x = layers.SeparableConv2D(64, (9, 9), activation='relu', name=
```
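The mismatch is usually plain spatial-shape arithmetic: with padding='valid' (the Keras default), a 9x9 convolution shrinks each side by 8 and every 2x2 pooling halves it, so the branch output can no longer be added to the 128x128 input. A small sketch of that bookkeeping (the layer sizes are from the question; the tracing helpers are mine):

```python
def conv_valid(n, k):
    """Spatial size after a k x k convolution with padding='valid', stride 1."""
    return n - k + 1

def pool(n, p):
    """Spatial size after p x p max pooling (stride p)."""
    return n // p

n = 128               # input height/width from the question
n = conv_valid(n, 9)  # SeparableConv2D(32, (9, 9)) -> 120
n = pool(n, 2)        # MaxPooling2D(2, 2)          -> 60
n = conv_valid(n, 9)  # SeparableConv2D(64, (9, 9)) -> 52
n = pool(n, 2)        # MaxPooling2D(2, 2)          -> 26

print(n)  # 26: adding this to the 128 x 128 input is a shape mismatch
```

A residual add therefore needs padding='same' with matching strides, or a projection on the shortcut (e.g. a strided 1x1 convolution) so both tensors end up the same shape, which is how ResNet handles it.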

'NoneType' object has no attribute '_inbound_nodes' error

Submitted by 。_饼干妹妹 on 2020-05-30 08:03:17
Question: I have to take the output of the last conv layer of EfficientNet and then calculate H = wT*x + b. My w is [49,49]. After that I have to apply softmax on H and then do the element-wise multiplication Xi' = Hi*Xi. This is my code:

```python
common_input = layers.Input(shape=(224, 224, 3))
x = model0(common_input)  # model0 ends with the last conv layer of EfficientNet (7, 7, 1280)
x = layers.BatchNormalization()(x)
W = tf.Variable(tf.random_normal([49, 49], seed=0), name='weight')
b = tf.Variable(tf.random_normal([49],
```
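Setting the Keras graph issue aside for a moment, the intended computation (reshape the 7x7x1280 map to 49x1280, score the 49 positions, softmax, then reweight each position) can be sketched in NumPy. The shapes follow the question; the random values are placeholders, and how the question collapses the channel axis is not shown, so the mean is my assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

X = rng.standard_normal((49, 1280))  # 7*7 spatial positions x 1280 channels
w = rng.standard_normal((49, 49))    # the [49, 49] weight from the question
b = rng.standard_normal(49)

# H = w^T x + b, computed per channel then pooled to one score per position
# (the channel reduction via mean is an assumption, not from the question).
H = (w.T @ X).mean(axis=1) + b       # shape (49,)

# Softmax over the 49 spatial positions.
H = np.exp(H - H.max())
H = H / H.sum()

# Element-wise reweighting Xi' = Hi * Xi (broadcast over channels).
X_weighted = H[:, None] * X

print(X_weighted.shape)  # (49, 1280)
```

In Keras, mixing raw `tf.Variable` arithmetic with layer outputs is what typically triggers the `'NoneType' object has no attribute '_inbound_nodes'` error; wrapping the computation in a `layers.Lambda` or a custom `Layer` keeps every op inside the Keras graph.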

model.fit_generator for a dual-path CNN

Submitted by 旧巷老猫 on 2020-05-14 09:07:45
Question: I am trying to run a parallel-path CNN that is concatenated with a dense layer. I have named the first path model1, the second path model2, and the concatenated model containing the parallel paths model. I have compiled the model, and the model summary also works. Now I have to train the model, so I am feeding the input to the CNN model through model.fit_generator. I am using Keras version 2.1.6.

```python
base_model1 = model.fit_generator(["train_generator", "train_generator"], steps
```
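fit_generator expects a single Python generator that yields `([inputs_for_path_1, inputs_for_path_2], labels)` batches, not a list of generators. A plain-Python sketch of such a wrapper (the names, shapes, and stand-in generators are illustrative, not from the question):

```python
import numpy as np

def dual_input_generator(gen_a, gen_b):
    """Zip two single-input generators into the ([x1, x2], y) form a
    two-input Keras model expects; the labels are assumed identical."""
    while True:
        x1, y = next(gen_a)
        x2, _ = next(gen_b)  # assumes both generators yield in lockstep
        yield [x1, x2], y

def fake_image_gen(value):
    """Stand-in for a Keras ImageDataGenerator flow (illustrative only)."""
    while True:
        yield np.full((2, 8, 8, 3), value), np.array([0, 1])

gen = dual_input_generator(fake_image_gen(1.0), fake_image_gen(2.0))
(x1, x2), y = next(gen)
print(x1.shape, x2.shape, y.shape)  # (2, 8, 8, 3) (2, 8, 8, 3) (2,)
```

The training call then becomes `model.fit_generator(gen, steps_per_epoch=...)`, with the list order matching the order of the model's input layers.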

Understanding the Details of CNNs in Depth

Submitted by 点点圈 on 2020-04-11 19:57:11
Data Preprocessing

Mean subtraction

Why zero-center the data? People usually perceive image information not from the absolute pixel values but from the relative differences between pixels. Zero-centering does not remove those relative differences (the "AC" information); it only removes the influence of the "DC" offset. An overly large mean can also produce overly large parameter gradients, and later processing steps, such as PCA, may require zero-mean data anyway.

Suppose the data is stored in a matrix X of shape (N, D), where N is the number of samples and D is the sample dimension. Zero-centering can be done with NumPy:

```python
X -= numpy.mean(X, axis=0)
```

i.e. each column of X has that column's mean subtracted. For grayscale images, you can also subtract the mean of the whole image:

```python
X -= numpy.mean(X)
```

For color images, apply the same operation separately in each of the 3 color channels.

Normalization

Why normalize? Normalization gives the data along different dimensions the same scale. Suppose two-dimensional data (x1, x2) has both dimensions following a zero-mean normal distribution, but x1 has variance 100 and x2 has variance 1. If you sampled (x1, x2) and plotted the points in 2D coordinates, the picture would be a very elongated ellipse. Feature extraction on such data uses expressions of the form S = w1*x1 + w2*x2 + b, so: dS / dw1 = x1, dS / dw2 =
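The two steps above, zero-centering and variance normalization, can be combined into a single sketch (the data here is synthetic, with the variances 100 and 1 from the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: x1 has std 10 (variance 100), x2 has std 1, as in the example.
X = np.column_stack([10.0 * rng.standard_normal(1000),
                     rng.standard_normal(1000)])

X -= np.mean(X, axis=0)  # zero-center each dimension (mean subtraction)
X /= np.std(X, axis=0)   # rescale each dimension to unit variance (normalization)

print(np.allclose(X.mean(axis=0), 0.0))  # True
print(np.allclose(X.std(axis=0), 1.0))   # True
```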