conv-neural-network

How to plot epoch vs. val_acc and epoch vs. val_loss graph in CNN?

Submitted by 混江龙づ霸主 on 2021-01-29 09:27:09
Question: I used a convolutional neural network (CNN) to train on a dataset. The history gives me epoch, val_loss, val_acc, total loss, training time, etc. If I want to calculate the average accuracy, how do I access val_acc, and how do I plot the epoch vs. val_acc and epoch vs. val_loss graphs? convnet = input_data(shape=[None, IMG_SIZE, IMG_SIZE, 3], name='input') convnet = conv_2d(convnet, 32, 3, activation='relu') convnet = max_pool_2d(convnet, 3) convnet = conv_2d(convnet, 64, 3, activation=
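A minimal sketch of how this is usually done with a Keras-style History object (the snippet above uses TFLearn, so treat model, X_train and y_train here as placeholders for the asker's own setup):

import matplotlib.pyplot as plt

history = model.fit(X_train, y_train, validation_split=0.2, epochs=20)

val_acc = history.history['val_acc']      # 'val_accuracy' in newer Keras versions
val_loss = history.history['val_loss']
print('average validation accuracy:', sum(val_acc) / len(val_acc))

epochs = range(1, len(val_acc) + 1)
plt.plot(epochs, val_acc, label='val_acc')
plt.plot(epochs, val_loss, label='val_loss')
plt.xlabel('epoch')
plt.legend()
plt.show()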

How to compute the number of weights of a CNN?

Submitted by 孤街醉人 on 2021-01-29 08:55:01
Question: How can we compute the number of weights of a convolutional neural network used to classify images into two classes? INPUT: 100x100 gray-scale images. LAYER 1: Convolutional layer with 60 7x7 convolutional filters (stride=1, valid padding). LAYER 2: Convolutional layer with 100 5x5 convolutional filters (stride=1, valid padding). LAYER 3: A max pooling layer that down-samples Layer 2 by a factor of 4 (e.g., from 500x500 to 250x250). LAYER 4: Dense layer with 250 units. LAYER 5:
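The counting rule is the same for every layer; the rough sketch below fills it in for the layers that are fully specified (the description is cut off before Layer 5, and the pooling is assumed to halve each spatial dimension, as the 500x500 to 250x250 example suggests):

# conv weights  = out_channels * (kernel_h * kernel_w * in_channels) + out_channels biases
# dense weights = inputs * units + units biases
conv1 = 60 * (7 * 7 * 1) + 60           # 3,000; 100x100x1 input -> 94x94x60 (valid padding, stride 1)
conv2 = 100 * (5 * 5 * 60) + 100        # 150,100; 94x94x60 -> 90x90x100
# max pooling has no weights; 90x90x100 -> 45x45x100 under the halving assumption
dense4 = (45 * 45 * 100) * 250 + 250    # 50,625,250
print(conv1, conv2, dense4)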

Compile error on keras sequential model with custom loss function

Submitted by ε祈祈猫儿з on 2021-01-29 07:54:48
Question: I am trying to compile a CNN model with ~16K parameters on a GPU in Google Colab for the MNIST dataset. With the standard 'categorical_crossentropy' loss it works fine, but with a custom loss it gives an error. lamda=0.01 m = X_train.shape[0] def reg_loss(lamda): model_layers = custom_model.layers # type list where each el is Conv2D obj etc. reg_wts = 0 for idx, layer in enumerate(model_layers): layer_wts = model_layers[idx].get_weights() # type list if len(layer_wts) > 0: # activation, dropout layers do
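The error message is cut off above, but a common pitfall with custom regularised losses is building the penalty from get_weights(), which returns NumPy arrays and so is disconnected from the graph. A minimal sketch of a graph-friendly alternative, assuming a tf.keras model named custom_model that is already built:

import tensorflow as tf

def make_reg_loss(model, lamda=0.01):
    def loss_fn(y_true, y_pred):
        ce = tf.keras.losses.categorical_crossentropy(y_true, y_pred)
        # sum of squared weights over all trainable tensors, kept inside the graph
        reg = tf.add_n([tf.reduce_sum(tf.square(w)) for w in model.trainable_weights])
        return ce + lamda * reg
    return loss_fn

custom_model.compile(optimizer='adam', loss=make_reg_loss(custom_model), metrics=['accuracy'])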

Decreasing training loss, stable validation loss - is the model overfitting?

Submitted by 假装没事ソ on 2021-01-29 07:40:20
Question: Does my model overfit? I would be sure it had overfitted if the validation loss increased heavily while the training loss decreased. However, the validation loss is nearly stable, so I am not sure. Can you please help? Answer 1: I assume that you're using different hyperparameters? Perhaps save the parameters and resume with a different set of hyperparameters. This comment really depends on how you're doing hyperparameter optimization. Try with different training/test splits. It might be

How to feed a conv2d net with a large npy file without overwhelming RAM?

Submitted by 丶灬走出姿态 on 2021-01-29 07:36:25
Question: I have a large dataset in .npy format of size (500000, 18). In order to feed it into a conv2D net using a generator, I split it into X and y and reshape them to (-1, 96, 10, 10, 17) and (-1, 1), respectively. However, when I feed it into the model I get a memory error: 2020-08-26 14:37:03.691425: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 1 Chunks of size 462080 totalling 451.2KiB 2020-08-26 14:37:03.691432: I tensorflow/core/common_runtime/bfc_allocator.cc:812] 1 Chunks
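The code is cut off above, but one common way to avoid holding the whole array in RAM is to memory-map the .npy file and serve batches through a generator. A rough sketch, assuming tf.keras and a placeholder file name; the exact X/y split and reshape depend on how the 18 columns map to samples, which the question only partially shows:

import numpy as np
from tensorflow.keras.utils import Sequence

class NpyBatches(Sequence):
    def __init__(self, path, batch_size=256):
        self.data = np.load(path, mmap_mode='r')   # nothing is loaded into RAM yet
        self.batch_size = batch_size

    def __len__(self):
        return int(np.ceil(len(self.data) / self.batch_size))

    def __getitem__(self, idx):
        chunk = np.asarray(self.data[idx * self.batch_size:(idx + 1) * self.batch_size])
        X, y = chunk[:, :-1], chunk[:, -1:]        # adapt the split/reshape to the real layout
        return X, y

# model.fit(NpyBatches('data.npy'), epochs=10)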

Why does my CNN not predict labels as expected?

Submitted by ▼魔方 西西 on 2021-01-29 05:21:05
Question: I am new to the concept of similarity learning. I am currently building a face recognition model using a Siamese neural network for the Labelled Faces in the Wild dataset. Code for the Siamese network model (consider each code snippet to be a cell in Colab): from keras.applications.inception_v3 import InceptionV3 from keras.applications.mobilenet_v2 import MobileNetV2 from keras.models import Model from keras.layers import Input,Flatten def return_inception_model(): input_vector=Input((224,224,3))
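The code above is cut off; for context, the usual Siamese arrangement looks roughly like the sketch below: both images go through the same shared base network and a distance on the embeddings drives a binary same/different output. Here return_inception_model() is assumed to return that shared embedding model, as in the question:

from keras.layers import Input, Lambda, Dense
from keras.models import Model
import keras.backend as K

base = return_inception_model()                    # shared weights, defined in the question
in_a, in_b = Input((224, 224, 3)), Input((224, 224, 3))
emb_a, emb_b = base(in_a), base(in_b)              # same base model applied to both inputs
dist = Lambda(lambda t: K.abs(t[0] - t[1]))([emb_a, emb_b])
out = Dense(1, activation='sigmoid')(dist)         # 1 = same person, 0 = different person
siamese = Model([in_a, in_b], out)
siamese.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])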

CNN pytorch: How are parameters selected and how do they flow between layers?

Submitted by 寵の児 on 2021-01-29 04:23:49
Question: I'm pretty new to CNNs and have been following the code below. I'm not able to understand how and why each argument of Conv2d() and nn.Linear() was chosen, i.e. the output channels, filters, input channels, weights, padding and stride. I do understand the meaning of each, though. Can someone very succinctly explain the flow through each layer? (Input image size is 32*32*3) import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self): super(Net, self).__init__(
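A small sketch tracing how the sizes line up for a 32x32x3 input, assuming the CIFAR-style layout the question appears to follow (the specific channel counts are illustrative, not prescribed):

import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)        # in_channels=3 (RGB), out_channels=6, 5x5 kernel
        self.pool = nn.MaxPool2d(2, 2)         # halves height and width
        self.conv2 = nn.Conv2d(6, 16, 5)       # in_channels must equal conv1's out_channels
        self.fc1 = nn.Linear(16 * 5 * 5, 120)  # 32 -> 28 -> 14 -> 10 -> 5 spatially, 16 channels left

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))   # 3x32x32 -> 6x28x28 -> 6x14x14
        x = self.pool(F.relu(self.conv2(x)))   # 6x14x14 -> 16x10x10 -> 16x5x5
        x = x.view(x.size(0), -1)              # flatten to 16*5*5 = 400 features
        return self.fc1(x)

print(Net()(torch.zeros(1, 3, 32, 32)).shape)  # torch.Size([1, 120])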

Can I import tensorflow and keras in Maya or Blender?

Submitted by ▼魔方 西西 on 2021-01-28 18:52:06
Question: I am participating in a workshop where we need to automatically rig characters. Perhaps we will use deep learning methods. The task is to recognize body parts. My question: is there a way to connect TensorFlow and Keras, or other neural network libraries, with 3D software? Answer 1: For Blender you can follow this tutorial: https://www.youtube.com/watch?v=J7Iu1rfwbds I tested it in Blender 2.81 and Python 3.7 by importing pytorch, opencv, sklearn, etc. Also the test code provided in the video
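The answer is cut off above; independent of the linked video, one common workaround is to point the 3D package's bundled Python at an external environment's site-packages so libraries installed there can be imported. A sketch with a hypothetical path:

import sys
sys.path.append('/path/to/your/env/lib/python3.7/site-packages')  # hypothetical path to the external environment

import tensorflow as tf
print(tf.__version__)

This only works if the external environment's Python version matches the Python bundled with Blender or Maya.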

Keras dimension mismatch with ImageDataGenerator

Submitted by 折月煮酒 on 2021-01-28 07:13:41
Question: I am attempting to 'flow' my data into a neural network with Keras. I am using the .flow_from_directory method and the process is giving me fits. I am using the basic example from the Keras documentation (I am using TensorFlow): ROWS = 64 COLS = 64 CHANNELS = 3 from keras.preprocessing.image import ImageDataGenerator train_datagen = ImageDataGenerator( rescale=1./255) test_datagen = ImageDataGenerator(rescale=1./255) train_generator = train_datagen.flow_from_directory( 'train', target_size=
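The snippet is cut off above; as a reference point, a minimal sketch of the usual pairing is below: target_size in flow_from_directory has to match the model's input_shape, and class_mode has to match the output layer and loss. The directory name and the tiny model are placeholders, not the asker's actual network:

from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, Flatten, Dense

ROWS, COLS, CHANNELS = 64, 64, 3

train_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
    'train',
    target_size=(ROWS, COLS),        # images are resized to 64x64 here...
    batch_size=32,
    class_mode='binary')

model = Sequential([
    Conv2D(16, 3, activation='relu', input_shape=(ROWS, COLS, CHANNELS)),  # ...so this must be 64x64x3
    Flatten(),
    Dense(1, activation='sigmoid')   # single sigmoid unit matches class_mode='binary'
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# model.fit(train_generator, epochs=5)   # or fit_generator on older Keras versions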