deep-learning

TypeError: tuple indices must be integers or slices, not list - while loading a model in Keras

£可爱£侵袭症+ submitted on 2021-02-16 22:03:14
Question: In short, I have two trained models, one trained on 2 classes and the other on 3 classes. My code loads a model, loads an image, and predicts a classification result:

    finetune_model = tf.keras.models.load_model(modelPath)
    model = load_model(my_file)
    img = image.load_img(img_path, target_size=(img_width, img_height))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    preds = model.predict(x)

The model file is of .h5 type. When loading the 2-class trained model, it…
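
A minimal sketch of the same load-and-predict flow, assuming TensorFlow 2.x; the file names and the 224x224 target size are placeholders, not taken from the question. Passing compile=False to load_model skips deserializing the optimizer and metrics, which often sidesteps load-time TypeErrors caused by version mismatches in saved .h5 files:

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras.preprocessing import image
    from tensorflow.keras.applications.imagenet_utils import preprocess_input

    # compile=False avoids rebuilding the training config, a common source of
    # load-time errors when the saving and loading Keras versions differ.
    model = tf.keras.models.load_model("finetuned_model.h5", compile=False)

    img = image.load_img("test.jpg", target_size=(224, 224))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)   # add a batch dimension: (1, H, W, C)
    x = preprocess_input(x)
    preds = model.predict(x)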

Proper usage of `tf.scatter_nd` in tensorflow-r1.2

▼魔方 西西 submitted on 2021-02-16 19:08:09
Question: Given indices with shape [batch_size, sequence_len], updates with shape [batch_size, sequence_len, sampled_size], and to_shape of [batch_size, sequence_len, vocab_size], where vocab_size >> sampled_size, I'd like to use tf.scatter_nd to map the updates to a huge tensor of shape to_shape, such that to_shape[bs, indices[bs, sz]] = updates[bs, sz]. That is, I'd like to map the updates into to_shape row by row. Please note that sequence_len and sampled_size are scalar tensors, while the others are…
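
A minimal sketch of one way to do this with tf.scatter_nd, written against TF 2.x rather than r1.2 (tf.broadcast_to did not exist in r1.2), and assuming indices carry one vocabulary id per sampled update, i.e. shape [batch_size, sequence_len, sampled_size]. tf.scatter_nd needs a full (batch, step, vocab) coordinate triple for every update element, so the batch and step axes are materialized explicitly:

    import tensorflow as tf

    batch_size, seq_len, sampled_size, vocab_size = 2, 3, 4, 10
    indices = tf.random.uniform([batch_size, seq_len, sampled_size],
                                maxval=vocab_size, dtype=tf.int32)
    updates = tf.random.normal([batch_size, seq_len, sampled_size])

    # Materialize the batch and sequence coordinates for every update element.
    b = tf.broadcast_to(tf.range(batch_size)[:, None, None], indices.shape)
    s = tf.broadcast_to(tf.range(seq_len)[None, :, None], indices.shape)
    full_idx = tf.stack([b, s, indices], axis=-1)          # (B, S, K, 3)

    # out[b, s, indices[b, s, k]] = updates[b, s, k]; duplicate ids are summed.
    out = tf.scatter_nd(full_idx, updates, [batch_size, seq_len, vocab_size])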

Keras: batch training for multiple large datasets

孤者浪人 submitted on 2021-02-16 04:28:50
Question: This question regards the common problem of training in Keras on multiple large files that are jointly too large to fit in GPU memory. I am using Keras 1.0.5 and I would like a solution that does not require 1.0.6. One way to do this was described by fchollet here and here:

    # Create generator that yields (current features X, current labels y)
    def BatchGenerator(files):
        for file in files:
            current_data = pickle.load(open(file, "rb"))
            X_train = current_data[:, :-1]
            y_train = current_data[:, -1]
            …
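
A minimal sketch of such a file-cycling generator, assuming pickled 2-D arrays whose last column is the label; the file names are hypothetical. Modern Keras (TF 2.1+) accepts the generator directly in model.fit(); in Keras 1.x the equivalent entry point is model.fit_generator():

    import pickle

    def batch_generator(files, batch_size=128):
        while True:                          # Keras expects an endless generator
            for path in files:
                with open(path, "rb") as f:
                    data = pickle.load(f)    # one large file in memory at a time
                X, y = data[:, :-1], data[:, -1]
                for i in range(0, len(X), batch_size):
                    yield X[i:i + batch_size], y[i:i + batch_size]

    # files = ["train_part0.pkl", "train_part1.pkl"]   # hypothetical paths
    # model.fit(batch_generator(files), steps_per_epoch=..., epochs=10)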

Keras model summary incorrect

删除回忆录丶 submitted on 2021-02-11 15:51:05
Question: I am doing data augmentation using

    data_gen = image.ImageDataGenerator(rotation_range=20, width_shift_range=0.2,
                                        height_shift_range=0.2, zoom_range=0.15,
                                        horizontal_flip=False)
    iter = data_gen.flow(X_train, Y_train, batch_size=64)

data_gen.flow() needs a rank-4 data matrix, so the shape of X_train is (60000, 28, 28, 1). We need to pass the same shape, i.e. (60000, 28, 28, 1), while defining the architecture of the model, as follows:

    model = Sequential()
    model.add(Dense(units=64, activation='relu', kernel…
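
For context, a Dense layer applied to a rank-4 input acts only on the last axis, so model.summary() reports shapes like (None, 28, 28, 64) rather than (None, 64), which is why the summary can look "incorrect". A minimal sketch illustrating the difference, assuming MNIST-shaped input; Flatten first if a flat feature vector is intended:

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Flatten, Dense

    model = Sequential([
        Flatten(input_shape=(28, 28, 1)),   # (None, 784)
        Dense(64, activation='relu'),       # (None, 64), not (None, 28, 28, 64)
        Dense(10, activation='softmax'),
    ])
    model.summary()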

Google BERT and antonym detection

旧巷老猫 submitted on 2021-02-11 15:10:55
Question: I recently learned about the following phenomenon: the word embeddings of well-known state-of-the-art models such as Google BERT seem to ignore the measure of semantic contrast between antonyms in terms of the natural distance (L2 norm or cosine distance) between the corresponding embeddings. For example: The measure is the "cosine distance" (as opposed to the "cosine similarity"), which means closer vectors are supposed to have a smaller distance between them. As one can see, BERT states "weak" and "powerful…
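
A minimal sketch of how such a measurement is typically made, assuming the Hugging Face transformers package and the bert-base-uncased checkpoint (both assumptions, not taken from the question); it embeds two single-word inputs and reports their cosine distance:

    import torch
    from transformers import BertModel, BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertModel.from_pretrained("bert-base-uncased")

    def embed(word):
        inputs = tokenizer(word, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**inputs).last_hidden_state   # (1, tokens, 768)
        return hidden.mean(dim=1).squeeze(0)             # average over tokens

    a, b = embed("weak"), embed("powerful")
    cos_sim = torch.nn.functional.cosine_similarity(a, b, dim=0)
    print("cosine distance:", (1 - cos_sim).item())

Distances computed this way reflect distributional similarity, and antonyms occur in very similar contexts, which is consistent with the phenomenon the question describes.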

Why is the TensorFlow official CNN example stuck at 10 percent accuracy (= random prediction) on my machine?

岁酱吖の submitted on 2021-02-11 14:18:26
Question: I am running the CNN example from the official TensorFlow website (https://www.tensorflow.org/tutorials/images/cnn). I have run the notebook as-is, without any modifications whatsoever. My training accuracy is stuck at 10%. I tried to overfit by using only the first 10 (image, label) pairs, but the result is still the same. The network just does not learn. Here is my model.summary():

    Model: "sequential"
    _________________________________________________________________
    Layer (type)…
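
For reference, the tutorial trains on CIFAR-10, which has 10 classes, so 10% accuracy is exactly chance level. A minimal sketch of the tutorial's setup, with the two details that most often cause this symptom called out: unscaled pixel values and a loss/activation mismatch (the final layer emits logits, so from_logits=True is required):

    import tensorflow as tf
    from tensorflow.keras import datasets, layers, models

    (x_train, y_train), _ = datasets.cifar10.load_data()
    x_train = x_train / 255.0                      # scale pixels to [0, 1]

    model = models.Sequential([
        layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(10),                          # logits: no softmax here
    ])
    model.compile(optimizer='adam',
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=['accuracy'])
    model.fit(x_train[:1000], y_train[:1000], epochs=3)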

Keras not running in multiprocessing

不打扰是莪最后的温柔 submitted on 2021-02-11 14:18:15
Question: I'm trying to run my Keras model using multiprocessing due to a GPU OOM issue. I loaded all the libraries and set up the model within the function used for multiprocessing, as below: When I execute this code, it gets stuck at history = q.get(), which is multiprocessing.Queue.get(). And when I remove all the code related to multiprocessing.Queue(), execution ends as soon as I run it, which makes me suspect the code is not running at all. Even a simple print() didn't show any output.
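
A minimal sketch of one way to structure this, with the build-and-train logic reduced to a placeholder; the key details are importing TensorFlow only inside the child process, using the "spawn" start method so the child does not inherit an initialized CUDA context, and reading from the Queue before joining the process:

    import multiprocessing as mp

    def worker(q):
        import tensorflow as tf          # import inside the child, not the parent
        # ... build and fit the model here, e.g. history = model.fit(...) ...
        q.put({"loss": [0.5, 0.3]})      # placeholder for history.history

    if __name__ == "__main__":
        ctx = mp.get_context("spawn")    # avoids fork-related CUDA deadlocks
        q = ctx.Queue()
        p = ctx.Process(target=worker, args=(q,))
        p.start()
        history = q.get()                # get() before join() to avoid deadlock
        p.join()
        print(history)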

I'm reading images from disk; how do I plot, say, the first 50 images?

给你一囗甜甜゛ submitted on 2021-02-11 13:13:03
Question: I am reading images from disk. How do I write a for loop, and where do I put it, to plot the first 50 images on the screen? I want to make sure that I'm reading the right images; it's for deep learning.

    def load_clef_database():
        img_data_list = []
        dataset_dir = "/Users/PlantCLEF2015"
        root = os.path.join(dataset_dir, 'train')
        filenames = []  # files
        class_species = []
        class_species_unique = []
        class_species_unique_id = []
        class_familys = []
        class_geni = []
        class_ids = []
        class_contents = []
        metadata = […
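
A minimal sketch of such a plotting loop, assuming img_data_list ends up holding decoded image arrays (the variable name is taken from the excerpt); it draws the first 50 images in a 5x10 matplotlib grid as a visual sanity check:

    import matplotlib.pyplot as plt

    def show_first_images(images, n=50, cols=10):
        rows = (n + cols - 1) // cols
        fig, axes = plt.subplots(rows, cols, figsize=(2 * cols, 2 * rows))
        for ax in axes.flat:
            ax.axis('off')               # hide axes on every cell up front
        for ax, img in zip(axes.flat, images[:n]):
            ax.imshow(img)
        plt.tight_layout()
        plt.show()

    # show_first_images(img_data_list)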