tensorflow

How to convert logits to probability in binary classification in TensorFlow?

余生长醉 submitted on 2021-02-18 10:59:10
Question: logits = tf.matmul(inputs, weight) + bias. After the matmul operation, the logits are two values derived from the MLP layer. My target is binary classification; how do I convert the two logit values into probabilities, a positive and a negative probability that sum to 1?

Answer 1: predictions = tf.nn.softmax(logits)

Answer 2: I am writing this answer for anyone who needs further clarification: if it is a binary classification with a single logit, it should be prediction = tf.round(tf.nn.sigmoid(logit)); if there are two logits (or more classes), use prediction = tf.nn.softmax(logits).
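A minimal sketch of both conversions (TF2 eager mode; the tensor values are illustrative):

import tensorflow as tf

# two logits from the MLP layer (illustrative values)
logits = tf.constant([[2.0, -1.0]])
probs = tf.nn.softmax(logits)        # e.g. [[0.95, 0.05]]; the two entries sum to 1
# single-logit alternative: sigmoid yields P(positive) directly
single_logit = tf.constant([0.7])
p_pos = tf.nn.sigmoid(single_logit)
print(probs.numpy(), p_pos.numpy())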

C++ use of Eigen in tensorflow

試著忘記壹切 submitted on 2021-02-18 10:54:16
Question: What is the relation between TensorFlow and Eigen, particularly regarding the tensor data structures? There are some older quotations (e.g. here) which state that TensorFlow uses Eigen extensively (afaik a TensorFlow developer has extended the Eigen code). More recent TensorFlow documentation, however, does not seem to refer to Eigen explicitly. Are the two tensor structures identical? Are they being updated concurrently? Is there any (possibly future) disadvantage in using Eigen::Tensor over TensorFlow's own tensor type?

How to print full (not truncated) tensor in tensorflow?

狂风中的少年 submitted on 2021-02-18 10:29:09
Question: Whenever I try printing I always get truncated results.

import tensorflow as tf
import numpy as np
np.set_printoptions(threshold=np.nan)
tensor = tf.constant(np.ones(999))
tensor = tf.Print(tensor, [tensor])
sess = tf.Session()
sess.run(tensor)

As you can see, I've followed the guide I found in "Print full value of tensor into console or write to file in tensorflow", but the output is simply ...\core\kernels\logging_ops.cc:79] [1 1 1...]. I want to see the full tensor, thanks.

Answer 1: This is solved with the summarize argument of tf.Print, which controls how many entries of each tensor are printed.
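A minimal sketch using summarize (TF1-style graph mode via the compat shim; summarize is part of the real tf.Print signature):

import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

tensor = tf.constant(np.ones(999))
# summarize sets how many entries of each tensor are printed (the default is only 3)
tensor = tf.Print(tensor, [tensor], summarize=999)
with tf.Session() as sess:
    sess.run(tensor)

In TF2 eager mode, tf.print(tensor, summarize=-1) prints every entry instead.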

How do you send arguments to a generator function using tf.data.Dataset.from_generator()?

最后都变了- submitted on 2021-02-18 10:24:05
Question: I would like to create a number of tf.data.Dataset objects using the from_generator() function, and I would like to send an argument to the generator function (raw_data_gen). The idea is that the generator will yield different data depending on the argument sent; in this way raw_data_gen can provide either training, validation, or test data.

training_dataset = tf.data.Dataset.from_generator(raw_data_gen, (tf.float32, tf.uint8), ([None, 1], [None]), args=([1]))
validation…
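A self-contained sketch, assuming a hypothetical raw_data_gen that switches on an integer split flag (1 = training here); the args values are converted to tensors and handed to the generator as NumPy values:

import numpy as np
import tensorflow as tf

def raw_data_gen(split):  # hypothetical generator; split arrives as a NumPy scalar
    n_batches = 3 if split == 1 else 2
    for i in range(n_batches):
        yield np.full((4, 1), float(i), np.float32), np.full((4,), i, np.uint8)

training_dataset = tf.data.Dataset.from_generator(
    raw_data_gen,
    output_types=(tf.float32, tf.uint8),
    output_shapes=([None, 1], [None]),
    args=(1,))  # note: a tuple such as (1,) rather than ([1]), which is just a list

for x, y in training_dataset:
    print(x.shape, y.shape)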

How to build a Neural Network with sentence embeding concatenated to pre-trained CNN

醉酒当歌 submitted on 2021-02-18 08:48:40
Question: I want to build a neural network that will take the feature map from the last layer of a CNN (VGG or ResNet, for example), concatenate an additional vector (for example, a 1x768 BERT vector), and re-train the last layer on a classification problem. So the architecture should be as in the diagram (image omitted), but I want to concatenate an additional vector to each feature vector (I have a sentence describing each frame). I have 5 possible labels and 100 frames in the input. Can someone help me as to how to do this?
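A hedged sketch of one way to wire this in Keras, processing one frame at a time; the ResNet50 backbone, input shape, and layer sizes are assumptions, with 5 labels and a 768-dim sentence vector taken from the question:

import tensorflow as tf
from tensorflow.keras import layers, Model

# frozen pre-trained CNN with global average pooling -> one feature vector per frame
backbone = tf.keras.applications.ResNet50(
    weights='imagenet', include_top=False, pooling='avg', input_shape=(224, 224, 3))
backbone.trainable = False

frame_in = layers.Input((224, 224, 3))
sentence_in = layers.Input((768,))            # e.g. a BERT sentence vector
features = backbone(frame_in)                 # shape (None, 2048)
merged = layers.Concatenate()([features, sentence_in])   # shape (None, 2816)
output = layers.Dense(5, activation='softmax')(merged)   # 5 possible labels

model = Model([frame_in, sentence_in], output)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')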

How to do Cohen Kappa Quadratic Loss in Tensorflow 2.0?

痴心易碎 submitted on 2021-02-18 08:16:14
Question: I'm trying to create the loss function according to "How can I specify a loss function to be quadratic weighted kappa in Keras?", but in TensorFlow 2.0 tf.contrib.metrics.cohen_kappa no longer exists. Is there an alternative?

Answer 1:

def kappa_loss(y_pred, y_true, y_pow=2, eps=1e-10, N=4, bsize=256, name='kappa'):
    """A continuous differentiable approximation of discrete kappa loss.
    Args:
        y_pred: 2D tensor or array, [batch_size, num_classes]
        y_true: 2D tensor or array, [batch_size, num_classes]
        y…
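In the same spirit, a sketch of a hand-rolled continuous approximation of quadratic weighted kappa, under the assumption that y_true is one-hot and y_pred is a softmax output:

import tensorflow as tf

def quadratic_kappa_loss(y_true, y_pred, num_classes=4, eps=1e-10):
    y_true = tf.cast(y_true, tf.float32)
    # quadratic disagreement weights w_ij = (i - j)^2 / (C - 1)^2
    idx = tf.range(num_classes, dtype=tf.float32)
    w = tf.square(idx[None, :] - idx[:, None]) / (num_classes - 1) ** 2
    # soft confusion matrix and its expectation under independent marginals
    conf = tf.matmul(y_true, y_pred, transpose_a=True)
    batch = tf.cast(tf.shape(y_true)[0], tf.float32)
    expected = tf.matmul(tf.reduce_sum(y_true, 0)[:, None],
                         tf.reduce_sum(y_pred, 0)[None, :]) / batch
    # ratio of observed to expected weighted disagreement; minimizing it maximizes kappa
    return tf.reduce_sum(w * conf) / (tf.reduce_sum(w * expected) + eps)

TensorFlow Addons also ships tfa.metrics.CohenKappa(num_classes=..., weightage='quadratic'), though that is a metric rather than a differentiable loss.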

Get output from a non final keras model layer

a 夏天 submitted on 2021-02-18 06:53:15
Question: I am using Ubuntu with Python 3 and Keras over TensorFlow. I am trying to create a model using transfer learning from a pre-trained Keras model, as explained here (link omitted). I am using the following code:

import numpy as np
from keras.applications import vgg16, inception_v3, resnet50, mobilenet
from keras import Model

a = np.random.rand(1, 224, 224, 3) + 0.001
a = mobilenet.preprocess_input(a)
mobilenet_model = mobilenet.MobileNet(weights='imagenet')
mobilenet_model.summary()
inputLayer = mobilenet_model…
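A minimal sketch for reading an intermediate activation: wrap the pre-trained model in a new Model whose output is the desired layer. The layer name below is an assumption; check mobilenet_model.summary() for the real names:

import numpy as np
from keras.applications import mobilenet
from keras import Model

a = np.random.rand(1, 224, 224, 3) + 0.001
a = mobilenet.preprocess_input(a)
mobilenet_model = mobilenet.MobileNet(weights='imagenet')

# pick any non-final layer by name and expose its output
feature_extractor = Model(inputs=mobilenet_model.input,
                          outputs=mobilenet_model.get_layer('conv_pw_13_relu').output)
features = feature_extractor.predict(a)
print(features.shape)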

Keras - Autoencoder accuracy stuck on zero

限于喜欢 submitted on 2021-02-18 03:21:41
Question: I'm trying to detect fraud using an autoencoder and Keras. I've written the following code as a notebook:

import numpy as np  # linear algebra
import pandas as pd  # data processing, CSV file I/O (e.g. pd.read_csv)
from sklearn.preprocessing import StandardScaler
from keras.layers import Input, Dense
from keras.models import Model
import matplotlib.pyplot as plt

data = pd.read_csv('../input/creditcard.csv')
data['normAmount'] = StandardScaler().fit_transform(data['Amount'].values.reshape(-1, 1))
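For context, a minimal sketch of such an autoencoder (synthetic data standing in for the scaled creditcard features; the layer sizes are assumptions). Reconstruction is a regression task, so MSE loss is the quantity to monitor: Keras 'accuracy' compares continuous outputs for exact equality and so stays near zero here:

import numpy as np
from keras.layers import Input, Dense
from keras.models import Model

X = np.random.rand(1000, 29).astype('float32')  # stand-in for the scaled transaction features

inp = Input(shape=(29,))
encoded = Dense(14, activation='relu')(inp)
decoded = Dense(29, activation='linear')(encoded)
autoencoder = Model(inp, decoded)

# track reconstruction error, not classification accuracy
autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.fit(X, X, epochs=2, batch_size=32, verbose=0)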

Keras EarlyStopping: Which min_delta and patience to use?

流过昼夜 submitted on 2021-02-17 19:13:56
Question: I am new to deep learning and Keras, and one of the improvements I am trying to make to my model training process is to use Keras's keras.callbacks.EarlyStopping callback. Based on the output from training my model, does it seem reasonable to use the following parameters for EarlyStopping?

EarlyStopping(monitor='val_loss', min_delta=0.0001, patience=5, verbose=0, mode='auto')

Also, why does training appear to stop sooner than it should if it was supposed to wait for 5 consecutive epochs without improvement?
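A self-contained sketch of those parameters in use (synthetic data; restore_best_weights is an optional extra). Note the semantics: an epoch only counts as an improvement if val_loss drops by more than min_delta, so five consecutive epochs of sub-min_delta improvements still trigger the stop:

import numpy as np
import tensorflow as tf

X, y = np.random.rand(200, 10), np.random.rand(200, 1)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu', input_shape=(10,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss', min_delta=0.0001, patience=5, verbose=0, mode='auto',
    restore_best_weights=True)  # roll back to the best epoch when stopping
model.fit(X, y, validation_split=0.2, epochs=100, callbacks=[early_stop], verbose=0)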
