tensorflow-datasets

Is there a way to pass dictionary in tf.data.Dataset w/ tf.py_func?

老子叫甜甜 submitted on 2019-12-08 07:51:14
Question: I'm using tf.data.Dataset for data processing and I want to apply some Python code with tf.py_func. However, I found that tf.py_func cannot return a dictionary. Is there any way to do it, or a workaround? My code looks like the following:

def map_func(images, labels):
    """mapping python function"""
    # do something
    # cannot be expressed as a tensor graph
    return {
        'images': images,
        'labels': labels,
        'new_key': new_value}

def tf_py_func(images, labels):
    return tf.py_func(map_func, [images,
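A common workaround (a minimal TF 1.x-style sketch; the toy data, dtypes, and the new_value computation below are assumptions, not the asker's code) is to return a flat tuple from tf.py_func and rebuild the dictionary in the surrounding graph-mode map function:

import numpy as np
import tensorflow as tf

def map_func(images, labels):
    # Plain Python work that cannot be expressed as a tensor graph.
    new_value = labels + 1  # hypothetical placeholder for the real computation
    return images, labels, new_value

def tf_py_func(images, labels):
    # tf.py_func can only return a flat list/tuple of tensors, so the
    # dictionary is reassembled here, in graph code, after the call.
    images, labels, new_value = tf.py_func(
        map_func, [images, labels], [tf.float32, tf.int64, tf.int64])
    return {'images': images, 'labels': labels, 'new_key': new_value}

dataset = tf.data.Dataset.from_tensor_slices(
    (np.zeros([4, 2], np.float32), np.arange(4, dtype=np.int64)))
dataset = dataset.map(tf_py_func)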

Tensorflow Dataset API restore Iterator after completing one epoch

魔方 西西 submitted on 2019-12-08 07:26:24
Question: I have 190 features and labels, and my batch size is 20, but after 9 iterations tf.reshape raises the exception "Input to reshape is a tensor with 21 values, but the requested shape has 60", and I know it is due to Iterator.get_next(). How do I restore my Iterator so that it starts serving batches from the beginning again? Answer 1: If you want to restart a tf.data.Iterator from the beginning of its Dataset, consider using an initializable iterator, which has an operation you can run to re-initialize
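A minimal sketch of that pattern (TF 1.x session style; the random features below are a toy stand-in for the real 190-example data):

import numpy as np
import tensorflow as tf

features = np.random.rand(190, 3).astype(np.float32)  # toy stand-in for the real features
labels = np.random.randint(0, 2, size=190)

dataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(20)
iterator = dataset.make_initializable_iterator()
next_batch = iterator.get_next()

with tf.Session() as sess:
    for epoch in range(3):
        # Running the initializer rewinds the iterator to the start of the
        # dataset, so each epoch serves batches from the beginning again.
        sess.run(iterator.initializer)
        while True:
            try:
                sess.run(next_batch)
            except tf.errors.OutOfRangeError:
                break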

Is it required to have predefined Image size to use transfer learning in tensorflow?

怎甘沉沦 submitted on 2019-12-08 02:39:51
Question: I intend to use a pre-trained model like faster_rcnn_resnet101_pets for object detection in a TensorFlow environment, as described here. I have collected several images for the training and testing sets. All these images are of varying sizes. Do I have to resize them to a common size? faster_rcnn_resnet101_pets uses ResNet with an input size of 224x224x3. Does this mean I have to resize all my images before sending them for training, or is it taken care of automatically by TF? python train.py --logtostderr --train

Replacing Queue-based input pipelines with tf.data

故事扮演 submitted on 2019-12-08 02:13:27
Question: I am reading Ganegedara's NLP with TensorFlow. The introduction to input pipelines has the following example:

import tensorflow as tf
import numpy as np
import os

# Defining the graph and session
graph = tf.Graph()  # Creates a graph
session = tf.InteractiveSession(graph=graph)  # Creates a session

# The filename queue
filenames = ['test%d.txt' % i for i in range(1, 4)]
filename_queue = tf.train.string_input_producer(
    filenames, capacity=3, shuffle=True, name='string_input_producer')

# check if all
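Since the book's example predates tf.data, a rough tf.data equivalent of that filename queue looks like the following (a sketch, assuming the same three test*.txt files exist and that the goal is simply to read them line by line):

import tensorflow as tf

filenames = ['test%d.txt' % i for i in range(1, 4)]

# tf.data replacement for the filename queue: shuffle the file names and
# read each file line by line, with no threads or queue runners to manage.
dataset = (tf.data.Dataset.from_tensor_slices(filenames)
           .shuffle(buffer_size=3)
           .flat_map(tf.data.TextLineDataset))

iterator = dataset.make_one_shot_iterator()
next_line = iterator.get_next()

with tf.Session() as sess:
    while True:
        try:
            print(sess.run(next_line))
        except tf.errors.OutOfRangeError:
            break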

I get an error when importing tensorflow_datasets

白昼怎懂夜的黑 submitted on 2019-12-07 17:35:27
Question: I want to use tensorflow_datasets in Jupyter (version 6.0.0) with Python 3. Doing that results in an error message, and I cannot seem to fathom what the problem is. I made a new kernel for Python which should use tensorflow_datasets. The following steps were taken (in Anaconda, using my administrator option):

1. conda info --envs
2. conda create --name py3-TF2.0 python=3
3. conda activate py3-TF2.0
4. pip install matplotlib
5. pip install tensorflow==2.0.0-alpha0
6. pip install ipykernel
7.
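The visible steps cut off before any tensorflow_datasets install. Assuming the package was installed into the same environment (pip install tensorflow-datasets) and the Jupyter kernel points at that environment, a minimal sanity check from within the notebook would be:

import tensorflow as tf
import tensorflow_datasets as tfds

print(tf.__version__)    # should report the version installed in this env
print(tfds.__version__)

# Loading a small dataset exercises the package end to end.
mnist_train = tfds.load('mnist', split='train')
print(mnist_train)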

How to use properly Tensorflow Dataset with batch?

可紊 submitted on 2019-12-07 14:04:39
Question: I am new to TensorFlow and deep learning, and I am struggling with the Dataset class. I tried a lot of things and I can't find a good solution. What I am trying: I have a large number of images (500k+) to train my DNN with. This is a denoising autoencoder, so I have a pair for each image. I am using the Dataset class of TF to manage the data, but I think I use it really badly. Here is how I load the filenames into a dataset:

class Data:
    def __init__(self, in_path, out_path):
        self.nb_images = 512
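For comparison, a minimal sketch of a filename-based tf.data pipeline for paired images; the directory names, PNG format, and batch size here are assumptions for illustration, not taken from the question:

import tensorflow as tf

def load_pair(noisy_path, clean_path):
    # Decode one (noisy, clean) image pair from its file paths.
    # Assumes all images share the same size; otherwise resize here as well.
    noisy = tf.image.decode_png(tf.read_file(noisy_path), channels=3)
    clean = tf.image.decode_png(tf.read_file(clean_path), channels=3)
    noisy = tf.image.convert_image_dtype(noisy, tf.float32)
    clean = tf.image.convert_image_dtype(clean, tf.float32)
    return noisy, clean

noisy_files = sorted(tf.gfile.Glob('in_path/*.png'))   # hypothetical directories
clean_files = sorted(tf.gfile.Glob('out_path/*.png'))

dataset = (tf.data.Dataset.from_tensor_slices((noisy_files, clean_files))
           .shuffle(buffer_size=len(noisy_files))
           .map(load_pair, num_parallel_calls=4)
           .batch(32)
           .prefetch(1))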

Interleaving multiple TensorFlow datasets together

随声附和 submitted on 2019-12-07 03:05:36
Question: The current TensorFlow dataset interleave functionality is basically an interleaved flat-map taking a single dataset as input. Given the current API, what's the best way to interleave multiple datasets together? Say they have already been constructed and I have a list of them. I want to produce elements from them alternately, and I want to support lists with more than 2 datasets (i.e., stacked zips and interleaves would be pretty ugly). Thanks! :) @mrry might be able to help. Answer 1: EDIT 2: See
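One round-robin approach that scales beyond two datasets (a sketch with three toy range datasets; not necessarily the answer the truncated "EDIT 2" points to) is tf.data.experimental.choose_from_datasets driven by a cycling choice dataset:

import tensorflow as tf

# Three hypothetical datasets to interleave in round-robin order.
datasets = [
    tf.data.Dataset.range(0, 10),
    tf.data.Dataset.range(100, 110),
    tf.data.Dataset.range(200, 210),
]

# A "choice" dataset that cycles 0, 1, 2, 0, 1, 2, ... decides which input
# dataset each successive element is drawn from.
choice = tf.data.Dataset.range(len(datasets)).repeat()
interleaved = tf.data.experimental.choose_from_datasets(datasets, choice)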

Is it required to have predefined Image size to use transfer learning in tensorflow?

浪子不回头ぞ submitted on 2019-12-06 09:26:01
I intend to use a pre-trained model like faster_rcnn_resnet101_pets for object detection in a TensorFlow environment, as described here. I have collected several images for the training and testing sets. All these images are of varying sizes. Do I have to resize them to a common size? faster_rcnn_resnet101_pets uses ResNet with an input size of 224x224x3. Does this mean I have to resize all my images before sending them for training, or is it taken care of automatically by TF?

python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/faster_rcnn_resnet101_pets.config

In general, is it a good
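As an aside, if you do decide to resize images yourself before training, a minimal tf.data sketch looks like the following; the directory, JPEG format, and 224x224 target are assumptions for illustration, and the Object Detection API normally handles resizing itself through the image_resizer section of the pipeline config:

import tensorflow as tf

def load_and_resize(path):
    # Decode a JPEG of arbitrary size and resize it to a fixed 224x224 shape.
    image = tf.image.decode_jpeg(tf.read_file(path), channels=3)
    return tf.image.resize_images(image, [224, 224])

filenames = tf.gfile.Glob('images/*.jpg')  # hypothetical directory
dataset = (tf.data.Dataset.from_tensor_slices(filenames)
           .map(load_and_resize)
           .batch(16))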

Neural Machine Translation model predictions are off-by-one

别说谁变了你拦得住时间么 submitted on 2019-12-06 07:19:57
Question: Problem summary: In the following example, my NMT model has high loss because it correctly predicts target_input instead of target_output. Targetin: 1 3 3 3 3 6 6 6 9 7 7 7 4 4 4 4 4 9 9 10 10 10 3 3 10 10 3 10 3 3 10 10 3 9 9 4 4 4 4 4 3 10 3 3 9 9 3 6 6 6 6 6 6 10 9 9 10 10 4 4 4 4 4 4 4 4 4 4 4 4 9 9 9 9 3 3 3 6 6 6 6 6 9 9 10 3 4 4 4 4 4 4 4 4 4 4 4 4 9 9 10 3 10 9 9 3 4 4 4 4 4 4 4 4 4 10 10 4 4 4 4 4 4 4 4 4 4 9 9 10 3 6 6 6 6 3 3 3 10 3 3 3 4 4 4 4 4 4 4 4 4 4 4 4 4 9 9 3 3 10 6 6 6 6
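For reference, the usual way to avoid exactly this off-by-one behaviour is to shift the decoder targets by one token: the decoder is fed a sequence that starts with a start-of-sequence marker, while the loss is computed against the same sequence shifted left to end with an end-of-sequence marker. A minimal sketch with hypothetical token ids (not the asker's vocabulary):

SOS_ID = 1  # hypothetical start-of-sequence id
EOS_ID = 2  # hypothetical end-of-sequence id

def make_decoder_targets(target):
    # target: list of token ids for one sentence, e.g. [5, 7, 9]
    target_input = [SOS_ID] + target    # what the decoder is fed
    target_output = target + [EOS_ID]   # what the loss compares against
    return target_input, target_output

# Example: the decoder sees [1, 5, 7, 9] and is trained to emit [5, 7, 9, 2],
# so at every time step the prediction is one token ahead of the input.
print(make_decoder_targets([5, 7, 9]))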

Getting free text features into Tensorflow Canned Estimators with Dataset API via feature_columns

自古美人都是妖i submitted on 2019-12-06 06:30:25
Question: I'm trying to build a model that gives reddit_score = f('subreddit', 'comment'). Mainly this is an example I can then build on for a work project. My code is here. My problem is that I see that canned estimators, e.g. DNNLinearCombinedRegressor, must have feature_columns that are part of the FeatureColumn class. I have my vocab file and know that if I were to just limit it to the first word of a comment, I could just do something like:

tf.feature_column.categorical_column_with_vocabulary_file(
    key=
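To go beyond the first word, one option is to wrap the vocabulary column in an embedding column and hand it to the canned estimator. This is a sketch with hypothetical column names, a made-up vocab.txt path, and arbitrary sizes; it also assumes the input_fn supplies each comment as a sparse list of word tokens (e.g. produced with tf.string_split on the raw text):

import tensorflow as tf

# Hypothetical feature column for the tokenized comment text.
comment_words = tf.feature_column.categorical_column_with_vocabulary_file(
    key='comment_words',
    vocabulary_file='vocab.txt',
    num_oov_buckets=1)

# The embedding column averages the word embeddings of a variable-length
# comment, giving the DNN part a fixed-size dense input.
comment_embedding = tf.feature_column.embedding_column(
    comment_words, dimension=16, combiner='mean')

subreddit = tf.feature_column.categorical_column_with_hash_bucket(
    'subreddit', hash_bucket_size=1000)

estimator = tf.estimator.DNNLinearCombinedRegressor(
    linear_feature_columns=[subreddit],
    dnn_feature_columns=[comment_embedding],
    dnn_hidden_units=[32, 16])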