I'm trying to convert the Iris tutorial (https://www.tensorflow.org/get_started/estimator) to read its training data from .png files instead of .csv. It works with `numpy_input_fn`, but not when I build the pipeline with a `Dataset`. I think `input_fn()` is returning the wrong type, but it isn't clear to me what the type should be or how to construct it. The error is:
```
File "iris_minimal.py", line 27, in <module>
    model_fn().train(input_fn(), steps=1)
...
raise TypeError('unsupported callable') from ex
TypeError: unsupported callable
```
The TensorFlow version is 1.3. Full code:
```python
import tensorflow as tf
from tensorflow.contrib.data import Dataset, Iterator

NUM_CLASSES = 3

def model_fn():
    feature_columns = [tf.feature_column.numeric_column("x", shape=[4])]
    return tf.estimator.DNNClassifier([10, 20, 10], feature_columns,
                                      "tmp/iris_model", NUM_CLASSES)

def input_parser(img_path, label):
    one_hot = tf.one_hot(label, NUM_CLASSES)
    file_contents = tf.read_file(img_path)
    image_decoded = tf.image.decode_png(file_contents, channels=1)
    image_decoded = tf.image.resize_images(image_decoded, [2, 2])
    image_decoded = tf.reshape(image_decoded, [4])
    return image_decoded, one_hot

def input_fn():
    filenames = tf.constant(['images/image_1.png', 'images/image_2.png'])
    labels = tf.constant([0, 1])
    data = Dataset.from_tensor_slices((filenames, labels))
    data = data.map(input_parser)
    iterator = data.make_one_shot_iterator()
    features, labels = iterator.get_next()
    return features, labels

model_fn().train(input_fn(), steps=1)
```
I noticed a few errors in your snippet:

- `train` expects a callable: it invokes the input function itself, so you must pass `input_fn`, not the result of calling `input_fn()`.
- The features returned from `input_fn` must be a dict keyed by the feature-column names, i.e. `{'x': features}`, not a bare tensor.
- `DNNClassifier` applies `SparseSoftmaxCrossEntropyWithLogits` internally, so the labels should be integer class indices, not one-hot vectors.
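The first point can be seen in plain Python, independent of TensorFlow: `train` stores the function you hand it and calls it later, so passing `input_fn()` hands it a `(features, labels)` tuple, which is not callable (hence the `unsupported callable` error). A minimal sketch:

```python
def input_fn():
    # Stand-in for the real input function: returns (features, labels).
    return {"x": [1.0, 2.0, 3.0, 4.0]}, [0]

# The function object itself is callable -- this is what train() needs.
print(callable(input_fn))    # True

# The *result* of calling it is a tuple, which is not callable.
print(callable(input_fn()))  # False
```

This is why the fix below changes `model_fn().train(input_fn(), steps=1)` to `model_fn().train(input_fn, steps=1)`.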
Try the following code:
```python
import tensorflow as tf
from tensorflow.contrib.data import Dataset

NUM_CLASSES = 3

def model_fn():
    feature_columns = [tf.feature_column.numeric_column("x", shape=[4],
                                                        dtype=tf.float32)]
    return tf.estimator.DNNClassifier([10, 20, 10], feature_columns,
                                      "tmp/iris_model", NUM_CLASSES)

def input_parser(img_path, label):
    file_contents = tf.read_file(img_path)
    image_decoded = tf.image.decode_png(file_contents, channels=1)
    image_decoded = tf.image.resize_images(image_decoded, [2, 2])
    image_decoded = tf.reshape(image_decoded, [4])
    # Integer class index, not a one-hot vector.
    label = tf.reshape(label, [1])
    return image_decoded, label

def input_fn():
    filenames = tf.constant(['input1.jpg', 'input2.jpg'])
    labels = tf.constant([0, 1], dtype=tf.int32)
    data = Dataset.from_tensor_slices((filenames, labels))
    data = data.map(input_parser)
    data = data.batch(1)
    iterator = data.make_one_shot_iterator()
    features, labels = iterator.get_next()
    # Features must be a dict keyed by the feature-column name.
    return {'x': features}, labels

# Pass the callable input_fn itself, not the result of input_fn().
model_fn().train(input_fn, steps=1)
```