My question is about how to get batched inputs from multiple (or sharded) tfrecords. I have read the example at https://github.com/tensorflow/models/blob/master/inception/inception/image_processing.py#L410. The basic pipeline, taking the training set as an example, is: (1) first generate a series of tfrecords (e.g., train-000-of-005, train-001-of-005, ...); (2) from these filenames, build a list and push it into a tf.train.string_input_producer to get a queue; (3) at the same time create a tf.RandomShuffleQueue to do other work; (4) use tf.train.batch_join to generate batched inputs.
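For concreteness, here is my rough sketch of that queue-based pipeline as I understood it (the thread counts, feature shape, and batch size are placeholders I made up, not values from the linked file):

import tensorflow as tf

# (1)+(2): the list of shard names feeds a filename queue
filenames = ["train-%03d-of-005" % i for i in range(5)]
filename_queue = tf.train.string_input_producer(filenames, shuffle=True)

# (3): reader threads fill a RandomShuffleQueue with serialized Examples
examples_queue = tf.RandomShuffleQueue(
    capacity=10000, min_after_dequeue=1000, dtypes=[tf.string])
enqueue_ops = []
for _ in range(4):  # number of reader threads is arbitrary
    reader = tf.TFRecordReader()
    _, value = reader.read(filename_queue)
    enqueue_ops.append(examples_queue.enqueue([value]))
tf.train.add_queue_runner(tf.train.QueueRunner(examples_queue, enqueue_ops))

# (4): several parser threads, joined into a single batch
parsed = []
for _ in range(4):  # number of parser threads is arbitrary
    serialized = examples_queue.dequeue()
    features = tf.parse_single_example(
        serialized,
        {'X': tf.FixedLenFeature([100], tf.float32),  # made-up feature shape
         'y': tf.FixedLenFeature([], tf.int64)})
    parsed.append([features['X'], features['y']])
X_batch, y_batch = tf.train.batch_join(parsed, batch_size=32)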
I think this is complicated, and I am not sure about the logic of this procedure. In my case, I have a list of .npy files, and I want to generate sharded tfrecords (multiple separate tfrecords files, not just one big file). Each .npy file contains a different number of positive and negative samples (2 classes). A basic approach would be to generate one single big tfrecord file, but the file would be too large (~20 GB), so I resort to sharded tfrecords. Is there any simpler way to do this? Thanks.
The whole process is simplified using the Dataset API. Here are both the parts: (1) convert the numpy arrays to tfrecords, and (2, 3, 4) read the tfrecords to generate batches.
import numpy as np
import tensorflow as tf

def npy_to_tfrecords(npy_files, output_file):
    # Write records to a tfrecords file
    writer = tf.python_io.TFRecordWriter(output_file)
    # Loop through all the features you want to write
    for npy_file in npy_files:
        data = np.load(npy_file)
        # Say X is a 2-D np.array of floats and y is an np.array
        # holding the 0/1 labels; adapt this split to your own layout
        X, y = ...
        # Feature contains a map of string to feature proto objects
        feature = {}
        feature['X'] = tf.train.Feature(
            float_list=tf.train.FloatList(value=X.flatten()))
        feature['y'] = tf.train.Feature(
            int64_list=tf.train.Int64List(value=y))
        # Construct the Example proto object
        example = tf.train.Example(features=tf.train.Features(feature=feature))
        # Serialize the example to a string
        serialized = example.SerializeToString()
        # Write the serialized object to the disk
        writer.write(serialized)
    writer.close()
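To get the sharded files you asked about, you can simply split your list of .npy files and call the function above once per shard. A minimal sketch (the shard count and the filename pattern are arbitrary, and npy_files stands for your own list of .npy paths):

# Split the .npy list into shards and write one tfrecords file per shard
num_shards = 5
for shard_id in range(num_shards):
    shard_files = npy_files[shard_id::num_shards]
    npy_to_tfrecords(shard_files,
                     "train-%03d-of-%03d" % (shard_id, num_shards))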
# Creates a dataset that reads all of the examples from filenames.
filenames = ["file1.tfrecord", "file2.tfrecord", ..., "fileN.tfrecord"]
dataset = tf.contrib.data.TFRecordDataset(filenames)
# for version 1.5 and above use tf.data.TFRecordDataset

# example proto decode
def _parse_function(example_proto):
    keys_to_features = {
        'X': tf.FixedLenFeature((shape_of_npy_array), tf.float32),
        'y': tf.FixedLenFeature((), tf.int64, default_value=0)}
    parsed_features = tf.parse_single_example(example_proto, keys_to_features)
    return parsed_features['X'], parsed_features['y']

# Parse the record into tensors.
dataset = dataset.map(_parse_function)
# Shuffle the dataset
dataset = dataset.shuffle(buffer_size=10000)
# Repeat the input indefinitely
dataset = dataset.repeat()
# Generate batches
dataset = dataset.batch(batch_size)
# Create a one-shot iterator
iterator = dataset.make_one_shot_iterator()
# Get batch X and y
X, y = iterator.get_next()
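The X and y tensors can then be consumed directly in a session loop; a minimal sketch (num_steps and the training step are up to you):

with tf.Session() as sess:
    for _ in range(num_steps):
        # Each run fetches the next batch; repeat() makes it never end
        X_batch, y_batch = sess.run([X, y])
        # ...run your training op on X_batch / y_batch here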