
      How to use tf.Dataset design in training and inference?

        Date: 2023-09-29

                  Problem description


                  Say, we have input x and label y:

                  iterator = tf.data.Iterator.from_structure((x_type, y_type), (x_shape, y_shape))
                  tf_x, tf_y = iterator.get_next()
                  


                  Now I use a generator function to create the dataset:

                  def gen():
                      for ....: yield (x, y)
                  ds = tf.data.Dataset.from_generator(gen, (x_type, y_type), (x_shape, y_shape))
                  


                  In my graph I use tf_x and tf_y for training, which is fine. But now I want to do inference, where I don't have the label y. One workaround I made is to fake a y (like tf.zeros(y_shape)) and then use placeholders to initialize the iterator.

                  x_placeholder = tf.placeholder(...)
                  y_placeholder = tf.placeholder(...)
                  ds = tf.data.Dataset.from_tensors((x_placeholder, y_placeholder))
                  ds_init_op = iterator.make_initializer(ds)
                  sess.run(ds_init_op, feed_dict={x_placeholder: x, y_placeholder: fake(y)})
                  


                  My question is: is there a cleaner way to do this, without faking a y at inference time?

                  Update:


                  I experimented a little; it looks like one dataset operation, unzip, is missing:

                  import numpy as np
                  import tensorflow as tf
                  
                  
                  x_type = tf.float32
                  y_type = tf.float32
                  x_shape = tf.TensorShape([None, 128])
                  y_shape = tf.TensorShape([None, 10])
                  x_shape_nobatch = tf.TensorShape([128])
                  y_shape_nobatch = tf.TensorShape([10])
                  
                  iterator_x = tf.data.Iterator.from_structure((x_type,), (x_shape,))
                  iterator_y = tf.data.Iterator.from_structure((y_type,), (y_shape,))
                  
                  
                  def gen1():
                      for i in range(100):
                          yield np.random.randn(128)
                  ds1 = tf.data.Dataset.from_generator(gen1, (x_type,), (x_shape_nobatch,))
                  ds1 = ds1.batch(5)
                  ds1_init_op = iterator_x.make_initializer(ds1)
                  
                  
                  def gen2():
                      for i in range(80):
                          yield np.random.randn(128), np.random.randn(10)
                  ds2 = tf.data.Dataset.from_generator(gen2, (x_type, y_type), (x_shape_nobatch, y_shape_nobatch))
                  ds2 = ds2.batch(10)
                  
                  # my ds2 has two tensors in one element, now the problem is
                  # how can I unzip this dataset so that I can apply them to iterator_x and iterator_y?
                  # such as:
                  ds2_x, ds2_y = tf.data.Dataset.unzip(ds2)  #?? missing this unzip operation!
                  ds2_x_init_op = iterator_x.make_initializer(ds2_x)
                  ds2_y_init_op = iterator_y.make_initializer(ds2_y)
                  
                  
                  tf_x = iterator_x.get_next()
                  tf_y = iterator_y.get_next()
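
The missing unzip can be expressed with `map`: `tf.data` has no `Dataset.unzip`, but projecting each `(x, y)` element onto a single component achieves the same split. A minimal sketch, using toy constant tensors in place of the generators above:

```python
import tensorflow as tf

# Paired dataset like ds2 above: each element is an (x, y) tuple.
ds2 = tf.data.Dataset.from_tensor_slices(
    (tf.zeros([80, 128]), tf.ones([80, 10]))).batch(10)

# "Unzip" by projecting each pair onto one component.
ds2_x = ds2.map(lambda x, y: x)  # dataset of x batches only
ds2_y = ds2.map(lambda x, y: y)  # dataset of y batches only
```

Note that each projected dataset pulls from the source independently, so with a generator-backed ds2 the generator would run once per projection; caching or materializing the data first avoids that.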
                  

                  Recommended answer


                  The purpose of the Dataset API is to avoid feeding values directly to the session (because that causes the data to flow first to the client and then to a device).


                  All the examples I've seen that use the Dataset API also use the Estimator API, where you can provide different input functions for training and inference.

                  from tensorflow.examples.tutorials.mnist import input_data

                  def train_dataset(data_dir):
                    """Returns a tf.data.Dataset yielding (image, label) pairs for training."""
                    data = input_data.read_data_sets(data_dir, one_hot=True).train
                    return tf.data.Dataset.from_tensor_slices((data.images, data.labels))
                  
                  def infer_dataset(data_dir):
                    """Returns a tf.data.Dataset yielding images for inference."""
                    data = input_data.read_data_sets(data_dir, one_hot=True).test
                    return tf.data.Dataset.from_tensors((data.images,))
                  
                  ...
                  
                  def train_input_fn():
                    dataset = train_dataset(FLAGS.data_dir)
                    dataset = dataset.shuffle(buffer_size=50000).batch(1024).repeat(10)
                    (images, labels) = dataset.make_one_shot_iterator().get_next()
                    return (images, labels)
                  
                  mnist_classifier.train(input_fn=train_input_fn)
                  
                  ...
                  
                  def infer_input_fn():
                    return infer_dataset(FLAGS.data_dir).make_one_shot_iterator().get_next()
                  
                  mnist_classifier.predict(input_fn=infer_input_fn)
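
Outside Estimator, the same idea carries over to the raw-session style used in the question: make the prediction subgraph depend only on tf_x, and initialize the iterator from an x-only dataset at inference time, so no fake y is ever needed. A minimal sketch with a hypothetical linear model, written against tf.compat.v1 so it also runs under TF 2:

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()  # graph mode, as in the question's code

# x-only dataset for inference: no labels anywhere.
infer_ds = tf.data.Dataset.from_tensor_slices(
    np.random.randn(20, 128).astype(np.float32)).batch(5)

iterator = tf.data.Iterator.from_structure(tf.float32,
                                           tf.TensorShape([None, 128]))
tf_x = iterator.get_next()

# Hypothetical model: the prediction op depends only on tf_x, so the
# label-consuming (loss) part of the graph is simply never run here.
w = tf.Variable(tf.zeros([128, 10]))
pred = tf.matmul(tf_x, w)

infer_init_op = iterator.make_initializer(infer_ds)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(infer_init_op)
    out = sess.run(pred)  # one batch of predictions, no y involved
```

During training, the same iterator can instead be initialized from a dataset carrying x (with y driven through its own iterator, as in the update above), and the loss ops are fetched only then.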
                  


