Original source: https://www.cnblogs.com/nowornever-L/p/6991295.html

1. What are the .ckpt and .pb files generated by TensorFlow used for?

The .ckpt file is the model produced by TensorFlow; it contains all of the weights/parameters of the model. The .pb file stores the computational graph. To make TensorFlow work we need both the graph and the parameters. There are two ways to get the graph:
(1) Use the Python program that builds it in the first place (tensorflowNetworkFunctions.py).
(2) Use a .pb file (which would have to be generated by tensorflowNetworkFunctions.py).
The .ckpt file is where all the intelligence is.
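To make the checkpoint side of this concrete, here is a minimal sketch assuming the TensorFlow 1.x API this post is about; the toy graph, tensor names, and file paths are invented for illustration, not taken from the original project.

```python
import tensorflow as tf

# Build a toy graph; in practice this is the model-building code
# (e.g. tensorflowNetworkFunctions.py) mentioned above.
x = tf.placeholder(tf.float32, shape=[None, 2], name='x')
w = tf.Variable(tf.random_normal([2, 1]), name='w')
y = tf.matmul(x, w, name='y')

saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training would happen here ...
    saver.save(sess, './my_model.ckpt')  # writes the weights (the "intelligence") to a .ckpt checkpoint

# Another program can use the checkpoint only if it first rebuilds the same
# graph (way (1) above) and then restores the saved weights into it:
with tf.Session() as sess:
    saver.restore(sess, './my_model.ckpt')
    print(sess.run(y, feed_dict={x: [[1.0, 2.0]]}))
```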

2. TensorFlow saving into/loading a graph from a file

I happened to see that someone on StackOverflow had asked a related question, and the thread is well organized:

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

From what I've gathered so far, there are several different ways of dumping a TensorFlow graph
into a file and then loading it into another program, but I haven't been able to find clear examples/information on how they work. What I already know is this:

  1. Save the model's variables into a checkpoint file (.ckpt) using a tf.train.Saver() and restore them later (source)
  2. Save a model into a .pb file and load it back in using tf.import_graph_def() (source)
  3. Load in a model from a .pb file, retrain it, and dump it into a new .pb file using Bazel (source)
  4. Freeze the graph to save the graph and weights together (source)
  5. Use as_graph_def() to save the model, and for weights/variables, map them into constants (source)

However, I haven't been able to clear up several questions regarding these different methods:

  1. Regarding checkpoint files, do they only save the trained weights of a model? Could checkpoint files be loaded into a new program, and be used to run the model, or do they simply serve as ways to save the weights in a model at a certain time/stage?
  2. Regarding tf.train.write_graph(), does it save the weights/variables as well, or only the graph structure?
  3. Regarding Bazel, can it only save into/load from .pb files for retraining? Is there a simple Bazel command just to dump a graph into a .pb file?
  4. Regarding freezing, can a frozen graph be loaded in using tf.import_graph_def()?
  5. The Android demo for TensorFlow loads in Google's Inception model from a .pb file. If I wanted to substitute my own .pb file, how would I go about doing that? Would I need to change any native code/methods?
  6. In general, what exactly is the difference between all these methods? Or more broadly, what is the difference between a checkpoint and a GraphDef?

    In short, what I'm looking for is a method to save both a graph (as in, the various operations and such) and its weights/variables into a file, which can then be used to load the graph and weights into another program, for use (not necessarily continuing/retraining).

    Documentation about this topic isn't very straightforward, so any answers/information would be greatly appreciated.

There are many ways to approach the problem of saving a model in TensorFlow, which can make it a bit confusing. The documentation on this topic is taking shape, but doesn't cover all of the details in your question. Taking each of your sub-questions in turn:

  1. The checkpoint files (produced e.g. by calling saver.save() on a tf.train.Saver object) contain only the weights, and any other variables defined in the same program. To use them in another program, you must re-create the associated graph structure (e.g. by running code to build it again, or calling tf.import_graph_def()), which tells TensorFlow what to do with those weights. Note that calling saver.save() also produces a file containing a MetaGraphDef, which contains a graph and details of how to associate the weights from a checkpoint with that graph. See the tutorial for more details.
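    As a hedged sketch of that last point (TF 1.x API, reusing the file and tensor names from the earlier example): the .meta file written next to the checkpoint holds a MetaGraphDef, so another program can recover both the graph and the weights without re-running the model-building code.

```python
import tensorflow as tf

with tf.Session() as sess:
    # Recreate the graph from the MetaGraphDef that saver.save() wrote ...
    saver = tf.train.import_meta_graph('./my_model.ckpt.meta')
    # ... then load the variable values from the checkpoint into it.
    saver.restore(sess, './my_model.ckpt')

    # Feed and fetch by tensor name, since this program never built the graph itself.
    graph = tf.get_default_graph()
    x = graph.get_tensor_by_name('x:0')
    y = graph.get_tensor_by_name('y:0')
    print(sess.run(y, feed_dict={x: [[1.0, 2.0]]}))
```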


  2. tf.train.write_graph() only writes the graph structure; not the weights.
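    For example (a sketch under the same TF 1.x assumptions): the file written below contains the graph's nodes, but the trained value of w is not in it and would still have to come from a checkpoint.

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 2], name='x')
w = tf.Variable(tf.random_normal([2, 1]), name='w')
y = tf.matmul(x, w, name='y')

# Serializes only the GraphDef (structure); the value of `w` is NOT saved.
tf.train.write_graph(tf.get_default_graph().as_graph_def(),
                     './export', 'graph.pbtxt', as_text=True)
```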

  3. Bazel is unrelated to reading or writing TensorFlow graphs. (Perhaps I misunderstand your question: feel free to clarify it in a comment.)

  4. A frozen graph can be loaded using tf.import_graph_def(). In this case, the weights are (typically) embedded in the graph as constants, so you don't need to load a separate checkpoint.

  5. The main change would be to update the names of the tensor(s) that are fed into the model, and the names of the tensor(s) that are fetched from the model. In the TensorFlow Android demo, this would correspond to the inputName and outputName strings that are passed to TensorFlowClassifier.initializeTensorFlow().
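    A minimal sketch of points 4 and 5 together, under the same TF 1.x assumptions (tensor names and paths are made up): freeze the graph by converting its variables to constants, write the result to a .pb file, then load that file elsewhere and run it by feeding and fetching tensors by name.

```python
import tensorflow as tf
from tensorflow.python.framework import graph_util

# --- Freeze: fold the trained variable values into the graph as constants ---
with tf.Session() as sess:
    x = tf.placeholder(tf.float32, shape=[None, 2], name='x')
    w = tf.Variable(tf.random_normal([2, 1]), name='w')
    y = tf.matmul(x, w, name='y')
    sess.run(tf.global_variables_initializer())  # or saver.restore(...) after training

    frozen = graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names=['y'])
    with tf.gfile.GFile('./frozen.pb', 'wb') as f:
        f.write(frozen.SerializeToString())

# --- Load the frozen .pb elsewhere and run it by tensor name ---
with tf.Graph().as_default() as g:
    graph_def = tf.GraphDef()
    with tf.gfile.GFile('./frozen.pb', 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

    with tf.Session(graph=g) as sess:
        x = g.get_tensor_by_name('x:0')
        y = g.get_tensor_by_name('y:0')
        print(sess.run(y, feed_dict={x: [[1.0, 2.0]]}))
```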


  6. The GraphDef is the program structure, which typically does not change through the training process. The checkpoint is a snapshot of the state of a training process, which typically changes at every step of the training process. As a result, TensorFlow uses different storage formats for these types of data, and the low-level API provides different ways to save and load them. Higher-level libraries, such as Keras and skflow, build on these mechanisms to provide more convenient ways to save and restore an entire model.
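    For comparison, a sketch of the kind of one-call convenience such higher-level libraries provide, here using the Keras API (the toy model and layer sizes are arbitrary; at the time of the original post Keras was a separate package, but the calls are the same).

```python
from tensorflow import keras

# A toy model; architecture, weights, and optimizer state all live in the Model object.
model = keras.Sequential([
    keras.layers.Dense(8, activation='relu', input_shape=(2,)),
    keras.layers.Dense(1),
])
model.compile(optimizer='sgd', loss='mse')

model.save('model.h5')                           # one file: graph + weights + training config
restored = keras.models.load_model('model.h5')   # ready to use in another program
```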


answered Aug 15 at 6:07 by mrry

Does this mean that the C++ API documentation lies, when it says that you can load the graph saved with tf.train.write_graph() and then execute it? – mnicky

The C++ API documentation does not lie, but it is missing a few details. The most important detail is that, in addition to the GraphDef saved by tf.train.write_graph(), you also need to remember the names of the tensors that you want to feed and fetch when executing the graph. – mrry
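In other words (a sketch, TF 1.x, with hypothetical tensor names): give the tensors you plan to feed and fetch explicit names when building the graph, and record those names alongside the GraphDef written by tf.train.write_graph(), because the file itself does not say which nodes are the inputs and outputs.

```python
import tensorflow as tf

# Name the endpoints explicitly so whoever loads the exported graph
# (from Python or C++) knows what to feed and what to fetch.
images = tf.placeholder(tf.float32, shape=[None, 784], name='input')
logits = tf.layers.dense(images, 10)
probs = tf.nn.softmax(logits, name='output')

tf.train.write_graph(tf.get_default_graph().as_graph_def(),
                     './export', 'graph.pb', as_text=False)
# When executing this graph later, feed 'input:0' and fetch 'output:0'.
```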
