This post is about understanding how a self-driving deep learning network decides how to steer the wheel.

NVIDIA published a very interesting paper (https://arxiv.org/pdf/1604.07316.pdf) that describes how a deep learning network can be trained to steer a wheel, given a 200x66 RGB image from the front of a car.
This repository (https://github.com/SullyChen/Nvidia-Autopilot-TensorFlow) shared a TensorFlow implementation of the network described in the paper, and (thankfully!) a dataset of image / steering angle pairs collected from a human driving a car.
The dataset is quite small, and there are much larger datasets available, like the one from the Udacity challenge. However, it is great for quickly experimenting with this kind of network, and visualizing when the network is overfitting is also interesting.
I ported the code to Keras, trained a (very overfitted) network based on the NVIDIA paper, and made visualizations.
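
For reference, here is a minimal Keras sketch of an NVIDIA-style architecture like the one described in the paper. The layer sizes follow the paper, but the normalization, optimizer, and loss below are my own assumptions, not necessarily what the linked repository or my port uses:

```python
from keras.models import Sequential
from keras.layers import Lambda, Conv2D, Flatten, Dense

# Input is a 200x66 RGB frame, i.e. an array of shape (66, 200, 3)
model = Sequential()
# Normalize pixel values from [0, 255] to [-1, 1]
model.add(Lambda(lambda x: x / 127.5 - 1.0, input_shape=(66, 200, 3)))
# Three 5x5 convolutions with stride 2, then two 3x3 convolutions, as in the paper
model.add(Conv2D(24, (5, 5), strides=(2, 2), activation='relu'))
model.add(Conv2D(36, (5, 5), strides=(2, 2), activation='relu'))
model.add(Conv2D(48, (5, 5), strides=(2, 2), activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
# Fully connected head that regresses a single steering angle
model.add(Flatten())
model.add(Dense(100, activation='relu'))
model.add(Dense(50, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
```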

I think that if this kind of network eventually finds use in a real-world self-driving car, being able to debug it and understand its output will be crucial. Otherwise, the first time the network decides to make a very wrong turn, critics will say that it is just a black box we don't understand, and that it should be replaced!

First attempt: Treating the network as a black box - occlusion maps


The first thing we will try won't require any knowledge about the network; in fact, we won't peek inside the network at all, just look at its output. We'll create an occlusion map for a given image: we take many windows in the image, mask each one out, run the network, and see how the regressed angle changes. If the angle changes a lot, that window contains information that was important for the network's decision. We can then assign each window a score based on how much the angle changed!

We need to take many windows of different sizes, since we don't know in advance the sizes of the important features in the image.
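
Here is a minimal sketch of that occlusion procedure, assuming a Keras model that maps a single 66x200x3 image to one regressed angle; the window sizes, stride, and mean-pixel fill value are my own illustrative choices:

```python
import numpy as np

def occlusion_map(model, image, window_sizes=(20, 40, 60), stride=10):
    """Score each pixel by how much masking the windows covering it changes the predicted angle."""
    base_angle = model.predict(image[np.newaxis])[0, 0]
    heatmap = np.zeros(image.shape[:2], dtype=np.float32)
    counts = np.zeros(image.shape[:2], dtype=np.float32)
    for size in window_sizes:
        for y in range(0, image.shape[0] - size + 1, stride):
            for x in range(0, image.shape[1] - size + 1, stride):
                occluded = image.copy()
                occluded[y:y + size, x:x + size] = image.mean()  # mask the window out
                angle = model.predict(occluded[np.newaxis])[0, 0]
                # Windows that change the regressed angle a lot get a high score
                heatmap[y:y + size, x:x + size] += abs(angle - base_angle)
                counts[y:y + size, x:x + size] += 1
    return heatmap / np.maximum(counts, 1)
```

Running one forward pass per window is slow; in practice the occluded images can be stacked and passed to a single predict call.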

Now we can make nice effects like filtering the occlusion map and displaying the focused area on top of a blurred image.
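
One way to get that effect is sketched below with OpenCV, using the heatmap from the occlusion step; the smoothing kernels and the threshold are arbitrary choices:

```python
import cv2
import numpy as np

def highlight_focus(image, heatmap, threshold=0.5):
    """Blur the whole frame, then paste back the regions the occlusion map marked as important."""
    # Normalize and filter the occlusion map
    norm = (heatmap - heatmap.min()) / (heatmap.max() - heatmap.min() + 1e-8)
    norm = cv2.GaussianBlur(norm.astype(np.float32), (15, 15), 0)
    mask = (norm > threshold).astype(np.float32)[..., np.newaxis]
    # Composite: important regions stay sharp, everything else is blurred
    blurred = cv2.GaussianBlur(image.astype(np.float32), (21, 21), 0)
    composite = mask * image.astype(np.float32) + (1 - mask) * blurred
    return composite.astype(np.uint8)
```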

Original post:
http://jacobcv.blogspot.jp/2016/10/visualizations-for-regressing-wheel.html

Code:
https://github.com/jacobgil/keras-steering-angle-visualizations
