I've recently been looking into machine learning, to see whether hardware failure probabilities can be predicted from existing historical data. Below is a very interesting article, reposted from http://numenta.org/htm.html.

NuPIC is an open source project that implements HTM.

-------------------

There are many things humans find easy to do that computers are currently unable to do. Tasks such as visual pattern recognition, understanding spoken language, recognizing and manipulating objects by touch, and navigating in a complex world are easy for humans. Yet despite decades of research, we have few viable algorithms for achieving human-like performance on a computer.

In humans, these capabilities are largely performed by the neocortex. Hierarchical Temporal Memory (HTM) is a technology modeled on how the neocortex performs these functions. It offers the groundwork for building machines that approach or exceed human-level performance on many cognitive tasks. HTM is implemented within the NuPIC open source project.

Online Learning

Most machine learning techniques are relatively static. A model is constructed from a training data set, verified on a testing data set, and then applied to real-world data. However, the patterns and structure in the world change over time, so previously accurate models must be regularly retrained with new data, repeating the time and expense of the original process.

HTM, on the other hand, is an online learning system. It does not require conventional training and testing data sets. Instead, HTM learns continuously from each new data point. HTM is constantly making predictions, which are continually verified as more data arrives. As the underlying patterns in the data change, HTM adjusts accordingly. An online learning system such as HTM forces you to think about many things differently than you do with algorithms that rely on static training data sets.
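To make the contrast concrete, here is a minimal sketch of the predict-verify-learn loop described above. The `Model` class is a hypothetical stand-in (a trivial running-mean predictor), not the actual NuPIC API; the point is the shape of the loop: predict before seeing each new data point, verify against reality, then learn from that point immediately, with no separate retraining phase.

```python
class Model:
    """Toy online learner: predicts the running mean of what it has seen.
    A hypothetical stand-in for illustration, not the NuPIC API."""
    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def predict(self):
        return self.mean

    def learn(self, value):
        self.count += 1
        self.mean += (value - self.mean) / self.count

model = Model()
stream = [10.0, 12.0, 11.0, 50.0, 52.0]  # patterns may shift mid-stream

for value in stream:
    prediction = model.predict()      # predict before seeing the value...
    error = abs(prediction - value)   # ...then verify against what arrived
    model.learn(value)                # and adapt immediately, no retraining
    print(f"predicted={prediction:.1f} actual={value} error={error:.1f}")
```

There is no train/test split here: every data point is first used to check the previous prediction and then immediately folded into the model, which is what lets an online learner track a shifting stream.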

Sparse Distributed Representations

Computers store information in “dense” representations such as a 32 bit word where all combinations of 1s and 0s are possible.

By contrast, brains use sparse distributed representations (SDRs). The human neocortex has roughly 100 billion neurons, but at any given time only a small percentage are active. The activity of neurons is like bits in a computer, and therefore the representation is sparse. HTM also uses SDRs. A typical implementation of HTM might have 2048 columns and 64K artificial neurons, where as few as 40 might be active at once. There are many mathematical advantages to using SDRs; HTM and the brain could not work otherwise.
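As a rough illustration (not NuPIC code), an SDR can be stored compactly as the set of indices of its active bits. Using the figures from the text, 40 active bits out of 2048, the sparsity works out to about 2%:

```python
import random

# Dimensions taken from the text: 2048 bits, of which only 40 are active.
N_BITS = 2048
N_ACTIVE = 40

# Unlike a dense 32-bit word, an SDR is mostly zeros, so it can be
# represented compactly as the set of indices of its 1 bits.
sdr = set(random.sample(range(N_BITS), N_ACTIVE))

sparsity = len(sdr) / N_BITS
print(f"{len(sdr)} of {N_BITS} bits active ({sparsity:.1%} sparsity)")
# -> 40 of 2048 bits active (2.0% sparsity)
```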

This diagram represents a sparse distributed representation: two thousand circles with a small number of red circles active.

In SDRs, unlike in dense representations, each bit has meaning: if two vectors have 1s in the same position, they are semantically similar in that attribute. SDRs are how brains solve the problem of knowledge representation that has plagued AI for decades.
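A sketch of that idea: if each bit carries meaning, then the overlap (the count of shared 1 bits) between two SDRs measures their semantic similarity. The SDRs below are randomly generated purely for illustration; in a real HTM system an encoder, not `random`, would decide which bits are active.

```python
import random

N_BITS = 2048
N_ACTIVE = 40

def overlap(a, b):
    """Shared 1 bits between two SDRs; more overlap means more shared meaning."""
    return len(a & b)

base = set(random.sample(range(N_BITS), N_ACTIVE))

# A "similar" SDR keeps most of base's active bits and swaps out a few.
similar = set(random.sample(sorted(base), 35)) \
          | set(random.sample(range(N_BITS), 5))

# An unrelated SDR is drawn independently; at roughly 2% sparsity, two
# random SDRs share almost no bits.
unrelated = set(random.sample(range(N_BITS), N_ACTIVE))

print("overlap(base, similar)   =", overlap(base, similar))    # around 35
print("overlap(base, unrelated) =", overlap(base, unrelated))  # usually 0 or 1
```

This is why sparsity matters mathematically: with so few active bits in such a large space, a significant overlap between two SDRs is extremely unlikely to happen by chance, so shared bits reliably signal shared meaning.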

For more details about SDRs, watch this excerpt from a talk given by Jeff Hawkins.
