Summary:

1. Unnecessary wakeups

The thundering herd effect

https://github.com/benoitc/gunicorn/issues/792#issuecomment-46718939

https://www.citi.umich.edu/u/cel/linux-scalability/reports/accept.html

http://stackoverflow.com/questions/12494914/how-does-the-operating-system-load-balance-between-multiple-processes-accepting/12502808#12502808

Introduction

Network servers that use TCP/IP to communicate with their clients are rapidly increasing their offered loads. A service may elect to create multiple threads or processes to wait for increasing numbers of concurrent incoming connections. By pre-creating these multiple threads, a network server can handle connections and requests at a faster rate than with a single thread.

In Linux, when multiple threads call accept() on the same TCP socket, they get put on the same wait queue, waiting for an incoming connection to wake them up. In the Linux 2.2.9 kernel (and earlier), when an incoming TCP connection is accepted, the wake_up_interruptible() function is invoked to awaken waiting threads. This function walks the socket's wait queue and awakens everybody. All but one of the threads, however, will put themselves back on the wait queue to wait for the next connection. This unnecessary awakening is commonly referred to as a "thundering herd" problem and creates scalability problems for network server applications.

This report explores the effects of the "thundering herd" problem associated with the accept() system call as implemented in the Linux kernel. In the rest of this paper, we discuss the nature of the problem and how it affects the scalability of network server applications running on Linux. Finally, we describe the benchmarks and present their results. All benchmarks and patches are against the Linux 2.2.9 kernel.
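The pattern the paper describes — several pre-forked workers all blocking in accept() on the same listening socket — can be sketched in user space as follows. This is an illustrative sketch, not the paper's benchmark code; the port, worker count, and reply format are made up for the example, and it assumes a POSIX system (os.fork is not available on Windows):

```python
# Minimal pre-fork accept sketch. Every worker blocks in accept() on the
# SAME inherited listening socket; on Linux <= 2.2.9 each incoming
# connection woke all of them (the "thundering herd"), yet exactly one
# worker ever wins the accept() race per connection.
import os
import socket

NUM_WORKERS = 4  # assumption: small pool, purely for illustration

def main():
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", 0))   # ephemeral port
    listener.listen(16)
    port = listener.getsockname()[1]

    pids = []
    for worker_id in range(NUM_WORKERS):
        pid = os.fork()
        if pid == 0:
            # Child: inherits the listener and blocks in accept().
            conn, _ = listener.accept()
            conn.sendall(b"worker %d" % worker_id)
            conn.close()
            os._exit(0)
        pids.append(pid)

    # Each connection is handed to exactly one worker, no matter how
    # many of them the kernel chose to wake up.
    replies = []
    for _ in range(NUM_WORKERS):
        client = socket.create_connection(("127.0.0.1", port))
        replies.append(client.recv(64).decode())
        client.close()

    for pid in pids:
        os.waitpid(pid, 0)
    listener.close()
    return replies
```

Each worker here handles a single connection and exits; a real pre-fork server (e.g. the gunicorn issue linked above) loops on accept() instead.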

Conclusion

By thoroughly studying this "thundering herd" problem, we have shown that it is indeed a bottleneck in high-load server performance, and that either patch significantly improves the performance of a high-load server. Even though both patches performed well in the testing, the "wake one" patch is cleaner and easier to incorporate into new or existing code. It also has the advantage of not committing a task to "exclusive" status before it is awakened, so extra code doesn't have to be incorporated for special cases to completely empty the wait-queue. The "wake one" patch can also solve any "thundering herd" problems locally, while the "task exclusive" method may require changes in multiple places where the programmer is responsible for making sure that all adjustments are made. This makes the "wake one" solution easily extensible to all parts of the kernel.
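The difference between the old wake-everybody behaviour and the "wake one" patch can be mimicked in user space with a condition variable. This is only an analogy under stated assumptions, not kernel code: `count_wakeups` is a hypothetical helper, notify_all() plays the role of the old wake_up_interruptible() walking the whole wait queue, and notify(1) plays the role of waking a single exclusive waiter:

```python
# User-space analogy for the two wake-up policies (illustrative only).
import threading
import time

def count_wakeups(wake_all, num_waiters=4):
    cond = threading.Condition()
    state = {"waiting": 0, "woken": 0}

    def waiter():
        with cond:
            state["waiting"] += 1
            # wait() returns True if notified, False if it timed out.
            if cond.wait(timeout=0.5):
                state["woken"] += 1

    threads = [threading.Thread(target=waiter) for _ in range(num_waiters)]
    for t in threads:
        t.start()

    # Spin until every waiter is blocked inside cond.wait().
    while True:
        with cond:
            if state["waiting"] == num_waiters:
                break
        time.sleep(0.01)

    with cond:
        if wake_all:
            cond.notify_all()  # old behaviour: every waiter wakes up
        else:
            cond.notify(1)     # "wake one": a single waiter wakes up
    for t in threads:
        t.join()
    return state["woken"]
```

With one "connection" to hand out, notify_all() wakes all four waiters and three of them have done useless work, while notify(1) wakes exactly one — the saving the paper measures at the kernel's wait-queue level.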
