
大龍's blog

Linux kernel scaling: Ports and port Cycling --- reposted from http://blog.csdn.net/zgl_dm/article/details/6593661

NOTE: The content of this article is subject to change as we are still investigating the issue.

While attempting to benchmark redis, a coworker (Kal McFate) and I were hitting a 28k limit on concurrent connections from a client machine to our redis server. After investigating we found the following: the default setting for the ephemeral port range on linux (net.ipv4.ip_local_port_range) is not ideal for scale.

Default: 32768-61000
Recommended for scale: 1025-65000

Additionally, even after changing this setting we were limited by sockets staying open in the TIME_WAIT state. Most of the poor documentation on the internet suggests setting the following to address the issue: net.ipv4.tcp_tw_recycle = 1 and net.ipv4.tcp_tw_reuse = 1. This is in fact incorrect. First, you should choose one setting or the other, not both. tcp_tw_recycle should be considered unsafe for load balancers and other customer-facing devices that communicate over a higher latency network and/or use failover services. This is because TIME_WAIT is required to deal with packets that arrive for a connection after the same packet has already been accepted via a retransmit.

Setting net.ipv4.tcp_tw_reuse = 1 appears to have resolved our issue. This moved the limiting factor from the client to the redis server.

This issue is difficult to debug because incoming port exhaustion (socket -> accept) produces a kernel-level logged error, while ephemeral local port exhaustion only produces a rather generic application-level "could not connect" error. We are now investigating other areas this change might benefit!

A better solution, as far as client -> redis communication is concerned, is probably pipelining requests via a single persistent connection. We are looking into this as well.

UPDATE: The data above is still applicable to concurrency issues; however, the root cause here ended up being that the client code was throwing the socket away before properly hanging up on the server, so the socket was left in TIME_WAIT until the timeout period expired.

LESSON: When it comes to sockets stuck in TIME_WAIT, the issue is most likely caused by crappy TCP socket handling. Additionally, enabling net.ipv4.tcp_tw_reuse on a development system may cover up poorly implemented protocol and TCP socket handling :/

Source: http://www.lakitu.us/2011/04/linux-kernel-scaling-ports-and-port-cycling/
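For reference, the 28k ceiling matches the default range almost exactly: 61000 - 32768 + 1 gives 28233 usable ephemeral ports toward a single server address. A minimal Python sketch for checking these settings and the TIME_WAIT symptom on a Linux client follows; it only reads the standard /proc interfaces, and the read of tcp_tw_recycle is guarded because that knob was removed in later kernels (4.12+).

    # Minimal sketch (Linux only): read the sysctl values discussed above from
    # /proc and count sockets currently stuck in TIME_WAIT.

    def read_proc(path):
        with open(path) as f:
            return f.read().strip()

    def count_time_wait():
        # /proc/net/tcp lists IPv4 sockets; the 4th column is the state in hex,
        # and 06 is TIME_WAIT.
        count = 0
        with open("/proc/net/tcp") as f:
            next(f)  # skip the header line
            for line in f:
                if line.split()[3] == "06":
                    count += 1
        return count

    if __name__ == "__main__":
        low, high = read_proc("/proc/sys/net/ipv4/ip_local_port_range").split()
        print("ephemeral port range: %s-%s (%d ports)" % (low, high, int(high) - int(low) + 1))
        print("tcp_tw_reuse =", read_proc("/proc/sys/net/ipv4/tcp_tw_reuse"))
        try:
            print("tcp_tw_recycle =", read_proc("/proc/sys/net/ipv4/tcp_tw_recycle"))
        except FileNotFoundError:
            print("tcp_tw_recycle not present (removed in kernels >= 4.12)")
        print("sockets in TIME_WAIT:", count_time_wait())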
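On the pipelining suggestion, here is a rough sketch using the third-party redis-py client (the host name is made up for illustration; transaction=False sends the commands as a plain pipeline instead of wrapping them in MULTI/EXEC):

    import redis  # third-party redis-py client, assumed here for illustration

    # One persistent connection; many commands per round trip instead of
    # one connection (and one ephemeral port) per request.
    r = redis.Redis(host="redis.example.internal", port=6379)  # hypothetical host

    pipe = r.pipeline(transaction=False)
    for i in range(10000):
        pipe.set("bench:key:%d" % i, i)
    replies = pipe.execute()  # all SETs share the single underlying connection
    print(len(replies), "replies received")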
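And on the UPDATE/LESSON: if a fresh connection per request is unavoidable, the client-side fix is simply to hang up deliberately rather than dropping the socket object and leaving teardown to garbage collection. A sketch with plain Python sockets, using a hypothetical server address:

    import socket

    SERVER = ("redis.example.internal", 6379)  # hypothetical address

    def do_request(payload):
        # Open, use, and deliberately hang up on the connection. Abandoning the
        # socket object instead defers teardown to garbage collection, which is
        # the kind of sloppy handling the LESSON above warns about.
        s = socket.create_connection(SERVER, timeout=5)
        try:
            s.sendall(payload)
            return s.recv(4096)
        finally:
            try:
                s.shutdown(socket.SHUT_RDWR)  # signal we are done in both directions
            except OSError:
                pass  # the peer may already have closed
            s.close()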

posted on 2013-02-18 09:51 by 大龍, reads(337), comments(0)
