
lxyfirst



     Abstract: keytool is the tool Java provides for managing keys and signatures. Its data is stored in a keystore file, i.e. a .jks file. 1. Create an RSA key pair (public and private key) and store it in the keystore file: ...  Read full article
posted @ 2011-04-15 14:53 star Views (8616) | Comments (1)


From http://www.usenix.org/events/osdi10/tech/full_papers/Geambasu.pdf
There are many distributed key-value stores, but Comet is a distributed "active" key-value store. Its features:
1. On top of ordinary key-value operations it adds callback hooks: when a key-value object is accessed, the corresponding handler function is invoked, so application logic can run inside the store.
   The callbacks implemented are onGet, onPut, onUpdate, and onTimer.
2. Handler functions are written in Lua. Comet embeds a stripped-down Lua interpreter with many restrictions, forming a safe sandbox for running handler code.
For other details, see the paper.




posted @ 2011-03-30 15:56 star Views (394) | Comments (0)


http://highscalability.com/numbers-everyone-should-know

Numbers Everyone Should Know

Google AppEngine Numbers

This group of numbers is from Brett Slatkin in Building Scalable Web Apps with Google App Engine.

Writes are expensive!

  • Datastore is transactional: writes require disk access
  • Disk access means disk seeks
  • Rule of thumb: 10ms for a disk seek
  • Simple math: 1s / 10ms = 100 seeks/sec maximum
  • Depends on:
    * The size and shape of your data
    * Doing work in batches (batch puts and gets)

    Reads are cheap!

  • Reads do not need to be transactional, just consistent
  • Data is read from disk once, then it's easily cached
  • All subsequent reads come straight from memory
  • Rule of thumb: 250usec for 1MB of data from memory
  • Simple math: 1s / 250usec = 4GB/sec maximum
    * For a 1MB entity, that's 4000 fetches/sec

    Numbers Miscellaneous

    This group of numbers is from a presentation Jeff Dean gave at an Engineering All-Hands Meeting at Google.

  • L1 cache reference 0.5 ns
  • Branch mispredict 5 ns
  • L2 cache reference 7 ns
  • Mutex lock/unlock 100 ns
  • Main memory reference 100 ns
  • Compress 1K bytes with Zippy 10,000 ns
  • Send 2K bytes over 1 Gbps network 20,000 ns
  • Read 1 MB sequentially from memory 250,000 ns
  • Round trip within same datacenter 500,000 ns
  • Disk seek 10,000,000 ns
  • Read 1 MB sequentially from network 10,000,000 ns
  • Read 1 MB sequentially from disk 30,000,000 ns
  • Send packet CA->Netherlands->CA 150,000,000 ns

    The Lessons

  • Writes are 40 times more expensive than reads.
  • Global shared data is expensive. This is a fundamental limitation of distributed systems. The lock contention in shared heavily written objects kills performance as transactions become serialized and slow.
  • Architect for scaling writes.
  • Optimize for low write contention.
  • Optimize wide. Make writes as parallel as you can.

    The Techniques

    Keep in mind these are from a Google AppEngine perspective, but the ideas are generally applicable.

    Sharded Counters

    We always seem to want to keep count of things. But BigTable doesn't keep a count of entities because it's a key-value store. It's very good at getting data by keys, it's not interested in how many you have. So the job of keeping counts is shifted to you.

    The naive counter implementation is to lock-read-increment-write. This is fine if there is a low number of writes. But if there are frequent updates there's high contention. Given that the number of writes that can be made per second is so limited, a high write load serializes and slows down the whole process.

    The solution is to shard counters. This means:
  • Create N counters in parallel.
  • Pick a shard to increment transactionally at random for each item counted.
  • To get the real current count sum up all the sharded counters.
  • Contention is reduced to 1/N of what it was. Writes have been optimized because they have been spread over the different shards. A bottleneck around shared state has been removed.
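The steps above can be sketched in C as a single-process toy. In GAE the shards would be separate datastore entities updated transactionally; the shard count and function names here are illustrative only.

```c
#include <stdlib.h>

#define NUM_SHARDS 16

/* One logical counter split into NUM_SHARDS independent slots. */
static long shards[NUM_SHARDS];

/* Write path: pick a shard at random, so concurrent writers
 * rarely contend on the same slot. */
void counter_increment(void)
{
    shards[rand() % NUM_SHARDS] += 1;
}

/* Read path: sum all shards to recover the real count.
 * Reads are cheap, so N extra reads are an acceptable trade. */
long counter_value(void)
{
    long total = 0;
    for (int i = 0; i < NUM_SHARDS; i++)
        total += shards[i];
    return total;
}
```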

    This approach seems counter-intuitive because we are used to a counter being a single incrementable variable. Reads are cheap so we replace having a single easily read counter with having to make multiple reads to recover the actual count. Frequently updated shared variables are expensive so we shard and parallelize those writes.

    With a centralized database letting the database be the source of sequence numbers is doable. But to scale writes you need to partition and once you partition it becomes difficult to keep any shared state like counters. You might argue that so common a feature should be provided by GAE and I would agree 100 percent, but it's the ideas that count (pun intended).
    Paging Through Comments

    How can comments be stored such that they can be paged through
    in roughly the order they were entered?

    Under a high write load situation this is a surprisingly hard question to answer. Obviously what you want is just a counter. As a comment is made you get a sequence number and that's the order comments are displayed. But as we saw in the last section shared state like a single counter won't scale in high write environments.

    A sharded counter won't work in this situation either because summing the sharded counters isn't transactional. There's no way to guarantee each comment will keep the sequence number it was allocated, so we could have duplicates.

    Searches in BigTable return data in alphabetical order. So what is needed for a key is something unique and alphabetical so when searching through comments you can go forward and backward using only keys.

    A lot of paging algorithms use counts. Give me records 1-20, 21-30, etc. SQL makes this easy, but it doesn't work for BigTable. BigTable knows how to get things by keys so you must make keys that return data in the proper order.

    In the grand old tradition of making unique keys we just keep appending stuff until it becomes unique. The suggested key for GAE is: time stamp + user ID + user comment ID.

    Ordering by date is obvious. The good thing is getting a time stamp is a local decision, it doesn't rely on writes and is scalable. The problem is timestamps are not unique, especially with a lot of users.

    So we can add the user name to the key to distinguish it from all other comments made at the same time. We already have the user name so this too is a cheap call.

    Theoretically even time stamps for a single user aren't sufficient. What we need then is a sequence number for each user's comments.

    And this is where the GAE solution turns into something totally unexpected. Our goal is to remove write contention so we want to parallelize writes. And we have a lot of available storage so we don't have to worry about that.

    With these forces in mind, the idea is to create a counter per user. When a user adds a comment it's added to a user's comment list and a sequence number is allocated. Comments are added in a transactional context on a per user basis using Entity Groups. So each comment add is guaranteed to be unique because updates in an Entity Group are serialized.

    The resulting key is guaranteed unique and sorts properly in alphabetical order. When paging a query is made across entity groups using the ID index. The results will be in the correct order. Paging is a matter of getting the previous and next keys in the query for the current page. These keys can then be used to move through the index.
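A sketch of the key construction in C; the zero-padded field widths and the '|' separator are our illustration, not GAE's actual format. Ordering between different users posting at the same instant is arbitrary, but every key is unique and per-user order is preserved.

```c
#include <stdio.h>

/* Build a comment key that sorts lexicographically in posting order:
 * zero-padded timestamp + user ID + zero-padded per-user sequence
 * number (allocated transactionally inside the user's entity group). */
void make_comment_key(char *out, size_t n,
                      long timestamp, const char *user_id, long user_seq)
{
    snprintf(out, n, "%020ld|%s|%010ld", timestamp, user_id, user_seq);
}
```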

    I certainly would have never thought of this approach. The idea of keeping per user comment indexes is out there. But it cleverly follows the rules of scaling in a distributed system. Writes and reads are done in parallel and that's the goal. Write contention is removed.

    posted @ 2011-03-24 14:01 star Views (432) | Comments (0)

    In multi-threaded systems developed on Linux, debugging and monitoring individual threads has always been awkward, with no way to pinpoint a specific thread. Now there is a solution.
    Since kernel 2.6.9, the prctl call on Linux supports the PR_SET_NAME option for setting a process's name. Linux threads are LWPs (light-weight processes), so this call can also set a thread's name.
    The API is defined as follows:
    int prctl(int option, unsigned long arg2, unsigned long arg3, unsigned long arg4, unsigned long arg5);

    PR_SET_NAME (since Linux 2.6.9)
    Set the process name for the calling process, using the value in the location pointed to by (char *) arg2. The name can be up to 16 bytes long, and should be null-terminated if it contains fewer bytes.

    PR_GET_NAME (since Linux 2.6.11)
    Return the process name for the calling process, in the buffer pointed to by (char *) arg2. The buffer should allow space for up to 16 bytes; the returned string will be null-terminated if it is shorter than that.


    A simple implementation:

    #include <stdarg.h>
    #include <stdio.h>
    #include <sys/prctl.h>

    int set_thread_title(const char* fmt, ...)
    {
        char title[16] = {0};
        va_list ap;
        va_start(ap, fmt);
        vsnprintf(title, sizeof(title), fmt, ap);
        va_end(ap);
        return prctl(PR_SET_NAME, title);
    }

    Now that threads can be given names, how do we see them?
    ps -eL -o pid,user,lwp,comm
    top -H



    posted @ 2011-03-07 16:11 star Views (7822) | Comments (2)

    bitcask is a key-value store whose distinguishing feature is keeping index data in memory while storing the actual data on disk.
    1. All key data is kept in memory, organized in a hashmap for fast lookup; for each key the map also holds a file pointer to the key's data on disk, so the data can be located directly.
    2. Disk data is written append-only, exploiting the disk's strength at sequential access. Each update is appended to the data file and the index is updated at the same time.
    3. Reads locate the data directly through the index; bitcask relies on the file system's cache and does not implement a cache of its own.
    4. Because updates are written to new locations, data at the old locations is periodically cleaned up and merged, reducing the disk space used.
    5. Concurrency control for reads and writes uses vector clocks.
    6. The in-memory index is also flushed to a separate index file, so a restart does not need to rebuild the entire index.
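Points 1 through 3 can be sketched in C. This toy keeps a single index entry in place of a real hashmap, and the struct layout is our illustration of the idea, not bitcask's actual record format.

```c
#include <stdio.h>
#include <string.h>

/* In-memory index ("keydir") entry: maps a key to the position of
 * its latest value in the append-only data file. */
struct keydir_entry {
    char   key[32];
    long   file_offset;   /* where the value starts in the data file */
    size_t value_size;
};

/* Write: always append to the end of the data file, then point the
 * index entry at the new location. Old copies become garbage to be
 * merged away later. */
long bitcask_put(FILE *data, struct keydir_entry *e,
                 const char *key, const char *value)
{
    fseek(data, 0, SEEK_END);
    long off = ftell(data);
    fwrite(value, 1, strlen(value), data);
    snprintf(e->key, sizeof(e->key), "%s", key);
    e->file_offset = off;
    e->value_size  = strlen(value);
    return off;
}

/* Read: the index gives the exact location -- one seek, one read. */
size_t bitcask_get(FILE *data, const struct keydir_entry *e,
                   char *out, size_t n)
{
    fseek(data, e->file_offset, SEEK_SET);
    size_t want = e->value_size < n - 1 ? e->value_size : n - 1;
    size_t got = fread(out, 1, want, data);
    out[got] = '\0';
    return got;
}
```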

    http://highscalability.com/blog/2011/1/10/riaks-bitcask-a-log-structured-hash-table-for-fast-keyvalue.html

    posted @ 2011-02-16 19:23 star Views (881) | Comments (0)

    Varnish's author Poul-Henning Kamp works on the FreeBSD kernel, and in writing Varnish he drew on kernel principles and mechanisms. Some of the design ideas, excerpted:
    1. Modern operating systems have elaborate optimizations for memory management and disk I/O to improve overall system performance, and user-space programs need to be aware of and cooperate with these mechanisms. Take Squid as an example: it implements its own object caching and eviction policy, much as an operating system does; accessed objects are cached, and cold objects are flushed to disk to free memory. In some situations this mechanism can conflict with the operating system and fail to achieve what was intended. When a memory object cached by Squid has gone unaccessed for a while and Squid has not yet flushed it to disk, the operating system may swap the cold object out due to memory pressure, without Squid knowing; Squid still believes the object is in memory. Later, when Squid's eviction policy flushes that cold object to disk, the operating system must first reload it from swap into memory so that Squid can then write it to disk. The performance cost of the whole round trip is plain to see.
    Comment: this example cuts both ways. If an application's memory objects are being swapped out, the system is already short of memory, and the effectiveness of the memory cache is greatly reduced.

    2. A cache with persistence needs to reconstruct the cache from persisted data, generally in one of two ways. One is to read from disk on demand; since accesses are random and random disk reads are inefficient, this gives poor access performance but saves space, suiting small low-traffic machines with a large cache. The other is to build a complete index from the disk up front, which greatly improves access efficiency.
    Unlike a disk, a persistent cache has modest reliability requirements and does not need strict crash recovery. Varnish uses the second approach, improving reliability through layered protection: the top layer guarantees reliability through A/B writes, while the bottom layer makes no reliability guarantee for the concrete data.
    http://www.varnish-cache.org/trac/wiki/ArchitectNotes
    posted @ 2011-01-28 11:52 star Views (495) | Comments (0)

    A brief introduction to the message middleware Kafka

    Purpose and use cases

    Kafka is LinkedIn's distributed messaging system. Its design emphasizes high throughput, and it is used in systems for friend-activity feeds, relevance statistics, ranking statistics, access rate limiting, batch processing, and the like.

    The traditional offline-analysis approach records data in log files and then analyzes it centrally in batches. That approach is unsuitable for activity-stream data with strong real-time requirements, while most message middleware can handle real-time messages but is weak at durably retaining a large backlog of unprocessed messages in its queues.

    Design principles

             Persistent messages

             High throughput

             The consumer decides message state

             Every role in the system is a distributed cluster

    Consumers have a notion of logical groups: each consumer process belongs to a consumer group, and each message is delivered to one consumer process in every consumer group subscribed to that message.

    LinkedIn uses multiple consumer groups, each with several consumer processes of identical responsibility.

    Deployment architecture

    http://sna-projects.com/kafka/images/tracking_high_level.png

    Message persistence and caching

    Kafka uses disk files for persistence. Disk read/write speed depends on how the disk is used: random writes are much slower than sequential writes, and a modern OS will use the memory cache to merge disk writes whenever reclaiming that memory costs little. So another layer of caching in the user process is largely unnecessary. Kafka's reads and writes are all sequential, appending to files.

    To reduce memory copies, Kafka sends data with sendfile, and improves performance by merging messages.
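The sendfile path can be sketched as below (Linux-specific; the wrapper name is ours, and in Kafka's case out_fd would be a socket). The kernel moves the bytes from the source fd's page cache to the destination without copying them through a userspace buffer.

```c
#include <stdlib.h>
#include <string.h>
#include <sys/sendfile.h>
#include <sys/types.h>
#include <unistd.h>

/* Send `count` bytes starting at `offset` of in_fd to out_fd,
 * with no userspace copy. Returns bytes sent or -1 on error. */
ssize_t send_file_range(int out_fd, int in_fd, off_t offset, size_t count)
{
    return sendfile(out_fd, in_fd, &offset, count);
}
```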

     

    Kafka does not store per-message state; instead it tracks each client's position as (consumer, topic, partition), greatly reducing the burden of maintaining per-message state.

    For push vs. pull of messages, Kafka chose pull, because pushing introduces uncertainty as clients differ in processing capacity, traffic, and so on.

    Load balancing

    Producers and brokers are load-balanced with hardware; brokers and consumers both run as clusters, with ZooKeeper coordinating changes and membership management.

    posted @ 2011-01-25 15:56 star Views (2126) | Comments (0)

    http://www.kernel.org/doc/man-pages/online/pages/man5/proc.5.html
    /proc/{pid}/ holds all data related to a running process, and can be used to analyze the process's resource consumption and behavior.

    1. /proc/{pid}/stat
    Process runtime statistics:
    awk '{print $1,$2,$3,$14,$15,$20,$22,$23,$24}' stat
    PID, COMM, STATE, UTIME (cpu ticks in user mode), STIME (cpu ticks in kernel mode), THREADS, START_TIME, VSIZE (virtual memory size in bytes), RSS (resident set size in pages)
    2. /proc/{pid}/status
    Contains most of the data in stat, in a more readable form.
    3. /proc/{pid}/task/
    Runtime information for each thread.
    4. /proc/{pid}/fd/
    File descriptors opened by the process.
    5. /proc/{pid}/io
    Process I/O statistics.
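As a small illustration (Linux-only; the helper name is ours), the first fields of /proc/{pid}/stat can be read directly with fscanf:

```c
#include <stdio.h>

/* Read the first three fields of /proc/self/stat:
 * pid, comm (executable name in parentheses), and state. */
int read_proc_stat(int *pid, char *comm, char *state)
{
    FILE *f = fopen("/proc/self/stat", "r");
    if (!f)
        return -1;
    int n = fscanf(f, "%d %63s %c", pid, comm, state);
    fclose(f);
    return n == 3 ? 0 : -1;
}
```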


    posted @ 2011-01-05 15:31 star Views (241) | Comments (0)

    net.ipv4.tcp_syncookies = 1
    net.ipv4.tcp_tw_reuse = 1
    net.ipv4.tcp_tw_recycle = 1
    net.ipv4.tcp_fin_timeout = 30
    net.ipv4.ip_local_port_range = 1024 65000

    net.ipv4.route.max_size = 4096000
    net.core.somaxconn = 8192
    net.ipv4.tcp_synack_retries = 1
    net.ipv4.tcp_syn_retries = 1
    net.ipv4.netfilter.ip_conntrack_max = 2621400
    net.core.rmem_max = 20000000

    ulimit -n 40960
    ulimit -c unlimited

    A note for the record; to be expanded later.

    posted @ 2010-11-17 10:27 star Views (134) | Comments (0)

    redis periodically flushes data to storage based on the volume of updates and the elapsed time, effectively taking a checkpoint.
    It relies on the copy-on-write semantics of the fork system call to get a consistent snapshot of memory while the data is being flushed.
    But if the data changes heavily while the flush is in progress, large amounts of memory may be copied on write, adding memory-copy load to the system.
    The logic:
    1. The main process calls fork.
    2. The child process closes the listen fd and starts flushing data to storage.
    3. The main process adjusts its policy to reduce changes to in-memory data.
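A minimal sketch of the fork-and-dump pattern in C; the function and plain-text file format are our illustration, and unlike redis this toy blocks waiting for the child.

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Snapshot via fork: the child gets a copy-on-write image of memory
 * frozen at fork time, so it can write a consistent dump while the
 * parent keeps mutating its own copy of the data. */
int snapshot_to_file(const int *data, size_t n, const char *path)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {                 /* child: dump its frozen view */
        FILE *f = fopen(path, "w");
        if (!f)
            _exit(1);
        for (size_t i = 0; i < n; i++)
            fprintf(f, "%d\n", data[i]);
        fclose(f);
        _exit(0);
    }
    int status;                     /* parent: real redis would not block here */
    waitpid(pid, &status, 0);
    return (WIFEXITED(status) && WEXITSTATUS(status) == 0) ? 0 : -1;
}
```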

    This strategy does not guarantee durability: there is no write-ahead log, so data can be lost in abnormal situations.
    redis therefore added an append-only log file to keep data safe, but writing the log on every update makes it grow quickly, so redis compacts the log in the background in a way similar to the data flush.

    Note: databases today generally guarantee durability with a write-ahead log, but even that log is not flushed in real time; it is written to a buffer and flushed to the file when triggered.


    posted @ 2010-08-21 10:37 star Views (918) | Comments (1)
