            lxyfirst

Abstract: keytool is the key and certificate management tool that ships with Java; its data is stored in a keystore file, i.e. a .jks file. 1. Create an RSA key pair (public and private key) and store it in the keystore file: ... Read the full post
posted @ 2011-04-15 14:53 star | Views (8603) | Comments (1)


From http://www.usenix.org/events/osdi10/tech/full_papers/Geambasu.pdf
There are many distributed key-value stores, but Comet is a distributed "active" key-value store. Its main characteristics:
1. On top of ordinary key-value storage it adds a callback mechanism: when a key-value object is accessed, the corresponding handler is invoked, so application logic can run inside the store. The callbacks implemented are onGet, onPut, onUpdate and onTimer (a small sketch of the idea follows below).
2. Handlers are written in Lua; Comet embeds a stripped-down Lua interpreter with many restrictions, forming a safe sandbox for handler code.
See the paper for the other details.
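A minimal sketch of the "active key-value store" idea in C++ (an illustration only, not Comet's real interface: in Comet the handlers are Lua functions executed inside a sandboxed interpreter):

#include <functional>
#include <map>
#include <string>
#include <vector>

// Toy "active" key-value store: registered handlers run whenever a key is read or written.
class ActiveStore {
public:
    using Handler = std::function<void(const std::string& key, std::string& value)>;

    void on_get(Handler h) { get_handlers_.push_back(std::move(h)); }
    void on_put(Handler h) { put_handlers_.push_back(std::move(h)); }

    void put(const std::string& key, std::string value) {
        for (auto& h : put_handlers_) h(key, value);   // handlers may inspect or rewrite the value
        data_[key] = std::move(value);
    }

    std::string get(const std::string& key) {
        std::string value = data_[key];
        for (auto& h : get_handlers_) h(key, value);   // e.g. logging, access control, rewriting
        return value;
    }

private:
    std::map<std::string, std::string> data_;
    std::vector<Handler> get_handlers_, put_handlers_;
};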




posted @ 2011-03-30 15:56 star | Views (384) | Comments (0)


            http://highscalability.com/numbers-everyone-should-know

            Numbers Everyone Should Know

            Google AppEngine Numbers

            This group of numbers is from Brett Slatkin in Building Scalable Web Apps with Google App Engine.

            Writes are expensive!

          1. Datastore is transactional: writes require disk access
          2. Disk access means disk seeks
          3. Rule of thumb: 10ms for a disk seek
          4. Simple math: 1s / 10ms = 100 seeks/sec maximum
          5. Depends on:
            * The size and shape of your data
            * Doing work in batches (batch puts and gets)

            Reads are cheap!

          6. Reads do not need to be transactional, just consistent
          7. Data is read from disk once, then it's easily cached
          8. All subsequent reads come straight from memory
          9. Rule of thumb: 250usec for 1MB of data from memory
          10. Simple math: 1s / 250usec = 4GB/sec maximum
            * For a 1MB entity, that's 4000 fetches/sec

            Numbers Miscellaneous

This group of numbers is from a presentation Jeff Dean gave at an Engineering All-Hands Meeting at Google.

          11. L1 cache reference 0.5 ns
          12. Branch mispredict 5 ns
          13. L2 cache reference 7 ns
          14. Mutex lock/unlock 100 ns
          15. Main memory reference 100 ns
          16. Compress 1K bytes with Zippy 10,000 ns
          17. Send 2K bytes over 1 Gbps network 20,000 ns
          18. Read 1 MB sequentially from memory 250,000 ns
          19. Round trip within same datacenter 500,000 ns
          20. Disk seek 10,000,000 ns
          21. Read 1 MB sequentially from network 10,000,000 ns
          22. Read 1 MB sequentially from disk 30,000,000 ns
          23. Send packet CA->Netherlands->CA 150,000,000 ns

            The Lessons

          24. Writes are 40 times more expensive than reads.
          25. Global shared data is expensive. This is a fundamental limitation of distributed systems. The lock contention in shared heavily written objects kills performance as transactions become serialized and slow.
          26. Architect for scaling writes.
          27. Optimize for low write contention.
          28. Optimize wide. Make writes as parallel as you can.

            The Techniques

            Keep in mind these are from a Google AppEngine perspective, but the ideas are generally applicable.

            Sharded Counters

We always seem to want to keep count of things. But BigTable doesn't keep a count of entities because it's a key-value store. It's very good at getting data by keys, but it's not interested in how many you have. So the job of keeping counts is shifted to you.

The naive counter implementation is to lock-read-increment-write. This is fine if there is a low number of writes. But if there are frequent updates there's high contention. Given that the number of writes that can be made per second is so limited, a high write load serializes and slows down the whole process.

            The solution is to shard counters. This means:
          29. Create N counters in parallel.
          30. For each item counted, pick a shard at random and increment it transactionally.
          31. To get the real current count sum up all the sharded counters.
          32. Contention is reduced to roughly 1/N of what it was. Writes have been optimized because they have been spread over the different shards. A bottleneck around shared state has been removed.

            This approach seems counter-intuitive because we are used to a counter being a single incrementable variable. Reads are cheap so we replace having a single easily read counter with having to make multiple reads to recover the actual count. Frequently updated shared variables are expensive so we shard and parallelize those writes.
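A minimal sketch of the sharded-counter idea using in-process atomics (this illustrates the technique only; in GAE each shard would be a separate datastore entity incremented in its own transaction):

#include <atomic>
#include <cstdlib>
#include <iostream>
#include <vector>

class ShardedCounter {
public:
    explicit ShardedCounter(std::size_t num_shards) : shards_(num_shards) {}

    // Writers pick a random shard, so concurrent increments rarely touch the same slot.
    void increment() { shards_[std::rand() % shards_.size()] += 1; }

    // Readers pay the extra cost: the real count is the sum over all shards.
    long total() const {
        long sum = 0;
        for (const auto& s : shards_) sum += s.load();
        return sum;
    }

private:
    std::vector<std::atomic<long>> shards_;
};

int main() {
    ShardedCounter counter(8);                        // 8 shards -> roughly 1/8 the contention
    for (int i = 0; i < 1000; ++i) counter.increment();
    std::cout << counter.total() << "\n";             // prints 1000
}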

With a centralized database, letting the database be the source of sequence numbers is doable. But to scale writes you need to partition, and once you partition it becomes difficult to keep any shared state like counters. You might argue that so common a feature should be provided by GAE, and I would agree 100 percent, but it's the ideas that count (pun intended).
            Paging Through Comments

            How can comments be stored such that they can be paged through
            in roughly the order they were entered?

            Under a high write load situation this is a surprisingly hard question to answer. Obviously what you want is just a counter. As a comment is made you get a sequence number and that's the order comments are displayed. But as we saw in the last section shared state like a single counter won't scale in high write environments.

A sharded counter won't work in this situation either because summing the sharded counters isn't transactional. There's no way to guarantee each comment will get back the sequence number it was allocated, so we could have duplicates.

            Searches in BigTable return data in alphabetical order. So what is needed for a key is something unique and alphabetical so when searching through comments you can go forward and backward using only keys.

            A lot of paging algorithms use counts. Give me records 1-20, 21-30, etc. SQL makes this easy, but it doesn't work for BigTable. BigTable knows how to get things by keys so you must make keys that return data in the proper order.

            In the grand old tradition of making unique keys we just keep appending stuff until it becomes unique. The suggested key for GAE is: time stamp + user ID + user comment ID.

            Ordering by date is obvious. The good thing is getting a time stamp is a local decision, it doesn't rely on writes and is scalable. The problem is timestamps are not unique, especially with a lot of users.

            So we can add the user name to the key to distinguish it from all other comments made at the same time. We already have the user name so this too is a cheap call.

            Theoretically even time stamps for a single user aren't sufficient. What we need then is a sequence number for each user's comments.

And this is where the GAE solution turns into something totally unexpected. Our goal is to remove write contention so we want to parallelize writes. And we have a lot of available storage, so we don't have to worry about that.

            With these forces in mind, the idea is to create a counter per user. When a user adds a comment it's added to a user's comment list and a sequence number is allocated. Comments are added in a transactional context on a per user basis using Entity Groups. So each comment add is guaranteed to be unique because updates in an Entity Group are serialized.

The resulting key is guaranteed unique and sorts properly in alphabetical order. When paging, a query is made across entity groups using the ID index. The results will be in the correct order. Paging is a matter of getting the previous and next keys in the query for the current page. These keys can then be used to move through the index.
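A rough sketch of building such a key (an illustration; the fixed field widths and the separator are assumptions, not the exact GAE format): zero-padding each numeric component makes plain lexicographic order match the intended time-then-user-then-sequence order.

#include <cstdint>
#include <cstdio>
#include <string>

// Comment key that sorts correctly as a plain string:
// fixed-width timestamp, then user ID, then that user's own comment sequence number.
std::string make_comment_key(uint64_t unix_ts, const std::string& user_id,
                             uint64_t user_comment_seq)
{
    char buf[128];
    std::snprintf(buf, sizeof(buf), "%020llu|%s|%010llu",
                  (unsigned long long)unix_ts, user_id.c_str(),
                  (unsigned long long)user_comment_seq);
    return buf;
}

// Example: make_comment_key(1300000000, "alice", 42)
//   -> "00000000001300000000|alice|0000000042"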

            I certainly would have never thought of this approach. The idea of keeping per user comment indexes is out there. But it cleverly follows the rules of scaling in a distributed system. Writes and reads are done in parallel and that's the goal. Write contention is removed.

posted @ 2011-03-24 14:01 star | Views (422) | Comments (0)

In multi-threaded programs developed on Linux, debugging and monitoring individual threads has always been awkward because a particular thread is hard to pin down. There is now a way to do it: since kernel 2.6.9 the prctl interface supports the PR_SET_NAME option for setting the process name, and because Linux threads are light-weight processes (LWPs), the same call can be used to name a thread.
The API is defined as follows:
int prctl(int option, unsigned long arg2, unsigned long arg3, unsigned long arg4, unsigned long arg5);

PR_SET_NAME (since Linux 2.6.9)
Set the process name for the calling process, using the value in the location pointed to by (char *) arg2. The name can be up to 16 bytes long, and should be null-terminated if it contains fewer bytes.

PR_GET_NAME (since Linux 2.6.11)
Return the process name for the calling process, in the buffer pointed to by (char *) arg2. The buffer should allow space for up to 16 bytes; the returned string will be null-terminated if it is shorter than that.


A simple implementation:

#include <stdarg.h>
#include <stdio.h>
#include <sys/prctl.h>

int set_thread_title(const char* fmt, ...)
{
    char title[16] = {0};   /* PR_SET_NAME uses at most 16 bytes, including the NUL */
    va_list ap;

    va_start(ap, fmt);
    vsnprintf(title, sizeof(title), fmt, ap);
    va_end(ap);

    return prctl(PR_SET_NAME, title);
}

Now that threads can be given names, how do we see them?
ps -eL -o pid,user,lwp,comm
top -H   (per-thread view; the COMMAND column shows the thread name)


posted @ 2011-03-07 16:11 star | Views (7801) | Comments (2)

bitcask is a key-value storage engine whose distinguishing feature is that the index lives in memory while the actual data lives on disk.
1. All keys are kept in memory, organized in a hash map for fast lookup; each in-memory entry also stores a file pointer to the key's data on disk, so a value can be located directly.
2. Disk data is written append-only, which plays to the disk's strength at sequential access; every update is appended to the data file and the index is updated at the same time (a small sketch follows after this list).
3. Reads locate data directly through the index; bitcask relies on the file system's cache and does not implement a cache of its own.
4. Because updates are written to new locations, stale data at the old locations is periodically cleaned up and merged to reclaim disk space.
5. Concurrency control for reads and writes uses vector clocks.
6. The in-memory index is also flushed to a separate index file, so the full index does not have to be rebuilt on restart.
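A minimal sketch of that index structure (an illustration, not Riak's actual bitcask code; here a std::string stands in for the append-only data file on disk):

#include <cstdint>
#include <string>
#include <unordered_map>

// "keydir" entry: where the latest value for a key lives in the data file.
struct IndexEntry {
    uint64_t offset;   // where the value starts
    uint32_t size;     // length of the value
};

class MiniBitcask {
public:
    void put(const std::string& key, const std::string& value) {
        IndexEntry e{data_file_.size(), static_cast<uint32_t>(value.size())};
        data_file_.append(value);            // append-only write, never in place
        keydir_[key] = e;                    // the index now points at the newest copy
    }

    bool get(const std::string& key, std::string* out) const {
        auto it = keydir_.find(key);
        if (it == keydir_.end()) return false;
        *out = data_file_.substr(it->second.offset, it->second.size);  // direct "seek"
        return true;
    }

private:
    std::string data_file_;                               // stands in for the on-disk file
    std::unordered_map<std::string, IndexEntry> keydir_;  // all keys stay in memory
};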

            http://highscalability.com/blog/2011/1/10/riaks-bitcask-a-log-structured-hash-table-for-fast-keyvalue.html

posted @ 2011-02-16 19:23 star | Views (860) | Comments (0)

Varnish's author, Poul-Henning Kamp, works on the FreeBSD kernel, and Varnish's design draws on kernel principles and mechanisms. Some of the design notes:
1. Modern operating systems have elaborate optimizations for memory management and disk I/O to improve overall performance, and user-space programs need to be aware of, and cooperate with, these mechanisms. Take squid as an example: it implements its own object caching and eviction policies, which largely mirror what the operating system already does (recently accessed objects stay cached, cold objects are flushed to disk to free memory). In some situations this duplicated machinery conflicts with the OS and fails to achieve what was intended. When a squid-cached in-memory object has not been accessed for a while and squid has not yet flushed it to disk, the OS may swap that cold object out because memory is tight; squid does not know this and still believes the object is in memory. When squid's eviction policy later decides to flush the cold object to disk, the OS must first page it back in from swap so that squid can write it out. The performance cost of the whole round trip is plain to see.
Comment: this example cuts both ways; if an application's in-memory objects are being swapped out, the system is already short of memory and the efficiency of any memory cache is greatly reduced.

2. A cache with persistence has to rebuild its cache from the persisted data, and there are generally two approaches. One is to read from disk on demand; since the accesses are random and random disk reads are slow, this is not very fast but saves memory, and suits small, low-traffic machines with a large cache. The other is to build a complete index from disk up front, which improves access speed considerably.
Unlike ordinary disk storage, a persistent cache has modest reliability requirements and does not need strict crash recovery. Varnish takes the second approach and improves reliability through layered protection: the top layer is protected by A/B writes, while the underlying data itself carries no reliability guarantee.
            http://www.varnish-cache.org/trac/wiki/ArchitectNotes
posted @ 2011-01-28 11:52 star | Views (485) | Comments (0)

An introduction to the Kafka message middleware

Purpose and use cases

Kafka is LinkedIn's distributed messaging system. Its design emphasizes high throughput, and it is used for friend activity streams, relevance statistics, ranking statistics, access-rate control, batch processing and similar systems.

The traditional offline-analysis approach is to record data in log files and then analyze it centrally in batches. That does not suit activity-stream data with strict real-time requirements, while most message middleware handles real-time messages well but is weak at persisting a large backlog of unprocessed messages in the queue.

Design principles

    Persistent messages

    High throughput

    The consumer decides the message state

    Every role in the system runs as a distributed cluster

Consumers have a notion of logical groups: every consumer process belongs to a consumer group, and each message is delivered to exactly one consumer process in every group that subscribes to it.

LinkedIn runs multiple consumer groups, each containing several consumer processes with the same responsibility.

Deployment architecture

            http://sna-projects.com/kafka/images/tracking_high_level.png

Message persistence and caching

Kafka persists messages in disk files. How fast disk I/O is depends on how the disk is used: random writes are far slower than sequential writes, and a modern OS will aggressively use free memory as a cache and merge disk writes as long as reclaiming that memory stays cheap, so another layer of caching in the user process adds little. Kafka's reads and writes are therefore sequential, with messages appended to files.

To reduce memory copies, Kafka sends data with sendfile, and it improves throughput further by batching messages.
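A rough C-style illustration of the zero-copy path (a sketch that assumes a Linux socket; Kafka itself is written in Scala/Java and uses the equivalent FileChannel.transferTo): sendfile(2) asks the kernel to copy file data straight to the socket without passing it through user-space buffers.

#include <fcntl.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <unistd.h>

/* Send an entire file to an already-connected socket without copying it
 * through user space. Returns 0 on success, -1 on error. */
int send_file_to_socket(int sock_fd, const char* path)
{
    int file_fd = open(path, O_RDONLY);
    if (file_fd < 0) return -1;

    struct stat st;
    if (fstat(file_fd, &st) < 0) { close(file_fd); return -1; }

    off_t offset = 0;
    while (offset < st.st_size) {
        /* The kernel advances `offset` by the number of bytes it sent. */
        ssize_t n = sendfile(sock_fd, file_fd, &offset, st.st_size - offset);
        if (n <= 0) { close(file_fd); return -1; }
    }
    close(file_fd);
    return 0;
}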

             

Kafka does not store per-message state; instead it keeps each client's position as a (consumer, topic, partition) tuple, which removes most of the bookkeeping burden of tracking every message.

On the question of push versus pull, Kafka uses pull, because pushing introduces uncertainty when clients differ in processing capacity and traffic.

Load balancing

Producers and brokers are load-balanced in hardware; brokers and consumers both run as clusters, with ZooKeeper coordinating changes and membership.
posted @ 2011-01-25 15:56 star | Views (2116) | Comments (0)

            http://www.kernel.org/doc/man-pages/online/pages/man5/proc.5.html
/proc/{pid}/ holds all the runtime data of a running process and can be used to analyze its resource consumption and behaviour.

1. /proc/{pid}/stat
Runtime statistics of the process, e.g.:
awk '{print $1,$2,$3,$14,$15,$20,$22,$23,$24}' stat
which prints PID, COMM, STATE, UTIME (CPU ticks in user mode), STIME (CPU ticks in kernel mode), THREADS, START_TIME, VSIZE (virtual memory size in bytes) and RSS (resident memory in pages).
2. /proc/{pid}/status
Contains most of the data in stat, in a more readable form.
3. /proc/{pid}/task/
Per-thread status for each of the process's threads.
4. /proc/{pid}/fd/
The file descriptors the process has open.
5. /proc/{pid}/io
The process's I/O statistics (a small reader sketch follows below).
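A small C++ sketch (assuming the Linux /proc layout described above) that reads the same fields from /proc/self/stat programmatically; note that it splits naively on whitespace, so it assumes the comm field contains no spaces:

#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

int main()
{
    std::ifstream stat_file("/proc/self/stat");
    std::string line;
    std::getline(stat_file, line);

    // Split into whitespace-separated fields; their meaning is documented in proc(5).
    std::istringstream iss(line);
    std::vector<std::string> f;
    std::string tok;
    while (iss >> tok) f.push_back(tok);

    // proc(5) numbers fields from 1, so subtract 1 when indexing.
    std::cout << "pid="     << f[0]  << " comm="    << f[1]
              << " state="  << f[2]  << " utime="   << f[13]
              << " stime="  << f[14] << " threads=" << f[19]
              << " vsize="  << f[22] << " rss="     << f[23] << "\n";
    return 0;
}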


posted @ 2011-01-05 15:31 star | Views (231) | Comments (0)

            net.ipv4.tcp_syncookies = 1
            net.ipv4.tcp_tw_reuse = 1
            net.ipv4.tcp_tw_recycle = 1
            net.ipv4.tcp_fin_timeout = 30
            net.ipv4.ip_local_port_range = 1024 65000

            net.ipv4.route.max_size = 4096000
            net.core.somaxconn = 8192
            net.ipv4.tcp_synack_retries = 1
            net.ipv4.tcp_syn_retries = 1
            net.ipv4.netfilter.ip_conntrack_max = 2621400
            net.core.rmem_max = 20000000

            ulimit -n 40960
            ulimit -c unlimited

Just a bookmark for now; this list is to be completed later.

posted @ 2010-11-17 10:27 star | Views (127) | Comments (0)

Based on the amount of updated data and the elapsed interval, redis periodically flushes its data to storage, which amounts to taking a checkpoint.
The memory image is "copied" via the copy-on-write semantics of the fork system call, which keeps the data consistent while it is being written out.
But if the data changes heavily during the flush, copy-on-write may duplicate a large number of pages, and the extra memory copying shows up as load on the system.
The logic (a small sketch follows below):
1. The main process calls fork().
2. The child process closes the listening fd and starts flushing the data to storage.
3. The main process adjusts its policy to reduce changes to in-memory data.
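A minimal sketch of the fork-and-snapshot pattern with a toy in-memory table (this shows the general technique only, not redis's actual code):

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static int table[1024];            /* stands in for the in-memory data set */

int main(void)
{
    pid_t pid = fork();            /* the child gets a copy-on-write view of memory */
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {
        /* Child: sees the data exactly as it was at fork() time, even while
         * the parent keeps modifying it. Write the snapshot and exit. */
        FILE* f = fopen("snapshot.bin", "w");
        if (f) { fwrite(table, sizeof(table), 1, f); fclose(f); }
        _exit(0);
    }

    /* Parent: keeps serving writes; each page it touches is copied lazily by
     * the kernel -- that is the copy-on-write cost mentioned above. */
    for (int i = 0; i < 1024; i++) table[i]++;

    waitpid(pid, NULL, 0);         /* reap the snapshot child */
    return 0;
}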

This strategy alone does not make the data durable: there is no write-ahead log, so data may be lost in a crash.
redis therefore added an append-only log file (AOF) to protect the data, but writing the log on every update makes the file grow quickly, so redis compacts this log in the background using a scheme similar to the data flush.

Note: databases nowadays generally rely on a write-ahead log for durability, but even that log is not flushed in real time; it is written to a buffer and flushed to the file when triggered.


posted @ 2010-08-21 10:37 star | Views (904) | Comments (1)
