
Questions you may encounter in a Hadoop interview

Posted on 2013-03-18 13:03 by 鑫龍, filed under: Hadoop

Here are questions you may be asked in a Hadoop interview. How many can you answer?

1. How does Hadoop work?

2. How does MapReduce work?

3. What is HDFS's storage mechanism?

4. Give a simple example of how a MapReduce job runs.

5. The interviewer poses a problem for you to solve with MapReduce. For example: there are 10 folders, each containing 1,000,000 URLs; find the top 1,000,000 URLs.

6. What is the role of the Combiner in Hadoop?

Src: http://p-x1984.javaeye.com/blog/859843


             

Q1. Name the most common InputFormats defined in Hadoop. Which one is the default?
The following three are the most common InputFormats defined in Hadoop:
- TextInputFormat (the default)
- KeyValueInputFormat
- SequenceFileInputFormat
Q2. What is the difference between the TextInputFormat and KeyValueInputFormat classes?
TextInputFormat: reads lines of text files and provides the byte offset of each line as the key to the Mapper and the line itself as the value.
KeyValueInputFormat: reads text files and parses each line into a (key, value) pair. Everything up to the first tab character is sent as the key to the Mapper and the remainder of the line as the value.
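As a hedged driver sketch using the old "mapred" API: the concrete class implementing the key/value behaviour is KeyValueTextInputFormat, and if you configure nothing, TextInputFormat is used. The input/output paths come from the command line.

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.KeyValueTextInputFormat;

    public class InputFormatDemo {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(InputFormatDemo.class);
            // Default behaviour would be TextInputFormat:
            // key = byte offset of the line, value = the line itself.
            // Switch to tab-delimited (key, value) parsing instead:
            conf.setInputFormat(KeyValueTextInputFormat.class);
            conf.setOutputKeyClass(Text.class);
            conf.setOutputValueClass(Text.class);
            FileInputFormat.addInputPath(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));
            JobClient.runJob(conf); // identity mapper/reducer by default
        }
    }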
Q3. What is an InputSplit in Hadoop?
When a Hadoop job runs, the framework splits the input files into chunks and assigns each chunk to a mapper to process. Each such chunk is called an InputSplit.
Q4. How is the splitting of files invoked in the Hadoop framework?
The Hadoop framework invokes the getSplits() method of the InputFormat class (such as FileInputFormat) configured by the user.
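For reference, here are the two methods of the old-API org.apache.hadoop.mapred.InputFormat, paraphrased: getSplits() does the splitting discussed in Q3/Q4, and getRecordReader() produces the reader covered in Q6 below.

    import java.io.IOException;
    import org.apache.hadoop.mapred.InputSplit;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.RecordReader;
    import org.apache.hadoop.mapred.Reporter;

    // Paraphrase of the old-API InputFormat contract.
    public interface InputFormat<K, V> {
        // Carve the job's input into InputSplits, one per map task.
        InputSplit[] getSplits(JobConf job, int numSplits) throws IOException;
        // Produce the reader that turns one split into (key, value) pairs.
        RecordReader<K, V> getRecordReader(InputSplit split, JobConf job,
                                           Reporter reporter) throws IOException;
    }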
Q5. Consider this scenario: in an M/R system,
    - the HDFS block size is 64 MB
    - the input format is FileInputFormat
    - there are 3 files of size 64 KB, 65 MB and 127 MB
How many input splits will the Hadoop framework make?
Hadoop will make 5 splits, as follows:
- 1 split for the 64 KB file
- 2 splits for the 65 MB file (64 MB + 1 MB)
- 2 splits for the 127 MB file (64 MB + 63 MB)
Q6. What is the purpose of the RecordReader in Hadoop?
The InputSplit defines a slice of work but does not describe how to access it. The RecordReader class actually loads the data from its source and converts it into (key, value) pairs suitable for reading by the Mapper. The RecordReader instance is defined by the InputFormat.
Q7. After the map phase finishes, the Hadoop framework performs "partitioning, shuffle and sort". Explain what happens in this phase.
- Partitioning
Partitioning is the process of determining which reducer instance will receive which intermediate keys and values. Each mapper must determine, for each of its output (key, value) pairs, which reducer will receive it. It is necessary that for any key, regardless of which mapper instance generated it, the destination partition is the same (a sketch of a custom partitioner follows this answer).

            - Shuffle
            After the first map tasks have completed, the nodes may still be performing several more map tasks each. But they also begin exchanging the intermediate outputs from the map tasks to where they are required by the reducers. This process of moving map outputs to the reducers is known as shuffling.
            - Sort
            Each reduce task is responsible for reducing the values associated with several intermediate keys. The set of intermediate keys on a single node is automatically sorted by Hadoop before they are presented to the Reducer 
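To make the partitioning step concrete, here is a hypothetical custom partitioner in the old mapred API that routes keys by their first character, so a given key always reaches the same reducer no matter which mapper emitted it. The class name and routing rule are invented for illustration; you would enable it with conf.setPartitionerClass(FirstCharPartitioner.class).

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.Partitioner;

    // Route each key by its first character; identical keys always map
    // to the same partition, which is the property Q7 requires.
    public class FirstCharPartitioner implements Partitioner<Text, IntWritable> {
        public int getPartition(Text key, IntWritable value, int numPartitions) {
            // Mask off the sign bit so the result is non-negative.
            return (key.charAt(0) & Integer.MAX_VALUE) % numPartitions;
        }
        public void configure(JobConf job) { } // no per-job setup needed
    }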
Q9. If no custom partitioner is defined in Hadoop, how is data partitioned before it is sent to the reducer?
The default partitioner computes a hash value for the key and assigns the partition based on this result.
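The default is HashPartitioner (org.apache.hadoop.mapred.lib.HashPartitioner in the old API), whose logic boils down to roughly this paraphrase:

    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.Partitioner;

    // Simplified paraphrase of the stock HashPartitioner: mask off the
    // sign bit, then take the key's hash modulo the number of reducers.
    public class LikeHashPartitioner<K, V> implements Partitioner<K, V> {
        public int getPartition(K key, V value, int numReduceTasks) {
            return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
        }
        public void configure(JobConf job) { }
    }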
Q10. What is a Combiner?
The Combiner is a "mini-reduce" process that operates only on data generated by a mapper. The Combiner receives as input all data emitted by the Mapper instances on a given node. The output from the Combiner is then sent to the Reducers, instead of the output from the Mappers.
Q11. Give an example scenario where a combiner can be used and one where it cannot.
There are several examples; the following are the most common ones.
- Scenario where you can use a combiner:
  getting the list of distinct words in a file
- Scenario where you cannot use a combiner:
  calculating the mean of a list of numbers (a mean of partial means is not the overall mean; see the sketch below)
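The classic word count, written here against the old mapred API, shows the safe case: the Reducer doubles as the Combiner because summing partial sums still gives the total. The comment notes why a mean cannot be combined this way.

    import java.io.IOException;
    import java.util.Iterator;
    import java.util.StringTokenizer;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.*;
    import org.apache.hadoop.mapred.*;

    // Word count: the Reducer can serve as the Combiner because integer
    // addition is associative and commutative. A mean could not be
    // combined this way: mean(mean(1,2), 3) = 2.25 but mean(1,2,3) = 2.
    public class WordCount {
        public static class Map extends MapReduceBase
                implements Mapper<LongWritable, Text, Text, IntWritable> {
            private final static IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();
            public void map(LongWritable key, Text value,
                            OutputCollector<Text, IntWritable> out, Reporter r)
                    throws IOException {
                StringTokenizer it = new StringTokenizer(value.toString());
                while (it.hasMoreTokens()) {
                    word.set(it.nextToken());
                    out.collect(word, ONE);
                }
            }
        }
        public static class Reduce extends MapReduceBase
                implements Reducer<Text, IntWritable, Text, IntWritable> {
            public void reduce(Text key, Iterator<IntWritable> values,
                               OutputCollector<Text, IntWritable> out, Reporter r)
                    throws IOException {
                int sum = 0;
                while (values.hasNext()) sum += values.next().get();
                out.collect(key, new IntWritable(sum));
            }
        }
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(WordCount.class);
            conf.setOutputKeyClass(Text.class);
            conf.setOutputValueClass(IntWritable.class);
            conf.setMapperClass(Map.class);
            conf.setCombinerClass(Reduce.class); // safe: sum of partial sums
            conf.setReducerClass(Reduce.class);
            FileInputFormat.addInputPath(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));
            JobClient.runJob(conf);
        }
    }

Note that the framework may run the combiner zero, one, or several times for a given map's output, so the operation must give the same final result regardless.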
Q12. What is the JobTracker?
The JobTracker is the service within Hadoop that runs MapReduce jobs on the cluster.
Q13. What are some typical functions of the JobTracker?
The following are some typical tasks of the JobTracker:
- it accepts jobs from clients
- it talks to the NameNode to determine the location of the data
- it locates TaskTracker nodes with available slots at or near the data
- it submits the work to the chosen TaskTracker nodes and monitors the progress of each task by receiving heartbeat signals from the TaskTrackers
Q14. What is the TaskTracker?
A TaskTracker is a node in the cluster that accepts tasks - Map, Reduce and Shuffle operations - from a JobTracker.

Q15. What is the relationship between Jobs and Tasks in Hadoop?
In Hadoop, one job is broken down into one or many tasks.
Q16. Suppose Hadoop spawned 100 tasks for a job and one of the tasks failed. What will Hadoop do?
It will restart the task on some other TaskTracker; only if the task fails more than 4 times (the default setting, which can be changed) will it kill the job.
Q17. Hadoop achieves parallelism by dividing the tasks across many nodes, so it is possible for a few slow nodes to rate-limit the rest of the program and slow it down. What mechanism does Hadoop provide to combat this?
Speculative execution.
Q18. How does speculative execution work in Hadoop?
The JobTracker makes different TaskTrackers process the same input. When tasks complete, they announce this fact to the JobTracker. Whichever copy of a task finishes first becomes the definitive copy. If other copies were executing speculatively, Hadoop tells the TaskTrackers to abandon those tasks and discard their outputs. The Reducers then receive their inputs from whichever Mapper completed successfully first.
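Speculative execution can also be toggled per job. A minimal snippet, assuming `conf` is the job's JobConf and using the old-API property names (both default to true):

    conf.setBoolean("mapred.map.tasks.speculative.execution", false);
    conf.setBoolean("mapred.reduce.tasks.speculative.execution", false);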
Q19. Using the command line in Linux, how will you
- see all jobs running in the Hadoop cluster
- kill a job
hadoop job -list
hadoop job -kill <jobid>
Q20. What is Hadoop Streaming?
Streaming is a generic API that allows programs written in virtually any language to be used as Hadoop Mapper and Reducer implementations.

Q21. What characteristic of the Streaming API makes it flexible enough to run MapReduce jobs in languages like Perl, Ruby, Awk etc.?
Hadoop Streaming allows arbitrary programs to be used for the Mapper and Reducer phases of a MapReduce job: both Mappers and Reducers receive their input on stdin and emit output (key, value) pairs on stdout.
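For example, the canonical invocation from the Hadoop Streaming documentation uses ordinary Unix tools as mapper and reducer. The jar location and the input/output paths below are placeholders that vary by installation and version:

    hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming.jar \
        -input myInputDir \
        -output myOutputDir \
        -mapper /bin/cat \
        -reducer /usr/bin/wc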
Q22. What is the Distributed Cache in Hadoop?
The Distributed Cache is a facility provided by the MapReduce framework to cache files (text, archives, jars and so on) needed by applications during execution of a job. The framework copies the necessary files to the slave node before any tasks for the job are executed on that node.
Q23. What is the benefit of the Distributed Cache? Why not just keep the file in HDFS and have the application read it?
Because the Distributed Cache is much faster: it copies the file to each TaskTracker once, at the start of the job. If a TaskTracker then runs 10 or 100 mappers or reducers, they all use the same local copy. If the MR job instead reads the file from HDFS, every mapper accesses HDFS separately, so a TaskTracker running 100 map tasks would read the file from HDFS 100 times. HDFS is also not very efficient when used this way.
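A hedged sketch of the usual old-API pattern; the file path is an example:

    import java.net.URI;
    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.JobConf;

    public class CacheDemo {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(CacheDemo.class);
            // Driver side: register an HDFS file. The framework copies it
            // to each task node once, before any task of this job runs there.
            DistributedCache.addCacheFile(new URI("/user/me/lookup.txt"), conf);
            // ... set mapper/reducer, input/output paths, submit job ...
        }

        // Task side, typically called from Mapper.configure(JobConf):
        static Path[] localCopies(JobConf conf) throws Exception {
            return DistributedCache.getLocalCacheFiles(conf);
        }
    }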

Q24. What mechanism does the Hadoop framework provide to synchronize changes made to the Distributed Cache during the runtime of an application?
This is a trick question: there is no such mechanism. The Distributed Cache is by design read-only during job execution.

Q25. Have you ever used Counters in Hadoop? Give us an example scenario.
Anybody who claims to have worked on a Hadoop project is expected to have used counters.
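A typical scenario is counting malformed input records instead of failing the job. The mapper, enum and tab-separated record format below are invented for illustration; the counter totals are aggregated across all tasks and appear in the job's counter report.

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.*;

    public class ParsingMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, Text> {
        enum RecordQuality { WELL_FORMED, MALFORMED }

        public void map(LongWritable key, Text value,
                        OutputCollector<Text, Text> out, Reporter reporter)
                throws IOException {
            String[] fields = value.toString().split("\t");
            if (fields.length < 2) {
                // Tally the bad record and skip it rather than failing.
                reporter.incrCounter(RecordQuality.MALFORMED, 1);
                return;
            }
            reporter.incrCounter(RecordQuality.WELL_FORMED, 1);
            out.collect(new Text(fields[0]), new Text(fields[1]));
        }
    }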

Q26. Is it possible to provide multiple inputs to Hadoop? If yes, how can you give multiple directories as input to a Hadoop job?
Yes. The input format class provides methods to add multiple directories as input to a Hadoop job.
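A sketch of the driver-side calls in the old mapred API; `conf` is the job's JobConf and the paths are examples:

    FileInputFormat.addInputPath(conf, new Path("/data/logs/2013-03-17"));
    FileInputFormat.addInputPath(conf, new Path("/data/logs/2013-03-18"));
    // or, equivalently, as one comma-separated string:
    FileInputFormat.addInputPaths(conf, "/data/logs/2013-03-17,/data/logs/2013-03-18");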

Q27. Is it possible to have a Hadoop job write output to multiple destinations? If yes, how?
Yes, by using the MultipleOutputs class.
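A hedged old-API sketch; the named output "errors" and the record types are invented for illustration. In the driver you declare the named output, then the reducer writes to it through a MultipleOutputs instance:

    import java.io.IOException;
    import java.util.Iterator;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.*;
    import org.apache.hadoop.mapred.lib.MultipleOutputs;

    // Driver side:
    //   MultipleOutputs.addNamedOutput(conf, "errors",
    //           TextOutputFormat.class, Text.class, Text.class);
    public class SplitReducer extends MapReduceBase
            implements Reducer<Text, Text, Text, Text> {
        private MultipleOutputs mos;

        public void configure(JobConf conf) {
            mos = new MultipleOutputs(conf);
        }

        @SuppressWarnings("unchecked")
        public void reduce(Text key, Iterator<Text> values,
                           OutputCollector<Text, Text> out, Reporter reporter)
                throws IOException {
            while (values.hasNext()) {
                Text v = values.next();
                if (v.toString().startsWith("ERROR")) {
                    // Goes to the extra "errors" output, not the default one.
                    mos.getCollector("errors", reporter).collect(key, v);
                } else {
                    out.collect(key, v);
                }
            }
        }

        public void close() throws IOException {
            mos.close();
        }
    }

Note the old-API class writes extra named files alongside the regular job output; the newer org.apache.hadoop.mapreduce.lib.output.MultipleOutputs can additionally take a base output path, which allows writing into subdirectories.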

Q28. What will a Hadoop job do if you try to run it with an output directory that is already present? Will it
- overwrite it
- warn you and continue
- throw an exception and exit

The Hadoop job will throw an exception and exit.
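A common driver-side idiom, shown as a snippet whose `conf` and `args` come from the surrounding driver, is to delete the output directory before submitting. Use it with care, since it discards any previous results:

    FileSystem fs = FileSystem.get(conf);
    Path out = new Path(args[1]);
    if (fs.exists(out)) {
        fs.delete(out, true); // true = recursive delete
    }
    FileOutputFormat.setOutputPath(conf, out);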

Q29. How can you set an arbitrary number of mappers to be created for a job in Hadoop?
This is a trick question: you cannot set it directly. The number of map tasks equals the number of InputSplits, which is determined by the input data and the InputFormat; the mapred.map.tasks setting is only a hint to the framework.

Q30. How can you set an arbitrary number of reducers to be created for a job in Hadoop?
You can either do it programmatically, using the setNumReduceTasks method of the JobConf class, or set it as a configuration setting (mapred.reduce.tasks).
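A sketch of both options; `conf` is the job's JobConf, and the command-line form assumes the driver uses ToolRunner so that -D options are picked up (myjob.jar and MyDriver are placeholder names):

    // programmatically, in the driver
    conf.setNumReduceTasks(10);

or from the command line:

    hadoop jar myjob.jar MyDriver -D mapred.reduce.tasks=10 input output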

             

Src: http://xsh8637.blog.163.com/blog/#m=0&t=1&c=fks_084065087084081065083083087095086082081074093080080069
