Hadoop in Practice: Reading Data from HBase

1. Environment Setup

The main environment used for this experiment:

  • Host machine: Windows 10
  • Virtual machines: VMware Pro 12, used to create three virtual machines; IP addresses: 192.168.211.3
  • Hadoop 2.6.4
  • MySQL: Server version 5.7.21 MySQL Community Server (GPL)

2. Requirements

Read data from an HBase table and run a WordCount over it, producing the final word frequencies.
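Before wiring up MapReduce, it helps to see what result to expect. The sketch below is plain Java with no Hadoop dependency; it assumes the four sample lines that `putToTable` (shown later) writes into HBase, and simulates the map step (emit `(word, 1)`) followed by the reduce step (sum per word):

```java
import java.util.Map;
import java.util.TreeMap;

public class WordCountSketch {
    // Simulate map (emit (word, 1)) followed by reduce (sum per word).
    static Map<String, Integer> count(String[] lines) {
        Map<String, Integer> counts = new TreeMap<>();
        for (String line : lines) {
            for (String word : line.split(" ")) {
                counts.merge(word, 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        // The cell values that putToTable writes into HBase.
        String[] lines = {"hello spark", "hi hadoop", "hello hbase", "hello kafka"};
        System.out.println(count(lines));
        // prints {hadoop=1, hbase=1, hello=3, hi=1, kafka=1, spark=1}
    }
}
```

The class name `WordCountSketch` is just for this illustration; the real job distributes the same logic across a Mapper and a Reducer.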

3. Example Code

  • HBaseMapper class
package mapReduce.FromHBToMys;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;

import java.io.IOException;

public class HBaseMapper extends TableMapper<Text, IntWritable> {

    /**
     * @param key   KEYIN: the rowkey returned by the HBase API
     * @param value VALUEIN: the row content read through the HBase API
     * @param context
     * @throws IOException
     * @throws InterruptedException
     */
    @Override
    protected void map(ImmutableBytesWritable key, Result value, Context context)
            throws IOException, InterruptedException {

        // rawCells() returns the array of cells in this row
        for (Cell cell : value.rawCells()) {

            /*String family = new String(CellUtil.cloneFamily(cell));
            String row = new String(CellUtil.cloneRow(cell));
            String qualifier = new String(CellUtil.cloneQualifier(cell));
            */

            String line = new String(CellUtil.cloneValue(cell));
            //System.out.println("qualifier = "+qualifier+"\n line = "+line+"\n family = "+family+"\n row = "+row);
            String[] word = line.split(" ");
            for (int i = 0; i < word.length; i++) {
                context.write(new Text(word[i]), new IntWritable(1));
            }
        }
    }
}
/*
1. cloneQualifier(cell) copies the qualifier of the cell into a byte[] array.
   That byte[] is then wrapped in a String, and the String in a Text object
   [so the KeyOut emitted by the mapper is of type Text].
2. ValueOut: IntWritable.

3. Analysis of the following code:
if (new String(CellUtil.cloneQualifier(cell)).equals("GroupID")) {
    context.write(new Text(new String(CellUtil.cloneValue(cell))), new IntWritable(1));
}

The context.write(...) line simply takes the value out of each cell
and emits it as the mapper's KeyOut.

4. Object a;
   String.valueOf(a) and new String(a) do not always behave the same way.
 */
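Point 4 above deserves a concrete illustration. For a byte[] (the kind of raw data HBase cells hand back), String.valueOf and new String behave very differently; a quick standalone check, independent of HBase:

```java
import java.nio.charset.StandardCharsets;

public class ValueOfVsNewString {
    public static void main(String[] args) {
        byte[] raw = "hello".getBytes(StandardCharsets.UTF_8);

        // new String(byte[]) decodes the bytes into text.
        String decoded = new String(raw, StandardCharsets.UTF_8);
        System.out.println(decoded); // hello

        // String.valueOf(Object) falls back to Object.toString(),
        // which for an array prints something like "[B@1b6d3586".
        String mangled = String.valueOf((Object) raw);
        System.out.println(mangled.startsWith("[B@")); // true

        // String.valueOf(null Object) yields the literal string "null",
        // whereas new String((byte[]) null) would throw a NullPointerException.
        System.out.println(String.valueOf((Object) null)); // null
    }
}
```

This is why the listings above always use `new String(CellUtil.cloneValue(cell))` rather than `String.valueOf(...)` on the cloned byte arrays.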
  • HBaseReducer class
package mapReduce.FromHBToMys;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;


public class HBaseReducer extends
        Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable i : values) {
            sum += i.get();
        }
        context.write(key, new IntWritable(sum));
    }
}
 
/*
1. A variant that guards against empty keys before writing:
byte[] keyBytes = Bytes.toBytes(key.toString());
if (keyBytes.length > 0) {
    // column family "content", qualifier "count", cell value = the count
    context.write(key, new IntWritable(sum));
}
 */
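The empty-key guard above matters because `line.split(" ")` in the mapper can emit empty tokens whenever the input contains leading or consecutive spaces, and those would arrive at the reducer as empty keys. A standalone check with a hypothetical input:

```java
import java.util.Arrays;

public class SplitGotcha {
    public static void main(String[] args) {
        // Consecutive spaces produce an empty token...
        String[] words = "hello  spark".split(" ");
        System.out.println(Arrays.toString(words)); // [hello, , spark]

        // ...which would reach the reducer as an empty key;
        // a keyBytes.length > 0 check filters it out.
        String[] filtered = Arrays.stream(words)
                .filter(w -> !w.isEmpty())
                .toArray(String[]::new);
        System.out.println(Arrays.toString(filtered)); // [hello, spark]
    }
}
```

The sample data inserted by this article's `putToTable` is single-spaced, so the guard never fires there, but it is cheap insurance on real input.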
  • HBaseWCJob class
package mapReduce.FromHBToMys;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;


public class HBaseWCJob {

    public static final String tableName = "mytable";
    public static final String outputFilePath = "hdfs://192.168.211.3:9000/output/mytable";

    public static Configuration conf = HBaseConfiguration.create();
    static {
        // this configuration is very important
        conf.set("hbase.master", "192.168.211.3:60000");
        conf.set("hbase.zookeeper.quorum", "192.168.211.3");
        conf.set("hbase.zookeeper.property.clientPort", "2181");
    }

    /**
     * @param tableName    name of the table to create
     * @param columnFamily column families to add
     * @throws IOException
     */
    public static void createHBaseTable(String tableName, String... columnFamily) throws IOException {
        HTableDescriptor hTableDescriptor = new HTableDescriptor(TableName.valueOf(tableName));

        HBaseAdmin admin = new HBaseAdmin(conf);
        if (admin.tableExists(tableName)) {
            System.out.println("table exists, skipping creation......");
            //admin.disableTable(tableName);
            //admin.deleteTable(tableName);
            return;
        }
        System.out.println("create new table:" + tableName);
        for (String family : columnFamily) {
            // add a column family
            hTableDescriptor.addFamily(new HColumnDescriptor(family));
        }
        admin.createTable(hTableDescriptor);
    }

    public static void main(String[] args)
            throws IOException, InterruptedException, ClassNotFoundException {
        // create an HBase table with a column family named cf
        createHBaseTable(tableName, "cf");
        putToTable(tableName, "cf");
        Scan scan = new Scan();

        // get all columns from the specified family
        // scan.addFamily(Bytes.toBytes("cf")); has the same effect as the line below
        scan.addFamily("cf".getBytes());

        Job job = Job.getInstance(conf, "hbase_word_count");
        job.setJarByClass(HBaseWCJob.class);

        // the input format is TableInputFormat
        job.setInputFormatClass(TableInputFormat.class);

        // configure the job
        // HBaseMapper.class: the mapper class to use
        // HBaseReducer.class: the reducer class to use
        TableMapReduceUtil.initTableMapperJob(
                tableName,
                scan,
                HBaseMapper.class,
                Text.class,
                IntWritable.class,
                job);

        // without this, the default identity reducer would run
        job.setReducerClass(HBaseReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // set the output file
        // note: use the FileOutputFormat from the mapreduce package
        // make sure the output directory does not already exist
        FileOutputFormat.setOutputPath(job, new Path(outputFilePath));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }

    public static void putToTable(String tableName, String columnName) {
        try {
            Connection connection = ConnectionFactory.createConnection(conf);
            // Table: used to communicate with a single HBase table
            Table table = connection.getTable(TableName.valueOf(tableName));
            String[] rowName = {"first", "second", "third", "fourth"};
            String[] value = {"hello spark", "hi hadoop", "hello hbase", "hello kafka"};
            for (int i = 0; i < rowName.length; i++) {
                Put put = new Put(Bytes.toBytes(rowName[i]));
                put.addColumn(
                        Bytes.toBytes("cf"),
                        Bytes.toBytes("keyWord"),
                        Bytes.toBytes(value[i]));
                table.put(put);
            }
            table.close();
            connection.close();
            System.out.println("put data to " + tableName + " successfully");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

4. Execution Results

After running the program, the following log output is obtained:

16:17:08.813 [main] DEBUG org.apache.hadoop.security.Groups -  Creating new Groups object 
16:17:08.832 [main] DEBUG o.a.hadoop.util.NativeCodeLoader - Trying to load the custom-built native-hadoop library... 
16:17:08.835 [main] DEBUG o.a.hadoop.util.NativeCodeLoader - Loaded the native-hadoop library 
16:17:08.836 [main] DEBUG o.a.h.s.JniBasedUnixGroupsMapping - Using JniBasedUnixGroupsMapping for Group resolution 
16:17:08.836 [main] DEBUG o.a.h.s.JniBasedUnixGroupsMappingWithFallback - Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMapping 
16:17:08.907 [main] DEBUG org.apache.hadoop.security.Groups - Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000 
16:17:08.974 [main] DEBUG o.a.h.m.lib.MutableMetricsFactory - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, value=[Rate of successful kerberos logins and latency (milliseconds)], valueName=Time) 
16:17:08.984 [main] DEBUG o.a.h.m.lib.MutableMetricsFactory - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, value=[Rate of failed kerberos logins and latency (milliseconds)], valueName=Time) 
16:17:08.984 [main] DEBUG o.a.h.m.lib.MutableMetricsFactory - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, value=[GetGroups], valueName=Time) 
16:17:08.985 [main] DEBUG o.a.h.m.impl.MetricsSystemImpl - UgiMetrics, User and group related metrics 
16:17:09.028 [main] DEBUG o.a.h.s.a.util.KerberosName - Kerberos krb5 configuration not found, setting default realm to empty 
16:17:09.036 [main] DEBUG o.a.h.security.UserGroupInformation - hadoop login 
16:17:09.037 [main] DEBUG o.a.h.security.UserGroupInformation - hadoop login commit 
16:17:09.041 [main] DEBUG o.a.h.security.UserGroupInformation - using local user:NTUserPrincipal: Administrator 
16:17:09.041 [main] DEBUG o.a.h.security.UserGroupInformation - Using user: "NTUserPrincipal: Administrator" with name Administrator 
16:17:09.041 [main] DEBUG o.a.h.security.UserGroupInformation - User entry: "Administrator" 
16:17:09.041 [main] DEBUG o.a.h.security.UserGroupInformation - UGI loginUser:Administrator (auth:SIMPLE) 
16:17:09.226 [main] INFO  o.a.h.h.z.RecoverableZooKeeper - Process identifier=hconnection-0x33c911a1 connecting to ZooKeeper ensemble=192.168.211.4:2181 
16:17:09.249 [main] DEBUG o.a.h.m.impl.MetricsSystemImpl - from system property: null 
16:17:09.249 [main] DEBUG o.a.h.m.impl.MetricsSystemImpl - from environment variable: null 
16:17:09.270 [main] DEBUG o.a.c.c.ConfigurationUtils - ConfigurationUtils.locate(): base is null, name is hadoop-metrics2-hbase.properties 
16:17:09.272 [main] DEBUG o.a.c.c.ConfigurationUtils - ConfigurationUtils.locate(): base is null, name is hadoop-metrics2.properties 
16:17:09.273 [main] WARN  o.a.h.metrics2.impl.MetricsConfig - Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 
16:17:09.276 [main] DEBUG o.a.h.metrics2.impl.MetricsConfig - poking parent 'PropertiesConfiguration' for key: period 
16:17:09.281 [main] DEBUG o.a.h.m.lib.MutableMetricsFactory - field org.apache.hadoop.metrics2.lib.MutableStat org.apache.hadoop.metrics2.impl.MetricsSystemImpl.snapshotStat with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, value=[Snapshot, Snapshot stats], valueName=Time) 
16:17:09.281 [main] DEBUG o.a.h.m.lib.MutableMetricsFactory - field org.apache.hadoop.metrics2.lib.MutableStat org.apache.hadoop.metrics2.impl.MetricsSystemImpl.publishStat with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, value=[Publish, Publishing stats], valueName=Time) 
16:17:09.281 [main] DEBUG o.a.h.m.lib.MutableMetricsFactory - field org.apache.hadoop.metrics2.lib.MutableCounterLong org.apache.hadoop.metrics2.impl.MetricsSystemImpl.droppedPubAll with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, value=[Dropped updates by all sinks], valueName=Time) 
16:17:09.284 [main] DEBUG o.a.h.metrics2.impl.MetricsConfig - poking parent 'PropertiesConfiguration' for key: source.source.start_mbeans 
16:17:09.284 [main] DEBUG o.a.h.metrics2.impl.MetricsConfig - poking parent 'MetricsConfig' for key: source.start_mbeans 
16:17:09.284 [main] DEBUG o.a.h.metrics2.impl.MetricsConfig - poking parent 'PropertiesConfiguration' for key: *.source.start_mbeans 
16:17:09.321 [main] DEBUG o.a.h.m.impl.MetricsSourceAdapter - Updating attr cache... 
16:17:09.321 [main] DEBUG o.a.h.m.impl.MetricsSourceAdapter - Done. # tags & metrics=10 
16:17:09.321 [main] DEBUG o.a.h.m.impl.MetricsSourceAdapter - Updating info cache... 
16:17:09.321 [main] DEBUG o.a.h.m.impl.MetricsSystemImpl - [javax.management.MBeanAttributeInfo[description=Metrics context, name=tag.Context, type=java.lang.String, read-only, descriptor={}], ..., javax.management.MBeanAttributeInfo[description=Dropped updates by all sinks, name=DroppedPubAll, type=java.lang.Long, read-only, descriptor={}]] 
16:17:09.321 [main] DEBUG o.a.h.m.impl.MetricsSourceAdapter - Done 
16:17:09.322 [main] DEBUG o.apache.hadoop.metrics2.util.MBeans - Registered Hadoop:service=HBase,name=MetricsSystem,sub=Stats 
16:17:09.322 [main] DEBUG o.a.h.m.impl.MetricsSourceAdapter - MBean for source MetricsSystem,sub=Stats registered. 
16:17:09.322 [main] INFO  o.a.h.m.impl.MetricsSystemImpl - Scheduled snapshot period at 10 second(s). 
16:17:09.323 [main] INFO  o.a.h.m.impl.MetricsSystemImpl - HBase metrics system started 
16:17:09.323 [main] DEBUG o.a.h.metrics2.impl.MetricsConfig - poking parent 'PropertiesConfiguration' for key: source.source.start_mbeans 
16:17:09.323 [main] DEBUG o.a.h.metrics2.impl.MetricsConfig - poking parent 'MetricsConfig' for key: source.start_mbeans 
16:17:09.323 [main] DEBUG o.a.h.metrics2.impl.MetricsConfig - poking parent 'PropertiesConfiguration' for key: *.source.start_mbeans 
16:17:09.323 [main] DEBUG o.a.h.m.impl.MetricsSourceAdapter - Updating attr cache... 
16:17:09.323 [main] DEBUG o.a.h.m.impl.MetricsSourceAdapter - Done. # tags & metrics=8 
16:17:09.323 [main] DEBUG o.a.h.m.impl.MetricsSourceAdapter - Updating info cache... 
16:17:09.323 [main] DEBUG o.a.h.m.impl.MetricsSystemImpl - [javax.management.MBeanAttributeInfo[description=Metrics context, name=tag.Context, type=java.lang.String, read-only, descriptor={}], ..., javax.management.MBeanAttributeInfo[description=Average time for getGroups, name=GetGroupsAvgTime, type=java.lang.Double, read-only, descriptor={}]] 
16:17:09.324 [main] DEBUG o.a.h.m.impl.MetricsSourceAdapter - Done 
16:17:09.324 [main] DEBUG o.apache.hadoop.metrics2.util.MBeans - Registered Hadoop:service=HBase,name=UgiMetrics 
16:17:09.324 [main] DEBUG o.a.h.m.impl.MetricsSourceAdapter - MBean for source UgiMetrics registered. 
16:17:09.324 [main] DEBUG o.a.h.m.impl.MetricsSystemImpl - Registered source UgiMetrics 
16:17:09.325 [main] DEBUG o.apache.hadoop.metrics2.util.MBeans - Registered Hadoop:service=HBase,name=MetricsSystem,sub=Control 
16:17:09.326 [main] DEBUG o.a.h.m.impl.MetricsSystemImpl - JvmMetrics, JVM related metrics etc. 
16:17:09.327 [main] DEBUG o.a.h.metrics2.impl.MetricsConfig - poking parent 'PropertiesConfiguration' for key: source.source.start_mbeans 
16:17:09.327 [main] DEBUG o.a.h.metrics2.impl.MetricsConfig - poking parent 'MetricsConfig' for key: source.start_mbeans 
16:17:09.327 [main] DEBUG o.a.h.metrics2.impl.MetricsConfig - poking parent 'PropertiesConfiguration' for key: *.source.start_mbeans 
16:17:09.329 [main] DEBUG o.a.h.m.impl.MetricsSourceAdapter - Updating attr cache... 
16:17:09.329 [main] DEBUG o.a.h.m.impl.MetricsSourceAdapter - Done. # tags & metrics=27 
16:17:09.329 [main] DEBUG o.a.h.m.impl.MetricsSourceAdapter - Updating info cache... 
16:17:09.329 [main] DEBUG o.a.h.m.impl.MetricsSystemImpl - [javax.management.MBeanAttributeInfo[description=Metrics context, name=tag.Context, type=java.lang.String, read-only, descriptor={}], ..., javax.management.MBeanAttributeInfo[description=Total number of info log events, name=LogInfo, type=java.lang.Long, read-only, descriptor={}]] 
16:17:09.330 [main] DEBUG o.a.h.m.impl.MetricsSourceAdapter - Done 
16:17:09.330 [main] DEBUG o.apache.hadoop.metrics2.util.MBeans - Registered Hadoop:service=HBase,name=JvmMetrics 
16:17:09.330 [main] DEBUG o.a.h.m.impl.MetricsSourceAdapter - MBean for source JvmMetrics registered. 
16:17:09.330 [main] DEBUG o.a.h.m.impl.MetricsSystemImpl - Registered source JvmMetrics 
16:17:09.337 [main] DEBUG o.a.h.m.impl.MetricsSystemImpl - ZooKeeper,sub=ZOOKEEPER, Metrics about ZooKeeper 
16:17:09.337 [main] DEBUG o.a.h.metrics2.impl.MetricsConfig - poking parent 'PropertiesConfiguration' for key: source.source.start_mbeans 
16:17:09.337 [main] DEBUG o.a.h.metrics2.impl.MetricsConfig - poking parent 'MetricsConfig' for key: source.start_mbeans 
16:17:09.337 [main] DEBUG o.a.h.metrics2.impl.MetricsConfig - poking parent 'PropertiesConfiguration' for key: *.source.start_mbeans 
16:17:09.337 [main] DEBUG o.a.h.m.impl.MetricsSourceAdapter - Updating attr cache... 
16:17:09.337 [main] DEBUG o.a.h.m.impl.MetricsSourceAdapter - Done. # tags & metrics=2 
16:17:09.337 [main] DEBUG o.a.h.m.impl.MetricsSourceAdapter - Updating info cache... 
16:17:09.337 [main] DEBUG o.a.h.m.impl.MetricsSystemImpl - [javax.management.MBeanAttributeInfo[description=Metrics context, name=tag.Context, type=java.lang.String, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Local hostname, name=tag.Hostname, type=java.lang.String, read-only, descriptor={}]] 
16:17:09.337 [main] DEBUG o.a.h.m.impl.MetricsSourceAdapter - Done 
16:17:09.337 [main] DEBUG o.apache.hadoop.metrics2.util.MBeans - Registered Hadoop:service=HBase,name=ZooKeeper,sub=ZOOKEEPER 
16:17:09.337 [main] DEBUG o.a.h.m.impl.MetricsSourceAdapter - MBean for source ZooKeeper,sub=ZOOKEEPER registered. 
16:17:09.337 [main] DEBUG o.a.h.m.impl.MetricsSystemImpl - Registered source ZooKeeper,sub=ZOOKEEPER 
16:17:09.341 [main] INFO  o.a.h.hbase.metrics.MetricRegistries - Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 
16:17:09.356 [main] INFO  org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT 
16:17:09.356 [main] INFO  org.apache.zookeeper.ZooKeeper - Client environment:host.name=ICOS-20180710CX 
16:17:09.356 [main] INFO  org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.8.0_77 
16:17:09.357 [main] INFO  org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Oracle Corporation 
16:17:09.357 [main] INFO  org.apache.zookeeper.ZooKeeper - Client environment:java.home=D:\Program Files\Java\jdk1.8.0_77\jre 
16:17:09.357 [main] INFO  org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=D:\Program Files\Java\jdk1.8.0_77\jre\lib\charsets.jar;D:\Program Files\Java\jdk1.8.0_77\jre\lib\deploy.jar;... 
vocity-parsers-2.2.1.jar;E:\.m2\repository\org\apache\spark\spark-sketch_2.11\2.2.0\spark-sketch_2.11-2.2.0.jar;E:\.m2\repository\org\apache\spark\spark-catalyst_2.11\2.2.0\spark-catalyst_2.11-2.2.0.jar;E:\.m2\repository\org\codehaus\janino\janino\3.0.0\janino-3.0.0.jar;E:\.m2\repository\org\codehaus\janino\commons-compiler\3.0.0\commons-compiler-3.0.0.jar;E:\.m2\repository\org\antlr\antlr4-runtime\4.5.3\antlr4-runtime-4.5.3.jar;E:\.m2\repository\org\apache\parquet\parquet-column\1.8.2\parquet-column-1.8.2.jar;E:\.m2\repository\org\apache\parquet\parquet-common\1.8.2\parquet-common-1.8.2.jar;E:\.m2\repository\org\apache\parquet\parquet-encoding\1.8.2\parquet-encoding-1.8.2.jar;E:\.m2\repository\org\apache\parquet\parquet-hadoop\1.8.2\parquet-hadoop-1.8.2.jar;E:\.m2\repository\org\apache\parquet\parquet-format\2.3.1\parquet-format-2.3.1.jar;E:\.m2\repository\org\apache\parquet\parquet-jackson\1.8.2\parquet-jackson-1.8.2.jar;E:\.m2\repository\org\apache\kafka\kafka-clients\1.0.0\kafka-clients-1.0.0.jar;E:\.m2\repository\org\lz4\lz4-java\1.4\lz4-java-1.4.jar;E:\.m2\repository\org\apache\hadoop\hadoop-common\2.6.4\hadoop-common-2.6.4.jar;E:\.m2\repository\org\apache\hadoop\hadoop-annotations\2.6.4\hadoop-annotations-2.6.4.jar;D:\Program 
Files\Java\jdk1.8.0_77\lib\tools.jar;E:\.m2\repository\com\google\guava\guava\11.0.2\guava-11.0.2.jar;E:\.m2\repository\commons-cli\commons-cli\1.2\commons-cli-1.2.jar;E:\.m2\repository\xmlenc\xmlenc\0.52\xmlenc-0.52.jar;E:\.m2\repository\commons-httpclient\commons-httpclient\3.1\commons-httpclient-3.1.jar;E:\.m2\repository\commons-codec\commons-codec\1.4\commons-codec-1.4.jar;E:\.m2\repository\commons-io\commons-io\2.4\commons-io-2.4.jar;E:\.m2\repository\commons-collections\commons-collections\3.2.2\commons-collections-3.2.2.jar;E:\.m2\repository\javax\servlet\servlet-api\2.5\servlet-api-2.5.jar;E:\.m2\repository\org\mortbay\jetty\jetty\6.1.26\jetty-6.1.26.jar;E:\.m2\repository\org\mortbay\jetty\jetty-util\6.1.26\jetty-util-6.1.26.jar;E:\.m2\repository\com\sun\jersey\jersey-core\1.9\jersey-core-1.9.jar;E:\.m2\repository\com\sun\jersey\jersey-json\1.9\jersey-json-1.9.jar;E:\.m2\repository\org\codehaus\jettison\jettison\1.1\jettison-1.1.jar;E:\.m2\repository\com\sun\xml\bind\jaxb-impl\2.2.3-1\jaxb-impl-2.2.3-1.jar;E:\.m2\repository\org\codehaus\jackson\jackson-xc\1.8.3\jackson-xc-1.8.3.jar;E:\.m2\repository\com\sun\jersey\jersey-server\1.9\jersey-server-1.9.jar;E:\.m2\repository\asm\asm\3.1\asm-3.1.jar;E:\.m2\repository\tomcat\jasper-compiler\5.5.23\jasper-compiler-5.5.23.jar;E:\.m2\repository\tomcat\jasper-runtime\5.5.23\jasper-runtime-5.5.23.jar;E:\.m2\repository\javax\servlet\jsp\jsp-api\2.1\jsp-api-2.1.jar;E:\.m2\repository\commons-el\commons-el\1.0\commons-el-1.0.jar;E:\.m2\repository\commons-logging\commons-logging\1.1.3\commons-logging-1.1.3.jar;E:\.m2\repository\commons-lang\commons-lang\2.6\commons-lang-2.6.jar;E:\.m2\repository\commons-configuration\commons-configuration\1.6\commons-configuration-1.6.jar;E:\.m2\repository\commons-digester\commons-digester\1.8\commons-digester-1.8.jar;E:\.m2\repository\commons-beanutils\commons-beanutils-core\1.8.0\commons-beanutils-core-1.8.0.jar;E:\.m2\repository\org\codehaus\jackson\jackson-core-asl\1.9.13\jackson-core-a
sl-1.9.13.jar;E:\.m2\repository\org\codehaus\jackson\jackson-mapper-asl\1.9.13\jackson-mapper-asl-1.9.13.jar;E:\.m2\repository\com\google\protobuf\protobuf-java\2.5.0\protobuf-java-2.5.0.jar;E:\.m2\repository\org\apache\hadoop\hadoop-auth\2.6.4\hadoop-auth-2.6.4.jar;E:\.m2\repository\org\apache\directory\server\apacheds-kerberos-codec\2.0.0-M15\apacheds-kerberos-codec-2.0.0-M15.jar;E:\.m2\repository\org\apache\directory\server\apacheds-i18n\2.0.0-M15\apacheds-i18n-2.0.0-M15.jar;E:\.m2\repository\org\apache\directory\api\api-asn1-api\1.0.0-M20\api-asn1-api-1.0.0-M20.jar;E:\.m2\repository\org\apache\directory\api\api-util\1.0.0-M20\api-util-1.0.0-M20.jar;E:\.m2\repository\com\jcraft\jsch\0.1.42\jsch-0.1.42.jar;E:\.m2\repository\org\apache\curator\curator-client\2.6.0\curator-client-2.6.0.jar;E:\.m2\repository\org\htrace\htrace-core\3.0.4\htrace-core-3.0.4.jar;E:\.m2\repository\org\apache\zookeeper\zookeeper\3.4.6\zookeeper-3.4.6.jar;E:\.m2\repository\org\apache\commons\commons-compress\1.4.1\commons-compress-1.4.1.jar;E:\.m2\repository\org\tukaani\xz\1.0\xz-1.0.jar;E:\.m2\repository\org\apache\hadoop\hadoop-hdfs\2.6.4\hadoop-hdfs-2.6.4.jar;E:\.m2\repository\commons-daemon\commons-daemon\1.0.13\commons-daemon-1.0.13.jar;E:\.m2\repository\xerces\xercesImpl\2.9.1\xercesImpl-2.9.1.jar;E:\.m2\repository\xml-apis\xml-apis\1.3.04\xml-apis-1.3.04.jar;E:\.m2\repository\org\apache\hadoop\hadoop-mapreduce-client-core\2.6.4\hadoop-mapreduce-client-core-2.6.4.jar;E:\.m2\repository\org\apache\hadoop\hadoop-yarn-common\2.6.4\hadoop-yarn-common-2.6.4.jar;E:\.m2\repository\javax\xml\bind\jaxb-api\2.2.2\jaxb-api-2.2.2.jar;E:\.m2\repository\javax\xml\stream\stax-api\1.0-2\stax-api-1.0-2.jar;E:\.m2\repository\com\sun\jersey\jersey-client\1.9\jersey-client-1.9.jar;E:\.m2\repository\com\google\inject\guice\3.0\guice-3.0.jar;E:\.m2\repository\javax\inject\javax.inject\1\javax.inject-1.jar;E:\.m2\repository\aopalliance\aopalliance\1.0\aopalliance-1.0.jar;E:\.m2\repository\com\sun\jersey\cont
ribs\jersey-guice\1.9\jersey-guice-1.9.jar;E:\.m2\repository\com\google\inject\extensions\guice-servlet\3.0\guice-servlet-3.0.jar;E:\.m2\repository\com\github\stephenc\findbugs\findbugs-annotations\1.3.9-1\findbugs-annotations-1.3.9-1.jar;E:\.m2\repository\junit\junit\4.12\junit-4.12.jar;E:\.m2\repository\org\hamcrest\hamcrest-core\1.3\hamcrest-core-1.3.jar;E:\.m2\repository\org\apache\hbase\hbase-client\1.4.0\hbase-client-1.4.0.jar;E:\.m2\repository\org\apache\hbase\hbase-annotations\1.4.0\hbase-annotations-1.4.0.jar;E:\.m2\repository\org\apache\hbase\hbase-hadoop2-compat\1.4.0\hbase-hadoop2-compat-1.4.0.jar;E:\.m2\repository\org\apache\htrace\htrace-core\3.1.0-incubating\htrace-core-3.1.0-incubating.jar;E:\.m2\repository\org\jruby\jcodings\jcodings\1.0.8\jcodings-1.0.8.jar;E:\.m2\repository\org\jruby\joni\joni\2.1.2\joni-2.1.2.jar;E:\.m2\repository\com\yammer\metrics\metrics-core\2.2.0\metrics-core-2.2.0.jar;E:\.m2\repository\org\apache\hbase\hbase-server\1.4.0\hbase-server-1.4.0.jar;E:\.m2\repository\org\apache\hbase\hbase-procedure\1.4.0\hbase-procedure-1.4.0.jar;E:\.m2\repository\org\apache\hbase\hbase-common\1.4.0\hbase-common-1.4.0-tests.jar;E:\.m2\repository\org\apache\hbase\hbase-prefix-tree\1.4.0\hbase-prefix-tree-1.4.0.jar;E:\.m2\repository\org\apache\hbase\hbase-metrics-api\1.4.0\hbase-metrics-api-1.4.0.jar;E:\.m2\repository\org\apache\hbase\hbase-metrics\1.4.0\hbase-metrics-1.4.0.jar;E:\.m2\repository\org\apache\commons\commons-math\2.2\commons-math-2.2.jar;E:\.m2\repository\org\mortbay\jetty\jetty-sslengine\6.1.26\jetty-sslengine-6.1.26.jar;E:\.m2\repository\org\mortbay\jetty\jsp-2.1\6.1.14\jsp-2.1-6.1.14.jar;E:\.m2\repository\org\mortbay\jetty\jsp-api-2.1\6.1.14\jsp-api-2.1-6.1.14.jar;E:\.m2\repository\org\mortbay\jetty\servlet-api-2.5\6.1.14\servlet-api-2.5-6.1.14.jar;E:\.m2\repository\org\codehaus\jackson\jackson-jaxrs\1.9.13\jackson-jaxrs-1.9.13.jar;E:\.m2\repository\org\jamon\jamon-runtime\2.4.1\jamon-runtime-2.4.1.jar;E:\.m2\repository\com\lmax\d
isruptor\3.3.0\disruptor-3.3.0.jar;E:\.m2\repository\org\apache\httpcomponents\httpclient\4.5.2\httpclient-4.5.2.jar;E:\.m2\repository\org\apache\httpcomponents\httpcore\4.4.4\httpcore-4.4.4.jar;E:\.m2\repository\org\apache\hbase\hbase-common\1.4.0\hbase-common-1.4.0.jar;E:\.m2\repository\org\apache\hbase\hbase-protocol\1.4.0\hbase-protocol-1.4.0.jar;E:\.m2\repository\org\apache\hbase\hbase-hadoop-compat\1.4.0\hbase-hadoop-compat-1.4.0.jar;E:\.m2\repository\jline\jline\0.9.94\jline-0.9.94.jar;E:\.m2\repository\org\apache\yetus\audience-annotations\0.5.0\audience-annotations-0.5.0.jar;E:\.m2\repository\mysql\mysql-connector-java\5.1.30\mysql-connector-java-5.1.30.jar;E:\.m2\repository\commons-beanutils\commons-beanutils\1.8.0\commons-beanutils-1.8.0.jar;E:\.m2\repository\net\sf\ezmorph\ezmorph\1.0.6\ezmorph-1.0.6.jar;E:\.m2\repository\org\jsoup\jsoup\1.11.3\jsoup-1.11.3.jar;E:\.m2\repository\net\opentsdb\opentsdb\2.3.0\opentsdb-2.3.0.jar;E:\.m2\repository\com\fasterxml\jackson\core\jackson-annotations\2.4.3\jackson-annotations-2.4.3.jar;E:\.m2\repository\com\fasterxml\jackson\core\jackson-core\2.4.3\jackson-core-2.4.3.jar;E:\.m2\repository\com\stumbleupon\async\1.4.0\async-1.4.0.jar;E:\.m2\repository\org\apache\commons\commons-jexl\2.1.1\commons-jexl-2.1.1.jar;E:\.m2\repository\org\jgrapht\jgrapht-core\0.9.1\jgrapht-core-0.9.1.jar;E:\.m2\repository\org\slf4j\log4j-over-slf4j\1.7.7\log4j-over-slf4j-1.7.7.jar;E:\.m2\repository\ch\qos\logback\logback-core\1.0.13\logback-core-1.0.13.jar;E:\.m2\repository\ch\qos\logback\logback-classic\1.0.13\logback-classic-1.0.13.jar;E:\.m2\repository\com\google\gwt\gwt-user\2.6.0\gwt-user-2.6.0.jar;E:\.m2\repository\javax\validation\validation-api\1.0.0.GA\validation-api-1.0.0.GA-sources.jar;E:\.m2\repository\org\json\json\20090211\json-20090211.jar;E:\.m2\repository\net\opentsdb\opentsdb_gwt_theme\1.0.0\opentsdb_gwt_theme-1.0.0.jar;E:\.m2\repository\org\hbase\asynchbase\1.7.2\asynchbase-1.7.2.jar;E:\.m2\repository\com\google\code\gson
\gson\2.7\gson-2.7.jar;E:\.m2\repository\com\alibaba\fastjson\1.2.10\fastjson-1.2.10.jar;E:\.m2\repository\org\slf4j\slf4j-log4j12\1.7.6\slf4j-log4j12-1.7.6.jar;D:\Program Files\JetBrains\IntelliJ IDEA 2017.2.3\lib\idea_rt.jar 
16:17:09.357 [main] INFO  org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=D:\Program Files\Java\jdk1.8.0_77\bin;C:\Windows\Sun\Java\bin;C:\Windows\system32;C:\Windows;C:\Program Files\VanDyke Software\Clients\;C:\Program Files (x86)\Common Files\Oracle\Java\javapath;C:\ProgramData\Oracle\Java\javapath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;%JAVA_HOME%\bin;D:\Program Files\MySQL\MySQL Server 5.7\bin;D:\Program Files (x86)\apache-maven-3.3.9\bin;E:\intellij_Project\opentsdb_dev\src\main\java\net\opentsdb;D:\Program Files\Git\cmd;D:\Program Files\gnuplot\bin;D:\Program Files\Java\jdk1.8.0_77\bin;D:\Program Files\hadoop-2.6.4\bin;D:\Program Files (x86)\cmder;D:\Program Files\EmEditor;C:\Users\Administrator\AppData\Local\Microsoft\WindowsApps;d:\Users\Administrator\AppData\Local\Programs\Microsoft VS Code\bin;D:\Program Files\hadoop-2.6.4bin;. 
16:17:09.357 [main] INFO  org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=C:\Users\ADMINI~1\AppData\Local\Temp\ 
16:17:09.357 [main] INFO  org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=<NA> 
16:17:09.357 [main] INFO  org.apache.zookeeper.ZooKeeper - Client environment:os.name=Windows 10 
16:17:09.357 [main] INFO  org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64 
16:17:09.357 [main] INFO  org.apache.zookeeper.ZooKeeper - Client environment:os.version=10.0 
16:17:09.357 [main] INFO  org.apache.zookeeper.ZooKeeper - Client environment:user.name=Administrator 
16:17:09.357 [main] INFO  org.apache.zookeeper.ZooKeeper - Client environment:user.home=C:\Users\Administrator 
16:17:09.357 [main] INFO  org.apache.zookeeper.ZooKeeper - Client environment:user.dir=E:\intellij_Project\AllDemo 
16:17:09.358 [main] INFO  org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=192.168.211.4:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@54c5a2ff 
16:17:09.360 [main] DEBUG org.apache.zookeeper.ClientCnxn - zookeeper.disableAutoWatchReset is false 
16:17:09.407 [main-SendThread(192.168.211.4:2181)] INFO  org.apache.zookeeper.ClientCnxn - Opening socket connection to server 192.168.211.4/192.168.211.4:2181. Will not attempt to authenticate using SASL (unknown error) 
16:17:09.408 [main-SendThread(192.168.211.4:2181)] INFO  org.apache.zookeeper.ClientCnxn - Socket connection established to 192.168.211.4/192.168.211.4:2181, initiating session 
16:17:09.409 [main-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Session establishment request sent on 192.168.211.4/192.168.211.4:2181 
16:17:09.416 [main-SendThread(192.168.211.4:2181)] INFO  org.apache.zookeeper.ClientCnxn - Session establishment complete on server 192.168.211.4/192.168.211.4:2181, sessionid = 0x4000005536a0007, negotiated timeout = 40000 
16:17:09.417 [main-EventThread] DEBUG o.a.h.h.zookeeper.ZooKeeperWatcher - hconnection-0x33c911a10x0, quorum=192.168.211.4:2181, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 
16:17:09.417 [main-EventThread] DEBUG o.a.h.h.zookeeper.ZooKeeperWatcher - hconnection-0x33c911a1-0x4000005536a0007 connected 
16:17:09.431 [main-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x4000005536a0007, packet:: clientPath:null serverPath:null finished:false header:: 1,3  replyHeader:: 1,339302416519,0  request:: '/hbase/hbaseid,F  response:: s{4294967312,339302416395,1535859306918,1545465087921,17,0,0,0,67,0,4294967312}  
16:17:09.433 [main-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x4000005536a0007, packet:: clientPath:null serverPath:null finished:false header:: 2,4  replyHeader:: 2,339302416519,0  request:: '/hbase/hbaseid,F  response:: #ffffffff000146d61737465723a36303030306932ffffff8effffff960fffffff371ffffffec50425546a2433326339633631312d613435322d343130352d396138352d613166343766353335373633,s{4294967312,339302416395,1535859306918,1545465087921,17,0,0,0,67,0,4294967312}  
16:17:09.590 [main] DEBUG i.n.u.i.l.InternalLoggerFactory - Using SLF4J as the default logging framework 
16:17:09.593 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.level: simple 
16:17:09.594 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.maxRecords: 4 
16:17:09.606 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.noUnsafe: false 
16:17:09.607 [main] DEBUG i.n.util.internal.PlatformDependent0 - java.nio.Buffer.address: available 
16:17:09.607 [main] DEBUG i.n.util.internal.PlatformDependent0 - sun.misc.Unsafe.theUnsafe: available 
16:17:09.608 [main] DEBUG i.n.util.internal.PlatformDependent0 - sun.misc.Unsafe.copyMemory: available 
16:17:09.608 [main] DEBUG i.n.util.internal.PlatformDependent0 - direct buffer constructor: available 
16:17:09.609 [main] DEBUG i.n.util.internal.PlatformDependent0 - java.nio.Bits.unaligned: available, true 
16:17:09.609 [main] DEBUG i.n.util.internal.PlatformDependent0 - java.nio.DirectByteBuffer.<init>(long, int): available 
16:17:09.609 [main] DEBUG io.netty.util.internal.Cleaner0 - java.nio.ByteBuffer.cleaner(): available 
16:17:09.609 [main] DEBUG i.n.util.internal.PlatformDependent - Platform: Windows 
16:17:09.610 [main] DEBUG i.n.util.internal.PlatformDependent - Java version: 8 
16:17:09.610 [main] DEBUG i.n.util.internal.PlatformDependent - sun.misc.Unsafe: available 
16:17:09.610 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.noJavassist: false 
16:17:09.658 [main] DEBUG i.n.util.internal.PlatformDependent - Javassist: available 
16:17:09.658 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.tmpdir: C:\Users\ADMINI~1\AppData\Local\Temp (java.io.tmpdir) 
16:17:09.658 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.bitMode: 64 (sun.arch.data.model) 
16:17:09.658 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.noPreferDirect: false 
16:17:09.658 [main] DEBUG i.n.util.internal.PlatformDependent - io.netty.maxDirectMemory: 3791650816 bytes 
16:17:09.658 [main] DEBUG i.n.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@7efaad5a 
16:17:09.662 [main] DEBUG i.n.util.internal.PlatformDependent - org.jctools-core.MpscChunkedArrayQueue: available 
16:17:09.666 [main] DEBUG i.n.util.internal.ThreadLocalRandom - -Dio.netty.initialSeedUniquifier: 0x8e70dadcbd6b5709 (took 0 ms) 
16:17:09.672 [main] DEBUG o.apache.hadoop.hbase.util.ClassSize - Using Unsafe to estimate memory layout 
16:17:09.675 [main] DEBUG o.a.h.hbase.ipc.AbstractRpcClient - Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@466276d8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 
16:17:09.706 [main-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x4000005536a0007, packet:: clientPath:null serverPath:null finished:false header:: 3,4  replyHeader:: 3,339302416519,0  request:: '/hbase/meta-region-server,F  response:: #ffffffff0001a726567696f6e7365727665723a3136303230ffffff9a46fffffffdffffffec367effffffeaffffff9e50425546a13a77365727665723410ffffff947d18ffffff8dffffff9affffff96ffffffa7fffffffd2c100183,s{339302416421,339302416421,1545465097164,1545465097164,0,0,0,0,60,0,339302416421}  
16:17:09.714 [main-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x4000005536a0007, packet:: clientPath:null serverPath:null finished:false header:: 4,8  replyHeader:: 4,339302416519,0  request:: '/hbase,F  response:: v{'replication,'meta-region-server,'rs,'splitWAL,'backup-masters,'table-lock,'flush-table-proc,'master-maintenance,'region-in-transition,'online-snapshot,'switch,'master,'running,'recovering-regions,'draining,'namespace,'hbaseid,'table}  
16:17:09.850 [htable-pool3-t1] DEBUG o.a.hadoop.hbase.ipc.RpcConnection - Use SIMPLE authentication for service ClientService, sasl=false 
16:17:09.896 [htable-pool3-t1] DEBUG o.a.h.h.ipc.BlockingRpcConnection - Connecting to server4/192.168.211.4:16020 
table exists, trying to recreate table...... 
16:17:09.940 [main] INFO  o.a.h.h.z.RecoverableZooKeeper - Process identifier=hconnection-0x5adb0db3 connecting to ZooKeeper ensemble=192.168.211.4:2181 
16:17:09.940 [main] INFO  org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=192.168.211.4:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@3f270e0a 
16:17:09.941 [main-SendThread(192.168.211.4:2181)] INFO  org.apache.zookeeper.ClientCnxn - Opening socket connection to server 192.168.211.4/192.168.211.4:2181. Will not attempt to authenticate using SASL (unknown error) 
16:17:09.942 [main-SendThread(192.168.211.4:2181)] INFO  org.apache.zookeeper.ClientCnxn - Socket connection established to 192.168.211.4/192.168.211.4:2181, initiating session 
16:17:09.942 [main-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Session establishment request sent on 192.168.211.4/192.168.211.4:2181 
16:17:09.945 [main-SendThread(192.168.211.4:2181)] INFO  org.apache.zookeeper.ClientCnxn - Session establishment complete on server 192.168.211.4/192.168.211.4:2181, sessionid = 0x4000005536a0008, negotiated timeout = 40000 
16:17:09.945 [main-EventThread] DEBUG o.a.h.h.zookeeper.ZooKeeperWatcher - hconnection-0x5adb0db30x0, quorum=192.168.211.4:2181, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 
16:17:09.945 [main-EventThread] DEBUG o.a.h.h.zookeeper.ZooKeeperWatcher - hconnection-0x5adb0db3-0x4000005536a0008 connected 
16:17:09.946 [main-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x4000005536a0008, packet:: clientPath:null serverPath:null finished:false header:: 1,3  replyHeader:: 1,339302416520,0  request:: '/hbase/hbaseid,F  response:: s{4294967312,339302416395,1535859306918,1545465087921,17,0,0,0,67,0,4294967312}  
16:17:09.947 [main-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x4000005536a0008, packet:: clientPath:null serverPath:null finished:false header:: 2,4  replyHeader:: 2,339302416520,0  request:: '/hbase/hbaseid,F  response:: #ffffffff000146d61737465723a36303030306932ffffff8effffff960fffffff371ffffffec50425546a2433326339633631312d613435322d343130352d396138352d613166343766353335373633,s{4294967312,339302416395,1535859306918,1545465087921,17,0,0,0,67,0,4294967312}  
16:17:09.947 [main] DEBUG o.a.h.hbase.ipc.AbstractRpcClient - Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1a760689, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 
16:17:09.954 [main-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x4000005536a0008, packet:: clientPath:null serverPath:null finished:false header:: 3,4  replyHeader:: 3,339302416520,0  request:: '/hbase/meta-region-server,F  response:: #ffffffff0001a726567696f6e7365727665723a3136303230ffffff9a46fffffffdffffffec367effffffeaffffff9e50425546a13a77365727665723410ffffff947d18ffffff8dffffff9affffff96ffffffa7fffffffd2c100183,s{339302416421,339302416421,1545465097164,1545465097164,0,0,0,0,60,0,339302416421}  
16:17:09.954 [main-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x4000005536a0008, packet:: clientPath:null serverPath:null finished:false header:: 4,8  replyHeader:: 4,339302416520,0  request:: '/hbase,F  response:: v{'replication,'meta-region-server,'rs,'splitWAL,'backup-masters,'table-lock,'flush-table-proc,'master-maintenance,'region-in-transition,'online-snapshot,'switch,'master,'running,'recovering-regions,'draining,'namespace,'hbaseid,'table}  
16:17:10.018 [hconnection-0x5adb0db3-metaLookup-shared--pool5-t1] DEBUG o.a.hadoop.hbase.ipc.RpcConnection - Use SIMPLE authentication for service ClientService, sasl=false 
16:17:10.019 [hconnection-0x5adb0db3-metaLookup-shared--pool5-t1] DEBUG o.a.h.h.ipc.BlockingRpcConnection - Connecting to server4/192.168.211.4:16020 
16:17:10.065 [hconnection-0x5adb0db3-shared--pool4-t1] DEBUG o.a.hadoop.hbase.ipc.RpcConnection - Use SIMPLE authentication for service ClientService, sasl=false 
16:17:10.082 [hconnection-0x5adb0db3-shared--pool4-t1] DEBUG o.a.h.h.ipc.BlockingRpcConnection - Connecting to server6/192.168.211.6:16020 
put data to mytable successfully 
16:17:10.161 [main] DEBUG o.a.h.h.mapreduce.TableMapReduceUtil - For class org.apache.hadoop.hbase.HConstants, using jar /E:/.m2/repository/org/apache/hbase/hbase-common/1.4.0/hbase-common-1.4.0.jar 
16:17:10.164 [main] DEBUG o.a.h.h.mapreduce.TableMapReduceUtil - For class org.apache.hadoop.hbase.protobuf.generated.ClientProtos, using jar /E:/.m2/repository/org/apache/hbase/hbase-protocol/1.4.0/hbase-protocol-1.4.0.jar 
16:17:10.164 [main] DEBUG o.a.h.h.mapreduce.TableMapReduceUtil - For class org.apache.hadoop.hbase.client.Put, using jar /E:/.m2/repository/org/apache/hbase/hbase-client/1.4.0/hbase-client-1.4.0.jar 
16:17:10.165 [main] DEBUG o.a.h.h.mapreduce.TableMapReduceUtil - For class org.apache.hadoop.hbase.CompatibilityFactory, using jar /E:/.m2/repository/org/apache/hbase/hbase-hadoop-compat/1.4.0/hbase-hadoop-compat-1.4.0.jar 
16:17:10.165 [main] DEBUG o.a.h.h.mapreduce.TableMapReduceUtil - For class org.apache.hadoop.hbase.mapreduce.JobUtil, using jar /E:/.m2/repository/org/apache/hbase/hbase-hadoop2-compat/1.4.0/hbase-hadoop2-compat-1.4.0.jar 
16:17:10.166 [main] DEBUG o.a.h.h.mapreduce.TableMapReduceUtil - For class org.apache.hadoop.hbase.mapreduce.TableMapper, using jar /E:/.m2/repository/org/apache/hbase/hbase-server/1.4.0/hbase-server-1.4.0.jar 
16:17:10.166 [main] DEBUG o.a.h.h.mapreduce.TableMapReduceUtil - For class org.apache.hadoop.hbase.codec.prefixtree.PrefixTreeCodec, using jar /E:/.m2/repository/org/apache/hbase/hbase-prefix-tree/1.4.0/hbase-prefix-tree-1.4.0.jar 
16:17:10.167 [main] DEBUG o.a.h.h.mapreduce.TableMapReduceUtil - For class org.apache.zookeeper.ZooKeeper, using jar /E:/.m2/repository/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar 
16:17:10.167 [main] DEBUG o.a.h.h.mapreduce.TableMapReduceUtil - For class io.netty.channel.Channel, using jar /E:/.m2/repository/io/netty/netty-all/4.0.43.Final/netty-all-4.0.43.Final.jar 
16:17:10.167 [main] DEBUG o.a.h.h.mapreduce.TableMapReduceUtil - For class com.google.protobuf.Message, using jar /E:/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar 
16:17:10.168 [main] DEBUG o.a.h.h.mapreduce.TableMapReduceUtil - For class com.google.common.collect.Lists, using jar /E:/.m2/repository/com/google/guava/guava/11.0.2/guava-11.0.2.jar 
16:17:10.168 [main] DEBUG o.a.h.h.mapreduce.TableMapReduceUtil - For class org.apache.htrace.Trace, using jar /E:/.m2/repository/org/apache/htrace/htrace-core/3.1.0-incubating/htrace-core-3.1.0-incubating.jar 
16:17:10.169 [main] DEBUG o.a.h.h.mapreduce.TableMapReduceUtil - For class com.yammer.metrics.core.MetricsRegistry, using jar /E:/.m2/repository/com/yammer/metrics/metrics-core/2.2.0/metrics-core-2.2.0.jar 
16:17:10.172 [main] DEBUG o.a.h.h.mapreduce.TableMapReduceUtil - For class org.apache.hadoop.io.Text, using jar /E:/.m2/repository/org/apache/hadoop/hadoop-common/2.6.4/hadoop-common-2.6.4.jar 
16:17:10.173 [main] DEBUG o.a.h.h.mapreduce.TableMapReduceUtil - For class org.apache.hadoop.io.IntWritable, using jar /E:/.m2/repository/org/apache/hadoop/hadoop-common/2.6.4/hadoop-common-2.6.4.jar 
16:17:10.173 [main] DEBUG o.a.h.h.mapreduce.TableMapReduceUtil - For class org.apache.hadoop.hbase.mapreduce.TableInputFormat, using jar /E:/.m2/repository/org/apache/hbase/hbase-server/1.4.0/hbase-server-1.4.0.jar 
16:17:10.173 [main] DEBUG o.a.h.h.mapreduce.TableMapReduceUtil - For class org.apache.hadoop.io.LongWritable, using jar /E:/.m2/repository/org/apache/hadoop/hadoop-common/2.6.4/hadoop-common-2.6.4.jar 
16:17:10.174 [main] DEBUG o.a.h.h.mapreduce.TableMapReduceUtil - For class org.apache.hadoop.io.Text, using jar /E:/.m2/repository/org/apache/hadoop/hadoop-common/2.6.4/hadoop-common-2.6.4.jar 
16:17:10.174 [main] DEBUG o.a.h.h.mapreduce.TableMapReduceUtil - For class org.apache.hadoop.mapreduce.lib.output.TextOutputFormat, using jar /E:/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.6.4/hadoop-mapreduce-client-core-2.6.4.jar 
16:17:10.175 [main] DEBUG o.a.h.h.mapreduce.TableMapReduceUtil - For class org.apache.hadoop.mapreduce.lib.partition.HashPartitioner, using jar /E:/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.6.4/hadoop-mapreduce-client-core-2.6.4.jar 
16:17:10.188 [main] DEBUG o.a.hadoop.hdfs.BlockReaderLocal - dfs.client.use.legacy.blockreader.local = false 
16:17:10.189 [main] DEBUG o.a.hadoop.hdfs.BlockReaderLocal - dfs.client.read.shortcircuit = false 
16:17:10.189 [main] DEBUG o.a.hadoop.hdfs.BlockReaderLocal - dfs.client.domain.socket.data.traffic = false 
16:17:10.189 [main] DEBUG o.a.hadoop.hdfs.BlockReaderLocal - dfs.domain.socket.path =  
16:17:10.196 [main] DEBUG org.apache.hadoop.hdfs.DFSClient - No KeyProvider found. 
16:17:10.211 [main] DEBUG o.apache.hadoop.io.retry.RetryUtils - multipleLinearRandomRetry = null 
16:17:10.220 [main] DEBUG org.apache.hadoop.ipc.Server - rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcRequestWrapper, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@35390ee3 
16:17:10.315 [main] DEBUG org.apache.hadoop.ipc.Client - getting client out of cache: org.apache.hadoop.ipc.Client@aafcffa 
16:17:10.520 [main] DEBUG o.a.hadoop.util.PerformanceAdvisory - Both short-circuit local reads and UNIX domain socket are disabled. 
16:17:10.524 [main] DEBUG o.a.h.h.p.d.s.DataTransferSaslUtil - DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection 
16:17:10.528 [main] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Job.connect(Job.java:1262) 
16:17:10.532 [main] DEBUG org.apache.hadoop.mapreduce.Cluster - Trying ClientProtocolProvider : org.apache.hadoop.mapred.LocalClientProtocolProvider 
16:17:10.536 [main] INFO  o.a.h.conf.Configuration.deprecation - session.id is deprecated. Instead, use dfs.metrics.session-id 
16:17:10.537 [main] INFO  o.a.hadoop.metrics.jvm.JvmMetrics - Initializing JVM Metrics with processName=JobTracker, sessionId= 
16:17:10.541 [main] DEBUG org.apache.hadoop.mapreduce.Cluster - Picked org.apache.hadoop.mapred.LocalClientProtocolProvider as the ClientProtocolProvider 
16:17:10.541 [main] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Cluster.getFileSystem(Cluster.java:161) 
16:17:10.544 [main] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Job.submit(Job.java:1294) 
16:17:10.557 [main] DEBUG org.apache.hadoop.ipc.Client - The ping interval is 60000 ms. 
16:17:10.557 [main] DEBUG org.apache.hadoop.ipc.Client - Connecting to /192.168.211.4:9000 
16:17:10.562 [IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator: starting, having connections 1 
16:17:10.564 [IPC Parameter Sending Thread #0] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator sending #0 
16:17:10.568 [IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator got value #0 
16:17:10.569 [main] DEBUG o.a.hadoop.ipc.ProtobufRpcEngine - Call: getFileInfo took 23ms 
16:17:10.661 [main] DEBUG o.apache.hadoop.io.nativeio.NativeIO - Initialized cache for IDs to User/Group mapping with a  cache timeout of 14400 seconds. 
16:17:10.685 [main] DEBUG o.a.hadoop.mapreduce.JobSubmitter - Configuring job job_local145551582_0001 with file:/tmp/hadoop-Administrator/mapred/staging/Administrator145551582/.staging/job_local145551582_0001 as the submit dir 
16:17:10.685 [main] DEBUG o.a.hadoop.mapreduce.JobSubmitter - adding the following namenodes' delegation tokens:[file:///] 
16:17:10.992 [main] WARN  o.a.h.mapreduce.JobResourceUploader - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this. 
16:17:10.992 [main] DEBUG o.a.h.mapreduce.JobResourceUploader - default FileSystem: file:/// 
16:17:10.999 [main] WARN  o.a.h.mapreduce.JobResourceUploader - No job jar file set.  User classes may not be found. See Job or Job#setJar(String). 
16:17:11.927 [main] DEBUG o.a.hadoop.mapreduce.JobSubmitter - Creating splits at file:/tmp/hadoop-Administrator/mapred/staging/Administrator145551582/.staging/job_local145551582_0001 
16:17:11.928 [main] INFO  o.a.h.h.z.RecoverableZooKeeper - Process identifier=hconnection-0x315ba14a connecting to ZooKeeper ensemble=192.168.211.4:2181 
16:17:11.928 [main] INFO  org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=192.168.211.4:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@17f9344b 
16:17:11.929 [main-SendThread(192.168.211.4:2181)] INFO  org.apache.zookeeper.ClientCnxn - Opening socket connection to server 192.168.211.4/192.168.211.4:2181. Will not attempt to authenticate using SASL (unknown error) 
16:17:11.930 [main-SendThread(192.168.211.4:2181)] INFO  org.apache.zookeeper.ClientCnxn - Socket connection established to 192.168.211.4/192.168.211.4:2181, initiating session 
16:17:11.930 [main-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Session establishment request sent on 192.168.211.4/192.168.211.4:2181 
16:17:11.933 [main-SendThread(192.168.211.4:2181)] INFO  org.apache.zookeeper.ClientCnxn - Session establishment complete on server 192.168.211.4/192.168.211.4:2181, sessionid = 0x4000005536a0009, negotiated timeout = 40000 
16:17:11.933 [main-EventThread] DEBUG o.a.h.h.zookeeper.ZooKeeperWatcher - hconnection-0x315ba14a0x0, quorum=192.168.211.4:2181, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 
16:17:11.933 [main-EventThread] DEBUG o.a.h.h.zookeeper.ZooKeeperWatcher - hconnection-0x315ba14a-0x4000005536a0009 connected 
16:17:11.933 [main-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x4000005536a0009, packet:: clientPath:null serverPath:null finished:false header:: 1,3  replyHeader:: 1,339302416521,0  request:: '/hbase/hbaseid,F  response:: s{4294967312,339302416395,1535859306918,1545465087921,17,0,0,0,67,0,4294967312}
16:17:11.934 [main-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x4000005536a0009, packet:: clientPath:null serverPath:null finished:false header:: 2,4  replyHeader:: 2,339302416521,0  request:: '/hbase/hbaseid,F  response:: #ffffffff000146d61737465723a36303030306932ffffff8effffff960fffffff371ffffffec50425546a2433326339633631312d613435322d343130352d396138352d613166343766353335373633,s{4294967312,339302416395,1535859306918,1545465087921,17,0,0,0,67,0,4294967312}
16:17:11.934 [main] DEBUG o.a.h.hbase.ipc.AbstractRpcClient - Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@27f0ad19, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 
16:17:12.025 [main] INFO  o.a.h.h.util.RegionSizeCalculator - Calculating region sizes for table "mytable". 
16:17:12.028 [main-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x4000005536a0009, packet:: clientPath:null serverPath:null finished:false header:: 3,4  replyHeader:: 3,339302416521,0  request:: '/hbase/meta-region-server,F  response:: #ffffffff0001a726567696f6e7365727665723a3136303230ffffff9a46fffffffdffffffec367effffffeaffffff9e50425546a13a77365727665723410ffffff947d18ffffff8dffffff9affffff96ffffffa7fffffffd2c100183,s{339302416421,339302416421,1545465097164,1545465097164,0,0,0,0,60,0,339302416421}
16:17:12.029 [main-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x4000005536a0009, packet:: clientPath:null serverPath:null finished:false header:: 4,8  replyHeader:: 4,339302416521,0  request:: '/hbase,F  response:: v{'replication,'meta-region-server,'rs,'splitWAL,'backup-masters,'table-lock,'flush-table-proc,'master-maintenance,'region-in-transition,'online-snapshot,'switch,'master,'running,'recovering-regions,'draining,'namespace,'hbaseid,'table}  
16:17:12.029 [htable-pool7-t1] DEBUG o.a.hadoop.hbase.ipc.RpcConnection - Use SIMPLE authentication for service ClientService, sasl=false 
16:17:12.030 [htable-pool7-t1] DEBUG o.a.h.h.ipc.BlockingRpcConnection - Connecting to server4/192.168.211.4:16020 
16:17:12.037 [main-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x4000005536a0009, packet:: clientPath:null serverPath:null finished:false header:: 5,3  replyHeader:: 5,339302416521,0  request:: '/hbase,F  response:: s{4294967298,4294967298,1535859280602,1535859280602,0,102,0,0,0,18,339302416421}
16:17:12.039 [main-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x4000005536a0009, packet:: clientPath:null serverPath:null finished:false header:: 6,4  replyHeader:: 6,339302416521,0  request:: '/hbase/master,F  response:: #ffffffff000146d61737465723a3630303030bffffff92ffffffabffffffc66bffffffe560ffffffb650425546a14a77365727665723410ffffffe0ffffffd4318ffffff92ffffffc4ffffff95ffffffa7fffffffd2c10018ffffffeaffffffd43,s{339302416390,339302416390,1545465086598,1545465086598,0,0,0,360287989600157696,57,0,339302416390}
16:17:12.045 [main] DEBUG o.a.hadoop.hbase.ipc.RpcConnection - Use SIMPLE authentication for service MasterService, sasl=false 
16:17:12.045 [main] DEBUG o.a.h.h.ipc.BlockingRpcConnection - Connecting to server4/192.168.211.4:60000 
16:17:12.080 [main] DEBUG o.a.h.h.util.RegionSizeCalculator - Region mytable,,1545466443059.f549f94d90e70bf4f58dd1269c734813. has size 0 
16:17:12.081 [main] DEBUG o.a.h.h.util.RegionSizeCalculator - Region sizes calculated 
16:17:20.563 [IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator: closed 
16:17:20.563 [IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator: stopped, remaining connections 0 
16:17:23.048 [main-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x4000005536a0007 after 0ms 
16:17:23.289 [main-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x4000005536a0008 after 0ms 
16:17:25.373 [main-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x4000005536a0009 after 0ms 
16:17:36.380 [main-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x4000005536a0007 after 0ms 
16:17:36.622 [main-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x4000005536a0008 after 0ms 
16:17:38.706 [main-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x4000005536a0009 after 0ms 
16:17:49.714 [main-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x4000005536a0007 after 0ms 
16:17:49.957 [main-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x4000005536a0008 after 0ms 
16:17:52.040 [main-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x4000005536a0009 after 0ms 
16:17:57.441 [main] DEBUG o.a.h.h.m.TableInputFormatBase - getSplits: split -> 0 -> HBase table split(table name: mytable, scan: {"loadColumnFamiliesOnDemand":null,"startRow":"","stopRow":"","batch":-1,"cacheBlocks":true,"totalColumns":1,"maxResultSize":-1,"families":{"cf":["ALL"]},"caching":-1,"maxVersions":1,"timeRange":[0,9223372036854775807]}, start row: , end row: , region location: server6, encoded region name: f549f94d90e70bf4f58dd1269c734813)
16:17:57.441 [main] INFO  o.a.h.h.c.ConnectionManager$HConnectionImplementation - Closing master protocol: MasterService 
16:17:57.441 [main] INFO  o.a.h.h.c.ConnectionManager$HConnectionImplementation - Closing zookeeper sessionid=0x4000005536a0009 
16:17:57.441 [main] DEBUG org.apache.zookeeper.ZooKeeper - Closing session: 0x4000005536a0009 
16:17:57.441 [main] DEBUG org.apache.zookeeper.ClientCnxn - Closing client for session: 0x4000005536a0009 
16:17:57.444 [main-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x4000005536a0009, packet:: clientPath:null serverPath:null finished:false header:: 7,-11  replyHeader:: 7,339302416522,0  request:: null response:: null 
16:17:57.444 [main] DEBUG org.apache.zookeeper.ClientCnxn - Disconnecting client for session: 0x4000005536a0009 
16:17:57.445 [main] INFO  org.apache.zookeeper.ZooKeeper - Session: 0x4000005536a0009 closed 
16:17:57.445 [main-EventThread] INFO  org.apache.zookeeper.ClientCnxn - EventThread shut down 
16:17:57.445 [main] DEBUG o.a.h.hbase.ipc.AbstractRpcClient - Stopping rpc client 
16:17:57.489 [main] INFO  o.a.hadoop.mapreduce.JobSubmitter - number of splits:1 
···· 
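从日志中的 `number of splits:1` 可以看出：TableInputFormat 默认按 HBase 表的 region 来划分 split，本例的 mytable 只有一个 region（f549f94d90e70bf4f58dd1269c734813），因此只产生一个 map 任务。产生上面这段日志的驱动配置大致如下（示意代码，并非原文的驱动类；表名 mytable、列族 cf、输出路径 /output/mytable 取自日志，ZooKeeper 地址按本实验环境假定为 192.168.211.4:2181）：

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class FromHBaseDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // 假定：与日志中的 ZooKeeper ensemble 一致
        conf.set("hbase.zookeeper.quorum", "192.168.211.4");
        conf.set("hbase.zookeeper.property.clientPort", "2181");

        Job job = Job.getInstance(conf, "FromHBaseWordCount");
        // 不设置 jar 时就会出现日志中的 "No job jar file set" 警告
        job.setJarByClass(FromHBaseDriver.class);

        // 日志里的 scan 显示 families 只包含列族 cf
        Scan scan = new Scan();
        scan.addFamily(Bytes.toBytes("cf"));

        // initTableMapperJob 会设置 TableInputFormat，
        // 每个 region 对应一个 split，本例只有一个 region，故 splits:1
        TableMapReduceUtil.initTableMapperJob(
                "mytable", scan, HBaseMapper.class,
                Text.class, IntWritable.class, job);

        // 输出路径与日志中的 /output/mytable/_temporary/0 对应
        FileOutputFormat.setOutputPath(job, new Path("/output/mytable"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

注意：这类作业的参数解析若交给 ToolRunner（实现 Tool 接口），也可以消除日志中 "Hadoop command-line option parsing not performed" 的警告。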
16:17:57.718 [main] INFO  o.a.hadoop.mapreduce.JobSubmitter - Submitting tokens for job: job_local145551582_0001 
16:17:57.808 [main] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:331) 
16:17:57.879 [LocalDistributedCacheManager Downloader #0] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356) 
16:17:57.880 [LocalDistributedCacheManager Downloader #1] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356) 
16:17:57.881 [LocalDistributedCacheManager Downloader #3] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356) 
16:17:57.881 [LocalDistributedCacheManager Downloader #2] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356) 
16:17:57.883 [LocalDistributedCacheManager Downloader #6] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356) 
16:17:57.887 [LocalDistributedCacheManager Downloader #5] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356) 
16:17:57.887 [LocalDistributedCacheManager Downloader #7] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356) 
16:17:57.888 [LocalDistributedCacheManager Downloader #4] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356) 
16:17:57.888 [LocalDistributedCacheManager Downloader #10] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356) 
16:17:57.889 [LocalDistributedCacheManager Downloader #11] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356) 
16:17:57.889 [LocalDistributedCacheManager Downloader #14] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356) 
16:17:57.890 [LocalDistributedCacheManager Downloader #12] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356) 
16:17:57.890 [LocalDistributedCacheManager Downloader #13] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356) 
16:17:57.893 [LocalDistributedCacheManager Downloader #9] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356) 
16:17:57.894 [LocalDistributedCacheManager Downloader #8] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356) 
16:17:58.022 [LocalDistributedCacheManager Downloader #2] DEBUG o.apache.hadoop.yarn.util.FSDownload - Changing permissions for path file:/tmp/hadoop-Administrator/mapred/local/1545466677808_tmp/hbase-prefix-tree-1.4.0.jar to perm r-x------ 
16:17:58.022 [LocalDistributedCacheManager Downloader #7] DEBUG o.apache.hadoop.yarn.util.FSDownload - Changing permissions for path file:/tmp/hadoop-Administrator/mapred/local/1545466677813_tmp/hbase-hadoop-compat-1.4.0.jar to perm r-x------ 
16:17:58.022 [LocalDistributedCacheManager Downloader #4] DEBUG o.apache.hadoop.yarn.util.FSDownload - Changing permissions for path file:/tmp/hadoop-Administrator/mapred/local/1545466677810_tmp/hbase-common-1.4.0.jar to perm r-x------ 
16:17:58.022 [LocalDistributedCacheManager Downloader #0] DEBUG o.apache.hadoop.yarn.util.FSDownload - Changing permissions for path file:/tmp/hadoop-Administrator/mapred/local/1545466677806_tmp/metrics-core-2.2.0.jar to perm r-x------ 
16:17:58.022 [LocalDistributedCacheManager Downloader #5] DEBUG o.apache.hadoop.yarn.util.FSDownload - Changing permissions for path file:/tmp/hadoop-Administrator/mapred/local/1545466677811_tmp/hbase-hadoop2-compat-1.4.0.jar to perm r-x------ 
16:17:58.023 [LocalDistributedCacheManager Downloader #5] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.yarn.util.FSDownload.changePermissions(FSDownload.java:417) 
16:17:58.023 [LocalDistributedCacheManager Downloader #7] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.yarn.util.FSDownload.changePermissions(FSDownload.java:417) 
16:17:58.023 [LocalDistributedCacheManager Downloader #2] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.yarn.util.FSDownload.changePermissions(FSDownload.java:417) 
16:17:58.023 [LocalDistributedCacheManager Downloader #4] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.yarn.util.FSDownload.changePermissions(FSDownload.java:417) 
16:17:58.023 [LocalDistributedCacheManager Downloader #0] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.yarn.util.FSDownload.changePermissions(FSDownload.java:417) 
16:17:58.062 [LocalDistributedCacheManager Downloader #3] DEBUG o.apache.hadoop.yarn.util.FSDownload - Changing permissions for path file:/tmp/hadoop-Administrator/mapred/local/1545466677809_tmp/protobuf-java-2.5.0.jar to perm r-x------ 
16:17:58.063 [LocalDistributedCacheManager Downloader #3] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.yarn.util.FSDownload.changePermissions(FSDownload.java:417) 
16:17:58.079 [LocalDistributedCacheManager Downloader #11] DEBUG o.apache.hadoop.yarn.util.FSDownload - Changing permissions for path file:/tmp/hadoop-Administrator/mapred/local/1545466677817_tmp/hbase-client-1.4.0.jar to perm r-x------ 
16:17:58.079 [LocalDistributedCacheManager Downloader #11] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.yarn.util.FSDownload.changePermissions(FSDownload.java:417) 
16:17:58.080 [LocalDistributedCacheManager Downloader #10] DEBUG o.apache.hadoop.yarn.util.FSDownload - Changing permissions for path file:/tmp/hadoop-Administrator/mapred/local/1545466677816_tmp/hadoop-mapreduce-client-core-2.6.4.jar to perm r-x------ 
16:17:58.081 [LocalDistributedCacheManager Downloader #10] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.yarn.util.FSDownload.changePermissions(FSDownload.java:417) 
16:17:58.085 [LocalDistributedCacheManager Downloader #1] DEBUG o.apache.hadoop.yarn.util.FSDownload - Changing permissions for path file:/tmp/hadoop-Administrator/mapred/local/1545466677807_tmp/hadoop-common-2.6.4.jar to perm r-x------ 
16:17:58.085 [LocalDistributedCacheManager Downloader #1] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.yarn.util.FSDownload.changePermissions(FSDownload.java:417) 
16:17:58.093 [LocalDistributedCacheManager Downloader #6] DEBUG o.apache.hadoop.yarn.util.FSDownload - Changing permissions for path file:/tmp/hadoop-Administrator/mapred/local/1545466677812_tmp/zookeeper-3.4.6.jar to perm r-x------ 
16:17:58.093 [LocalDistributedCacheManager Downloader #6] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.yarn.util.FSDownload.changePermissions(FSDownload.java:417) 
16:17:58.160 [LocalDistributedCacheManager Downloader #12] DEBUG o.apache.hadoop.yarn.util.FSDownload - Changing permissions for path file:/tmp/hadoop-Administrator/mapred/local/1545466677818_tmp/htrace-core-3.1.0-incubating.jar to perm r-x------ 
16:17:58.160 [LocalDistributedCacheManager Downloader #12] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.yarn.util.FSDownload.changePermissions(FSDownload.java:417) 
16:17:58.243 [LocalDistributedCacheManager Downloader #8] DEBUG o.apache.hadoop.yarn.util.FSDownload - Changing permissions for path file:/tmp/hadoop-Administrator/mapred/local/1545466677814_tmp/guava-11.0.2.jar to perm r-x------ 
16:17:58.243 [LocalDistributedCacheManager Downloader #8] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.yarn.util.FSDownload.changePermissions(FSDownload.java:417) 
16:17:58.288 [LocalDistributedCacheManager Downloader #9] DEBUG o.apache.hadoop.yarn.util.FSDownload - Changing permissions for path file:/tmp/hadoop-Administrator/mapred/local/1545466677815_tmp/netty-all-4.0.43.Final.jar to perm r-x------ 
16:17:58.289 [LocalDistributedCacheManager Downloader #9] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.yarn.util.FSDownload.changePermissions(FSDownload.java:417) 
16:17:58.324 [LocalDistributedCacheManager Downloader #14] DEBUG o.apache.hadoop.yarn.util.FSDownload - Changing permissions for path file:/tmp/hadoop-Administrator/mapred/local/1545466677820_tmp/hbase-protocol-1.4.0.jar to perm r-x------ 
16:17:58.324 [LocalDistributedCacheManager Downloader #14] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.yarn.util.FSDownload.changePermissions(FSDownload.java:417) 
16:17:58.325 [LocalDistributedCacheManager Downloader #13] DEBUG o.apache.hadoop.yarn.util.FSDownload - Changing permissions for path file:/tmp/hadoop-Administrator/mapred/local/1545466677819_tmp/hbase-server-1.4.0.jar to perm r-x------ 
16:17:58.325 [LocalDistributedCacheManager Downloader #13] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.yarn.util.FSDownload.changePermissions(FSDownload.java:417) 
16:17:59.416 [main] INFO  o.a.h.m.LocalDistributedCacheManager - Creating symlink: \tmp\hadoop-Administrator\mapred\local\1545466677806\metrics-core-2.2.0.jar <- E:\intellij_Project\AllDemo/metrics-core-2.2.0.jar 
16:18:03.048 [main-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x4000005536a0007 after 0ms 
16:18:03.290 [main-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 0x4000005536a0008 after 0ms 
16:18:06.195 [main] INFO  o.a.h.m.LocalDistributedCacheManager - Localized file:/E:/.m2/repository/com/yammer/metrics/metrics-core/2.2.0/metrics-core-2.2.0.jar as file:/tmp/hadoop-Administrator/mapred/local/1545466677806/metrics-core-2.2.0.jar 
16:18:06.195 [main] INFO  o.a.h.m.LocalDistributedCacheManager - Creating symlink: \tmp\hadoop-Administrator\mapred\local\1545466677807\hadoop-common-2.6.4.jar <- E:\intellij_Project\AllDemo/hadoop-common-2.6.4.jar 
16:18:06.237 [main] INFO  o.a.h.m.LocalDistributedCacheManager - Localized file:/E:/.m2/repository/org/apache/hadoop/hadoop-common/2.6.4/hadoop-common-2.6.4.jar as file:/tmp/hadoop-Administrator/mapred/local/1545466677807/hadoop-common-2.6.4.jar 
16:18:06.237 [main] INFO  o.a.h.m.LocalDistributedCacheManager - Creating symlink: \tmp\hadoop-Administrator\mapred\local\1545466677808\hbase-prefix-tree-1.4.0.jar <- E:\intellij_Project\AllDemo/hbase-prefix-tree-1.4.0.jar 
16:18:06.281 [main] INFO  o.a.h.m.LocalDistributedCacheManager - Localized file:/E:/.m2/repository/org/apache/hbase/hbase-prefix-tree/1.4.0/hbase-prefix-tree-1.4.0.jar as file:/tmp/hadoop-Administrator/mapred/local/1545466677808/hbase-prefix-tree-1.4.0.jar 
16:18:06.281 [main] INFO  o.a.h.m.LocalDistributedCacheManager - Creating symlink: \tmp\hadoop-Administrator\mapred\local\1545466677809\protobuf-java-2.5.0.jar <- E:\intellij_Project\AllDemo/protobuf-java-2.5.0.jar 
16:18:06.321 [main] INFO  o.a.h.m.LocalDistributedCacheManager - Localized file:/E:/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar as file:/tmp/hadoop-Administrator/mapred/local/1545466677809/protobuf-java-2.5.0.jar 
16:18:06.321 [main] INFO  o.a.h.m.LocalDistributedCacheManager - Creating symlink: \tmp\hadoop-Administrator\mapred\local\1545466677810\hbase-common-1.4.0.jar <- E:\intellij_Project\AllDemo/hbase-common-1.4.0.jar 
16:18:06.361 [main] INFO  o.a.h.m.LocalDistributedCacheManager - Localized file:/E:/.m2/repository/org/apache/hbase/hbase-common/1.4.0/hbase-common-1.4.0.jar as file:/tmp/hadoop-Administrator/mapred/local/1545466677810/hbase-common-1.4.0.jar 
16:18:06.361 [main] INFO  o.a.h.m.LocalDistributedCacheManager - Creating symlink: \tmp\hadoop-Administrator\mapred\local\1545466677811\hbase-hadoop2-compat-1.4.0.jar <- E:\intellij_Project\AllDemo/hbase-hadoop2-compat-1.4.0.jar 
16:18:06.398 [main] INFO  o.a.h.m.LocalDistributedCacheManager - Localized file:/E:/.m2/repository/org/apache/hbase/hbase-hadoop2-compat/1.4.0/hbase-hadoop2-compat-1.4.0.jar as file:/tmp/hadoop-Administrator/mapred/local/1545466677811/hbase-hadoop2-compat-1.4.0.jar 
16:18:06.398 [main] INFO  o.a.h.m.LocalDistributedCacheManager - Creating symlink: \tmp\hadoop-Administrator\mapred\local\1545466677812\zookeeper-3.4.6.jar <- E:\intellij_Project\AllDemo/zookeeper-3.4.6.jar 
16:18:06.437 [main] INFO  o.a.h.m.LocalDistributedCacheManager - Localized file:/E:/.m2/repository/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar as file:/tmp/hadoop-Administrator/mapred/local/1545466677812/zookeeper-3.4.6.jar 
16:18:06.437 [main] INFO  o.a.h.m.LocalDistributedCacheManager - Creating symlink: \tmp\hadoop-Administrator\mapred\local\1545466677813\hbase-hadoop-compat-1.4.0.jar <- E:\intellij_Project\AllDemo/hbase-hadoop-compat-1.4.0.jar 
16:18:06.483 [main] INFO  o.a.h.m.LocalDistributedCacheManager - Localized file:/E:/.m2/repository/org/apache/hbase/hbase-hadoop-compat/1.4.0/hbase-hadoop-compat-1.4.0.jar as file:/tmp/hadoop-Administrator/mapred/local/1545466677813/hbase-hadoop-compat-1.4.0.jar 
16:18:06.483 [main] INFO  o.a.h.m.LocalDistributedCacheManager - Creating symlink: \tmp\hadoop-Administrator\mapred\local\1545466677814\guava-11.0.2.jar <- E:\intellij_Project\AllDemo/guava-11.0.2.jar 
16:18:06.524 [main] INFO  o.a.h.m.LocalDistributedCacheManager - Localized file:/E:/.m2/repository/com/google/guava/guava/11.0.2/guava-11.0.2.jar as file:/tmp/hadoop-Administrator/mapred/local/1545466677814/guava-11.0.2.jar 
16:18:06.524 [main] INFO  o.a.h.m.LocalDistributedCacheManager - Creating symlink: \tmp\hadoop-Administrator\mapred\local\1545466677815\netty-all-4.0.43.Final.jar <- E:\intellij_Project\AllDemo/netty-all-4.0.43.Final.jar 
16:18:06.559 [main] INFO  o.a.h.m.LocalDistributedCacheManager - Localized file:/E:/.m2/repository/io/netty/netty-all/4.0.43.Final/netty-all-4.0.43.Final.jar as file:/tmp/hadoop-Administrator/mapred/local/1545466677815/netty-all-4.0.43.Final.jar 
16:18:06.559 [main] INFO  o.a.h.m.LocalDistributedCacheManager - Creating symlink: \tmp\hadoop-Administrator\mapred\local\1545466677816\hadoop-mapreduce-client-core-2.6.4.jar <- E:\intellij_Project\AllDemo/hadoop-mapreduce-client-core-2.6.4.jar 
16:18:06.603 [main] INFO  o.a.h.m.LocalDistributedCacheManager - Localized file:/E:/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.6.4/hadoop-mapreduce-client-core-2.6.4.jar as file:/tmp/hadoop-Administrator/mapred/local/1545466677816/hadoop-mapreduce-client-core-2.6.4.jar 
16:18:06.603 [main] INFO  o.a.h.m.LocalDistributedCacheManager - Creating symlink: \tmp\hadoop-Administrator\mapred\local\1545466677817\hbase-client-1.4.0.jar <- E:\intellij_Project\AllDemo/hbase-client-1.4.0.jar 
16:18:06.641 [main] INFO  o.a.h.m.LocalDistributedCacheManager - Localized file:/E:/.m2/repository/org/apache/hbase/hbase-client/1.4.0/hbase-client-1.4.0.jar as file:/tmp/hadoop-Administrator/mapred/local/1545466677817/hbase-client-1.4.0.jar 
16:18:06.641 [main] INFO  o.a.h.m.LocalDistributedCacheManager - Creating symlink: \tmp\hadoop-Administrator\mapred\local\1545466677818\htrace-core-3.1.0-incubating.jar <- E:\intellij_Project\AllDemo/htrace-core-3.1.0-incubating.jar 
16:18:06.688 [main] INFO  o.a.h.m.LocalDistributedCacheManager - Localized file:/E:/.m2/repository/org/apache/htrace/htrace-core/3.1.0-incubating/htrace-core-3.1.0-incubating.jar as file:/tmp/hadoop-Administrator/mapred/local/1545466677818/htrace-core-3.1.0-incubating.jar 
16:18:06.688 [main] INFO  o.a.h.m.LocalDistributedCacheManager - Creating symlink: \tmp\hadoop-Administrator\mapred\local\1545466677819\hbase-server-1.4.0.jar <- E:\intellij_Project\AllDemo/hbase-server-1.4.0.jar 
16:18:06.726 [main] INFO  o.a.h.m.LocalDistributedCacheManager - Localized file:/E:/.m2/repository/org/apache/hbase/hbase-server/1.4.0/hbase-server-1.4.0.jar as file:/tmp/hadoop-Administrator/mapred/local/1545466677819/hbase-server-1.4.0.jar 
16:18:06.726 [main] INFO  o.a.h.m.LocalDistributedCacheManager - Creating symlink: \tmp\hadoop-Administrator\mapred\local\1545466677820\hbase-protocol-1.4.0.jar <- E:\intellij_Project\AllDemo/hbase-protocol-1.4.0.jar 
16:18:06.761 [main] INFO  o.a.h.m.LocalDistributedCacheManager - Localized file:/E:/.m2/repository/org/apache/hbase/hbase-protocol/1.4.0/hbase-protocol-1.4.0.jar as file:/tmp/hadoop-Administrator/mapred/local/1545466677820/hbase-protocol-1.4.0.jar 
··· 
16:18:06.816 [main] INFO  o.a.h.m.LocalDistributedCacheManager - file:/E:/tmp/hadoop-Administrator/mapred/local/1545466677806/metrics-core-2.2.0.jar 
16:18:06.816 [main] INFO  o.a.h.m.LocalDistributedCacheManager - file:/E:/tmp/hadoop-Administrator/mapred/local/1545466677807/hadoop-common-2.6.4.jar 
16:18:06.816 [main] INFO  o.a.h.m.LocalDistributedCacheManager - file:/E:/tmp/hadoop-Administrator/mapred/local/1545466677808/hbase-prefix-tree-1.4.0.jar 
16:18:06.816 [main] INFO  o.a.h.m.LocalDistributedCacheManager - file:/E:/tmp/hadoop-Administrator/mapred/local/1545466677809/protobuf-java-2.5.0.jar 
16:18:06.816 [main] INFO  o.a.h.m.LocalDistributedCacheManager - file:/E:/tmp/hadoop-Administrator/mapred/local/1545466677810/hbase-common-1.4.0.jar 
16:18:06.816 [main] INFO  o.a.h.m.LocalDistributedCacheManager - file:/E:/tmp/hadoop-Administrator/mapred/local/1545466677811/hbase-hadoop2-compat-1.4.0.jar 
16:18:06.816 [main] INFO  o.a.h.m.LocalDistributedCacheManager - file:/E:/tmp/hadoop-Administrator/mapred/local/1545466677812/zookeeper-3.4.6.jar 
16:18:06.816 [main] INFO  o.a.h.m.LocalDistributedCacheManager - file:/E:/tmp/hadoop-Administrator/mapred/local/1545466677813/hbase-hadoop-compat-1.4.0.jar 
16:18:06.816 [main] INFO  o.a.h.m.LocalDistributedCacheManager - file:/E:/tmp/hadoop-Administrator/mapred/local/1545466677814/guava-11.0.2.jar 
16:18:06.817 [main] INFO  o.a.h.m.LocalDistributedCacheManager - file:/E:/tmp/hadoop-Administrator/mapred/local/1545466677815/netty-all-4.0.43.Final.jar 
16:18:06.817 [main] INFO  o.a.h.m.LocalDistributedCacheManager - file:/E:/tmp/hadoop-Administrator/mapred/local/1545466677816/hadoop-mapreduce-client-core-2.6.4.jar 
16:18:06.817 [main] INFO  o.a.h.m.LocalDistributedCacheManager - file:/E:/tmp/hadoop-Administrator/mapred/local/1545466677817/hbase-client-1.4.0.jar 
16:18:06.817 [main] INFO  o.a.h.m.LocalDistributedCacheManager - file:/E:/tmp/hadoop-Administrator/mapred/local/1545466677818/htrace-core-3.1.0-incubating.jar 
16:18:06.817 [main] INFO  o.a.h.m.LocalDistributedCacheManager - file:/E:/tmp/hadoop-Administrator/mapred/local/1545466677819/hbase-server-1.4.0.jar 
16:18:06.817 [main] INFO  o.a.h.m.LocalDistributedCacheManager - file:/E:/tmp/hadoop-Administrator/mapred/local/1545466677820/hbase-protocol-1.4.0.jar 
16:18:06.820 [main] INFO  org.apache.hadoop.mapreduce.Job - The url to track the job: http://localhost:8080/ 
16:18:06.821 [main] INFO  org.apache.hadoop.mapreduce.Job - Running job: job_local145551582_0001 
16:18:06.822 [Thread-21] INFO  o.a.hadoop.mapred.LocalJobRunner - OutputCommitter set in config null 
16:18:06.822 [main] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:323) 
16:18:06.822 [main] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:323) 
16:18:06.827 [Thread-21] INFO  o.a.hadoop.mapred.LocalJobRunner - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter 
16:18:06.830 [Thread-21] DEBUG org.apache.hadoop.hdfs.DFSClient - /output/mytable/_temporary/0: masked=rwxr-xr-x 
16:18:06.867 [Thread-21] DEBUG org.apache.hadoop.ipc.Client - The ping interval is 60000 ms. 
16:18:06.867 [Thread-21] DEBUG org.apache.hadoop.ipc.Client - Connecting to /192.168.211.4:9000 
16:18:06.868 [IPC Parameter Sending Thread #0] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator sending #1 
16:18:06.868 [IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator: starting, having connections 1 
16:18:06.872 [IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator got value #1 
16:18:06.872 [Thread-21] DEBUG o.a.hadoop.ipc.ProtobufRpcEngine - Call: mkdirs took 6ms 
16:18:06.906 [Thread-21] DEBUG o.a.hadoop.mapred.LocalJobRunner - Starting mapper thread pool executor. 
16:18:06.906 [Thread-21] DEBUG o.a.hadoop.mapred.LocalJobRunner - Max local threads: 1 
16:18:06.906 [Thread-21] DEBUG o.a.hadoop.mapred.LocalJobRunner - Map tasks to process: 1 
16:18:06.907 [Thread-21] INFO  o.a.hadoop.mapred.LocalJobRunner - Waiting for map tasks 
16:18:06.907 [LocalJobRunner Map Task Executor #0] INFO  o.a.hadoop.mapred.LocalJobRunner - Starting task: attempt_local145551582_0001_m_000000_0 
16:18:06.916 [LocalJobRunner Map Task Executor #0] DEBUG o.apache.hadoop.mapred.SortedRanges - currentIndex 0   0:0 
16:18:06.929 [LocalJobRunner Map Task Executor #0] DEBUG o.a.hadoop.mapred.LocalJobRunner - mapreduce.cluster.local.dir for child : /tmp/hadoop-Administrator/mapred/local/localRunner//Administrator/jobcache/job_local145551582_0001/attempt_local145551582_0001_m_000000_0 
16:18:06.932 [LocalJobRunner Map Task Executor #0] DEBUG org.apache.hadoop.mapred.Task - using new api for output committer 
16:18:06.937 [LocalJobRunner Map Task Executor #0] INFO  o.a.h.y.util.ProcfsBasedProcessTree - ProcfsBasedProcessTree currently is supported only on Linux. 
16:18:06.987 [LocalJobRunner Map Task Executor #0] INFO  org.apache.hadoop.mapred.Task -  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@12ad3d1e 
16:18:06.990 [LocalJobRunner Map Task Executor #0] INFO  org.apache.hadoop.mapred.MapTask - Processing split: HBase table split(table name: mytable, scan: {"loadColumnFamiliesOnDemand":null,"startRow":"","stopRow":"","batch":-1,"cacheBlocks":true,"totalColumns":1,"maxResultSize":-1,"families":{"cf":["ALL"]},"caching":-1,"maxVersions":1,"timeRange":[0,9223372036854775807]}, start row: , end row: , region location: server6, encoded region name: f549f94d90e70bf4f58dd1269c734813) 
16:18:06.994 [LocalJobRunner Map Task Executor #0] INFO  o.a.h.h.z.RecoverableZooKeeper - Process identifier=hconnection-0x1984b7d9 connecting to ZooKeeper ensemble=192.168.211.4:2181 
16:18:06.994 [LocalJobRunner Map Task Executor #0] INFO  org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=192.168.211.4:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@700d4431 
16:18:06.995 [LocalJobRunner Map Task Executor #0-SendThread(192.168.211.4:2181)] INFO  org.apache.zookeeper.ClientCnxn - Opening socket connection to server 192.168.211.4/192.168.211.4:2181. Will not attempt to authenticate using SASL (unknown error) 
16:18:06.996 [LocalJobRunner Map Task Executor #0-SendThread(192.168.211.4:2181)] INFO  org.apache.zookeeper.ClientCnxn - Socket connection established to 192.168.211.4/192.168.211.4:2181, initiating session 
16:18:06.996 [LocalJobRunner Map Task Executor #0-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Session establishment request sent on 192.168.211.4/192.168.211.4:2181 
16:18:07.000 [LocalJobRunner Map Task Executor #0-SendThread(192.168.211.4:2181)] INFO  org.apache.zookeeper.ClientCnxn - Session establishment complete on server 192.168.211.4/192.168.211.4:2181, sessionid = 0x4000005536a000a, negotiated timeout = 40000 
16:18:07.000 [LocalJobRunner Map Task Executor #0-EventThread] DEBUG o.a.h.h.zookeeper.ZooKeeperWatcher - hconnection-0x1984b7d90x0, quorum=192.168.211.4:2181, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 
16:18:07.000 [LocalJobRunner Map Task Executor #0-EventThread] DEBUG o.a.h.h.zookeeper.ZooKeeperWatcher - hconnection-0x1984b7d9-0x4000005536a000a connected 
16:18:07.005 [LocalJobRunner Map Task Executor #0-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x4000005536a000a, packet:: clientPath:null serverPath:null finished:false header:: 1,3  replyHeader:: 1,339302416523,0  request:: '/hbase/hbaseid,F  response:: s{4294967312,339302416395,1535859306918,1545465087921,17,0,0,0,67,0,4294967312}  
16:18:07.006 [LocalJobRunner Map Task Executor #0-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x4000005536a000a, packet:: clientPath:null serverPath:null finished:false header:: 2,4  replyHeader:: 2,339302416523,0  request:: '/hbase/hbaseid,F  response:: #ffffffff000146d61737465723a36303030306932ffffff8effffff960fffffff371ffffffec50425546a2433326339633631312d613435322d343130352d396138352d613166343766353335373633,s{4294967312,339302416395,1535859306918,1545465087921,17,0,0,0,67,0,4294967312}  
16:18:07.006 [LocalJobRunner Map Task Executor #0] DEBUG o.a.h.hbase.ipc.AbstractRpcClient - Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5198545f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 
16:18:07.006 [LocalJobRunner Map Task Executor #0] INFO  o.a.h.h.m.TableInputFormatBase - Input split length: 0 bytes. 
16:18:07.011 [LocalJobRunner Map Task Executor #0] DEBUG org.apache.hadoop.mapred.MapTask - Trying map output collector class: org.apache.hadoop.mapred.MapTask$MapOutputBuffer 
16:18:07.061 [LocalJobRunner Map Task Executor #0] INFO  org.apache.hadoop.mapred.MapTask - (EQUATOR) 0 kvi 26214396(104857584) 
16:18:07.061 [LocalJobRunner Map Task Executor #0] INFO  org.apache.hadoop.mapred.MapTask - mapreduce.task.io.sort.mb: 100 
16:18:07.061 [LocalJobRunner Map Task Executor #0] INFO  org.apache.hadoop.mapred.MapTask - soft limit at 83886080 
16:18:07.061 [LocalJobRunner Map Task Executor #0] INFO  org.apache.hadoop.mapred.MapTask - bufstart = 0; bufvoid = 104857600 
16:18:07.061 [LocalJobRunner Map Task Executor #0] INFO  org.apache.hadoop.mapred.MapTask - kvstart = 26214396; length = 6553600 
16:18:07.064 [LocalJobRunner Map Task Executor #0] INFO  org.apache.hadoop.mapred.MapTask - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer 
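The MapOutputBuffer numbers above fit together: `mapreduce.task.io.sort.mb: 100` means a 100 MB in-memory sort buffer (`bufvoid = 104857600` bytes), and the default spill threshold `mapreduce.map.sort.spill.percent = 0.8` gives `soft limit at 83886080`. A minimal sketch of that arithmetic (the 0.8 default is an assumption from stock Hadoop configuration, not shown in this log):

```java
public class SortBufferMath {
    public static void main(String[] args) {
        long ioSortMb = 100;                        // mapreduce.task.io.sort.mb
        long bufvoid = ioSortMb * 1024 * 1024;      // 104857600 bytes, as logged
        double spillPercent = 0.8;                  // mapreduce.map.sort.spill.percent (assumed default)
        long softLimit = (long) (bufvoid * spillPercent);
        System.out.println("bufvoid=" + bufvoid + " softLimit=" + softLimit);
        // bufvoid=104857600 softLimit=83886080
    }
}
```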
16:18:07.069 [LocalJobRunner Map Task Executor #0-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x4000005536a000a, packet:: clientPath:null serverPath:null finished:false header:: 3,4  replyHeader:: 3,339302416523,0  request:: '/hbase/meta-region-server,F  response:: #ffffffff0001a726567696f6e7365727665723a3136303230ffffff9a46fffffffdffffffec367effffffeaffffff9e50425546a13a77365727665723410ffffff947d18ffffff8dffffff9affffff96ffffffa7fffffffd2c100183,s{339302416421,339302416421,1545465097164,1545465097164,0,0,0,0,60,0,339302416421}  
16:18:07.070 [LocalJobRunner Map Task Executor #0-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x4000005536a000a, packet:: clientPath:null serverPath:null finished:false header:: 4,8  replyHeader:: 4,339302416523,0  request:: '/hbase,F  response:: v{'replication,'meta-region-server,'rs,'splitWAL,'backup-masters,'table-lock,'flush-table-proc,'master-maintenance,'region-in-transition,'online-snapshot,'switch,'master,'running,'recovering-regions,'draining,'namespace,'hbaseid,'table}  
16:18:07.071 [hconnection-0x1984b7d9-metaLookup-shared--pool10-t1] DEBUG o.a.hadoop.hbase.ipc.RpcConnection - Use SIMPLE authentication for service ClientService, sasl=false 
16:18:07.072 [hconnection-0x1984b7d9-metaLookup-shared--pool10-t1] DEBUG o.a.h.h.ipc.BlockingRpcConnection - Connecting to server4/192.168.211.4:16020 
16:18:07.077 [hconnection-0x1984b7d9-shared--pool9-t1] DEBUG o.a.hadoop.hbase.ipc.RpcConnection - Use SIMPLE authentication for service ClientService, sasl=false 
16:18:07.078 [hconnection-0x1984b7d9-shared--pool9-t1] DEBUG o.a.h.h.ipc.BlockingRpcConnection - Connecting to server6/192.168.211.6:16020 
qualifier = keyWord 
line = hello spark 
family = cf 
row = first 
qualifier = keyWord 
line = hello kafka 
family = cf 
row = fourth 
qualifier = keyWord 
line = hi hadoop 
family = cf 
row = second 
qualifier = keyWord 
line = hello hbase 
family = cf 
row = third 
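The four `line` values printed above are exactly what `HBaseMapper` receives as cell values; splitting each on a single space yields the 8 `(word, 1)` pairs later reported as `Map output records=8`. A self-contained sketch of that tokenization step (plain Java, no HBase dependencies, using the values from this log):

```java
import java.util.ArrayList;
import java.util.List;

public class MapperTokenSketch {
    public static void main(String[] args) {
        // The cell values read from HBase, as printed in the log above
        String[] lines = {"hello spark", "hello kafka", "hi hadoop", "hello hbase"};
        List<String> pairs = new ArrayList<>();
        for (String line : lines) {
            for (String word : line.split(" ")) {   // same split as HBaseMapper
                pairs.add(word + "\t1");            // emit (word, 1)
            }
        }
        System.out.println(pairs.size());           // 8, matching Map output records=8
    }
}
```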
16:18:07.136 [LocalJobRunner Map Task Executor #0] INFO  o.a.hadoop.mapred.LocalJobRunner -  
16:18:07.137 [LocalJobRunner Map Task Executor #0] INFO  o.a.h.h.c.ConnectionManager$HConnectionImplementation - Closing zookeeper sessionid=0x4000005536a000a 
16:18:07.137 [LocalJobRunner Map Task Executor #0] DEBUG org.apache.zookeeper.ZooKeeper - Closing session: 0x4000005536a000a 
16:18:07.137 [LocalJobRunner Map Task Executor #0] DEBUG org.apache.zookeeper.ClientCnxn - Closing client for session: 0x4000005536a000a 
16:18:07.151 [LocalJobRunner Map Task Executor #0-SendThread(192.168.211.4:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x4000005536a000a, packet:: clientPath:null serverPath:null finished:false header:: 5,-11  replyHeader:: 5,339302416524,0  request:: null response:: null 
16:18:07.151 [LocalJobRunner Map Task Executor #0] DEBUG org.apache.zookeeper.ClientCnxn - Disconnecting client for session: 0x4000005536a000a 
16:18:07.151 [LocalJobRunner Map Task Executor #0] INFO  org.apache.zookeeper.ZooKeeper - Session: 0x4000005536a000a closed 
16:18:07.151 [LocalJobRunner Map Task Executor #0-EventThread] INFO  org.apache.zookeeper.ClientCnxn - EventThread shut down 
16:18:07.151 [LocalJobRunner Map Task Executor #0] DEBUG o.a.h.hbase.ipc.AbstractRpcClient - Stopping rpc client 
16:18:07.152 [LocalJobRunner Map Task Executor #0] INFO  org.apache.hadoop.mapred.MapTask - Starting flush of map output 
16:18:07.152 [LocalJobRunner Map Task Executor #0] INFO  org.apache.hadoop.mapred.MapTask - Spilling map output 
16:18:07.152 [LocalJobRunner Map Task Executor #0] INFO  org.apache.hadoop.mapred.MapTask - bufstart = 0; bufend = 78; bufvoid = 104857600 
16:18:07.152 [LocalJobRunner Map Task Executor #0] INFO  org.apache.hadoop.mapred.MapTask - kvstart = 26214396(104857584); kvend = 26214368(104857472); length = 29/6553600 
16:18:07.163 [LocalJobRunner Map Task Executor #0] INFO  org.apache.hadoop.mapred.MapTask - Finished spill 0 
16:18:07.169 [LocalJobRunner Map Task Executor #0] INFO  org.apache.hadoop.mapred.Task - Task:attempt_local145551582_0001_m_000000_0 is done. And is in the process of committing 
16:18:07.202 [IPC Parameter Sending Thread #0] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator sending #2 
16:18:07.203 [IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator got value #2 
16:18:07.203 [LocalJobRunner Map Task Executor #0] DEBUG o.a.hadoop.ipc.ProtobufRpcEngine - Call: getFileInfo took 1ms 
16:18:07.212 [LocalJobRunner Map Task Executor #0] INFO  o.a.hadoop.mapred.LocalJobRunner - map 
16:18:07.212 [LocalJobRunner Map Task Executor #0] INFO  org.apache.hadoop.mapred.Task - Task 'attempt_local145551582_0001_m_000000_0' done. 
16:18:07.212 [LocalJobRunner Map Task Executor #0] INFO  o.a.hadoop.mapred.LocalJobRunner - Finishing task: attempt_local145551582_0001_m_000000_0 
16:18:07.212 [Thread-21] INFO  o.a.hadoop.mapred.LocalJobRunner - map task executor complete. 
16:18:07.214 [Thread-21] DEBUG o.a.hadoop.mapred.LocalJobRunner - Starting reduce thread pool executor. 
16:18:07.214 [Thread-21] DEBUG o.a.hadoop.mapred.LocalJobRunner - Max local threads: 1 
16:18:07.214 [Thread-21] DEBUG o.a.hadoop.mapred.LocalJobRunner - Reduce tasks to process: 1 
16:18:07.214 [Thread-21] INFO  o.a.hadoop.mapred.LocalJobRunner - Waiting for reduce tasks 
16:18:07.214 [pool-7-thread-1] INFO  o.a.hadoop.mapred.LocalJobRunner - Starting task: attempt_local145551582_0001_r_000000_0 
16:18:07.216 [pool-7-thread-1] DEBUG o.apache.hadoop.mapred.SortedRanges - currentIndex 0   0:0 
16:18:07.219 [pool-7-thread-1] DEBUG o.a.hadoop.mapred.LocalJobRunner - mapreduce.cluster.local.dir for child : /tmp/hadoop-Administrator/mapred/local/localRunner//Administrator/jobcache/job_local145551582_0001/attempt_local145551582_0001_r_000000_0 
16:18:07.219 [pool-7-thread-1] DEBUG org.apache.hadoop.mapred.Task - using new api for output committer 
16:18:07.219 [pool-7-thread-1] INFO  o.a.h.y.util.ProcfsBasedProcessTree - ProcfsBasedProcessTree currently is supported only on Linux. 
16:18:07.252 [pool-7-thread-1] INFO  org.apache.hadoop.mapred.Task -  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@6b210e06 
16:18:07.254 [pool-7-thread-1] INFO  org.apache.hadoop.mapred.ReduceTask - Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@5aeeb920 
16:18:07.283 [pool-7-thread-1] INFO  o.a.h.m.task.reduce.MergeManagerImpl - MergerManager: memoryLimit=2654155520, maxSingleShuffleLimit=663538880, mergeThreshold=1751742720, ioSortFactor=10, memToMemMergeOutputsThreshold=10 
16:18:07.284 [EventFetcher for fetching Map Completion Events] INFO  o.a.h.m.task.reduce.EventFetcher - attempt_local145551582_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events 
16:18:07.286 [EventFetcher for fetching Map Completion Events] DEBUG o.a.h.m.task.reduce.EventFetcher - Got 0 map completion events from 0 
16:18:07.286 [EventFetcher for fetching Map Completion Events] DEBUG o.a.h.m.task.reduce.EventFetcher - GetMapEventsThread about to sleep for 1000 
16:18:07.290 [localfetcher#1] DEBUG o.a.h.m.task.reduce.LocalFetcher - LocalFetcher 1 going to fetch: attempt_local145551582_0001_m_000000_0 
16:18:07.307 [localfetcher#1] DEBUG o.a.h.m.task.reduce.MergeManagerImpl - attempt_local145551582_0001_m_000000_0: Proceeding with shuffle since usedMemory (0) is lesser than memoryLimit (2654155520).CommitMemory is (0) 
16:18:07.309 [localfetcher#1] INFO  o.a.h.m.task.reduce.LocalFetcher - localfetcher#1 about to shuffle output of map attempt_local145551582_0001_m_000000_0 decomp: 96 len: 100 to MEMORY 
16:18:07.313 [localfetcher#1] INFO  o.a.h.m.t.reduce.InMemoryMapOutput - Read 96 bytes from map-output for attempt_local145551582_0001_m_000000_0 
16:18:07.315 [localfetcher#1] INFO  o.a.h.m.task.reduce.MergeManagerImpl - closeInMemoryFile -> map-output of size: 96, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->96 
16:18:07.316 [localfetcher#1] DEBUG o.a.h.m.t.r.ShuffleSchedulerImpl - map attempt_local145551582_0001_m_000000_0 done 1 / 1 copied. 
16:18:07.316 [EventFetcher for fetching Map Completion Events] INFO  o.a.h.m.task.reduce.EventFetcher - EventFetcher is interrupted.. Returning 
16:18:07.316 [pool-7-thread-1] INFO  o.a.hadoop.mapred.LocalJobRunner - 1 / 1 copied. 
16:18:07.316 [pool-7-thread-1] INFO  o.a.h.m.task.reduce.MergeManagerImpl - finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs 
16:18:07.327 [pool-7-thread-1] INFO  org.apache.hadoop.mapred.Merger - Merging 1 sorted segments 
16:18:07.327 [pool-7-thread-1] INFO  org.apache.hadoop.mapred.Merger - Down to the last merge-pass, with 1 segments left of total size: 87 bytes 
16:18:07.329 [pool-7-thread-1] INFO  o.a.h.m.task.reduce.MergeManagerImpl - Merged 1 segments, 96 bytes to disk to satisfy reduce memory limit 
16:18:07.329 [pool-7-thread-1] DEBUG o.a.h.m.task.reduce.MergeManagerImpl - Disk file: /tmp/hadoop-Administrator/mapred/local/localRunner/Administrator/jobcache/job_local145551582_0001/attempt_local145551582_0001_r_000000_0/output/map_0.out.merged Length is 100 
16:18:07.329 [pool-7-thread-1] INFO  o.a.h.m.task.reduce.MergeManagerImpl - Merging 1 files, 100 bytes from disk 
16:18:07.330 [pool-7-thread-1] INFO  o.a.h.m.task.reduce.MergeManagerImpl - Merging 0 segments, 0 bytes from memory into reduce 
16:18:07.330 [pool-7-thread-1] INFO  org.apache.hadoop.mapred.Merger - Merging 1 sorted segments 
16:18:07.331 [pool-7-thread-1] INFO  org.apache.hadoop.mapred.Merger - Down to the last merge-pass, with 1 segments left of total size: 87 bytes 
16:18:07.332 [pool-7-thread-1] INFO  o.a.hadoop.mapred.LocalJobRunner - 1 / 1 copied. 
16:18:07.335 [pool-7-thread-1] DEBUG org.apache.hadoop.hdfs.DFSClient - /output/mytable/_temporary/0/_temporary/attempt_local145551582_0001_r_000000_0/part-r-00000: masked=rw-r--r-- 
16:18:07.379 [IPC Parameter Sending Thread #0] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator sending #3 
16:18:07.383 [IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator got value #3 
16:18:07.383 [pool-7-thread-1] DEBUG o.a.hadoop.ipc.ProtobufRpcEngine - Call: create took 4ms 
16:18:07.386 [pool-7-thread-1] DEBUG org.apache.hadoop.hdfs.DFSClient - computePacketChunkSize: src=/output/mytable/_temporary/0/_temporary/attempt_local145551582_0001_r_000000_0/part-r-00000, chunkSize=516, chunksPerPacket=127, packetSize=65532 
16:18:07.391 [pool-7-thread-1] INFO  o.a.h.conf.Configuration.deprecation - mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords 
16:18:07.391 [LeaseRenewer:Administrator@192.168.211.4:9000] DEBUG org.apache.hadoop.hdfs.LeaseRenewer - Lease renewer daemon for [DFSClient_NONMAPREDUCE_1326440027_1] with renew id 1 started 
16:18:07.403 [pool-7-thread-1] DEBUG org.apache.hadoop.hdfs.DFSClient - DFSClient writeChunk allocating new packet seqno=0, src=/output/mytable/_temporary/0/_temporary/attempt_local145551582_0001_r_000000_0/part-r-00000, packetSize=65532, chunksPerPacket=127, bytesCurBlock=0 
16:18:07.403 [pool-7-thread-1] DEBUG org.apache.hadoop.hdfs.DFSClient - Queued packet 0 
16:18:07.403 [pool-7-thread-1] DEBUG org.apache.hadoop.hdfs.DFSClient - Queued packet 1 
16:18:07.403 [pool-7-thread-1] DEBUG org.apache.hadoop.hdfs.DFSClient - Waiting for ack for: 1 
16:18:07.403 [Thread-106] DEBUG org.apache.hadoop.hdfs.DFSClient - Allocating new block 
16:18:07.407 [IPC Parameter Sending Thread #0] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator sending #4 
16:18:07.409 [IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator got value #4 
16:18:07.409 [Thread-106] DEBUG o.a.hadoop.ipc.ProtobufRpcEngine - Call: addBlock took 2ms 
16:18:07.418 [Thread-106] DEBUG org.apache.hadoop.hdfs.DFSClient - pipeline = 192.168.211.6:50010 
16:18:07.418 [Thread-106] DEBUG org.apache.hadoop.hdfs.DFSClient - pipeline = 192.168.211.4:50010 
16:18:07.418 [Thread-106] DEBUG org.apache.hadoop.hdfs.DFSClient - pipeline = 192.168.211.5:50010 
16:18:07.418 [Thread-106] DEBUG org.apache.hadoop.hdfs.DFSClient - Connecting to datanode 192.168.211.6:50010 
16:18:07.419 [Thread-106] DEBUG org.apache.hadoop.hdfs.DFSClient - Send buf size 131072 
16:18:07.419 [IPC Parameter Sending Thread #0] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator sending #5 
16:18:07.420 [IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator got value #5 
16:18:07.420 [Thread-106] DEBUG o.a.hadoop.ipc.ProtobufRpcEngine - Call: getServerDefaults took 1ms 
16:18:07.424 [Thread-106] DEBUG o.a.h.h.p.d.s.SaslDataTransferClient - SASL client skipping handshake in unsecured configuration for addr = /192.168.211.6, datanodeId = 192.168.211.6:50010 
16:18:07.622 [DataStreamer for file /output/mytable/_temporary/0/_temporary/attempt_local145551582_0001_r_000000_0/part-r-00000 block BP-635201075-192.168.211.4-1531450855001:blk_1073749879_9089] DEBUG org.apache.hadoop.hdfs.DFSClient - DataStreamer block BP-635201075-192.168.211.4-1531450855001:blk_1073749879_9089 sending packet packet seqno:0 offsetInBlock:0 lastPacketInBlock:false lastByteOffsetInBlock: 62 
16:18:07.647 [ResponseProcessor for block BP-635201075-192.168.211.4-1531450855001:blk_1073749879_9089] DEBUG org.apache.hadoop.hdfs.DFSClient - DFSClient seqno: 0 status: SUCCESS status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 1536668 
16:18:07.647 [DataStreamer for file /output/mytable/_temporary/0/_temporary/attempt_local145551582_0001_r_000000_0/part-r-00000 block BP-635201075-192.168.211.4-1531450855001:blk_1073749879_9089] DEBUG org.apache.hadoop.hdfs.DFSClient - DataStreamer block BP-635201075-192.168.211.4-1531450855001:blk_1073749879_9089 sending packet packet seqno:1 offsetInBlock:62 lastPacketInBlock:true lastByteOffsetInBlock: 62 
16:18:07.652 [ResponseProcessor for block BP-635201075-192.168.211.4-1531450855001:blk_1073749879_9089] DEBUG org.apache.hadoop.hdfs.DFSClient - DFSClient seqno: 1 status: SUCCESS status: SUCCESS status: SUCCESS downstreamAckTimeNanos: 2676871 
16:18:07.652 [DataStreamer for file /output/mytable/_temporary/0/_temporary/attempt_local145551582_0001_r_000000_0/part-r-00000 block BP-635201075-192.168.211.4-1531450855001:blk_1073749879_9089] DEBUG org.apache.hadoop.hdfs.DFSClient - Closing old block BP-635201075-192.168.211.4-1531450855001:blk_1073749879_9089 
16:18:07.657 [IPC Parameter Sending Thread #0] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator sending #6 
16:18:07.659 [IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator got value #6 
16:18:07.659 [pool-7-thread-1] DEBUG o.a.hadoop.ipc.ProtobufRpcEngine - Call: complete took 2ms 
16:18:07.661 [pool-7-thread-1] INFO  org.apache.hadoop.mapred.Task - Task:attempt_local145551582_0001_r_000000_0 is done. And is in the process of committing 
16:18:07.662 [IPC Parameter Sending Thread #0] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator sending #7 
16:18:07.662 [IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator got value #7 
16:18:07.662 [pool-7-thread-1] DEBUG o.a.hadoop.ipc.ProtobufRpcEngine - Call: getFileInfo took 0ms 
16:18:07.663 [pool-7-thread-1] INFO  o.a.hadoop.mapred.LocalJobRunner - 1 / 1 copied. 
16:18:07.663 [pool-7-thread-1] INFO  org.apache.hadoop.mapred.Task - Task attempt_local145551582_0001_r_000000_0 is allowed to commit now 
16:18:07.663 [IPC Parameter Sending Thread #0] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator sending #8 
16:18:07.664 [IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator got value #8 
16:18:07.664 [pool-7-thread-1] DEBUG o.a.hadoop.ipc.ProtobufRpcEngine - Call: getFileInfo took 1ms 
16:18:07.664 [IPC Parameter Sending Thread #0] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator sending #9 
16:18:07.664 [IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator got value #9 
16:18:07.665 [pool-7-thread-1] DEBUG o.a.hadoop.ipc.ProtobufRpcEngine - Call: getFileInfo took 1ms 
16:18:07.675 [IPC Parameter Sending Thread #0] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator sending #10 
16:18:07.677 [IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator got value #10 
16:18:07.677 [pool-7-thread-1] DEBUG o.a.hadoop.ipc.ProtobufRpcEngine - Call: rename took 2ms 
16:18:07.678 [pool-7-thread-1] INFO  o.a.h.m.l.output.FileOutputCommitter - Saved output of task 'attempt_local145551582_0001_r_000000_0' to hdfs://192.168.211.4:9000/output/mytable/_temporary/0/task_local145551582_0001_r_000000 
16:18:07.679 [pool-7-thread-1] INFO  o.a.hadoop.mapred.LocalJobRunner - reduce > reduce 
16:18:07.679 [pool-7-thread-1] INFO  org.apache.hadoop.mapred.Task - Task 'attempt_local145551582_0001_r_000000_0' done. 
16:18:07.679 [pool-7-thread-1] INFO  o.a.hadoop.mapred.LocalJobRunner - Finishing task: attempt_local145551582_0001_r_000000_0 
16:18:07.679 [Thread-21] INFO  o.a.hadoop.mapred.LocalJobRunner - reduce task executor complete. 
16:18:07.685 [IPC Parameter Sending Thread #0] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator sending #11 
16:18:07.686 [IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator got value #11 
16:18:07.686 [Thread-21] DEBUG o.a.hadoop.ipc.ProtobufRpcEngine - Call: getListing took 1ms 
16:18:07.692 [Thread-21] DEBUG o.a.h.m.l.output.FileOutputCommitter - Merging data from FileStatus{path=hdfs://192.168.211.4:9000/output/mytable/_temporary/0/task_local145551582_0001_r_000000; isDirectory=true; modification_time=1545466688450; access_time=0; owner=Administrator; group=supergroup; permission=rwxr-xr-x; isSymlink=false} to hdfs://192.168.211.4:9000/output/mytable 
16:18:07.692 [IPC Parameter Sending Thread #0] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator sending #12 
16:18:07.693 [IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator got value #12 
16:18:07.693 [Thread-21] DEBUG o.a.hadoop.ipc.ProtobufRpcEngine - Call: getFileInfo took 1ms 
16:18:07.693 [IPC Parameter Sending Thread #0] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator sending #13 
16:18:07.694 [IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator got value #13 
16:18:07.694 [Thread-21] DEBUG o.a.hadoop.ipc.ProtobufRpcEngine - Call: getFileInfo took 1ms 
16:18:07.694 [IPC Parameter Sending Thread #0] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator sending #14 
16:18:07.694 [IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator got value #14 
16:18:07.694 [Thread-21] DEBUG o.a.hadoop.ipc.ProtobufRpcEngine - Call: getListing took 0ms 
16:18:07.694 [Thread-21] DEBUG o.a.h.m.l.output.FileOutputCommitter - Merging data from FileStatus{path=hdfs://192.168.211.4:9000/output/mytable/_temporary/0/task_local145551582_0001_r_000000/part-r-00000; isDirectory=false; length=62; replication=3; blocksize=134217728; modification_time=1545466688726; access_time=1545466688450; owner=Administrator; group=supergroup; permission=rw-r--r--; isSymlink=false} to hdfs://192.168.211.4:9000/output/mytable/part-r-00000 
16:18:07.695 [IPC Parameter Sending Thread #0] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator sending #15 
16:18:07.695 [IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator got value #15 
16:18:07.695 [Thread-21] DEBUG o.a.hadoop.ipc.ProtobufRpcEngine - Call: getFileInfo took 0ms 
16:18:07.695 [IPC Parameter Sending Thread #0] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator sending #16 
16:18:07.697 [IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator got value #16 
16:18:07.697 [Thread-21] DEBUG o.a.hadoop.ipc.ProtobufRpcEngine - Call: rename took 2ms 
16:18:07.698 [IPC Parameter Sending Thread #0] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator sending #17 
16:18:07.699 [IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator got value #17 
16:18:07.699 [Thread-21] DEBUG o.a.hadoop.ipc.ProtobufRpcEngine - Call: delete took 1ms 
16:18:07.701 [Thread-21] DEBUG org.apache.hadoop.hdfs.DFSClient - /output/mytable/_SUCCESS: masked=rw-r--r-- 
16:18:07.701 [IPC Parameter Sending Thread #0] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator sending #18 
16:18:07.703 [IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator got value #18 
16:18:07.703 [Thread-21] DEBUG o.a.hadoop.ipc.ProtobufRpcEngine - Call: create took 2ms 
16:18:07.703 [Thread-21] DEBUG org.apache.hadoop.hdfs.DFSClient - computePacketChunkSize: src=/output/mytable/_SUCCESS, chunkSize=516, chunksPerPacket=127, packetSize=65532 
16:18:07.703 [Thread-21] DEBUG org.apache.hadoop.hdfs.DFSClient - Waiting for ack for: -1 
16:18:07.703 [IPC Parameter Sending Thread #0] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator sending #19 
16:18:07.704 [IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator got value #19 
16:18:07.704 [Thread-21] DEBUG o.a.hadoop.ipc.ProtobufRpcEngine - Call: complete took 1ms 
16:18:07.739 [Thread-21] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:331) 
16:18:07.823 [main] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:323) 
16:18:07.823 [main] INFO  org.apache.hadoop.mapreduce.Job - Job job_local145551582_0001 running in uber mode : false 
16:18:07.841 [main] INFO  org.apache.hadoop.mapreduce.Job -  map 100% reduce 100% 
16:18:07.841 [main] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Job.getTaskCompletionEvents(Job.java:677) 
16:18:07.841 [main] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:323) 
16:18:07.841 [main] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:323) 
16:18:07.841 [main] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Job.getTaskCompletionEvents(Job.java:677) 
16:18:07.841 [main] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:323) 
16:18:07.841 [main] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:323) 
16:18:07.841 [main] INFO  org.apache.hadoop.mapreduce.Job - Job job_local145551582_0001 completed successfully 
16:18:07.842 [main] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Job.getCounters(Job.java:765) 
16:18:07.866 [main] INFO  org.apache.hadoop.mapreduce.Job - Counters: 51 
	File System Counters 
		FILE: Number of bytes read=47114022 
		FILE: Number of bytes written=48157608 
		FILE: Number of read operations=0 
		FILE: Number of large read operations=0 
		FILE: Number of write operations=0 
		HDFS: Number of bytes read=0 
		HDFS: Number of bytes written=62 
		HDFS: Number of read operations=7 
		HDFS: Number of large read operations=0 
		HDFS: Number of write operations=4 
	Map-Reduce Framework 
		Map input records=4 
		Map output records=8 
		Map output bytes=78 
		Map output materialized bytes=100 
		Input split bytes=115 
		Combine input records=0 
		Combine output records=0 
		Reduce input groups=6 
		Reduce shuffle bytes=100 
		Reduce input records=8 
		Reduce output records=8 
		Spilled Records=16 
		Shuffled Maps =1 
		Failed Shuffles=0 
		Merged Map outputs=1 
		GC time elapsed (ms)=0 
		CPU time spent (ms)=0 
		Physical memory (bytes) snapshot=0 
		Virtual memory (bytes) snapshot=0 
		Total committed heap usage (bytes)=881852416 
	HBase Counters 
		BYTES_IN_REMOTE_RESULTS=196 
		BYTES_IN_RESULTS=196 
		MILLIS_BETWEEN_NEXTS=39 
		NOT_SERVING_REGION_EXCEPTION=0 
		NUM_SCANNER_RESTARTS=0 
		NUM_SCAN_RESULTS_STALE=0 
		REGIONS_SCANNED=1 
		REMOTE_RPC_CALLS=1 
		REMOTE_RPC_RETRIES=0 
		ROWS_FILTERED=0 
		ROWS_SCANNED=4 
		RPC_CALLS=1 
		RPC_RETRIES=0 
	Shuffle Errors 
		BAD_ID=0 
		CONNECTION=0 
		IO_ERROR=0 
		WRONG_LENGTH=0 
		WRONG_MAP=0 
		WRONG_REDUCE=0 
	File Input Format Counters  
		Bytes Read=0 
	File Output Format Counters  
		Bytes Written=62 
16:18:07.866 [main] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:Administrator (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:323) 
16:18:07.867 [Thread-3] DEBUG org.apache.hadoop.ipc.Client - stopping client from cache: org.apache.hadoop.ipc.Client@aafcffa 
16:18:07.867 [Thread-3] DEBUG org.apache.hadoop.ipc.Client - removing client from cache: org.apache.hadoop.ipc.Client@aafcffa 
16:18:07.867 [Thread-3] DEBUG org.apache.hadoop.ipc.Client - stopping actual client because no more references remain: org.apache.hadoop.ipc.Client@aafcffa 
16:18:07.867 [Thread-3] DEBUG org.apache.hadoop.ipc.Client - Stopping client 
16:18:07.868 [IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator: closed 
16:18:07.868 [IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1811922029) connection to /192.168.211.4:9000 from Administrator: stopped, remaining connections 0 

A walkthrough of this log output will appear in my next post. For now, check the output directory on HDFS:

[root@server4 hbase-1.4.0]# hdfs dfs -ls /output 
Found 2 items 
drwxr-xr-x   - Administrator supergroup          0 2018-12-22 16:18 /output/mytable 
drwxr-xr-x   - Administrator supergroup          0 2018-12-22 09:26 /output/wordCount 
[root@server4 hbase-1.4.0]# hdfs dfs -ls /output/mytable 
Found 2 items 
-rw-r--r--   3 Administrator supergroup          0 2018-12-22 16:18 /output/mytable/_SUCCESS 
-rw-r--r--   3 Administrator supergroup         62 2018-12-22 16:18 /output/mytable/part-r-00000 

As shown, a new directory has been produced: /output/mytable. It contains two files, part-r-00000 and _SUCCESS. Viewing the result:

[root@server4 hbase-1.4.0]# hdfs dfs -cat /output/mytable/part-r-00000 
hadoop	1 
hbase	1 
hello	1 
hello	1 
hello	1 
hi	1 
kafka	1 
spark	1 

Clearly no WordCount aggregation was performed here: every (word, 1) pair from the mapper was written out as-is, so the Reducer stage needs checking. It turns out the HBaseWCJob class never set a Reducer class for this job, so the mapper output was emitted directly. Add the following to the main method:

   //set the ReducerClass 
   job.setReducerClass(HBaseReducer.class); 
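The HBaseReducer class referenced above is not shown in this post; its reduce step is the standard WordCount summation over the 1s emitted by the mapper. A minimal plain-Java sketch of that logic (the method name and types are illustrative, not the actual Hadoop `Reducer` signature):

```java
import java.util.Arrays;
import java.util.List;

public class ReduceSketch {
    // Mirrors what Reducer.reduce(Text key, Iterable<IntWritable> values, Context)
    // does in HBaseReducer: sum the 1s the mapper emitted for one word.
    static int reduce(String key, List<Integer> values) {
        int sum = 0;
        for (int v : values) {
            sum += v;
        }
        return sum;
    }

    public static void main(String[] args) {
        // "hello" was emitted three times by the mapper, so its count is 3.
        System.out.println("hello\t" + reduce("hello", Arrays.asList(1, 1, 1)));
    }
}
```

In the real job, Hadoop's shuffle phase groups the mapper's (word, 1) pairs by key before this summation runs.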

After running the job again, the output is:

[root@server4 hbase-1.4.0]# hdfs dfs -cat /output/mytable/part-r-00000 
hadoop	1 
hbase	1 
hello	3 
hi	1 
kafka	1 
spark	1 

5. Summary

This example has the following key points:
1. When a MapReduce job's data source is an HBase table, the mapper must extend the entry point HBase provides: TableMapper.
2. Since the data lives in HBase, it must be read out before the MapReduce analysis can run. This means a table Scan (plain HBase API), after which the TableMapReduceUtil utility class initializes the mapper job.
3. The Mapper and Reducer implementations themselves are no different from an ordinary MapReduce job.
4. The full project source code is in hadoopDemo on my GitHub.
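The whole flow summarized above, scanning rows out of HBase, splitting each cell value into words in the TableMapper, then summing per word in the reducer, can be sketched without Hadoop in a few lines of plain Java. The four input rows below are assumed for illustration; the post shows only the final counts, not the table contents:

```java
import java.util.Map;
import java.util.TreeMap;

public class PipelineSketch {
    // Map phase: split each cell value into words;
    // shuffle + reduce phase: group by word and sum the counts.
    static Map<String, Integer> wordCount(String[] rows) {
        Map<String, Integer> counts = new TreeMap<>();  // TreeMap keeps keys sorted, like the job output
        for (String row : rows) {
            for (String word : row.split(" ")) {
                counts.merge(word, 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        // Assumed contents of the 4 scanned rows (8 words, matching the job counters).
        String[] rows = {"hello hadoop", "hello hbase", "hello spark", "hi kafka"};
        wordCount(rows).forEach((w, c) -> System.out.println(w + "\t" + c));
    }
}
```

With these assumed rows the sketch reproduces the final part-r-00000 shown above: 4 input records, 8 mapper pairs, 6 distinct words, and hello counted 3 times.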

