“java.io.IOException: Pass a Delete or a Put” when reading HDFS and storing HBase

Submitted by 不羁的心 on 2019-12-23 04:20:13

Question


I have been struggling with this error for a week. There is a post describing the same problem, Pass a Delete or a Put error in hbase mapreduce, but its resolution did not work for me.

My Driver:

    Configuration conf = HBaseConfiguration.create();
    Job job;
    try {
        job = new Job(conf, "Training");
        job.setJarByClass(TrainingDriver.class);
        job.setMapperClass(TrainingMapper.class);
        job.setMapOutputKeyClass(LongWritable.class);
        job.setMapOutputValueClass(Text.class);
        FileInputFormat.setInputPaths(job, new Path("my/path"));
        Scan scan = new Scan();
        scan.setCaching(500);        // 1 is the default in Scan, which will be bad for MapReduce jobs
        scan.setCacheBlocks(false);  // don't set to true for MR jobs
        // set other scan attrs
        TableMapReduceUtil.initTableReducerJob(Constants.PREFIX_TABLE,
                TrainingReducer.class, job);
        job.setReducerClass(TrainingReducer.class);
        //job.setNumReduceTasks(1);   // at least one, adjust as required
        try {
            job.waitForCompletion(true);
        } catch (ClassNotFoundException | InterruptedException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }

    } catch (IOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }

My Mapper:

public class TrainingMapper extends
        Mapper<LongWritable, Text, LongWritable, Text> {

    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        context.write(key, new Text(generateNewText()));
    }
}

My Reducer:

public class TrainingReducer extends TableReducer<LongWritable, Text, ImmutableBytesWritable> {

    public void reduce(LongWritable key, Iterator<Text> values, Context context)
            throws IOException {
        while (values.hasNext()) {
            try {
                Put put = new Put(Bytes.toBytes(key.toString()));
                put.add("cf1".getBytes(), "c1".getBytes(), values.next().getBytes());
                context.write(null, put);
            } catch (InterruptedException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }
    }
}

Has anyone run into this before? Please tell me how to fix it.


Answer 1:


I found the solution myself.

Add the @Override annotation before the reduce method and change its second parameter from Iterator<Text> to Iterable<Text>:

    @Override
    public void reduce(LongWritable key, Iterable<Text> values, Context context)

Without this, the signature does not match Reducer.reduce(KEYIN, Iterable<VALUEIN>, Context), so Hadoop never calls your method; the default identity reduce runs instead and passes the raw Text values to TableOutputFormat, which accepts only a Put or a Delete, hence the exception. The @Override annotation makes the compiler catch exactly this kind of signature mismatch.
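A minimal sketch of the corrected reducer, keeping the cf1:c1 column family and qualifier from the question (put.add matches the older HBase API used there; newer versions use addColumn):

```java
import java.io.IOException;

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;

public class TrainingReducer
        extends TableReducer<LongWritable, Text, ImmutableBytesWritable> {

    @Override  // now matches Reducer.reduce, so Hadoop actually calls it
    public void reduce(LongWritable key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        for (Text value : values) {
            // Row key: string form of the map-output key
            Put put = new Put(Bytes.toBytes(key.toString()));
            // Bytes.toBytes(value.toString()) avoids Text.getBytes(),
            // whose backing array may be longer than the actual content
            put.add(Bytes.toBytes("cf1"), Bytes.toBytes("c1"),
                    Bytes.toBytes(value.toString()));
            context.write(null, put);
        }
    }
}
```

Declaring throws InterruptedException also removes the need for the try/catch around context.write, since the overridden method is allowed to propagate it.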



Source: https://stackoverflow.com/questions/21887050/java-io-ioexception-pass-a-delete-or-a-put-when-reading-hdfs-and-storing-hbas
