Hadoop MapReduce: how to store only values in HDFS

Date: 2023-05-04

Question


I am using the following code to remove duplicate lines:

 import java.io.IOException;
 import org.apache.hadoop.io.IntWritable;
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.mapreduce.Mapper;
 import org.apache.hadoop.mapreduce.Reducer;

 public class DLines
 {
     public static class TokenCounterMapper extends Mapper<Object, Text, Text, IntWritable>
     {
         private final static IntWritable one = new IntWritable(1);

         @Override
         public void map(Object key, Text value, Context context) throws IOException, InterruptedException
         {
             // Emit the whole line as the key so identical lines reach the same reduce call.
             context.write(value, one);
         }
     }

     public static class TokenCounterReducer extends Reducer<Text, IntWritable, Text, IntWritable>
     {
         @Override
         public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException
         {
             int sum = 0;
             for (IntWritable value : values)
             {
                 sum += value.get();
             }
             // Keep only lines that occur exactly once.
             if (sum < 2)
             {
                 context.write(key, new IntWritable(sum));
             }
         }
     }
 }

I have to store only the key in HDFS.

Answer

If you do not require the value from your reducer, just use NullWritable.

You can simply write context.write(key, NullWritable.get());
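Applied to the reducer from the question, that change looks roughly like this (a sketch, not tested on a cluster; the class name TokenCounterReducer is kept from the original, and the output value type in the class declaration must change to NullWritable as well):

```java
public static class TokenCounterReducer extends Reducer<Text, IntWritable, Text, NullWritable>
{
    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException
    {
        int sum = 0;
        for (IntWritable value : values)
        {
            sum += value.get();
        }
        // Emit only the line itself; NullWritable suppresses the value in the output file.
        if (sum < 2)
        {
            context.write(key, NullWritable.get());
        }
    }
}
```

Note that this also requires importing org.apache.hadoop.io.NullWritable.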

In your driver, you could also set

 job.setMapOutputKeyClass(Text.class);
 job.setMapOutputValueClass(IntWritable.class);

and

 job.setOutputKeyClass(Text.class);
 job.setOutputValueClass(NullWritable.class);
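The end-to-end effect is easy to check outside Hadoop. The plain-Java sketch below (class and method names are illustrative, not part of the original job) mimics the shuffle by counting identical lines, then keeps only the keys that occur once, as the reducer does:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DedupSketch {
    // Mimics map + shuffle: count how many times each line occurs.
    static Map<String, Integer> count(List<String> lines) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (String line : lines) {
            counts.merge(line, 1, Integer::sum);
        }
        return counts;
    }

    // Mimics the reducer with NullWritable: emit only the key, and only
    // when the line occurred fewer than 2 times (i.e. it is unique).
    static List<String> uniqueLines(List<String> lines) {
        List<String> out = new ArrayList<>();
        for (Map.Entry<String, Integer> e : count(lines).entrySet()) {
            if (e.getValue() < 2) {
                out.add(e.getKey());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> input = Arrays.asList("apple", "banana", "apple", "cherry");
        System.out.println(uniqueLines(input)); // [banana, cherry]
    }
}
```

The output contains nothing but the surviving lines themselves, which is exactly what the NullWritable value type achieves in the real job's output files.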


