I am using this for removing duplicate lines:
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class DLines
{
    public static class TokenCounterMapper extends Mapper<Object, Text, Text, IntWritable>
    {
        private final static IntWritable one = new IntWritable(1);

        @Override
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException
        {
            // Emit the whole line as the key so identical lines are grouped in one reduce call
            context.write(value, one);
        }
    }

    public static class TokenCounterReducer extends Reducer<Text, IntWritable, Text, IntWritable>
    {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException
        {
            // Count how many times this line occurred in the input
            int sum = 0;
            for (IntWritable value : values)
            {
                sum += value.get();
            }
            // Keep only lines that appear exactly once
            if (sum < 2)
            {
                context.write(key, new IntWritable(sum));
            }
        }
    }
}
I have to store only the key in HDFS.
If you do not require a value from your reducer, just use NullWritable.
You can simply write context.write(key, NullWritable.get());
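For example, a minimal sketch of the reducer with NullWritable as the output value type (this reuses the class and field names from the question and assumes you still want to keep only lines that occur once; you also need to import org.apache.hadoop.io.NullWritable):

public static class TokenCounterReducer extends Reducer<Text, IntWritable, Text, NullWritable>
{
    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException
    {
        int sum = 0;
        for (IntWritable value : values)
        {
            sum += value.get();
        }
        if (sum < 2)
        {
            // Only the line (the key) is written to HDFS; NullWritable produces no value column
            context.write(key, NullWritable.get());
        }
    }
}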
In your driver, you could also set
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(IntWritable.class);
and
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(NullWritable.class);