IO Exception while running K-means clustering using mahout and hadoop jar [closed]

Posted: 2013-05-25 13:02:02

Question:

I am trying to run a clustering program using Mahout. Below is the Java code I am using:

package com;

import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;
import org.apache.mahout.clustering.WeightedVectorWritable;
import org.apache.mahout.clustering.kmeans.Cluster;
import org.apache.mahout.clustering.kmeans.KMeansDriver;
import org.apache.mahout.common.distance.EuclideanDistanceMeasure;
import org.apache.mahout.math.RandomAccessSparseVector;
import org.apache.mahout.math.Vector;
import org.apache.mahout.math.VectorWritable;

public class ClusteringDemo {

    public static final double[][] points = { { 1, 1 }, { 2, 1 }, { 1, 2 },
            { 2, 2 }, { 3, 3 }, { 8, 8 }, { 9, 8 }, { 8, 9 }, { 9, 9 } };

    // Write the input vectors to a SequenceFile so the MapReduce job can read them.
    public static void writePointsToFile(List<Vector> points, String fileName,
            FileSystem fs, Configuration conf) throws IOException {
        Path path = new Path(fileName);
        SequenceFile.Writer writer = new SequenceFile.Writer(fs, conf, path,
                LongWritable.class, VectorWritable.class);
        long recNum = 0;
        VectorWritable vec = new VectorWritable();
        for (Vector point : points) {
            vec.set(point);
            writer.append(new LongWritable(recNum++), vec);
        }
        writer.close();
    }

    // Convert the raw 2-D points into Mahout Vectors.
    public static List<Vector> getPoints(double[][] raw) {
        List<Vector> points = new ArrayList<Vector>();
        for (int i = 0; i < raw.length; i++) {
            double[] fr = raw[i];
            Vector vec = new RandomAccessSparseVector(fr.length);
            vec.assign(fr);
            points.add(vec);
        }
        return points;
    }

    public static void main(String args[]) throws Exception {
        int k = 3;
        List<Vector> vectors = getPoints(points);
        File testData = new File("/home/vishal/testdata");
        if (!testData.exists()) {
            testData.mkdir();
        }
        testData = new File("/home/vishal/testdata/points");
        if (!testData.exists()) {
            testData.mkdir();
        }
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        writePointsToFile(vectors, "/home/vishal/testdata/points/file1", fs,
                conf);

        // Seed the job with k initial cluster centers.
        Path path = new Path("/home/vishal/testdata/clusters/part-00000");
        SequenceFile.Writer writer = new SequenceFile.Writer(fs, conf, path,
                Text.class, Cluster.class);
        for (int i = 0; i < k; i++) {
            Vector vec = vectors.get(i);
            Cluster cluster = new Cluster(vec, i,
                    new EuclideanDistanceMeasure());
            writer.append(new Text(cluster.getIdentifier()), cluster);
        }
        writer.close();

        // Run k-means with a convergence delta of 0.001 and at most 10 iterations.
        KMeansDriver.run(conf, new Path("/home/vishal/testdata/points"),
                new Path("/home/vishal/testdata/clusters"), new Path(
                        "/home/vishal/output"), new EuclideanDistanceMeasure(),
                0.001, 10, true, false);

        // Read the clustered points back and print each point's cluster id.
        SequenceFile.Reader reader = new SequenceFile.Reader(fs, new Path(
                "/home/vishal/output/" + Cluster.CLUSTERED_POINTS_DIR
                        + "/part-m-00000"), conf);
        IntWritable key = new IntWritable();
        WeightedVectorWritable value = new WeightedVectorWritable();
        while (reader.next(key, value)) {
            System.out.println(value.toString() + " belongs to cluster "
                    + key.toString());
        }
        reader.close();
    }
}

But when I run it, it starts executing normally and then fails at the end with an error. Here is the stack trace I get:

13/05/30 09:49:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13/05/30 09:49:22 INFO kmeans.KMeansDriver: Input: /home/vishal/testdata/points Clusters In: /home/vishal/testdata/clusters Out: /home/vishal/output Distance: org.apache.mahout.common.distance.EuclideanDistanceMeasure
13/05/30 09:49:22 INFO kmeans.KMeansDriver: convergence: 0.0010 max Iterations: 10 num Reduce Tasks: org.apache.mahout.math.VectorWritable Input Vectors: 
13/05/30 09:49:22 INFO kmeans.KMeansDriver: K-Means Iteration 1
13/05/30 09:49:22 INFO common.HadoopUtil: Deleting /home/vishal/output/clusters-1
13/05/30 09:49:23 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/05/30 09:49:23 INFO input.FileInputFormat: Total input paths to process : 1
13/05/30 09:49:23 INFO mapred.JobClient: Running job: job_local_0001
13/05/30 09:49:23 INFO util.ProcessTree: setsid exited with exit code 0
13/05/30 09:49:23 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@15fc40c
13/05/30 09:49:23 INFO mapred.MapTask: io.sort.mb = 100
13/05/30 09:49:23 INFO mapred.MapTask: data buffer = 79691776/99614720
13/05/30 09:49:23 INFO mapred.MapTask: record buffer = 262144/327680
13/05/30 09:49:23 INFO mapred.MapTask: Starting flush of map output
13/05/30 09:49:23 INFO mapred.MapTask: Finished spill 0
13/05/30 09:49:23 INFO mapred.Task: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
13/05/30 09:49:24 INFO mapred.JobClient:  map 0% reduce 0%
13/05/30 09:49:26 INFO mapred.LocalJobRunner: 
13/05/30 09:49:26 INFO mapred.Task: Task 'attempt_local_0001_m_000000_0' done.
13/05/30 09:49:26 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@15ed659
13/05/30 09:49:26 INFO mapred.LocalJobRunner: 
13/05/30 09:49:26 INFO mapred.Merger: Merging 1 sorted segments
13/05/30 09:49:26 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 185 bytes
13/05/30 09:49:26 INFO mapred.LocalJobRunner: 
13/05/30 09:49:26 INFO mapred.Task: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting
13/05/30 09:49:26 INFO mapred.LocalJobRunner: 
13/05/30 09:49:26 INFO mapred.Task: Task attempt_local_0001_r_000000_0 is allowed to commit now
13/05/30 09:49:26 INFO output.FileOutputCommitter: Saved output of task 'attempt_local_0001_r_000000_0' to /home/vishal/output/clusters-1
13/05/30 09:49:27 INFO mapred.JobClient:  map 100% reduce 0%
13/05/30 09:49:29 INFO mapred.LocalJobRunner: reduce > reduce
13/05/30 09:49:29 INFO mapred.Task: Task 'attempt_local_0001_r_000000_0' done.
13/05/30 09:49:30 INFO mapred.JobClient:  map 100% reduce 100%
13/05/30 09:49:30 INFO mapred.JobClient: Job complete: job_local_0001
13/05/30 09:49:30 INFO mapred.JobClient: Counters: 21
13/05/30 09:49:30 INFO mapred.JobClient:   File Output Format Counters 
13/05/30 09:49:30 INFO mapred.JobClient:     Bytes Written=474
13/05/30 09:49:30 INFO mapred.JobClient:   Clustering
13/05/30 09:49:30 INFO mapred.JobClient:     Converged Clusters=1
13/05/30 09:49:30 INFO mapred.JobClient:   FileSystemCounters
13/05/30 09:49:30 INFO mapred.JobClient:     FILE_BYTES_READ=3328461
13/05/30 09:49:30 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=3422872
13/05/30 09:49:30 INFO mapred.JobClient:   File Input Format Counters 
13/05/30 09:49:30 INFO mapred.JobClient:     Bytes Read=443
13/05/30 09:49:30 INFO mapred.JobClient:   Map-Reduce Framework
13/05/30 09:49:30 INFO mapred.JobClient:     Map output materialized bytes=189
13/05/30 09:49:30 INFO mapred.JobClient:     Map input records=9
13/05/30 09:49:30 INFO mapred.JobClient:     Reduce shuffle bytes=0
13/05/30 09:49:30 INFO mapred.JobClient:     Spilled Records=6
13/05/30 09:49:30 INFO mapred.JobClient:     Map output bytes=531
13/05/30 09:49:30 INFO mapred.JobClient:     Total committed heap usage (bytes)=325713920
13/05/30 09:49:30 INFO mapred.JobClient:     CPU time spent (ms)=0
13/05/30 09:49:30 INFO mapred.JobClient:     SPLIT_RAW_BYTES=104
13/05/30 09:49:30 INFO mapred.JobClient:     Combine input records=9
13/05/30 09:49:30 INFO mapred.JobClient:     Reduce input records=3
13/05/30 09:49:30 INFO mapred.JobClient:     Reduce input groups=3
13/05/30 09:49:30 INFO mapred.JobClient:     Combine output records=3
13/05/30 09:49:30 INFO mapred.JobClient:     Physical memory (bytes) snapshot=0
13/05/30 09:49:30 INFO mapred.JobClient:     Reduce output records=3
13/05/30 09:49:30 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=0
13/05/30 09:49:30 INFO mapred.JobClient:     Map output records=9
13/05/30 09:49:30 INFO kmeans.KMeansDriver: K-Means Iteration 2
13/05/30 09:49:30 INFO common.HadoopUtil: Deleting /home/vishal/output/clusters-2
13/05/30 09:49:30 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/05/30 09:49:30 INFO input.FileInputFormat: Total input paths to process : 1
13/05/30 09:49:30 INFO mapred.JobClient: Running job: job_local_0002
13/05/30 09:49:30 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@13f136e
13/05/30 09:49:30 INFO mapred.MapTask: io.sort.mb = 100
13/05/30 09:49:30 INFO mapred.MapTask: data buffer = 79691776/99614720
13/05/30 09:49:30 INFO mapred.MapTask: record buffer = 262144/327680
13/05/30 09:49:30 INFO mapred.MapTask: Starting flush of map output
13/05/30 09:49:30 INFO mapred.MapTask: Finished spill 0
13/05/30 09:49:30 INFO mapred.Task: Task:attempt_local_0002_m_000000_0 is done. And is in the process of commiting
13/05/30 09:49:31 INFO mapred.JobClient:  map 0% reduce 0%
13/05/30 09:49:33 INFO mapred.LocalJobRunner: 
13/05/30 09:49:33 INFO mapred.Task: Task 'attempt_local_0002_m_000000_0' done.
13/05/30 09:49:33 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@d6b059
13/05/30 09:49:33 INFO mapred.LocalJobRunner: 
13/05/30 09:49:33 INFO mapred.Merger: Merging 1 sorted segments
13/05/30 09:49:33 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 124 bytes
13/05/30 09:49:33 INFO mapred.LocalJobRunner: 
13/05/30 09:49:33 INFO mapred.Task: Task:attempt_local_0002_r_000000_0 is done. And is in the process of commiting
13/05/30 09:49:33 INFO mapred.LocalJobRunner: 
13/05/30 09:49:33 INFO mapred.Task: Task attempt_local_0002_r_000000_0 is allowed to commit now
13/05/30 09:49:33 INFO output.FileOutputCommitter: Saved output of task 'attempt_local_0002_r_000000_0' to /home/vishal/output/clusters-2
13/05/30 09:49:34 INFO mapred.JobClient:  map 100% reduce 0%
13/05/30 09:49:36 INFO mapred.LocalJobRunner: reduce > reduce
13/05/30 09:49:36 INFO mapred.Task: Task 'attempt_local_0002_r_000000_0' done.
13/05/30 09:49:37 INFO mapred.JobClient:  map 100% reduce 100%
13/05/30 09:49:37 INFO mapred.JobClient: Job complete: job_local_0002
13/05/30 09:49:37 INFO mapred.JobClient: Counters: 20
13/05/30 09:49:37 INFO mapred.JobClient:   File Output Format Counters 
13/05/30 09:49:37 INFO mapred.JobClient:     Bytes Written=364
13/05/30 09:49:37 INFO mapred.JobClient:   FileSystemCounters
13/05/30 09:49:37 INFO mapred.JobClient:     FILE_BYTES_READ=6658544
13/05/30 09:49:37 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=6844248
13/05/30 09:49:37 INFO mapred.JobClient:   File Input Format Counters 
13/05/30 09:49:37 INFO mapred.JobClient:     Bytes Read=443
13/05/30 09:49:37 INFO mapred.JobClient:   Map-Reduce Framework
13/05/30 09:49:37 INFO mapred.JobClient:     Map output materialized bytes=128
13/05/30 09:49:37 INFO mapred.JobClient:     Map input records=9
13/05/30 09:49:37 INFO mapred.JobClient:     Reduce shuffle bytes=0
13/05/30 09:49:37 INFO mapred.JobClient:     Spilled Records=4
13/05/30 09:49:37 INFO mapred.JobClient:     Map output bytes=531
13/05/30 09:49:37 INFO mapred.JobClient:     Total committed heap usage (bytes)=525074432
13/05/30 09:49:37 INFO mapred.JobClient:     CPU time spent (ms)=0
13/05/30 09:49:37 INFO mapred.JobClient:     SPLIT_RAW_BYTES=104
13/05/30 09:49:37 INFO mapred.JobClient:     Combine input records=9
13/05/30 09:49:37 INFO mapred.JobClient:     Reduce input records=2
13/05/30 09:49:37 INFO mapred.JobClient:     Reduce input groups=2
13/05/30 09:49:37 INFO mapred.JobClient:     Combine output records=2
13/05/30 09:49:37 INFO mapred.JobClient:     Physical memory (bytes) snapshot=0
13/05/30 09:49:37 INFO mapred.JobClient:     Reduce output records=2
13/05/30 09:49:37 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=0
13/05/30 09:49:37 INFO mapred.JobClient:     Map output records=9
13/05/30 09:49:37 INFO kmeans.KMeansDriver: K-Means Iteration 3
13/05/30 09:49:37 INFO common.HadoopUtil: Deleting /home/vishal/output/clusters-3
13/05/30 09:49:37 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/05/30 09:49:37 INFO input.FileInputFormat: Total input paths to process : 1
13/05/30 09:49:37 INFO mapred.JobClient: Running job: job_local_0003
13/05/30 09:49:37 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@988707
13/05/30 09:49:37 INFO mapred.MapTask: io.sort.mb = 100
13/05/30 09:49:37 INFO mapred.MapTask: data buffer = 79691776/99614720
13/05/30 09:49:37 INFO mapred.MapTask: record buffer = 262144/327680
13/05/30 09:49:37 INFO mapred.MapTask: Starting flush of map output
13/05/30 09:49:37 INFO mapred.MapTask: Finished spill 0
13/05/30 09:49:37 INFO mapred.Task: Task:attempt_local_0003_m_000000_0 is done. And is in the process of commiting
13/05/30 09:49:38 INFO mapred.JobClient:  map 0% reduce 0%
13/05/30 09:49:40 INFO mapred.LocalJobRunner: 
13/05/30 09:49:40 INFO mapred.Task: Task 'attempt_local_0003_m_000000_0' done.
13/05/30 09:49:40 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@6214f5
13/05/30 09:49:40 INFO mapred.LocalJobRunner: 
13/05/30 09:49:40 INFO mapred.Merger: Merging 1 sorted segments
13/05/30 09:49:40 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 124 bytes
13/05/30 09:49:40 INFO mapred.LocalJobRunner: 
13/05/30 09:49:40 INFO mapred.Task: Task:attempt_local_0003_r_000000_0 is done. And is in the process of commiting
13/05/30 09:49:40 INFO mapred.LocalJobRunner: 
13/05/30 09:49:40 INFO mapred.Task: Task attempt_local_0003_r_000000_0 is allowed to commit now
13/05/30 09:49:40 INFO output.FileOutputCommitter: Saved output of task 'attempt_local_0003_r_000000_0' to /home/vishal/output/clusters-3
13/05/30 09:49:41 INFO mapred.JobClient:  map 100% reduce 0%
13/05/30 09:49:43 INFO mapred.LocalJobRunner: reduce > reduce
13/05/30 09:49:43 INFO mapred.Task: Task 'attempt_local_0003_r_000000_0' done.
13/05/30 09:49:44 INFO mapred.JobClient:  map 100% reduce 100%
13/05/30 09:49:44 INFO mapred.JobClient: Job complete: job_local_0003
13/05/30 09:49:44 INFO mapred.JobClient: Counters: 21
13/05/30 09:49:44 INFO mapred.JobClient:   File Output Format Counters 
13/05/30 09:49:44 INFO mapred.JobClient:     Bytes Written=364
13/05/30 09:49:44 INFO mapred.JobClient:   Clustering
13/05/30 09:49:44 INFO mapred.JobClient:     Converged Clusters=2
13/05/30 09:49:44 INFO mapred.JobClient:   FileSystemCounters
13/05/30 09:49:44 INFO mapred.JobClient:     FILE_BYTES_READ=9988052
13/05/30 09:49:44 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=10265506
13/05/30 09:49:44 INFO mapred.JobClient:   File Input Format Counters 
13/05/30 09:49:44 INFO mapred.JobClient:     Bytes Read=443
13/05/30 09:49:44 INFO mapred.JobClient:   Map-Reduce Framework
13/05/30 09:49:44 INFO mapred.JobClient:     Map output materialized bytes=128
13/05/30 09:49:44 INFO mapred.JobClient:     Map input records=9
13/05/30 09:49:44 INFO mapred.JobClient:     Reduce shuffle bytes=0
13/05/30 09:49:44 INFO mapred.JobClient:     Spilled Records=4
13/05/30 09:49:44 INFO mapred.JobClient:     Map output bytes=531
13/05/30 09:49:44 INFO mapred.JobClient:     Total committed heap usage (bytes)=724434944
13/05/30 09:49:44 INFO mapred.JobClient:     CPU time spent (ms)=0
13/05/30 09:49:44 INFO mapred.JobClient:     SPLIT_RAW_BYTES=104
13/05/30 09:49:44 INFO mapred.JobClient:     Combine input records=9
13/05/30 09:49:44 INFO mapred.JobClient:     Reduce input records=2
13/05/30 09:49:44 INFO mapred.JobClient:     Reduce input groups=2
13/05/30 09:49:44 INFO mapred.JobClient:     Combine output records=2
13/05/30 09:49:44 INFO mapred.JobClient:     Physical memory (bytes) snapshot=0
13/05/30 09:49:44 INFO mapred.JobClient:     Reduce output records=2
13/05/30 09:49:44 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=0
13/05/30 09:49:44 INFO mapred.JobClient:     Map output records=9
Exception in thread "main" java.io.IOException: Target /home/vishal/output/clusters-3-final/clusters-3 is a directory
    at org.apache.hadoop.fs.FileUtil.checkDest(FileUtil.java:359)
    at org.apache.hadoop.fs.FileUtil.checkDest(FileUtil.java:361)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:211)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:163)
    at org.apache.hadoop.fs.RawLocalFileSystem.rename(RawLocalFileSystem.java:287)
    at org.apache.hadoop.fs.ChecksumFileSystem.rename(ChecksumFileSystem.java:425)
    at org.apache.mahout.clustering.kmeans.KMeansDriver.buildClustersMR(KMeansDriver.java:322)
    at org.apache.mahout.clustering.kmeans.KMeansDriver.buildClusters(KMeansDriver.java:239)
    at org.apache.mahout.clustering.kmeans.KMeansDriver.run(KMeansDriver.java:154)
    at com.ClusteringDemo.main(ClusteringDemo.java:80)

What could be the cause?

Thanks

Comments:

You really need to start reading error messages. Clearly there is a directory where one should not be.

Answer 1:

Here is what KMeansDriver is trying to do:

Path finalClustersIn = new Path(output, AbstractCluster.CLUSTERS_DIR + (iteration-1) + "-final");
FileSystem.get(conf).rename(new Path(output, AbstractCluster.CLUSTERS_DIR + (iteration-1)), finalClustersIn);

As you can see, it has converged after 3 iterations and is trying to rename the directory clusters-3, which holds the results of the third iteration, to clusters-3-final to indicate that it is done.

Now, FileSystem's rename method checks, before actually renaming, that it is not renaming onto a directory that already exists. And indeed, you seem to already have the directory clusters-3-final, probably left over from a previous run.

Deleting this directory should fix your problem. You can do it from the command line:

hadoop fs -rmr /home/vishal/output/clusters-3-final

Or, since it looks like you are running your job in local mode:

rm -rf /home/vishal/output/clusters-3-final
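
If you would rather handle this from the Java code, a minimal sketch (assuming Mahout's HadoopUtil helper, which the log above shows the driver itself using) is to clear the whole output directory before calling the driver:

    // Sketch: delete any leftover output from a previous run up front, so the
    // final rename to clusters-N-final cannot collide with an existing directory.
    org.apache.mahout.common.HadoopUtil.delete(conf, new Path("/home/vishal/output"));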

To avoid this kind of issue, I would advise using a unique output directory each time you run your analysis; for example, you could take the current date and time and append it to your output Path.
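
A minimal sketch of that idea, reusing the call from the question (System.currentTimeMillis() is just one simple way to get a per-run suffix):

    // Hypothetical per-run output directory: a timestamp suffix means no two
    // runs can collide on the same clusters-N-final directory.
    Path output = new Path("/home/vishal/output-" + System.currentTimeMillis());
    KMeansDriver.run(conf, new Path("/home/vishal/testdata/points"),
            new Path("/home/vishal/testdata/clusters"), output,
            new EuclideanDistanceMeasure(), 0.001, 10, true, false);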

Edit: regarding your second issue:

Exception in thread "main" java.io.IOException: wrong value class: 0.0: null is not class org.apache.mahout.clustering.WeightedPropertyVectorWritable
    at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1932)
    at com.ClusteringDemo.main(ClusteringDemo.java:90)

You are actually hitting a conflict between Mahout versions: older versions of Mahout used WeightedVectorWritable, while newer ones use WeightedPropertyVectorWritable. To fix it, simply change the declaration of the value variable from:

WeightedVectorWritable value = new WeightedVectorWritable();

to:

WeightedPropertyVectorWritable value = new WeightedPropertyVectorWritable();
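
With that change, the tail of main would read roughly as follows (a sketch; the package holding WeightedPropertyVectorWritable moved between Mahout releases, e.g. to org.apache.mahout.clustering.classify in later versions, so adjust the import to your version):

    // Sketch of the corrected read loop. The import below matches later Mahout
    // releases and is an assumption; check where the class lives in your version.
    // import org.apache.mahout.clustering.classify.WeightedPropertyVectorWritable;
    IntWritable key = new IntWritable();
    WeightedPropertyVectorWritable value = new WeightedPropertyVectorWritable();
    while (reader.next(key, value)) {
        System.out.println(value.toString() + " belongs to cluster " + key.toString());
    }
    reader.close();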

Comments:

Hi Charles, thank you. From your explanation I understood that the problem was caused by the directory already existing and that the program terminated at the third cluster. So I deleted the directory and ran the code again, but the same problem is still there. I guess I am doing something else wrong; could that be the case? I am really just a beginner at this, so maybe that is the source of my confusion. Thanks.

Actually, it looks like you are running your job in local mode. Could you delete this directory from your local disk and try again?

Thanks, got it. But now it throws the following exception: Exception in thread "main" java.io.IOException: wrong value class: 0.0: null is not class org.apache.mahout.clustering.WeightedPropertyVectorWritable at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1932) at com.ClusteringDemo.main(ClusteringDemo.java:90)

Your K-Means job is now completing successfully; you are only having trouble reading its output. Can you check whether /home/vishal/output/clusteredPoints/part-m-00000 exists?
