Using FileSystem, the core HDFS class in Hadoop

Posted by 李狗蛋


1. Import the jar packages. To use HDFS from Java, you need the 3 jars under hadoop-2.7.7/share/hadoop/common plus the dependency jars in its lib directory, and the 3 jars under hadoop-2.7.7/share/hadoop/hdfs plus the dependency jars in its lib directory.
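Alternatively, if the project is built with Maven, the same dependencies can be declared instead of copying jars by hand. A sketch (the version should match your cluster; hadoop-common and hadoop-hdfs pull in their lib dependencies transitively):

```xml
<dependencies>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>2.7.7</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>2.7.7</version>
    </dependency>
</dependencies>
```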

2. -ls: list all files and directories under a path

    // Imports used throughout these examples
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.junit.Test;

    @Test
    public void listStatus() {
        Configuration conf = new Configuration();
        // Using HDFS as the file system means the client reads core-site.xml.
        // Here we set the core-site.xml property fs.defaultFS to hdfs://192.168.xx.xx:9000.
        // Replace this with your own NameNode IP address.
        conf.set("fs.defaultFS", "hdfs://192.168.xx.xx:9000");
        try {
            FileSystem fileSystem = FileSystem.get(conf);
            // The path to list
            FileStatus[] listStatus = fileSystem.listStatus(new Path("/"));
            for (int i = 0; i < listStatus.length; ++i) {
                String dpath = listStatus[i].getPath().toString();
                System.out.println(dpath);
            }
            fileSystem.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

3. -lsr (or -ls -R): list recursively

    @Test
    public void lsrtest() {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://192.168.xx.xx:9000");
        try {
            FileSystem fileSystem = FileSystem.get(conf);
            // Root of the recursive listing
            lsr(fileSystem, "/");
            fileSystem.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // Recursively prints every file and directory under path.
    // The FileSystem is passed in so it is opened and closed only once:
    // FileSystem.get returns a cached instance, so calling close() inside
    // the recursion would also close it for the caller.
    public static void lsr(FileSystem fileSystem, String path) throws IOException {
        FileStatus[] listStatus = fileSystem.listStatus(new Path(path));
        for (int i = 0; i < listStatus.length; ++i) {
            String abpath = listStatus[i].getPath().toString();
            System.out.println(abpath);
            if (listStatus[i].isDirectory()) {
                lsr(fileSystem, abpath);
            }
        }
    }
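As a side note (an assumption based on the Hadoop 2.x FileSystem API, not part of the original walkthrough): FileSystem.listFiles(path, recursive) can replace the hand-written recursion, though it returns only files, never directories. A minimal sketch:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class ListFilesDemo {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://192.168.xx.xx:9000");
        FileSystem fileSystem = FileSystem.get(conf);
        // recursive = true: the iterator walks subdirectories itself
        RemoteIterator<LocatedFileStatus> files =
                fileSystem.listFiles(new Path("/"), true);
        while (files.hasNext()) {
            System.out.println(files.next().getPath().toString());
        }
        fileSystem.close();
    }
}
```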

4. -mkdir: create a directory

    @Test
    public void mkdir() {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://192.168.xx.xx:9000");
        try {
            FileSystem fileSystem = FileSystem.get(conf);
            // mkdirs creates all missing parent directories, like mkdir -p
            boolean mkdirs = fileSystem.mkdirs(new Path("/lyx02/lyx002/lyx0002"));
            System.out.println(mkdirs ? "created successfully" : "creation failed");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

5. -touchz: create an empty file

    @Test
    public void createNewFile() {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://192.168.xx.xx:9000");
        try {
            FileSystem fileSystem = FileSystem.get(conf);
            boolean createNewFile = fileSystem.createNewFile(new Path("/lyx02/lyx002/22.txt"));
            System.out.println(createNewFile ? "created successfully" : "creation failed");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

6. -get: download a file from HDFS to the local machine

    @Test
    public void get() {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://192.168.xx.xx:9000");
        try {
            FileSystem fileSystem = FileSystem.get(conf);
            // Source path on HDFS
            FSDataInputStream in = fileSystem.open(new Path("/1.txt"));
            FileOutputStream destFile = new FileOutputStream(new File("D:\\111.txt"));
            BufferedOutputStream out = new BufferedOutputStream(destFile);

            // Copy through an 8 KB buffer
            int count = -1;
            byte[] buffer = new byte[1024 * 8];
            while ((count = in.read(buffer)) != -1) {
                out.write(buffer, 0, count);
            }
            in.close();
            out.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

7. -put: upload a local file to HDFS

    @Test
    public void put() {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://192.168.xx.xx:9000");
        try {
            FileSystem fileSystem = FileSystem.get(conf);
            // Destination path on HDFS
            FSDataOutputStream out = fileSystem.create(new Path("/555.txt"));
            FileInputStream srcFile = new FileInputStream(new File("D:\\111.txt"));
            BufferedInputStream in = new BufferedInputStream(srcFile);

            int count = -1;
            byte[] buffer = new byte[1024 * 8];
            while ((count = in.read(buffer)) != -1) {
                out.write(buffer, 0, count);
            }
            // Close the streams so the data is actually flushed to HDFS
            in.close();
            out.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

8. copyFromLocalFile: upload a local file to HDFS (the API equivalent of -put)

    @Test
    public void copyFromLocalFile() {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://192.168.xx.xx:9000");
        try {
            FileSystem fileSystem = FileSystem.get(conf);
            fileSystem.copyFromLocalFile(new Path("D:\\111.txt"), new Path("/666.txt"));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

9. copyToLocalFile: download an HDFS file to the local machine (the API equivalent of -get)

    @Test
    public void copyToLocalFile() {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://192.168.xx.xx:9000");
        try {
            FileSystem fileSystem = FileSystem.get(conf);
            // delSrc = false: keep the source file on HDFS;
            // useRawLocalFileSystem = true: skip writing a local .crc checksum file
            fileSystem.copyToLocalFile(false, new Path("/666.txt"), new Path("D:\\666.txt"), true);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
