Introduction
Author: 彭东林
U-Boot version: u-boot-2015.04
Linux version: Linux-3.14
Hardware platform: TQ2440 (RAM: 64MB, NAND flash: 256MB)
The material below is split into two parts, u-boot and the kernel: first, how u-boot supports mtdparts; then, a brief analysis of the two ways the Linux kernel can set up its MTD partitions:
Method 1
Hard-code the partition table in the platform (board) code and apply it when the NAND flash is initialized.
Method 2
Set it in u-boot, which is more flexible: u-boot appends the partition information (in the form mtdparts=xxx) to bootargs, and the kernel parses mtdparts at boot time.
Supporting the mtdparts command in u-boot
Reposted from: http://w3sun.blog.163.com/blog/static/1859535342012058369333/
Partitioning methods
1) Partitions defined at the MTD layer
2) mtdparts=... passed to the kernel on the command line by u-boot
3) Any other means of letting the kernel know the partition layout (e.g. the kernel's built-in default command line)
Next, mtdparts and how to use it:
mtdparts
mtdparts=fc000000.nor_flash:1920k(linux),128k(fdt),20M(ramdisk),4M(jffs2),38272k(user),256k(env),384k(uboot)
For this parameter to take effect, the kernel's MTD code must support it, which means the following option has to be enabled in the kernel configuration:
Device Drivers --->
Memory Technology Device (MTD) support --->
Command line partition table parsing
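In the resulting .config this corresponds to the command-line partition parser. The symbol names below are what a mainline drivers/mtd/Kconfig uses for these options (double-check them against your own tree):
CONFIG_MTD=y
CONFIG_MTD_CMDLINE_PARTS=y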
The format of mtdparts is as follows:
mtdparts=<mtddef>[;<mtddef>]
<mtddef> := <mtd-id>:<partdef>[,<partdef>]
<partdef> := <size>[@offset][<name>][ro]
<mtd-id> := unique id used in mapping driver/device
<size> := standard linux memsize OR "-" to denote all remaining space
<name> := (NAME)
So in practice you set it according to the following pattern:
mtdparts=mtd-id:<size1>@<offset1>(<name1>),<size2>@<offset2>(<name2>)
There are a few things to note here:
a. The mtd-id must match the mtd-id of the flash on your platform, otherwise the whole mtdparts setting is ignored. How do you find your platform's mtd-id? It can be specified in the boot parameters via mtdids, e.g. mtdids=nand0=gen_nand.1, where nand0 denotes the first NAND flash device.
b. The size can be an actual size (xxM, xxk, xx) or '-', which means all remaining space. The relevant details can be found in the comments in drivers/mtd/cmdlinepart.c.
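As a concrete, made-up example: on a single NAND device whose mtd-id is nandflash0, the string
mtdparts=nandflash0:2m@0(u-boot),3m(kernel),-(rootfs)
defines a 2MiB u-boot partition at offset 0, a 3MiB kernel partition immediately after it (offset 2MiB), and a rootfs partition covering all remaining space.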
U-boot has two relevant environment variables: bootcmd and bootargs.
To get mtdparts support on our at91sam9263ek board, the following macros need to be added to u-boot-2010.06/include/configs/at91sam9263ek.h:
#define CONFIG_CMD_MTDPARTS
#define CONFIG_MTD_DEVICE
#define CONFIG_MTD_PARTITIONS
and add the default MTD partition information:
#define MTDIDS_DEFAULT "nand0=atmel_nand"
#define MTDPARTS_DEFAULT "mtdparts=atmel_nand:15M@0(cramfs)," \
"15M(jffs2)," \
"30M(yaffs2)," \
"-(user)"
Save and exit, go back to the top-level directory, and rebuild:
[root@localhost u-boot-2010.06]#make at91sam9263ek_dataflash_cs0_config
[root@localhost u-boot-2010.06]#make all
Flash the rebuilt u-boot.bin into the dataflash, then use mtdparts to inspect the partitions:
U-Boot> mtdparts
device nand0 <atmel_nand>, # parts = 4
#: name size offset mask_flags
0: cramfs 0x00f00000 0x00000000 0
1: jffs2 0x00f00000 0x00f00000 0
2: yaffs2 0x01e00000 0x01e00000 0
3: user 0x04400000 0x03c00000 0
active partition: nand0,0 - (cramfs) 0x00f00000 @ 0x00000000
defaults:
mtdids : nand0=atmel_nand
mtdparts: mtdparts=atmel_nand:15M@0(cramfs),15M(jffs2),30M(yaffs2),-(user)
Re-define the partitions by hand:
U-Boot> setenv mtdparts mtdparts=atmel_nand:30M@0(a),30M(b),-(c)
U-Boot> save
Saving Environment to dataflash...
U-Boot> mtdparts
device nand0 <atmel_nand>, # parts = 3
#: name size offset mask_flags
0: a 0x01e00000 0x00000000 0
1: b 0x01e00000 0x01e00000 0
2: c 0x04400000 0x03c00000 0
active partition: nand0,0 - (a) 0x01e00000 @ 0x00000000
defaults:
mtdids : nand0=atmel_nand
mtdparts: mtdparts=atmel_nand:15M@0(cramfs),15M(jffs2),30M(yaffs2),-(user)
As you can see, we can now set the partitions manually. Finally, restore the defaults:
U-Boot> mtdparts default
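Note that u-boot does not forward the mtdparts variable to the kernel by itself; to use method 2 from the introduction, the string has to be spliced into bootargs. A minimal sketch, with the console, root device and filesystem chosen purely for illustration (adapt them to your board):
U-Boot> setenv bootargs console=ttyS0,115200 root=/dev/mtdblock3 rootfstype=jffs2 mtdparts=atmel_nand:15M@0(cramfs),15M(jffs2),30M(yaffs2),-(user)
U-Boot> save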
Setting up partitions in the kernel
The kernel code used here has been uploaded to CSDN and is managed with git; the branch is transplant_to_tq2440:
git clone git@code.csdn.net:pengdonglin137/linux-3-14-y.git
Before that, let's look at how the NAND flash is initialized.
The driver uses the platform_device / platform_driver model.
1. Registering the platform_device
In arch/arm/mach-s3c24xx/mach-tq2440.c:
1: static struct platform_device *tq2440_devices[] __initdata = {
2: ......
3: &s3c_device_nand,
4: };
5:
6: static void __init tq2440_machine_init(void)
7: {
8: ......
9: s3c_nand_set_platdata(&tq2440_nand_info);
10: platform_add_devices(tq2440_devices, ARRAY_SIZE(tq2440_devices));
11: ......
12: }
13:
14: MACHINE_START(TQ2440, "TQ2440")
15: ........
16: .init_machine = tq2440_machine_init,
17: ......
18: MACHINE_END
Line 3 is the platform_device for the NAND controller;
Line 10: platform_add_devices() registers s3c_device_nand (together with the other devices in the array) with the system;
Line 9 sets the partition-related platform data:
1: /* NAND parititon */
2:
3: static struct mtd_partition tq2440_nand_part[] = {
4: [0] = {
5: .name = "Boot",
6: .offset = 0,
7: .size = SZ_2M,
8: },
9: [1] = {
10: .name = "Kernel",
11: .offset = SZ_2M,
12: .size = SZ_1M * 3,
13: },
14: [2] = {
15: .name = "Rootfs",
16: .offset = SZ_1M * 5,
17: .size = MTDPART_SIZ_FULL,
18: }
19: };
20:
21: static struct s3c2410_nand_set tq2440_nand_sets[] = {
22: [0] = {
23: .name = "NAND",
24: .nr_chips = 1,
25: .nr_partitions = ARRAY_SIZE(tq2440_nand_part),
26: .partitions = tq2440_nand_part,
27: },
28: };
29:
30: /* choose a set of timings which should suit most 512Mbit
31: * chips and beyond.
32: */
33:
34: static struct s3c2410_platform_nand tq2440_nand_info = {
35: .tacls = 10,
36: .twrph0 = 25,
37: .twrph1 = 10,
38: .nr_sets = ARRAY_SIZE(tq2440_nand_sets),
39: .sets = tq2440_nand_sets,
40: };
The tq2440_nand_part structure on line 3 is the partition table. It is easy to see that it defines three partitions, Boot, Kernel and Rootfs:
Boot:   offset 0,   size 2MB
Kernel: offset 2MB, size 3MB
Rootfs: offset 5MB, size = remaining space
This partition table is then stored in element [0] of tq2440_nand_sets (line 21), under the set name "NAND". Note that this name matters: it must match the mtd-id in the mtdparts string passed in from u-boot. Each element of tq2440_nand_sets holds one partition table.
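For example, to describe the same layout on the kernel command line, the mtdparts string would have to use NAND as its mtd-id; an equivalent (purely illustrative) form of the table above would be:
mtdparts=NAND:2m@0(Boot),3m(Kernel),-(Rootfs)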
In the tq2440_nand_info structure on line 34, tacls, twrph0 and twrph1 are the timing parameters the NAND controller uses when reading from and writing to the NAND flash.
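As far as I can tell from the platform header, these values are given in nanoseconds; s3c2410_nand_inithw() later converts them into HCLK cycles and programs them into the controller's NFCONF register. The conversion is conceptually something like the sketch below (a simplification, not the driver's exact code; the real function also clamps the result to the register field width):

/* Simplified idea of the ns -> HCLK-cycles conversion done by the driver.
 * clk_khz: NAND controller clock in kHz; wanted_ns: desired delay in ns.
 * The real s3c2410.c code also checks the result against a per-field maximum.
 */
static int ns_to_hclk_cycles(int wanted_ns, unsigned long clk_khz)
{
        return DIV_ROUND_UP(wanted_ns * clk_khz, 1000000);
}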
s3c_device_nand is defined in arch/arm/plat-samsung/devs.c:
1: /* NAND */
2:
3: #ifdef CONFIG_S3C_DEV_NAND
4: static struct resource s3c_nand_resource[] = {
5: [0] = DEFINE_RES_MEM(S3C_PA_NAND, SZ_1M),
6: };
7:
8: struct platform_device s3c_device_nand = {
9: .name = "s3c2410-nand",
10: .id = -1,
11: .num_resources = ARRAY_SIZE(s3c_nand_resource),
12: .resource = s3c_nand_resource,
13: };
14:
15: /*
16: * s3c_nand_copy_set() - copy nand set data
17: * @set: The new structure, directly copied from the old.
18: *
19: * Copy all the fields from the NAND set field from what is probably __initdata
20: * to new kernel memory. The code returns 0 if the copy happened correctly or
21: * an error code for the calling function to display.
22: *
23: * Note, we currently do not try and look to see if we've already copied the
24: * data in a previous set.
25: */
26: static int __init s3c_nand_copy_set(struct s3c2410_nand_set *set)
27: {
28: void *ptr;
29: int size;
30:
31: size = sizeof(struct mtd_partition) * set->nr_partitions;
32: if (size) {
33: ptr = kmemdup(set->partitions, size, GFP_KERNEL);
34: set->partitions = ptr;
35:
36: if (!ptr)
37: return -ENOMEM;
38: }
39:
40: if (set->nr_map && set->nr_chips) {
41: size = sizeof(int) * set->nr_chips;
42: ptr = kmemdup(set->nr_map, size, GFP_KERNEL);
43: set->nr_map = ptr;
44:
45: if (!ptr)
46: return -ENOMEM;
47: }
48:
49: if (set->ecc_layout) {
50: ptr = kmemdup(set->ecc_layout,
51: sizeof(struct nand_ecclayout), GFP_KERNEL);
52: set->ecc_layout = ptr;
53:
54: if (!ptr)
55: return -ENOMEM;
56: }
57:
58: return 0;
59: }
60:
61: void __init s3c_nand_set_platdata(struct s3c2410_platform_nand *nand)
62: {
63: struct s3c2410_platform_nand *npd;
64: int size;
65: int ret;
66:
67: /* note, if we get a failure in allocation, we simply drop out of the
68: * function. If there is so little memory available at initialisation
69: * time then there is little chance the system is going to run.
70: */
71:
72: npd = s3c_set_platdata(nand, sizeof(struct s3c2410_platform_nand),
73: &s3c_device_nand);
74: if (!npd)
75: return;
76:
77: /* now see if we need to copy any of the nand set data */
78:
79: size = sizeof(struct s3c2410_nand_set) * npd->nr_sets;
80: if (size) {
81: struct s3c2410_nand_set *from = npd->sets;
82: struct s3c2410_nand_set *to;
83: int i;
84:
85: to = kmemdup(from, size, GFP_KERNEL);
86: npd->sets = to; /* set, even if we failed */
87:
88: if (!to) {
89: printk(KERN_ERR "%s: no memory for sets\n", __func__);
90: return;
91: }
92:
93: for (i = 0; i < npd->nr_sets; i++) {
94: ret = s3c_nand_copy_set(to);
95: if (ret) {
96: printk(KERN_ERR "%s: failed to copy set %d\n",
97: __func__, i);
98: return;
99: }
100: to++;
101: }
102: }
103: }
104: #endif /* CONFIG_S3C_DEV_NAND */
Line 72 stores a copy of nand (i.e. tq2440_nand_info) in s3c_device_nand's dev.platform_data and returns the address of that copy as npd;
Lines 79-102 then duplicate the partition information from tq2440_nand_info and assign the copies to npd's sets member.
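For context, s3c_set_platdata() (arch/arm/plat-samsung/platformdata.c) does little more than kmemdup() the board-supplied structure and attach the copy to the device. Roughly, and simplified from memory rather than quoted verbatim:

void __init *s3c_set_platdata(void *pd, size_t pdsize,
                              struct platform_device *pdev)
{
        void *npd;

        if (!pd)
                return NULL;                     /* board supplied no data */

        npd = kmemdup(pd, pdsize, GFP_KERNEL);   /* copy out of __initdata */
        if (npd)
                pdev->dev.platform_data = npd;   /* read back by the driver in probe() */

        return npd;
}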
2. Registering the platform_driver
File: drivers/mtd/nand/s3c2410.c
1: /* driver device registration */
2:
3: static struct platform_device_id s3c24xx_driver_ids[] = {
4: {
5: .name = "s3c2410-nand",
6: .driver_data = TYPE_S3C2410,
7: }, {
8: .name = "s3c2440-nand",
9: .driver_data = TYPE_S3C2440,
10: }, {
11: .name = "s3c2412-nand",
12: .driver_data = TYPE_S3C2412,
13: }, {
14: .name = "s3c6400-nand",
15: .driver_data = TYPE_S3C2412, /* compatible with 2412 */
16: },
17: { }
18: };
19:
20: MODULE_DEVICE_TABLE(platform, s3c24xx_driver_ids);
21:
22: static struct platform_driver s3c24xx_nand_driver = {
23: .probe = s3c24xx_nand_probe,
24: .remove = s3c24xx_nand_remove,
25: .suspend = s3c24xx_nand_suspend,
26: .resume = s3c24xx_nand_resume,
27: .id_table = s3c24xx_driver_ids,
28: .driver = {
29: .name = "s3c24xx-nand",
30: .owner = THIS_MODULE,
31: },
32: };
33:
34: module_platform_driver(s3c24xx_nand_driver);
When s3c24xx_nand_driver is registered, the "s3c2410-nand" entry in s3c24xx_driver_ids matches the name of s3c_device_nand, so the probe function s3c24xx_nand_probe() is called.
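The match itself is made by the platform bus core: when a driver provides an id_table, the device's name is compared against each table entry before any fallback to the driver name, and the matching entry is what platform_get_device_id() later hands back in probe(). A simplified sketch of that step (not the exact drivers/base/platform.c code):

/* Simplified: how the bus decides that s3c_device_nand ("s3c2410-nand")
 * belongs to s3c24xx_nand_driver via its id_table.
 */
static const struct platform_device_id *
match_by_id_table(const struct platform_device_id *id, struct platform_device *pdev)
{
        while (id->name[0]) {
                if (strcmp(pdev->name, id->name) == 0)
                        return id;   /* entry->driver_data supplies cpu_type */
                id++;
        }
        return NULL;
}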
Let's briefly walk through the implementation of s3c24xx_nand_probe():
1: /* s3c24xx_nand_probe
2: *
3: * called by device layer when it finds a device matching
4: * one our driver can handled. This code checks to see if
5: * it can allocate all necessary resources then calls the
6: * nand layer to look for devices
7: */
8: static int s3c24xx_nand_probe(struct platform_device *pdev)
9: {
10: struct s3c2410_platform_nand *plat = to_nand_plat(pdev);
11: enum s3c_cpu_type cpu_type;
12: struct s3c2410_nand_info *info;
13: struct s3c2410_nand_mtd *nmtd;
14: struct s3c2410_nand_set *sets;
15: struct resource *res;
16: int err = 0;
17: int size;
18: int nr_sets;
19: int setno;
20:
21: cpu_type = platform_get_device_id(pdev)->driver_data;
22:
23: pr_debug("s3c2410_nand_probe(%p)\n", pdev);
24:
25: info = devm_kzalloc(&pdev->dev, sizeof(*info), GFP_KERNEL);
26: if (info == NULL) {
27: err = -ENOMEM;
28: goto exit_error;
29: }
30:
31: platform_set_drvdata(pdev, info);
32:
33: spin_lock_init(&info->controller.lock);
34: init_waitqueue_head(&info->controller.wq);
35:
36: /* get the clock source and enable it */
37:
38: info->clk = devm_clk_get(&pdev->dev, "nand");
39: if (IS_ERR(info->clk)) {
40: dev_err(&pdev->dev, "failed to get clock\n");
41: err = -ENOENT;
42: goto exit_error;
43: }
44:
45: s3c2410_nand_clk_set_state(info, CLOCK_ENABLE);
46:
47: /* allocate and map the resource */
48:
49: /* currently we assume we have the one resource */
50: res = pdev->resource;
51: size = resource_size(res);
52:
53: info->device = &pdev->dev;
54: info->platform = plat;
55: info->cpu_type = cpu_type;
56:
57: info->regs = devm_ioremap_resource(&pdev->dev, res);
58: if (IS_ERR(info->regs)) {
59: err = PTR_ERR(info->regs);
60: goto exit_error;
61: }
62:
63: dev_dbg(&pdev->dev, "mapped registers at %p\n", info->regs);
64:
65: /* initialise the hardware */
66:
67: err = s3c2410_nand_inithw(info);
68: if (err != 0)
69: goto exit_error;
70:
71: sets = (plat != NULL) ? plat->sets : NULL;
72: nr_sets = (plat != NULL) ? plat->nr_sets : 1;
73:
74: info->mtd_count = nr_sets;
75:
76: /* allocate our information */
77:
78: size = nr_sets * sizeof(*info->mtds);
79: info->mtds = devm_kzalloc(&pdev->dev, size, GFP_KERNEL);
80: if (info->mtds == NULL) {
81: err = -ENOMEM;
82: goto exit_error;
83: }
84:
85: /* initialise all possible chips */
86:
87: nmtd = info->mtds;
88:
89: for (setno = 0; setno < nr_sets; setno++, nmtd++) {
90: pr_debug("initialising set %d (%p, info %p)\n",
91: setno, nmtd, info);
92:
93: s3c2410_nand_init_chip(info, nmtd, sets);
94:
95: nmtd->scan_res = nand_scan_ident(&nmtd->mtd,
96: (sets) ? sets->nr_chips : 1,
97: NULL);
98:
99: if (nmtd->scan_res == 0) {
100: s3c2410_nand_update_chip(info, nmtd);
101: nand_scan_tail(&nmtd->mtd);
102: s3c2410_nand_add_partition(info, nmtd, sets);
103: }
104:
105: if (sets != NULL)
106: sets++;
107: }
108:
109: err = s3c2410_nand_cpufreq_register(info);
110: if (err < 0) {
111: dev_err(&pdev->dev, "failed to init cpufreq support\n");
112: goto exit_error;
113: }
114:
115: if (allow_clk_suspend(info)) {
116: dev_info(&pdev->dev, "clock idle support enabled\\n");
117: s3c2410_nand_clk_set_state(info, CLOCK_SUSPEND);
118: }
119:
120: pr_debug("initialised ok\n");
121: return 0;
122:
123: exit_error:
124: s3c24xx_nand_remove(pdev);
125:
126: if (err == 0)
127: err = -EINVAL;
128: return err;
129: }
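Before going through it line by line, note that to_nand_plat(), used on line 10, simply reads back the platform data installed by s3c_nand_set_platdata() earlier; from memory it is essentially just the following accessor:

static struct s3c2410_platform_nand *to_nand_plat(struct platform_device *dev)
{
        return dev_get_platdata(&dev->dev);
}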
Line 10 obtains tq2440_nand_info (strictly speaking, the copy made earlier);
Line 21: cpu_type is TYPE_S3C2410;
Line 38 gets the NAND controller's clock;
Line 45 enables the NAND controller's clock;
Line 67 initializes the NAND controller hardware;
Line 72: nr_sets is 1;
Line 93 sets up various function pointers (read, write, chip-select, etc.) according to cpu_type;
Line 95 reads the NAND flash's ID and its geometry (capacity, page size, etc.); it returns 0 on success;
Line 100 sets up the ECC-related information;
Line 101 sets up OOB read/write handling.
Now let's look more closely at the function called on line 102: s3c2410_nand_add_partition.
1: static int s3c2410_nand_add_partition(struct s3c2410_nand_info *info,
2: struct s3c2410_nand_mtd *mtd,
3: struct s3c2410_nand_set *set)
4: {
5: if (set) {
6: mtd->mtd.name = set->name;
7:
8: return mtd_device_parse_register(&mtd->mtd, NULL, NULL,
9: set->partitions, set->nr_partitions);
10: }
11:
12: return -ENODEV;
13: }
On line 6, set->name is "NAND".
On line 9, set->partitions holds the partition table and set->nr_partitions the number of partitions in that table.
1: int mtd_device_parse_register(struct mtd_info *mtd, const char * const *types,
2: struct mtd_part_parser_data *parser_data,
3: const struct mtd_partition *parts,
4: int nr_parts)
5: {
6: int err;
7: struct mtd_partition *real_parts;
8:
9: err = parse_mtd_partitions(mtd, types, &real_parts, parser_data);
10: if (err <= 0 && nr_parts && parts) {
11: real_parts = kmemdup(parts, sizeof(*parts) * nr_parts,
12: GFP_KERNEL);
13: if (!real_parts)
14: err = -ENOMEM;
15: else
16: err = nr_parts;
17: }
18:
19: if (err > 0) {
20: err = add_mtd_partitions(mtd, real_parts, err);
21: kfree(real_parts);
22: } else if (err == 0) {
23: err = add_mtd_device(mtd);
24: if (err == 1)
25: err = -ENODEV;
26: }
27:
28: return err;
29: }
Line 9: parse_mtd_partitions() parses the partitions specified on the kernel command line. If no mtdparts was given, it returns 0; if mtdparts was given and parsed successfully, it returns a positive number (the number of partitions found); otherwise it returns a negative error code.
If mtdparts is absent or parsing fails, real_parts falls back to the partition table hard-coded in the board code; add_mtd_partitions() is then called to register whichever table won. In other words, a valid command-line mtdparts overrides the board-file partitions.
Now let's look at the implementation of parse_mtd_partitions():
1: int parse_mtd_partitions(struct mtd_info *master, const char *const *types,
2: struct mtd_partition **pparts,
3: struct mtd_part_parser_data *data)
4: {
5: struct mtd_part_parser *parser;
6: int ret = 0;
7:
8: if (!types)
9: types = default_mtd_part_types;
10:
11: for ( ; ret <= 0 && *types; types++) {
12: parser = get_partition_parser(*types);
13: if (!parser && !request_module("%s", *types))
14: parser = get_partition_parser(*types);
15: if (!parser)
16: continue;
17: ret = (*parser->parse_fn)(master, pparts, data);
18: put_partition_parser(parser);
19: if (ret > 0) {
20: printk(KERN_NOTICE "%d %s partitions found on MTD device %s\n",
21: ret, parser->name, master->name);
22: break;
23: }
24: }
25: return ret;
26: }
The types argument passed in here is NULL, so types is set to default_mtd_part_types, i.e.
static const char * const default_mtd_part_types[] = {
"cmdlinepart",
"ofpart",
NULL
};
"cmdlinepart" is then passed to get_partition_parser(), which returns the parser registered under that name; on line 17 the parser's parse function is called to fill in pparts, and it returns the number of partitions found. The implementation of get_partition_parser() is simple:
1: static struct mtd_part_parser *get_partition_parser(const char *name)
2: {
3: struct mtd_part_parser *p, *ret = NULL;
4:
5: spin_lock(&part_parser_lock);
6:
7: list_for_each_entry(p, &part_parsers, list)
8: if (!strcmp(p->name, name) && try_module_get(p->owner)) {
9: ret = p;
10: break;
11: }
12:
13: spin_unlock(&part_parser_lock);
14:
15: return ret;
16: }
Suppose the bootargs passed in by u-boot is:
noinitrd root=/dev/mtdblock3 rootfstype=yaffs2 init=/linuxrc console=ttySAC0,115200n mtdparts=tq2440-0:1m@0(spl)ro,1m(u-boot)ro,3m(kernel)ro,-(rootfs)
Based on experience, the kernel must contain something like __setup("mtdparts=", xxx); indeed it does, in cmdlinepart.c:
1: static int __init mtdpart_setup(char *s)
2: {
3: cmdline = s;
4: return 1;
5: }
6:
7: __setup("mtdparts=", mtdpart_setup);
So when the kernel encounters "mtdparts=tq2440-0:1m@0(spl)ro,1m(u-boot)ro,3m(kernel)ro,-(rootfs)" on its command line, mtdpart_setup() is called with the value after "mtdparts=" as its argument, and cmdline ends up pointing to "tq2440-0:1m@0(spl)ro,1m(u-boot)ro,3m(kernel)ro,-(rootfs)".
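Given the format rules described earlier, parsing that string should yield four partitions along these lines (each partition starts where the previous one ends; note that for the result to be applied, the mtd-id tq2440-0 must match the MTD device's name, which in the board code above was set to "NAND"):
spl      1MiB  @ 0x000000  (ro)
u-boot   1MiB  @ 0x100000  (ro)
kernel   3MiB  @ 0x200000  (ro)
rootfs   rest  @ 0x500000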
Finally, here is how the parser is registered:
1: static struct mtd_part_parser cmdline_parser = {
2: .owner = THIS_MODULE,
3: .parse_fn = parse_cmdline_partitions,
4: .name = "cmdlinepart",
5: };
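The original listing is cut off at this point. For completeness: in mainline cmdlinepart.c the parser is added to the list searched by get_partition_parser() via register_mtd_parser() in the module init function, roughly like this (reconstructed from memory, a sketch rather than a verbatim quote):

static int __init cmdline_parser_init(void)
{
        register_mtd_parser(&cmdline_parser);
        return 0;
}

module_init(cmdline_parser_init);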