Littlefs: [Question] How does littlefs handle bad block 0/1?

Created on 22 Jul 2020 · 5 Comments · Source: littlefs-project/littlefs

Hi Expert,

As we know, littlefs provides logic for handling bad blocks at runtime. But I am wondering how littlefs handles a bad-block failure at superblock 0/1. If a bad-block error occurs while erasing or writing block 0/1, where does the data get written? From my understanding, when the filesystem starts up and mounts, it always starts from blocks 0/1. So does that mean the whole filesystem stops working once superblock 0/1 goes bad?

Thanks,
Wenjun

Most helpful comment

> If I remember correctly, littlefs treats block 0 and block 1 like any other bad block, and moves them somewhere else.

Sorry, that's not true.

Blocks 0 and 1 are special. It's how littlefs finds the filesystem on disk. This is valuable as it speeds up mount time quite a bit.

Logging filesystems such as SPIFFS, JFFS2, and YAFFS usually don't have a special block, but the trade-off is that they need to scan the full disk during mount to find out where the filesystem is.

If block 0 or 1 dies, the filesystem becomes read-only. The reason there are two special blocks is so we can fail into a read-only state instead of making the filesystem inaccessible.

Instead of relocating blocks 0 and 1 (I'm going to start calling these the "anchor pair"), littlefs takes a proactive approach to protecting the anchor pair.

littlefs already tracks the number of erases of each metadata block in order to implement wear-leveling. If littlefs notices that there have been more than block_cycles erases of the anchor pair, it moves the contents of the anchor pair to a new metadata pair (as it would during a normal relocation), and changes the anchor pair to contain only some read-only info and a pointer to the new pair.

after lfs_format:
 .--------.
.| anchor |
||   +    |
||root dir|
|'--------'
'--------' 

after block_cycles erases:
 .--------.  .--------.
.| anchor |->|root dir|
||        | ||        |
||        | ||        |
|'--------' |'--------'
'--------'  '--------' 

Now, our anchor pair should only ever be written to when we relocate the second metadata pair. For normal wear-leveling, this will happen every _block_cycles_ erases. But! We still have our protect-the-anchor-pair logic running. So if we see our anchor reach _block_cycles_ erases, we create another metadata pair with only a pointer. So it now takes _block_cycles²_, and then _block_cycles³_, ... _block_cyclesⁿ_.

This function grows very quickly.

after block_cycles² erases:
 .--------.  .--------.  .--------.
.| anchor |->| anchor |->|root dir|
||        | ||        | ||        |
||        | ||        | ||        |
|'--------' |'--------' |'--------'
'--------'  '--------'  '--------' 

I did the math before, and assuming even file wear, you would need on the order of a >~40 MiB disk with 4 KiB blocks and a block_cycles of 100 before littlefs needs 3 anchors. After that you would need >~4 GiB for 4 anchors.

Oh, also, these counts are in erases; on NOR flash with 4 KiB blocks, littlefs can fit ~40 100-byte commits in each erase. The number of commits per erase increases the larger your blocks are, which is convenient because the anchors get more expensive the larger your blocks are.


Initially, littlefs always had 1 anchor, but this changed in v2.0 to let littlefs run on really tiny disks. It's common for internal flash to have only 2 blocks, for example, in which case if any block dies your filesystem is dead anyway, so it doesn't really matter.

If more than half of the available storage is full, littlefs will not add an anchor, to avoid running out of space. This could be changed in the future, or we could even add an option to pre-allocate anchors if it becomes an issue.


It looks like this is documented a bit in SPEC.md, but not in DESIGN.md; I should add it there:
https://github.com/ARMmbed/littlefs/blob/master/SPEC.md#0x0ff-lfs_type_superblock

All 5 comments

I'm sure I've seen an answer to this in response to another issue raised. If I remember correctly, littlefs treats block 0 and block 1 like any other bad block, and moves them somewhere else. Certainly it doesn't stop the filesystem from working.

Hi e107steved,

Thanks a lot for your answer. I tried to dive into the code, and I found that littlefs always starts fetching the superblock from the 0/1 pair. See the code below:

int lfs_mount(lfs_t *lfs, const struct lfs_config *cfg) {
    ...
    // scan directory blocks for superblock and any global updates
    lfs_mdir_t dir = {.tail = {0, 1}};
    ...
}

So I am still wondering how it works if the block 0/1 pair becomes bad. Could you please give me more detail on this?

Thanks,
Wenjun

> If I remember correctly, littlefs treats block 0 and block 1 like any other bad block, and moves them somewhere else.

Sorry, that's not true.

Blocks 0 and 1 are special. It's how littlefs finds the filesystem on disk. This is valuable as it speeds up mount time quite a bit.

Logging filesystems such as SPIFFS, JFFS2, and YAFFS usually don't have a special block, but the trade-off is that they need to scan the full disk during mount to find out where the filesystem is.

If block 0 or 1 dies, the filesystem becomes read-only. The reason there are two special blocks is so we can fail into a read-only state instead of making the filesystem inaccessible.
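From the application's point of view, this typically shows up as lfs_mount failing (if the anchor pair can no longer be read) or as writes starting to return errors (if it can no longer be erased or programmed). Below is a minimal sketch of the usual mount-with-fallback pattern; lfs_mount and lfs_format are the real littlefs calls, but mount_or_recover and the externally defined cfg are just names for this example, and as noted above a reformat cannot help if blocks 0/1 are physically dead.

#include "lfs.h"

// cfg is assumed to be defined elsewhere with the block device callbacks
// (.read, .prog, .erase, .sync) and geometry filled in.
extern const struct lfs_config cfg;

static lfs_t lfs;

int mount_or_recover(void) {
    int err = lfs_mount(&lfs, &cfg);
    if (err) {
        // Mount failed: either the device was never formatted, or the
        // anchor pair at blocks 0/1 could not be read. Reformatting only
        // helps in the first case; if blocks 0/1 are physically dead,
        // lfs_format will fail here too.
        err = lfs_format(&lfs, &cfg);
        if (err) {
            return err;
        }
        err = lfs_mount(&lfs, &cfg);
    }
    return err; // 0 on success, negative littlefs error code on failure
}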


Instead of relocating blocks 0 and 1 (I'm going to start calling these the "anchor pair"), littlefs takes a proactive approach to protecting the anchor pair.

littlefs already tracks the number of erases of each metadata block in order to implement wear-leveling. If littlefs notices that there have been more than block_cycles erases of the anchor pair, it moves the contents of the anchor pair to a new metadata pair (as it would during a normal relocation), and changes the anchor pair to contain only some read-only info and a pointer to the new pair.
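(For reference, block_cycles here is the same wear-leveling threshold the user sets in struct lfs_config. A minimal sketch of just that field is below; the block device callbacks and geometry are deliberately omitted, so this is not a complete, mountable configuration on its own.)

#include "lfs.h"

// Sketch only: a real configuration also needs the .read/.prog/.erase/.sync
// callbacks plus read_size, prog_size, block_size, block_count,
// cache_size, lookahead_size, and their buffers.
const struct lfs_config cfg = {
    // Number of erase cycles before littlefs relocates a metadata pair
    // for wear-leveling; the same counter drives the anchor protection
    // described above. 100 matches the value used in the estimates
    // further down. Setting this to -1 disables block-level wear-leveling.
    .block_cycles = 100,
};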

after lfs_format:
 .--------.
.| anchor |
||   +    |
||root dir|
|'--------'
'--------' 

after block_cycles erases:
 .--------.  .--------.
.| anchor |->|root dir|
||        | ||        |
||        | ||        |
|'--------' |'--------'
'--------'  '--------' 
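In pseudocode, the policy described just before these diagrams looks roughly like this. It is a hypothetical illustration only, not littlefs's actual internals; the types and helper names (mdir, allocate_metadata_pair, move_contents, protect_anchor) are made up for the sketch.

#include <stdint.h>
#include <stddef.h>

// Made-up stand-ins for littlefs's real metadata machinery, declared so
// the sketch is self-contained.
struct mdir {
    uint32_t erase_count;  // erases seen by this metadata pair
    struct mdir *child;    // pair this one points to (NULL for the root)
};
struct mdir *allocate_metadata_pair(void);
void move_contents(struct mdir *from, struct mdir *to);

// Called whenever the anchor pair (blocks 0/1) is about to be erased.
void protect_anchor(struct mdir *anchor, uint32_t block_cycles) {
    anchor->erase_count++;
    if (anchor->erase_count >= block_cycles) {
        // The anchor has used up its erase budget. We can't relocate it
        // (its address is fixed at blocks 0/1), so move its *contents*
        // to a freshly allocated metadata pair instead...
        struct mdir *new_pair = allocate_metadata_pair();
        move_contents(anchor, new_pair);

        // ...and leave the anchor holding only some read-only info plus
        // a pointer to the new pair. From now on the anchor is only
        // rewritten when new_pair itself has to be relocated.
        anchor->child = new_pair;
        anchor->erase_count = 0;
    }
}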

Now, our anchor pair should only ever be written to when we relocate the second metadata pair. For normal wear-leveling, this will happen every _block_cycles_ erases. But! We still have our protect-the-anchor-pair logic running. So if we see our anchor reach _block_cycles_ erases, we create another metadata pair with only a pointer. So it now takes _block_cycles²_, and then _block_cycles³_, ... _block_cyclesⁿ_.

This function grows very quickly.

after block_cycles² erases:
 .--------.  .--------.  .--------.
.| anchor |->| anchor |->|root dir|
||        | ||        | ||        |
||        | ||        | ||        |
|'--------' |'--------' |'--------'
'--------'  '--------'  '--------' 

I did the math before, and assuming even file wear, you would need on the order of a >~40 MiB disk with 4 KiB blocks and a block_cycles of 100 before littlefs needs 3 anchors. After that you would need >~4 GiB for 4 anchors.

Oh, also, these counts are in erases; on NOR flash with 4 KiB blocks, littlefs can fit ~40 100-byte commits in each erase. The number of commits per erase increases the larger your blocks are, which is convenient because the anchors get more expensive the larger your blocks are.
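Putting rough numbers on the last two paragraphs (a back-of-the-envelope restatement of the figures above, not an exact derivation):

\[
\text{erases before the } n\text{-th level of indirection} \;\approx\; \texttt{block\_cycles}^{\,n},
\qquad \texttt{block\_cycles}=100 \;\Rightarrow\; 10^{2},\ 10^{4},\ 10^{6},\ \dots
\]
\[
\text{commits per erase} \;\approx\; \frac{\text{block size}}{\text{commit size}}
\;\approx\; \frac{4096\ \text{B}}{100\ \text{B}} \;\approx\; 40
\]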


Initially, littlefs always had 1 anchor, but this changed in v2.0 to let littlefs run on really tiny disks. It's common for internal flash to have only 2 blocks, for example, in which case if any block dies your filesystem is dead anyway, so it doesn't really matter.

If more than half of the available storage is full, littlefs will not add an anchor, to avoid running out of space. This could be changed in the future, or we could even add an option to pre-allocate anchors if it becomes an issue.


It looks like this is documented a bit in SPEC.md, but not in DESIGN.md; I should add it there:
https://github.com/ARMmbed/littlefs/blob/master/SPEC.md#0x0ff-lfs_type_superblock

Hi Christopher,

Thank you very much for the extremely detailed explanation. It is much clearer to me now. Thanks!

Thanks,
Wenjun

Question answered. Thanks! Issue closed.
