Littlefs: V2: Format gives corrupt error

Created on 17 Feb 2019  ·  15 Comments  ·  Source: littlefs-project/littlefs

I am testing the V2 alpha branch but I am not able to format/mount. Format always ends up with error -84 (LFS_ERR_CORRUPT).

read_size = prog_size = cache_size = 256;

read and prog buffers are statically allocated.
block_cycles = 10;
lookahead_size = 256;

V1 works just fine.

Memory Info
Size: 8388608
Page Sz: 256
Block Sz: 4096

Am I doing anything wrong, or is something wrong with the configuration?

needs major version

All 15 comments

Can you please dump your full config?

Especially name/file/attribute max.

Here is the complete configuration. I have not set the name/file/attribute maxes, so they should fall back to the defaults. The configuration below works just fine if I use LFS V1; I have no clue what is wrong.

    memset(&_cfg, 0, sizeof(_cfg));
    _cfg.read = spifs_bd_read;
    _cfg.prog = spifs_bd_prog;
    _cfg.erase = spifs_bd_erase;
    _cfg.sync = spifs_bd_sync;
    _cfg.read_size = LFS_READ_BUF;
    _cfg.prog_size = LFS_PROG_BUF;
    _cfg.block_size = spif_get_erase_size();
    _cfg.block_count = spif_size() / _cfg.block_size;
#if LFS_V1
    _cfg.lookahead = 32 * ((_cfg.block_count + 31) / 32);
#else /* LFS V2 */
    _cfg.block_cycles = 10;
    _cfg.cache_size = LFS_READ_BUF;
    _cfg.lookahead_size = LFS_LOOKAHEAD;
#endif
    _cfg.read_buffer = _lfs_read_buffer;
    _cfg.prog_buffer = _lfs_prog_buffer;
    _cfg.lookahead_buffer = _lfs_lookahead;

All buffers except the file cache buffers are allocated statically.

static uint8_t _lfs_read_buffer[LFS_READ_BUF];
static uint8_t _lfs_prog_buffer[LFS_PROG_BUF];
static uint8_t _lfs_lookahead[LFS_LOOKAHEAD];

Buffer size

#define LFS_READ_BUF    128
#define LFS_PROG_BUF    256
#define LFS_LOOKAHEAD   256

I have a similar issue in V2.
When I use parameters like the following:
block_size: 512, block_count: 512, prog_size: 512, read_size: 512, there is no problem.
But for a speed improvement, when I change them to
block_size: 8192, block_count: 64, prog_size: 8192, read_size: 8192, the above issue happens.

In V1, both cases are OK.
Hoping for any suggestions.

I am yet to get v2 working on my board. However, in V1, if I keep read = prog = block size it slows down, and I am not sure why; I would have thought it should be faster since lfs can keep the whole block in cache.
For speed testing I usually recreate my file-write scenario, where I write/append 100 bytes of random data per second to a file (a rough sketch of the test loop is below). The speed is slower with read = prog = block size, but faster when I keep the settings I posted in my last message.
I can only imagine the write time increasing when the file crosses one block and lfs has to read the whole block before writing. I am not sure if my understanding of the internal implementation is correct.
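
A minimal sketch of that append loop, assuming an already-mounted lfs_t and leaving out the timing and error handling (the file name and function name are just placeholders):

    #include <stdint.h>
    #include <stdlib.h>
    #include "lfs.h"

    // Append 100 random bytes per iteration; in the real test there is
    // roughly one iteration per second and each append is timed.
    static void append_bench(lfs_t *lfs, int iterations) {
        uint8_t data[100];
        for (int i = 0; i < iterations; i++) {
            for (size_t j = 0; j < sizeof(data); j++) {
                data[j] = (uint8_t)rand();
            }
            lfs_file_t file;
            lfs_file_open(lfs, &file, "bench.log",
                    LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND);
            lfs_file_write(lfs, &file, data, sizeof(data));
            lfs_file_close(lfs, &file);  // close commits the data to storage
        }
    }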

@ajaybhargav - read and program buffers (if used) must be cache_size in length.

This is a little bit unintuitive, I admit.

Now that V2 is released, I tried again to make it work, but failed.

The settings are the same as mentioned earlier. I tried different sizes for the buffers and also tried keeping them all the same, but everything fails. I am still getting:
lfs error:981: Corrupted dir pair at 0 1
Mount error:-84
lfs error:981: Corrupted dir pair at 0 1
format error -84

Here is the complete configuration. I have not set the name/file/attribute maxes, so they should fall back to the defaults. The configuration below works just fine if I use LFS V1; I have no clue what is wrong.
    memset(&_cfg, 0, sizeof(_cfg));
    _cfg.read = spifs_bd_read;
    _cfg.prog = spifs_bd_prog;
    _cfg.erase = spifs_bd_erase;
    _cfg.sync = spifs_bd_sync;
    _cfg.read_size = LFS_READ_BUF;
    _cfg.prog_size = LFS_PROG_BUF;
    _cfg.block_size = spif_get_erase_size();
    _cfg.block_count = spif_size() / _cfg.block_size;
#if LFS_V1
    _cfg.lookahead = 32 * ((_cfg.block_count + 31) / 32);
#else /* LFS V2 */
    _cfg.block_cycles = 10;
    _cfg.cache_size = LFS_READ_BUF;
    _cfg.lookahead_size = LFS_LOOKAHEAD;
#endif
    _cfg.read_buffer = _lfs_read_buffer;
    _cfg.prog_buffer = _lfs_prog_buffer;
    _cfg.lookahead_buffer = _lfs_lookahead;

All buffers except the file cache buffers are allocated statically.

static uint8_t _lfs_read_buffer[LFS_READ_BUF];
static uint8_t _lfs_prog_buffer[LFS_PROG_BUF];
static uint8_t _lfs_lookahead[LFS_LOOKAHEAD];

Buffer size

#define LFS_READ_BUF    128
#define LFS_PROG_BUF    256
#define LFS_LOOKAHEAD   256

Hi, sorry, I assumed @TheLoneWolfling's comment fixed the issue.

It looks like your cache_size is smaller than your prog_size; this will break:

_cfg.cache_size = LFS_READ_BUF;
...
#define LFS_READ_BUF 128
#define LFS_PROG_BUF 256
#define LFS_LOOKAHEAD 256

The read/prog/cache/block sizes should be ordered roughly:

read_size ≤ prog_size ≤ cache_size ≤ block_size
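
As a concrete sketch using the macros from the config above (the macro names are the poster's, not littlefs's), one minimal fix is to make the cache at least as large as the program size:

    /* cache_size must be >= both read_size and prog_size; here prog_size
     * (LFS_PROG_BUF = 256) is the larger one, so use it for the cache */
    _cfg.read_size  = LFS_READ_BUF;   /* 128 */
    _cfg.prog_size  = LFS_PROG_BUF;   /* 256 */
    _cfg.cache_size = LFS_PROG_BUF;   /* was LFS_READ_BUF (128), which broke the ordering */

The statically allocated read/prog buffers then also need to be LFS_PROG_BUF bytes, since they back the caches.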

Are you compiling with asserts disabled? You may want to enable asserts when porting as there are a number of asserts that check if the configuration is valid. This assert should be catching this mistake:
https://github.com/ARMmbed/littlefs/blob/73ea008b74e7eb9eb5079b295b85f5b394e064f0/lfs.c#L3199
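
If the library's asserts are being compiled out (for example, NDEBUG or LFS_NO_ASSERT is defined in the build), a rough standalone version of littlefs's own configuration checks could be run before lfs_format/lfs_mount; this is only a sketch of the idea:

    #include <assert.h>
    #include "lfs.h"

    /* roughly mirrors the config asserts littlefs performs itself */
    static void check_cfg(const struct lfs_config *cfg) {
        assert(cfg->cache_size % cfg->read_size == 0);  /* read_size  <= cache_size */
        assert(cfg->cache_size % cfg->prog_size == 0);  /* prog_size  <= cache_size */
        assert(cfg->block_size % cfg->cache_size == 0); /* cache_size <= block_size */
    }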

Let us know if that doesn't solve the problem.


@ajaybhargav - read and program buffers (if used) must be cache_size in length.

This is a little bit unintuitive, I admit.

I didn't think about the names from this perspective. It is unintuitive, isn't it? Do you think naming the buffers something like read_cache_buffer would help?


I am yet to get v2 working on my board. However, in V1, if I keep read = prog = block size it slows down, and I am not sure why; I would have thought it should be faster since lfs can keep the whole block in cache.

Ah! I can explain this. This is actually the reason why cache_size was introduced.

You are correct to think larger caches should mean better performance, but unfortunately this wasn't the case in v1. Caching in v1 was very simple: if a block wasn't in the cache, a full cache line would be fetched from disk. Unfortunately, littlefs has a lot of data structures that span multiple blocks, and it sometimes needs only a single integer from a block before moving on to the next one. This created a surprising situation where larger caches would unintuitively lead to worse performance, as littlefs moved more data over the bus just to read that one integer.

Unfortunately, this wasn't solvable without separate read_size/cache_size. The read_size also describes the minimum possible read on the storage. This can be 1 byte on NOR flash, but is often much larger, 256-2KiB, on NAND and SD/eMMC. It wasn't legal for littlefs to read any data < read_size.

To fix this, I added the cache_size config option, which controls the cache size independently of the physical read size. Internally, the caching layer now takes in a hint variable (here), which is used to control how much data is actually fetched from the disk.

So now you should want to make read_size/prog_size as small as possible, and cache_size as large as possible. Hopefully that helps.
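
As an illustrative sketch for a NOR part like the one in this thread (256-byte pages, 4 KiB erase blocks), that advice translates roughly to the following; the exact numbers are an assumption, not a requirement:

    _cfg.read_size  = 1;     /* NOR flash can typically read at byte granularity */
    _cfg.prog_size  = 256;   /* program whole pages */
    _cfg.cache_size = 4096;  /* cache a full erase block; the buffers must be this size */
    _cfg.block_size = 4096;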

Thanks @geky, I missed the point earlier. Setting cache_size equal to prog_size fixes the format issue, and I am able to use V2 now. However, in the process I found another issue:

read_size ≤ prog_size ≤ cache_size ≤ block_size

This might fail; I think the order should be:
prog_size ≤ read_size ≤ cache_size ≤ block_size

You might want to add another assert here for the check: read_size % prog_size == 0
https://github.com/ARMmbed/littlefs/blob/73ea008b74e7eb9eb5079b295b85f5b394e064f0/lfs.c#L3198

because the following line assumes the buffer passed to the function is always at least prog_size long.
https://github.com/ARMmbed/littlefs/blob/73ea008b74e7eb9eb5079b295b85f5b394e064f0/lfs.c#L32

Or you might want to pass the length of the buffer, to be on the safe side. If read_size is less than prog_size, this will create a buffer overflow:
https://github.com/ARMmbed/littlefs/blob/73ea008b74e7eb9eb5079b295b85f5b394e064f0/lfs.c#L3232

Ah! That is a mistake. Thanks for pointing it out.

I've pushed up a fix here and ~will merge it once it passes CI~: https://github.com/ARMmbed/littlefs/commit/8628d5e1f11da32a9f3f00240c039721254e08d1

You might want to add another assert here for the check: read_size % prog_size == 0

Ah, in reality read_size does not need to be ≤ prog_size. littlefs only needs read_size/prog_size to be ≤ cache_size. I've never seen a read_size > prog_size, but would be happy to be surprised.

EDIT: Merged 👍

I still feel passing the length of the buffer would be ideal when doing the memset.
If the original statement still holds true:

read_size ≤ prog_size ≤ cache_size ≤ block_size

then consider a case where static buffers are passed to LFS and prog_size = read_size < cache_size; won't the memset overflow again?
e.g. prog = read = 256 and cache_size = 512

Ah, the static buffers should not be read_size/prog_size long. They back the caches used for reading/writing, so the static buffers should be cache_size long.

read_size/prog_size only determine what sizes can be physically written/read on the block device. littlefs keeps larger caches and may read/write in multiples of read_size/prog_size.

Does that make sense?
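
To make that concrete, a sketch for the numbers above (prog = read = 256, cache_size = 512); the macro names here are just placeholders:

    #define LFS_CACHE_SIZE      512   /* whatever cache_size is configured to */
    #define LFS_LOOKAHEAD_SIZE  256

    /* the read/prog buffers back the caches, so they are cache_size long,
     * even though read_size/prog_size are only 256 */
    static uint8_t _lfs_read_buffer[LFS_CACHE_SIZE];
    static uint8_t _lfs_prog_buffer[LFS_CACHE_SIZE];
    static uint8_t _lfs_lookahead[LFS_LOOKAHEAD_SIZE];

    _cfg.read_size        = 256;
    _cfg.prog_size        = 256;
    _cfg.cache_size       = LFS_CACHE_SIZE;
    _cfg.read_buffer      = _lfs_read_buffer;
    _cfg.prog_buffer      = _lfs_prog_buffer;
    _cfg.lookahead_buffer = _lfs_lookahead;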


I've only recently realized how confusing this is. I'm thinking they should probably be changed to read_cache_buffer and prog_cache_buffer if there is a breaking API change in the future. Not sure if that helps clarify things.

I've only recently realized how confusing this is. I'm thinking they should probably be changed to read_cache_buffer and prog_cache_buffer if there is a breaking API change in the future. Not sure if that helps clarify things.

A better description in the appropriate places would be enough, I think.

I agree with @FreddieChopin, a proper description in the right place might be enough, at least for the configuration part of LFS, which drives the whole process. People (like me) sometimes tend to miss details even though the header file is descriptive enough.

Got it. I've been thinking littlefs needs a PORTING.md; more porting documentation has been much requested (related: https://github.com/ARMmbed/littlefs/issues/56).

Resolved! closed!
