Littlefs: lfs_file_close() takes too long

Created on 3 Jul 2020 · 2 comments · Source: littlefs-project/littlefs

Hi
My system configuration:
Chip: GD25Q16C
MCU: STM32L071, core clock 16 MHz, SPI clock 8 MHz
FS config:

    LFS_VERSION 0x00020001
    .read_size      = 16,
    .prog_size      = 16,
    .block_size     = 4096U,
    .block_count    = 512,
    .cache_size     = 16,
    .lookahead_size = 16,
    .block_cycles   = 1000,
    Static memory allocation
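For context, these settings would map onto littlefs's `struct lfs_config` roughly as follows. This is only a sketch: the static buffer names and the SPI driver callbacks (`spi_flash_read` etc.) are illustrative placeholders, not from the original post.

```c
#include <stdint.h>
#include "lfs.h"    /* littlefs v2.x */

/* Static buffers matching read_size / prog_size / lookahead_size = 16 */
static uint8_t read_buf[16];
static uint8_t prog_buf[16];
static uint8_t lookahead_buf[16];

static const struct lfs_config cfg = {
    /* Block device callbacks: the user's SPI flash driver (assumed names) */
    .read  = spi_flash_read,
    .prog  = spi_flash_prog,
    .erase = spi_flash_erase,
    .sync  = spi_flash_sync,

    .read_size      = 16,
    .prog_size      = 16,
    .block_size     = 4096,
    .block_count    = 512,      /* 512 * 4096 = 2 MiB, the GD25Q16C's size */
    .cache_size     = 16,
    .lookahead_size = 16,
    .block_cycles   = 1000,

    /* Static allocation: hand littlefs its buffers instead of using malloc */
    .read_buffer      = read_buf,
    .prog_buffer      = prog_buf,
    .lookahead_buffer = lookahead_buf,
};
```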

I'm facing an issue where lfs_file_close() takes too long to close a file.
I have two files of around 200 KB each; each record is 46 bytes.

Open and Seek work quite fast, but when I call Close it stalls for about 30 seconds. During all that time the SPI bus is very actively exchanging data with the chip.

err_t manDataStor_WriteFile(fileType_t type, void *p_data,
                            uint32_t offset, uint32_t size)
{   
    assert_param(IS_FILE_TYPE(type));
    assert_param(NULL != p_data);

    if ((0 == IS_FILE_TYPE(type)) || (NULL == p_data))
    {
        return ERR_ARG;
    }

    if (false == b_isFsMounted)
    {
        return ERR_ERROR;
    }

    bool  isFileOpened = false;
    err_t status       = ERR_ERROR;

    do
    {
        status = ERR_FS;

        if (LFS_ERR_OK != lfs_file_opencfg(&lfs, &fileLog[type].fileInstance, fileLog[type].p_name,
                                                             LFS_O_RDWR, &fileLog[type].fileCfg))
        {
            break;
        }

        isFileOpened = true;

        if (offset != lfs_file_seek(&lfs, &fileLog[type].fileInstance, offset, LFS_SEEK_SET))
        {
            break;
        }

        if (size != lfs_file_write(&lfs, &fileLog[type].fileInstance, p_data, size))
        {
            break;
        }

        status = ERR_OK; 
    } 
    while (false);

    if (true == isFileOpened)
    {
        if (LFS_ERR_OK != lfs_file_close(&lfs, &fileLog[type].fileInstance))
        {
            status = ERR_ERROR;
        }
    }

    return status;
}

A similar function also exists for reading the file.

And file creation:

err_t manDataStore_CreateFile(fileType_t type)
{
    assert_param(IS_FILE_TYPE(type));

    if (0 == IS_FILE_TYPE(type))
    {
        return ERR_ARG;
    }

    bool isFileOpened = false;
    err_t status = ERR_ERROR;

    do
    {
        if (LFS_ERR_OK != lfs_file_opencfg(&lfs, &fileLog[type].fileInstance, fileLog[type].p_name,
                                           LFS_O_RDWR | LFS_O_CREAT, &fileLog[type].fileCfg))
        {
            break;
        }
        else
        {
            isFileOpened = true;
        }

        // Allocate size for each file
        uint32_t fileSize = sizeof(fileHeader_t) + (fileHeader[type].entrySize * fileHeader[type].fileCapacity);

        if (LFS_ERR_OK != lfs_file_truncate(&lfs, &fileLog[type].fileInstance, fileSize))
        {
            break;
        }

        // Check whether the file was really allocated
        if (fileSize != lfs_file_size(&lfs, &fileLog[type].fileInstance))
        {
            break;
        }

        uint8_t *tmpPtr = (uint8_t*)&fileHeader[type];

        // Should return the number of bytes written
        if (sizeof(fileHeader_t) != lfs_file_write(&lfs, &fileLog[type].fileInstance,
                                                   (const void *)tmpPtr,
                                                   sizeof(fileHeader_t)))
        {
            break;
        }

        status = ERR_OK;
    } 
    while (false);

    if (true == isFileOpened)
    {
        if(LFS_ERR_OK != lfs_file_close(&lfs, &fileLog[type].fileInstance))
        {
            status = ERR_ERROR;
        }
    }  
    return status;
}

I need some advice or a workaround on how to reduce this time.
I already tried increasing the buffer sizes, and it gave some improvement: the time was halved. But that takes too much RAM, which I don't have.

Is there perhaps a more efficient solution for this? Or is this a bug?

There is a function lfs_file_truncate() which allocates space for each file. For example, if I make the file size around 46 KB it works faster than allocating the file at 200 KB.
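Building on that observation, one hypothetical workaround (not from the thread; all names below are illustrative) would be to split the one 200 KB preallocated log into several ~46 KB chunk files, so each truncate/close only walks a shorter block list:

```c
#include <stdint.h>
#include <stdio.h>
#include "lfs.h"

#define CHUNK_SIZE   (46U * 1024U)  /* size at which close was observed to be fast */
#define CHUNK_COUNT  5U             /* 5 * 46 KB ~= 200 KB total capacity */

/* Map a logical offset in the 200 KB log onto one of several smaller
 * chunk files, opening (and creating if needed) the right one. */
static int openLogChunk(lfs_t *lfs, lfs_file_t *file,
                        uint32_t offset, uint32_t *chunkOffset)
{
    char name[16];
    uint32_t chunk = offset / CHUNK_SIZE;

    *chunkOffset = offset % CHUNK_SIZE;     /* offset within the chunk file */
    snprintf(name, sizeof(name), "log.%lu", (unsigned long)chunk);

    return lfs_file_open(lfs, file, name, LFS_O_RDWR | LFS_O_CREAT);
}
```

Whether this actually helps depends on why the close is slow, but it is a cheap experiment given that the 46 KB case was already measured to be faster.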

performance

All 2 comments

Hi @Leon11t, thanks for raising an issue. I need to look more closely at the details you've posted, but many thanks for the info; it will be helpful in learning where to look for performance issues.

I do want to note something about lfs_file_close: this is where almost all of the filesystem work occurs, as it is the last point before the data needs to be stored on disk. You can call lfs_file_sync to move the cost of writing to disk earlier, but it won't reduce the overall cost.
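A minimal sketch of that lfs_file_sync suggestion, applied to a write helper like the one in the issue (names are assumed; this only moves the flash traffic into the write path, the total work stays the same):

```c
#include "lfs.h"

/* Write a record, then commit it to flash immediately so that the later
 * lfs_file_close() has (almost) nothing left to do. */
static int write_and_sync(lfs_t *lfs, lfs_file_t *file,
                          const void *p_data, lfs_size_t size)
{
    lfs_ssize_t written = lfs_file_write(lfs, file, p_data, size);
    if (written < 0 || (lfs_size_t)written != size)
    {
        return (written < 0) ? (int)written : LFS_ERR_IO;
    }

    /* Flush caches and commit this file's data and metadata now,
     * instead of accumulating all of it in lfs_file_close(). */
    return lfs_file_sync(lfs, file);
}
```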

Performance is the biggest limitation of littlefs right now, so unfortunately you may just be hitting the limits of what littlefs is capable of. (I have plans to improve this, but they are very involved, I would not wait for them).

I have plans to improve this, but they are very involved, I would not wait for them
@geky Thanks for answering. If you will need more information from my side, please give me a note.
