Framework: Losing data when using Storage::append

Created on 14 Jul 2017 · 5 Comments · Source: laravel/framework

  • Laravel Version: 5.2 or higher
  • PHP Version: 5.6
  • Database Driver & Version:

Description:

10+ queue worker processes read messages from a queue and use Storage::append (driver: local) to write into a single file. A lot of the data gets lost, and appending to a big file can also exhaust memory. Please check!
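A minimal sketch of the pattern that triggers this (the job class, disk, and file name are made up for illustration; any job that calls Storage::append on a shared file from many workers behaves the same):

```php
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Support\Facades\Storage;

class AppendMessageToLog implements ShouldQueue
{
    use InteractsWithQueue, Queueable;

    protected $message;

    public function __construct($message)
    {
        $this->message = $message;
    }

    public function handle()
    {
        // Many workers run this concurrently. append() reads the whole file,
        // concatenates the new data, and writes the file back, so concurrent
        // workers can silently overwrite each other's appends.
        Storage::disk('local')->append('messages.log', $this->message);
    }
}
```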

Steps To Reproduce:

All 5 comments

Laravel 5.2 is no longer supported. With that said, are you talking about several workers appending to the same file? Have you implemented any locks at all?

The append method isn't atomic; it consists of a read and a write. It's entirely possible for multiple parallel processes to read the same initial content, each append its own data, and then each write the whole file back. The writer that writes last overwrites the others, so only its content is persisted (the last one "wins").

Ref: FilesystemAdapter::append
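Paraphrased (not a verbatim copy of the framework source), the referenced method does something along these lines, which is also why appending to a big file can exhaust memory:

```php
public function append($path, $data, $separator = PHP_EOL)
{
    if ($this->exists($path)) {
        // Read the entire existing file into memory ...
        $current = $this->get($path);

        // ... then write the whole thing back with the new data (and a
        // separator) attached. Anything another process appended between
        // this read and this write is overwritten.
        return $this->put($path, $current.$separator.$data);
    }

    return $this->put($path, $data);
}
```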

I am having the same issue. What would be the best solution for this? Anyone?

Laravel 5.6 has the same issue. I tried to transfer 14563 bytes of chunked data over HTTP, chunk by chunk, and the resulting file came out at 14570 bytes. I replaced this method with file_put_contents($file_path, $data, FILE_APPEND) and the resulting file was 7 bytes smaller.
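For reference, the replacement looks roughly like this; the path is a placeholder, and the LOCK_EX flag is my addition (not mentioned above) to also serialize concurrent writers on the same host:

```php
// Append via native PHP instead of the Storage facade: the OS appends in
// place rather than rewriting the whole file, and no separator is inserted.
$file_path = storage_path('app/messages.log'); // placeholder path
file_put_contents($file_path, $data, FILE_APPEND | LOCK_EX);
```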

@dyachkovD It doesn't sound like you have the same problem at all. Do you really have lots of processes writing to the same file?

@sisve I have no parallel IO operations, but the binary data becomes unreadable due to extra bytes, and I'm losing my data too.
