Yii2: FileMutex: unlink(...): No such file or directory

Created on 27 Oct 2016 · 17 comments · Source: yiisoft/yii2

What steps will reproduce the problem?

Acquiring and releasing many locks in parallel.

What is the expected result?

no error

What do you get instead?

A PHP warning from FileMutex::releaseLock():

unlink(...): No such file or directory

raised at the line:

unlink($this->getLockFilePath($name));
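
For illustration, a rough reproduction sketch (not from the original report): it assumes a bootstrapped Yii console app with a `mutex` component configured as `yii\mutex\FileMutex`; the lock name is arbitrary. Run ~20 copies of it in parallel from the shell:

```php
// Reproduction sketch (hypothetical): hammer FileMutex with short-lived locks.
$mutex = Yii::$app->mutex; // assumed to be configured as yii\mutex\FileMutex

for ($i = 0; $i < 1000; $i++) {
    if ($mutex->acquire('demo-lock', 5)) {
        // deliberately empty critical section to maximise acquire/release churn
        $mutex->release('demo-lock');
    }
}

// With enough parallel processes, some of them eventually emit:
// PHP Warning: unlink(...): No such file or directory
```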

Additional info

| Q | A |
| --- | --- |
| Yii version | 2.0.10 |
| PHP version | 5.5.9 |
| Operating system | linux Ubuntu 14.04.4 |

Labels: ready for adoption, bug


All 17 comments

Calling unlink() first and then fclose() solved it for me.

This is extremely weird. Any idea why?

fclose() releases the lock; another process can then call acquireLock() and releaseLock(), so by the time the first process reaches unlink() there is no file left to unlink.

You can call unlink() on an open file (on Unix at least; I don't know how it works on Windows).
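
For illustration, a sketch of that reordering (the `$handles` property and the method signature are made up for the example, not copied from Yii's actual FileMutex):

```php
// Sketch only: unlink the lock file before releasing the lock and closing the
// handle. Works on Unix, where an open handle stays valid after unlink();
// on Windows, unlinking a locked file fails with "Permission denied".
protected function releaseLock($name)
{
    if (!isset($this->handles[$name])) {
        return false;
    }
    unlink($this->getLockFilePath($name)); // file is gone for new processes
    flock($this->handles[$name], LOCK_UN); // now release the lock...
    fclose($this->handles[$name]);         // ...and the handle
    unset($this->handles[$name]);
    return true;
}
```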

I tested it: it does work on Unix, but not on Windows (permission denied error). So the following should be enough to fix it:

@unlink($this->getLockFilePath($name));

@kidol wouldn't that possibly unlink a file that is locked by another process already?

@cebe Yes, that's what's already happening on Unix (see the error message in the issue description). The fix is just to add @ to suppress that error.
It is a little strange for script A to delete the file while script B has already acquired a file handle and is now trying to call flock() on it, but it works, as you can see here: https://3v4l.org/SilHY

Also verified that Windows gives you "Permission denied" if you try to unlink a locked file.

@kidol is it possible to solve w/o using @?

Hmm, looks like symfony does not remove the lock file at all: https://github.com/symfony/symfony/pull/10475#issuecomment-51464260

The problem is the following:

  • Script A unlocks and calls unlink()
  • Script B was already waiting (it acquired its file handle before the unlink) and will now acquire a lock on the deleted file
  • Script C starts later, simply creates a new file at the same path, and is able to lock it as well...
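
To make the race concrete, here is a small single-process simulation of it (my own sketch, Unix only; in reality A, B and C would be separate processes):

```php
<?php
// Single-process simulation of the A/B/C race described above (Unix only).
$path = sys_get_temp_dir() . '/filemutex-race-demo.lock';

$a = fopen($path, 'c'); // script A opens the lock file
$b = fopen($path, 'c'); // script B opens it too and starts waiting
flock($a, LOCK_EX);     // A holds the lock

// A finishes: unlock, close, unlink.
flock($a, LOCK_UN);
fclose($a);
unlink($path);

// B now acquires a lock - but on the orphaned, already-deleted file.
var_dump(flock($b, LOCK_EX | LOCK_NB)); // bool(true)

// C starts later, creates a brand-new file at the same path and locks it too.
$c = fopen($path, 'c');
var_dump(flock($c, LOCK_EX | LOCK_NB)); // bool(true): two "exclusive" locks at once
```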

Maybe it's better to refactor the class so that all locks get serialized into a single lock file. That means if a script wants to acquire a lock, it has to:

  • Open the lock file
  • flock() it in blocking mode
  • Unserialize the locks array
  • If $locks[$name] === true, unlock the file, sleep(1) and return to step 2
  • If $locks[$name] === false (or not set), set it to true, serialize and save, unlock the file, and return true

That's at least how I implemented such locking in the past and it works well.

The only problem is: if PHP segfaults or whatever, a lock can stay set forever (or until a server restart, if the lock file lives in /tmp). That's why I always used a lock timeout as a safeguard for such cases.
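
A rough sketch of that approach, storing a timestamp instead of a plain boolean so the timeout safeguard can be implemented (all class, property and method names are made up for the example; this is not Yii's FileMutex API):

```php
<?php
// Sketch of a mutex that keeps all locks serialized in one shared file.
class SerializedFileMutex
{
    private $file;
    private $timeout; // seconds after which a stale lock counts as abandoned

    public function __construct($file = '/tmp/app-locks.dat', $timeout = 60)
    {
        $this->file = $file;
        $this->timeout = $timeout;
    }

    public function acquire($name)
    {
        while (true) {
            $acquired = false;
            $this->withLockFile(function (array $locks) use ($name, &$acquired) {
                // Free if never set, or older than the timeout safeguard.
                if (!isset($locks[$name]) || time() - $locks[$name] > $this->timeout) {
                    $locks[$name] = time();
                    $acquired = true;
                }
                return $locks;
            });
            if ($acquired) {
                return true;
            }
            sleep(1); // held by someone else, retry
        }
    }

    public function release($name)
    {
        $this->withLockFile(function (array $locks) use ($name) {
            unset($locks[$name]);
            return $locks;
        });
    }

    // Open the shared file, flock it exclusively, let $callback transform the
    // unserialized state, write the result back, then unlock. The file itself
    // is never deleted, so the unlink() race cannot happen.
    private function withLockFile(callable $callback)
    {
        $fp = fopen($this->file, 'c+');
        flock($fp, LOCK_EX);
        $raw = stream_get_contents($fp);
        $locks = $raw === '' ? [] : (array) unserialize($raw);
        $locks = $callback($locks);
        ftruncate($fp, 0);
        rewind($fp);
        fwrite($fp, serialize($locks));
        fflush($fp);
        flock($fp, LOCK_UN);
        fclose($fp);
    }
}
```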

Wouldn't that create too many unnecessary I/O blocks even for unrelated locks?

No, unless you run many short-living locks/scripts in parallel, which is uncommon I think. Also, I see no use case for locks in a web application. If there is one, it may indeed lead to problems under heavy load.

As for the Symfony solution: the user could use the system's temp path for these files so the runtime folder does not get polluted with lock files.

Maybe the Symfony solution is better for the sake of simplicity.

No, unless you run many short-living locks/scripts in parallel, which is uncommon I think.

I'm using locks in exactly such a scenario. And locks can be useful in a web app: sometimes you need to do some heavy task (like image processing that could take ~5 seconds) and you want to be sure the same job is not done by 20 parallel processes.

If we can't delete the lock file immediately after release, there should at least be a garbage collector which deletes old lock files.

No, unless you run many short-living locks/scripts in parallel, which is uncommon I think. Also, I see no use case for locks in a web application. If there is one, it may indeed lead to problems under heavy load.

I did that for background tasks without a queue. They were run via about 20 independent cronjobs which all used locks.

@rob006 @samdark
Assume each job takes only 500 ms and locking has an overhead of 10 ms. You can now calculate approximately how many jobs you can run in parallel before you get issues with blocking.
That's what I meant by "No, unless...".

In case you thought one lock would block the lock file until that lock is released: that is not the case. Acquiring does flock() with LOCK_EX, writes the new state, then flock() with LOCK_UN; the same procedure applies when releasing the lock.
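
As a rough back-of-envelope with those figures (my own numbers, assuming the 10 ms covers one flock/write/unlock round trip): each job spends about 2 × 10 ms = 20 ms holding the shared lock file exclusively (one acquire plus one release) against 500 ms of actual work, so the single lock file only starts to become the bottleneck once roughly 500 / 20 ≈ 25 such jobs are running in parallel continuously.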
