Js-ipfs: daemon leaves behind api file lock on error

Created on 17 May 2016 · 31 Comments · Source: ipfs/js-ipfs

When the daemon hits an error on load like https://github.com/ipfs/js-ipfs/issues/228, it will leave around the ~/.ipfs/api file lock.

go-ipfs seems to be smart about detecting this and cleaning it up, but it will inhibit future invocations of the js-ipfs daemon.

dif/hard  help wanted  kind/bug

Most helpful comment

While using js-ipfs in Node, I'm encountering this error:

    Error: Lock file is already being held
        at options.fs.stat (node_modules\proper-lockfile\lib\lockfile.js:68:47)
        at node_modules\graceful-fs\polyfills.js:285:20
        at FSReqWrap.oncomplete (fs.js:155:5)

During development the app is restarted whenever the code changes, which triggers the above error frequently. The previous instance holding the lock is fine, but how can a new instance override or release it?

All 31 comments

(@whyrusleeping can confirm) go-ipfs checks if the daemon is indeed on and if it doesn't find anything on the port left behind, it just deletes the lock and spawns the daemon.

great, so the work is

  • [ ] clean up the api lock on daemon error
  • [ ] check on start-up whether the daemon is running, and clear the lock if not

exactly :)

@noffle would you like to handle this one? :)

Sorry David, I don't have the bandwidth right now for this. Feel free to
reassign.

@VictorBjelkholm does this look like something you can take on? :) Thank you

Just checked the go code, and we are actually not doing what go does at all. In go the api file is deleted on close, but its presence does not stop the daemon from starting up; it is simply overwritten.

For locking the repo to a single daemon a file called repo.lock is used which uses os level checks to figure out who the file belongs to (https://github.com/ipfs/go-ipfs/blob/master/repo/fsrepo/lock/lock.go)

It looks like the best bet for doing this is using

and using those for repo.lock.

Why can't we just remove the lock file on crash/close? Then the only thing the daemon has to check is whether the file is there or not.

@VictorBjelkholm it gets removed on close, how can you assure that it gets removed on crash?

https://github.com/joyent/node-lockfd for unix

Does not work, as it misses the actual important syscall config we need :(

I found https://crates.io/crates/fs2 which is supposed to work on Windows as well as on *nix systems, so I'm looking into creating a small wrapper around it for js.

@VictorBjelkholm it gets removed on close, how can you assure that it gets removed on crash?

@diasdavid by leveraging domain/cluster, which is basically what they are meant for. We can isolate the main process and listen for shutdowns/exits and clean up properly.

@diasdavid by leveraging domain/cluster, which is basically what they are meant for. We can isolate the main process and listen for shutdowns/exits and clean up properly.

This does not help at all with things like power loss. Also it's a massive overhead of running two v8 processes just for simple monitoring purposes.

@VictorBjelkholm also, Domains and Cluster are modules that historically have been a source of issues in Node.js land, so much so that their removal from core has been considered several times. And even then, it would not save us from problems beyond the process level (the machine being shut down, kernel panic, etc). Checking the API file and testing if the API is running is also how go-ipfs does it //cc @whyrusleeping

@diasdavid not entirely; see my comment for what actually happens in go: https://github.com/ipfs/js-ipfs/issues/229#issuecomment-247306857

@dignifiedquire go does:

  1. check for the API file
  2. if it exists, try to dial that daemon
  3. if there is no daemon running on that port, spawn a new daemon
  4. connect to the new daemon

//cc @whyrusleeping

yes but the locking of the repo has nothing to do with the api file. It purely happens through repo.lock as described above

ah yes, there are two files, one for the lock, one for the api (where it is listening), the process that owns the API should also own the lock.

@VictorBjelkholm There is a reason why Unix has its own lock function/syscall. If you lock using the dedicated subroutine, the lock is released automatically when the process exits, thus preventing stale locks.

Locking by creating a file containing the PID, as is done quite frequently, has proven error-prone because of PID reuse: a process can crash, its PID be reused by an unrelated process, and the stale lock still be considered valid.

repo.lock is a repo-wide lock: the process that wants to provide the API should acquire it, and if successful it may consider the whole repo its own. That is why, if go-ipfs acquires the lock successfully, it doesn't even check whether an api file exists; it just overwrites it with its own.

I finally found a solution to do fcntl based locking for repo.lock that does the exact same thing as go-ipfs so it will integrate cleanly. I hope to have a PR for this up this week.

@dignifiedquire this will only work when used with fs store, correct? What is the strategy for all the others?

@diasdavid do we need this anywhere else? This is only related to the http-api/cli as far as I understand

That is indeed correct :)

Started implementation here: https://github.com/dignifiedquire/lock-me

Just published: https://www.npmjs.com/package/lock-me which we now can use to do this :)

As a note, this has been solved :)

\o/

Having this problem when trying to integrate with Next.js in dev mode. Adding the error here to help others find it:

LockExistsError: Lock already being held for file: C:\Users\timca\.jsipfs\repo.lock
    at Object.exports.lock (C:\Users\timca\Documents\GitHub\migo\node_modules\ipfs-repo\src\lock.js:36:13)
    at async IpfsRepo._openLock (C:\Users\timca\Documents\GitHub\migo\node_modules\ipfs-repo\src\index.js:192:22)
    at async IpfsRepo.open (C:\Users\timca\Documents\GitHub\migo\node_modules\ipfs-repo\src\index.js:113:23)
    at async Proxy.init (C:\Users\timca\Documents\GitHub\migo\node_modules\ipfs\src\core\components\init.js:62:9)
    at async Object.create (C:\Users\timca\Documents\GitHub\migo\node_modules\ipfs\src\core\index.js:55:3)
    at async module.exports../pages/api/hello.js.__webpack_exports__.default (C:\Users\timca\Documents\GitHub\migo\.next\server\static\development\pages\api\hello.js:112:12)
    at async apiResolver (C:\Users\timca\Documents\GitHub\migo\node_modules\next\dist\next-server\server\api-utils.js:6:1)
    at async DevServer.handleApiRequest (C:\Users\timca\Documents\GitHub\migo\node_modules\next\dist\next-server\server\next-server.js:43:397)
    at async Object.fn (C:\Users\timca\Documents\GitHub\migo\node_modules\next\dist\next-server\server\next-server.js:35:764)
    at async Router.execute (C:\Users\timca\Documents\GitHub\migo\node_modules\next\dist\next-server\server\router.js:28:28)
    at async DevServer.run (C:\Users\timca\Documents\GitHub\migo\node_modules\next\dist\next-server\server\next-server.js:44:494)     
    at async DevServer.handleRequest (C:\Users\timca\Documents\GitHub\migo\node_modules\next\dist\next-server\server\next-server.js:13:133) {
  code: 'ERR_LOCK_EXISTS'
}

I'm also having this problem when using js-ipfs in node:

(node:9746) UnhandledPromiseRejectionWarning: LockExistsError: Lock already being held for file: /Users/admin/.jsipfs/repo.lock

@andykitt this usually happens when another IPFS process is running or the previous process did not exit cleanly - is either of these the case for you?
