With dependencies upgraded to:
"datastore-fs": "~0.5.0",
"ipfs-repo": "~0.23.1"
```
Error: unexpected error getting the ipns record Q,ALPl|���;GVM�t�Н����ʨ from datastore
    at ReadFileContext._repo.datastore.get (/Users/alan/Code/protocol-labs/js-ipfs/src/core/ipns/publisher.js:198:35)
    at ReadFileContext.callback (/Users/alan/Code/protocol-labs/js-ipfs/examples/custom-ipfs-repo/node_modules/graceful-fs/graceful-fs.js:78:16)
    at FSReqWrap.readFileAfterOpen [as oncomplete] (fs.js:235:13)
Emitted 'error' event at:
    at done (/Users/alan/Code/protocol-labs/js-ipfs/src/core/components/init.js:21:14)
    at /Users/alan/Code/protocol-labs/js-ipfs/node_modules/async/internal/once.js:12:16
    at next (/Users/alan/Code/protocol-labs/js-ipfs/node_modules/async/waterfall.js:21:29)
    at /Users/alan/Code/protocol-labs/js-ipfs/node_modules/async/internal/onlyOnce.js:12:16
    at parallel (/Users/alan/Code/protocol-labs/js-ipfs/src/core/components/init.js:130:13)
    at /Users/alan/Code/protocol-labs/js-ipfs/node_modules/async/internal/parallel.js:39:9
    at /Users/alan/Code/protocol-labs/js-ipfs/node_modules/async/internal/once.js:12:16
    at iteratorCallback (/Users/alan/Code/protocol-labs/js-ipfs/node_modules/async/eachOf.js:58:13)
    at /Users/alan/Code/protocol-labs/js-ipfs/node_modules/async/internal/onlyOnce.js:12:16
    at /Users/alan/Code/protocol-labs/js-ipfs/node_modules/async/internal/parallel.js:36:13
```
So the end error showing up here is a bit misleading, as it's the result of different errors wrapping each other. Fun stuff!
The root issue here relates to the work we need to do to add error codes to all errors. js-ipfs-mfs checks whether the repo already has its mfs key. When it doesn't exist, mfs will create it and everyone is happy. The problem is with the current check, https://github.com/ipfs/js-ipfs-mfs/blob/v0.3.2/src/core/utils/with-mfs-root.js#L25. That check is specific to a LevelDown not-found error, so datastore-fs and datastore-s3 are breaking.
The reason they break is that, since the error isn't properly identified, they go ahead and try to create a CID with the undefined result, https://github.com/ipfs/js-ipfs-mfs/blob/v0.3.2/src/core/utils/with-mfs-root.js#L41, which throws an error, making everyone sad.
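A minimal sketch of why the check misses (not the actual js-ipfs-mfs source; the error messages here are illustrative): LevelDown reports a missing key with its own error shape, while datastore-fs surfaces the underlying fs `ENOENT` error, so a message-based LevelDown check lets it slip through.

```javascript
// LevelDown-style not-found error (illustrative message):
const levelErr = new Error('NotFoundError: Key not found in database')

// datastore-fs surfaces the underlying fs error instead:
const fsErr = Object.assign(
  new Error('ENOENT: no such file or directory'),
  { code: 'ENOENT' }
)

// A check written against the LevelDown message...
const isLevelNotFound = (err) => err.message.includes('Key not found')

console.log(isLevelNotFound(levelErr)) // true — handled, mfs root gets created
console.log(isLevelNotFound(fsErr))   // false — treated as success with an undefined result
```

Because the fs error isn't recognized as "not found", the code falls through to the CID-creation path with `undefined` and throws there instead.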
Options

I have verified these work with some quick node_module modifications.

1. Update js-ipfs-mfs to check for the fs not-found error, ie: `error.code === 'ENOENT'`.
2. Leverage the `err-code` module we are using elsewhere to standardize the interface-datastore error codes in interface-datastore and have all other datastores use those.
   2.1 Update js-ipfs-mfs to check for the correct error code, ie: `ERR_NOT_FOUND`.

@alanshaw @vasco-santos what do you think? I could probably get PRs out for option 2 to all the datastores and possibly releases (provided I have permissions to all of them) tomorrow.
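Option 2 can be sketched roughly as below. `errcode` here is a minimal stand-in for the err-code module (which attaches a `.code` property to an error), and the key name is purely illustrative:

```javascript
// Stand-in for the err-code module: attach a machine-readable code to an error.
const errcode = (err, code) => Object.assign(err, { code })

// A datastore would reject missing keys with a standardized code...
function notFoundError (key) {
  return errcode(new Error(`Key not found: ${key}`), 'ERR_NOT_FOUND')
}

// ...and consumers like js-ipfs-mfs check the code, not the message,
// so the check works the same for level, fs and s3 backends:
const err = notFoundError('/local/filesroot')
if (err.code === 'ERR_NOT_FOUND') {
  console.log('mfs root missing, create it')
}
```

The point of the design is that the string a backend puts in `err.message` stops mattering; every datastore agrees on one code.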
Also, I was wrong about the error wrapping, they're just two different areas with the same problem.
https://github.com/ipfs/js-ipfs/blob/v0.32.0/src/core/ipns/publisher.js#L194
The IPNS publisher is also only checking for the LevelDown not-found error. It will need to be updated the same way as mfs.
I've got an initial implementation of some of the base codes over at interface-datastore#22. There's a link there to the datastore-fs update that includes those.
@jacobheun 👍 to option 2
Thanks for your analysis @jacobheun ! 🙂
I agree with option 2!
Yes, IPNS publish also uses the LevelDown not-found error! I will create a PR for that as well, adding the error code validation.
All of the PRs should be open now, we're just blocked on some releases. I updated the package.json of this example locally to test against and didn't hit any errors with a yarn install.
"dependencies": {
"datastore-fs": "~0.5.0",
"ipfs": "github:ipfs/js-ipfs#fix/ipns-datastore-get-not-found",
"ipfs-repo": "~0.23.1"
},
"resolutions": {
"interface-datastore": "github:ipfs/interface-datastore#feat/err-codes",
"datastore-fs": "github:ipfs/js-datastore-fs#feat/err-codes",
"ipfs-mfs": "github:ipfs/js-ipfs-mfs#fix/err-code"
}
I have also verified the changes against the s3 example, https://github.com/ipfs/js-datastore-s3/tree/master/examples/full-s3-repo
@jacobheun thanks for being so thorough 🥇 ❤️
The js-ipfs#1558 PR is ready. After that and js-ipfs-mfs#10 get merged and a new ipfs-mfs version is released, I think this issue will be solved.
I will keep an eye on those PRs and try it once they get merged.
Once we get the ipfs-mfs version released and bumped in the #1558 PR, I can run through a final test of the custom repo and the s3 datastore example for final validation.
@vasco-santos @jacobheun could one of you confirm the example is fixed so we can close this one 🙏
Sure, on it!
Updated to the following:
"dependencies": {
"datastore-fs": "~0.6.0",
"ipfs": "file:../../",
"ipfs-repo": "~0.24.0"
}
I am having a problem with ipfs-mfs now 😕
```js
([{hash}], next) => next(null, new CID(hash))
```
throws `TypeError: Cannot destructure property 'hash' of 'undefined' or 'null'` in js-ipfs-mfs#core/utils/with-mfs-root.js#L35
After debugging it, I managed to find that this problem was caused by js-ipfs#core/mfs-preload.js#L37
@achingbrain any clue on this?
The code tries to create the MFS root if it doesn't exist. Looks like ipfs.files.add({path: '/' }) is passing an empty array to its callback instead of an array containing objects with a hash property.
Does ipfs.files.add behave differently with the custom repo?
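A hedged sketch of that failure mode (CID here is a stand-in for the real cids module): the callback destructures the first element of the array `files.add` produces, so an empty array throws before the function body even runs.

```javascript
// Stand-in for the cids module, just enough to carry a hash:
class CID {
  constructor (hash) { this.hash = hash }
}

// The destructuring pattern from with-mfs-root.js:
const next = ([{ hash }], done) => done(null, new CID(hash))

// With a populated result the callback gets a CID as expected:
next([{ hash: 'QmZcvUvxNc8P9NebseeCfrDHn4xQwNTmrwWtuY8PDr9c3C' }],
  (_, cid) => console.log(cid.hash))

// With an empty array, destructuring `{ hash }` out of undefined throws:
try {
  next([], () => {})
} catch (err) {
  console.log(err instanceof TypeError) // true
}
```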
@vasco-santos what do you run when you see that error? If I do node index.js with your dep changes above, I see:
```
$ node index.js
Swarm listening on /ip4/127.0.0.1/tcp/4003/ws/ipfs/QmWmadHLZVmt5mQLdWdAq13MDQgQ28aExAqdvHgXgyKfFe
Swarm listening on /ip4/127.0.0.1/tcp/4002/ipfs/QmWmadHLZVmt5mQLdWdAq13MDQgQ28aExAqdvHgXgyKfFe
Swarm listening on /ip4/192.168.1.13/tcp/4002/ipfs/QmWmadHLZVmt5mQLdWdAq13MDQgQ28aExAqdvHgXgyKfFe
Ready
Version: 0.32.3
Added file: test-data.txt QmZcvUvxNc8P9NebseeCfrDHn4xQwNTmrwWtuY8PDr9c3C
Fetched file content:
We are using a customized repo!
Stopping the node
Check "/tmp/custom-repo/.ipfs" to see what your customized repository looks like on disk.
```
I did a clean install of modules from ipfs master with the custom-ipfs-repo package.json matching @vasco-santos 's changes.
The example works properly there; I'm seeing the same output @achingbrain is getting.
I also ran the s3 example, using the latest ipfs and ipfs-repo, and it is also working properly.
Strange, I think I had a clean install. Let me double check.
Ok, now that I also removed the package-lock from the example, it worked properly! Thanks @achingbrain @jacobheun
I will create a PR for bumping the examples version then