Go-ipfs: /api/v0/get doesn't respect the output argument

Created on 8 May 2015 · 20 Comments · Source: ipfs/go-ipfs

http://127.0.0.1:5001/api/v0/get?arg=QmW2WQi7j6c7UgJTarActp7tDNikE4B2qXtFCfLPdsgaTQ%2Fcat.jpg&o=%2Ftmp%2Fcat.jpg

This HTTP request discards the "&o=/tmp/cat.jpg" part and returns the file content in the HTTP response.

As far as I can tell, there is no way to actually write the plain file to disk using the API. Am I correct on this one?
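
For reference, a minimal reproduction against a local daemon on the default port (the hash and paths are the ones from the request above):

$ curl -o response "http://127.0.0.1:5001/api/v0/get?arg=QmW2WQi7j6c7UgJTarActp7tDNikE4B2qXtFCfLPdsgaTQ%2Fcat.jpg&o=%2Ftmp%2Fcat.jpg"
$ ls /tmp/cat.jpg   # nothing was written there by the daemon
$ ls -l response    # the file content came back in the HTTP response instead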

need/verification topic/api topic/http-api

Most helpful comment

This further shows that we need some sort of separation between the HTTP API and CLI options.

All 20 comments

i believe so, because the http request cannot trigger a file to be saved at a particular location?

When using the CLI ipfs get, the output argument works and the daemon logs the corresponding HTTP request (not sure how it works internally).

12:47:48.600 DEBUG commands/h: Incoming API request: /api/v0/get?arg=QmW2WQi7j6c7UgJTarActp7tDNikE4B2qXtFCfLPdsgaTQ%2Fcat.jpg&encoding=json&stream-channels=true

Not being able to actually write files to disk makes it difficult to build apps on top of IPFS. I could probably retrieve the file from the HTTP response and write it to disk myself, but it's not very elegant and may not work with big files.

@MichaelMure the output option is part of the post-run. The daemon itself does not write anything to disk (via this API). In the CLI the data is returned and the command itself writes it to the disk. This is because we have no way of knowing where the daemon is running.
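
A minimal sketch of that client-side pattern, assuming a local daemon on the default port that accepts these GET requests (as in the other examples in this thread): /api/v0/get streams the content back (as a tar stream, per the later comments in this thread), and it is the caller that writes it to disk, roughly what the CLI does in its post-run.

$ curl -s "http://127.0.0.1:5001/api/v0/get?arg=QmW2WQi7j6c7UgJTarActp7tDNikE4B2qXtFCfLPdsgaTQ%2Fcat.jpg" | tar -xf - -C /tmp
$ ls /tmp/cat.jpg   # written by the machine that ran curl, not by the daemon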

@MichaelMure I'd love to be able to solve your problem here. Would having a small tool that can take an archived version and write it to disk unarchived be enough for you?

By "archived", I guess you mean "in the datastore". That would work for my project, as long as you can run this locally.

But the ideal solution for me would be something as described here: https://github.com/ipfs/go-ipfs/issues/875#issuecomment-77864791, that is a datastore that could track plain files on disk. That would allow users to keep sharing something without the double disk space cost.

Something like that could be done in two ways. The easiest would be to have a wrapper around the add command that would (with ipns mounted):

  • Move the file to /ipns/local
  • Create a symlink from the original location to the ipns entry

This would not remove blocks, so any changes to the file would "duplicate it".
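
For concreteness, a rough sketch of that wrapper (hypothetical paths, and assuming ipfs mount is running so that /ipns/local is a writable FUSE mount):

$ mv ~/photos/cat.jpg /ipns/local/cat.jpg
$ ln -s /ipns/local/cat.jpg ~/photos/cat.jpg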

To get around the duplication you need to basically do the above but manually (and possibly modify the daemon):

  • create a watch system (as you described above), or better yet, use fuse in ipns.
  • on writes unpin blocks that change
  • rebuild dag structure and update ipns entries (as would normally happen)

Kind of rough; did this on my phone on the bus.

The main issue is unpinning blocks and referencing the file in ipns.

hmm maybe i misunderstand. if you're ok with it being in the datastore, you don't need get, you just need pin/add.

@MichaelMure what are you trying to do exactly?

Sorry for not being really clear. English is not my first language.

This is my project. Have a look at the mockup. What I want to achieve is P2P software dedicated to two kinds of scenarios:

  • low-diffusion data (for instance, vacation pictures), shared with encryption, to specific recipients
  • public sharing of signed data from an identified source (kind of like a Twitter for data).

This can be seen as a mix of P2P software and a social network. I believe it would reduce the need for the general population to use services like Facebook, Dropbox, Drive and the like, as well as push the usage of encryption.

To be accepted by end users, this software should be as easy to use as possible. Run the installer and done. Having FUSE and symbolic links doesn't fit well here, especially with the multi-platform constraint.

I need a way to export plain files from IPFS once they are downloaded from the network. It would help if these files could still be made available on IPFS once exported, without the double disk-space cost. My best-case scenario would be to have IPFS track plain files on disk, with the required metadata (DAG of hashes, file size, ...) stored in the regular datastore, kind of like what a classic torrent client does. A command would allow importing/exporting data between the disk and the regular datastore. I also believe it would benefit IPFS and avoid the usage of FUSE trickery most of the time.

The less optimal scenario would be to be able to do the following (roughly sketched against the HTTP API after the list):

  • retrieve meta-data (with ls/refs)
  • share data (with add)
  • trigger a download (with pin)
  • export a file (<-- this is missing)
  • clean the datastore (pin rm/repo gc would work but that's not really targeted)
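
A rough sketch of that workflow against the existing HTTP API (local daemon on the default port; the hash is just the example from earlier in the thread, and add expects the file as a multipart upload):

$ curl "http://127.0.0.1:5001/api/v0/ls?arg=QmW2WQi7j6c7UgJTarActp7tDNikE4B2qXtFCfLPdsgaTQ"       # retrieve metadata
$ curl -F file=@cat.jpg "http://127.0.0.1:5001/api/v0/add"                                        # share data
$ curl "http://127.0.0.1:5001/api/v0/pin/add?arg=QmW2WQi7j6c7UgJTarActp7tDNikE4B2qXtFCfLPdsgaTQ"  # trigger a download
$ # export a file: no equivalent today -- this issue
$ curl "http://127.0.0.1:5001/api/v0/pin/rm?arg=QmW2WQi7j6c7UgJTarActp7tDNikE4B2qXtFCfLPdsgaTQ"   # clean the datastore ...
$ curl "http://127.0.0.1:5001/api/v0/repo/gc"                                                     # ... then garbage-collect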

In the end, I'm somewhat dependent on what IPFS can do, and Arbore is currently a free-time, one-developer project, so I won't be able to help much on the IPFS dev.

To complete the picture, here are the features that are currently missing in IPFS for the Arbore project.

I need a way to export plain files from IPFS once they are downloaded from the network. It would help if these files could still be made available on IPFS once exported, without the double disk-space cost. My best-case scenario would be to have IPFS track plain files on disk, with the required metadata (DAG of hashes, file size, ...) stored in the regular datastore, kind of like what a classic torrent client does. A command would allow importing/exporting data between the disk and the regular datastore. I also believe it would benefit IPFS and avoid the usage of FUSE trickery most of the time.

You don't have to use FUSE. IPFS comes with an HTTP API and a set of command-line commands that obviate the need for FUSE.

Exporting today has to take double the space. There are possible ways to avoid this -- like you mention -- but the added complexity is quite big: any file in the raw fs is susceptible to movement and mutation _without_ IPFS being aware. Even if we set up watching, there are many cases where no IPFS processes are running and the files may move. Lots of the time, the content IPFS is supposed to be aware of simply won't be there. We hope to address this someday and provide a repo that points to external data, but right now that would be taking on waaay too much complexity to provide a good UX. (This is -- by the way -- why Dropbox restricts use to "magic folders" that sync only while Dropbox is online. We could do something similar, but again, the UX overhead is huge.)

avoid the usage of FUSE

Do you say this because fuse has bad UX / is flakey?

  • export a file (<-- this is missing)

it exists. Try:

ipfs get -o=<desired-fs-path> <ipfs-path> 

You don't have to use FUSE. IPFS comes with an HTTP API and a set of command-line commands that obviate the need for FUSE.

I was answering @travisperson's suggestion to use FUSE combined with ipns.

(This is -- by the way -- why Dropbox restricts use to "magic folders" that sync only while Dropbox is online. We could do something similar, but again, the UX overhead is huge.)

Unless I'm mistaken, unlike Dropbox, IPFS doesn't need to track changes in files as soon as they occur. IPFS could just check for changes/deletions when the data is requested. No need for inotify or things like that.

The UX could be simple IMHO: ipfs object export and ipfs object import to move files in and out of the regular datastore, and maybe ipfs add --track to have IPFS be aware of files without copying them into the datastore.
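
For instance, roughly (all of these commands and flags are hypothetical; none of them exist today):

$ ipfs add --track ~/photos/cat.jpg     # hypothetical: index the file in place, without copying it into the datastore
$ ipfs object export <hash> ~/photos/   # hypothetical: write the plain file out of the datastore to disk
$ ipfs object import ~/photos/cat.jpg   # hypothetical: move the plain file back into the datastore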

it exists. Try: ipfs get -o=<desired-fs-path> <ipfs-path>

That's the origin of this bug: it doesn't work through the HTTP API.

I was answering @travisperson's suggestion to use FUSE combined with ipns.

ah sounds good

IPFS doesn't need to track changes in files as soon as they occur. IPFS could just check for changes/deletions when the data is requested.

Sounds possible -- I'm definitely open to exploring. Show me a good UX and I'm sold! I just don't have a ton of time right now to investigate myself and the rest of the core team is pretty loaded too.

That's the origin of this bug: it doesn't work through the HTTP API.

Could you pull it down as an archive and untar it manually? /api/v0/get?a=true or something. Sorry, I don't know all the plumbing stuff there very well, but I'm certain there's a way to get the stuff out the way you want to. It will unfortunately take up double the space for now, but at least it can work to grab the stuff.

Show me a good UX and I'm sold!

I'm confused, I think I just did. What do you want?

Could you pull it down as an archive and untar it manually? /api/v0/get?a=true or something.

I could, but:

  • it would be performance-intensive, especially for big files. IPFS --> tar --> HTTP --> untar --> disk looks excessive just to do IPFS --> disk
  • there is still the double disk usage (and thus the incentive for the user to _not_ share data)

I'm confused, I think I just did. What do you want?

I mean a working implementation we can play with and get a feel for. Much of IPFS's UX is determined with working prototypes. That way we can test the UX in real use and through a bunch of different scenarios.

Also -- I'm curious: does Arbore really need to have the raw data on disk, and not just use IPFS as a fs/db? You can embed IPFS in an application (and we're making this much easier). The UI mockups I see all show the data being accessed via the app, which is easy to do on top of IPFS itself. I imagine the desire is to put it on the native fs so users can manipulate it as usual. Maybe this can work like Dropbox, with a managed Arbore directory, or something. /brainstorming.

I imagine the desire is to put it on the native fs so users can manipulate it as usual.

This. I imagine it working as a regular torrent client would.

  • you may use the Arbore UI to access your data, but using your regular photo viewer would be nicer
  • Ideally, Arbore would work on Windows, Linux, mobile... Plain files are the common language to interact with external apps.
  • I don't think an opaque datastore would be a good UX, especially for the non-technical target audience.

These are the reasons for the Arbore project, but I think it would be useful for other IPFS use-cases, for instance the one described in #1216. Having this tracking datastore would allow sharing the data through HTTP and IPFS without much overhead.

the output argument is a flag used only by the local client. It should not be sent to the daemon (similar to how we don't send the api flag to the daemon over the http api)

Wait, this is actually a valid concern. @whyrusleeping is right that the client chooses to use json and converts it back to output. But Michael is right that the API should be able to output the other encodings if users request it in the API.

Therefore the fixes needed are:

  • [ ] the CLI client should remove the output flag before requesting from the API (using json instead)
  • [ ] if the flag is there, the API should respect it

This should be reopened. (Am on mobile and cannot)

Cc @richardlitt for API spec
On Fri, Jan 1, 2016 at 22:42 Jeromy Johnson [email protected] wrote:

the output argument is a flag used only by the local client. It should not be sent to the daemon (similar to how we don't send the api flag to the daemon over the http api)

Reply to this email directly or view it on GitHub
https://github.com/ipfs/go-ipfs/issues/1210#issuecomment-168361490.

Let's verify if this is still an issue.

I appear to be running into a similar issue with version 0.4.11 using the archive argument:

$ ipfs --version
ipfs version 0.4.11

With archive=true:

$ curl "http://localhost:5001/api/v0/get?archive=true&arg=/ipfs/QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG/readme"
readme0000644000000000000000000000210313165756415010435 0ustar0000000000000000Hello and Welcome to IPFS!

██╗██████╗ ███████╗███████╗
██║██╔══██╗██╔════╝██╔════╝
██║██████╔╝█████╗  ███████╗
██║██╔═══╝ ██╔══╝  ╚════██║
██║██║     ██║     ███████║
╚═╝╚═╝     ╚═╝     ╚══════╝

If you're seeing this, you have successfully installed
IPFS and are now interfacing with the ipfs merkledag!

 -------------------------------------------------------
| Warning:                                              |
|   This is alpha software. Use at your own discretion! |
|   Much is missing or lacking polish. There are bugs.  |
|   Not yet secure. Read the security notes for more.   |
 -------------------------------------------------------

Check out some of the other files in this directory:

  ./about
  ./help
  ./quick-start     <-- usage examples
  ./readme          <-- this file
  ./security-notes

Without it, I get the same response, including the preamble.

$ curl "http://localhost:5001/api/v0/get?archive=false&arg=/ipfs/QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG/readme"
readme0000644000000000000000000000210313165756405010434 0ustar0000000000000000Hello and Welcome to IPFS!

██╗██████╗ ███████╗███████╗
██║██╔══██╗██╔════╝██╔════╝
██║██████╔╝█████╗  ███████╗
██║██╔═══╝ ██╔══╝  ╚════██║
██║██║     ██║     ███████║
╚═╝╚═╝     ╚═╝     ╚══════╝

If you're seeing this, you have successfully installed
IPFS and are now interfacing with the ipfs merkledag!

 -------------------------------------------------------
| Warning:                                              |
|   This is alpha software. Use at your own discretion! |
|   Much is missing or lacking polish. There are bugs.  |
|   Not yet secure. Read the security notes for more.   |
 -------------------------------------------------------

Check out some of the other files in this directory:

  ./about
  ./help
  ./quick-start     <-- usage examples
  ./readme          <-- this file
  ./security-notes

When I use the CLI, I get:

$ ipfs get /ipfs/QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG/readme
Saving file(s) to readme
 1.08 KB / 1.08 KB [=====================================================] 100.00% 0s
$ cat readme 
Hello and Welcome to IPFS!

██╗██████╗ ███████╗███████╗
██║██╔══██╗██╔════╝██╔════╝
██║██████╔╝█████╗  ███████╗
██║██╔═══╝ ██╔══╝  ╚════██║
██║██║     ██║     ███████║
╚═╝╚═╝     ╚═╝     ╚══════╝

If you're seeing this, you have successfully installed
IPFS and are now interfacing with the ipfs merkledag!

 -------------------------------------------------------
| Warning:                                              |
|   This is alpha software. Use at your own discretion! |
|   Much is missing or lacking polish. There are bugs.  |
|   Not yet secure. Read the security notes for more.   |
 -------------------------------------------------------

Check out some of the other files in this directory:

  ./about
  ./help
  ./quick-start     <-- usage examples
  ./readme          <-- this file
  ./security-notes

The archive is the same either way; the flag is just boilerplate for the CLI. I think it is always sent as a tar stream.

This further shows that we need some sort of separation between the HTTP API and CLI options.
