# ipfs version --all
go-ipfs version: 0.7.0-rc1-901c58b99
Repo version: 10
System version: amd64/linux
Golang version: go1.15
# ipfs repo stat -H
NumObjects: 130060
RepoSize: 30 GB
StorageMax: 20 GB
RepoPath: /home/ipfs/.ipfs
Version: fs-repo@10
I updated my node to v0.7.0-rc1 and now it is slowly eating all my disk space: when I went to sleep the repo was at 15 GB and it is now at 30 GB, with StorageMax set to 20 GB (it is only not bigger because there is no space left on my VPS).
Running ipfs repo gc temporarily fixes the problem, but obviously that is not something I want to keep doing by hand.
I have another peer running v0.7.0-rc1 in the same cluster and it is working just fine.
The node is not starved for resources (it has enough RAM, CPU, and file descriptors). IO is not great and there can be a lot of iowait when the node is under stress, but that should not be the problem.
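For reference, a minimal sketch of the manual workaround described above, using only the standard go-ipfs commands already shown in this report (the sizes in the comments are the ones from ipfs repo stat -H above, not new measurements):

```sh
# check repo usage against the configured maximum
ipfs repo stat -H    # RepoSize 30 GB vs StorageMax 20 GB in this case

# reclaim space by removing blocks that are no longer pinned
ipfs repo gc

# verify that the repo shrank; it starts growing past StorageMax again over time
ipfs repo stat -H
```

My full config is below.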
{
  "API": {
    "HTTPHeaders": {}
  },
  "Addresses": {
    "API": "/ip4/127.0.0.1/tcp/5001",
    "Announce": [],
    "Gateway": "/ip4/127.0.0.1/tcp/8080",
    "NoAnnounce": [],
    "Swarm": [
      "/ip4/0.0.0.0/tcp/4001",
      "/ip4/0.0.0.0/udp/4001/quic/",
      "/ip6/::/udp/4001/quic/",
      "/ip6/::/tcp/4001"
    ]
  },
  "Bootstrap": [
    "..."
  ],
  "Datastore": {
    "BloomFilterSize": 0,
    "GCPeriod": "1h",
    "HashOnRead": false,
    "Spec": {
      "mounts": [
        {
          "child": {
            "path": "blocks",
            "shardFunc": "/repo/flatfs/shard/v1/next-to-last/2",
            "sync": true,
            "type": "flatfs"
          },
          "mountpoint": "/blocks",
          "prefix": "flatfs.datastore",
          "type": "measure"
        },
        {
          "child": {
            "compression": "none",
            "path": "datastore",
            "type": "levelds"
          },
          "mountpoint": "/",
          "prefix": "leveldb.datastore",
          "type": "measure"
        }
      ],
      "type": "mount"
    },
    "StorageGCWatermark": 90,
    "StorageMax": "20GB"
  },
  "Discovery": {
    "MDNS": {
      "Enabled": false,
      "Interval": 10
    }
  },
  "Experimental": {
    "FilestoreEnabled": false,
    "Libp2pStreamMounting": false,
    "P2pHttpProxy": false,
    "PreferTLS": true,
    "QUIC": true,
    "ShardingEnabled": true,
    "StrategicProviding": false,
    "UrlstoreEnabled": false
  },
  "Gateway": {
    "APICommands": [],
    "HTTPHeaders": {
      "Access-Control-Allow-Headers": [
        "X-Requested-With",
        "Range",
        "User-Agent"
      ],
      "Access-Control-Allow-Methods": [
        "GET"
      ],
      "Access-Control-Allow-Origin": [
        "*"
      ]
    },
    "NoFetch": false,
    "PathPrefixes": [],
    "RootRedirect": "",
    "Writable": false
  },
  "Identity": {
    "PeerID": "QmeSn1aFaDAtnM2ZjADu3F1LvuMsf63QGMRkd5hJjn8hZU"
  },
  "Ipns": {
    "RecordLifetime": "",
    "RepublishPeriod": "",
    "ResolveCacheSize": 128
  },
  "Mounts": {
    "FuseAllowOther": false,
    "IPFS": "/ipfs",
    "IPNS": "/ipns"
  },
  "Plugins": {
    "Plugins": null
  },
  "Provider": {
    "Strategy": ""
  },
  "Pubsub": {
    "DisableSigning": false,
    "Router": "",
    "StrictSignatureVerification": false
  },
  "Reprovider": {
    "Interval": "12h",
    "Strategy": "all"
  },
  "Routing": {
    "Type": "dht"
  },
  "Swarm": {
    "AddrFilters": [],
    "ConnMgr": {
      "GracePeriod": "20s",
      "HighWater": 2500,
      "LowWater": 600,
      "Type": "basic"
    },
    "DisableBandwidthMetrics": true,
    "DisableNatPortMap": true,
    "DisableRelay": false,
    "EnableAutoNATService": true,
    "EnableAutoRelay": false,
    "EnableRelayHop": false
  }
}
@Jorropo your GCPeriod is set to the default 1h, which means it's entirely possible that you will exceed your storage capacity in between GC periods.
Additionally, automatic GC is not enabled by default and requires you to pass the --enable-gc flag to ipfs daemon (described in the docs for the datastore storage maximum: https://github.com/ipfs/go-ipfs/blob/0ad6a92716423dc23d944bce1fec2ad18f163907/docs/config.md#datastorestoragemax).
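A minimal sketch of what that looks like in practice, assuming the config dump above (the 30m interval below is only an illustrative value, not a recommendation from this thread; config changes take effect after restarting the daemon):

```sh
# start the daemon with automatic GC enabled (it is off by default)
ipfs daemon --enable-gc

# inspect the GC-related settings from the config dump
ipfs config Datastore.GCPeriod             # "1h" here
ipfs config Datastore.StorageGCWatermark   # 90 (percent of StorageMax)
ipfs config Datastore.StorageMax           # "20GB"

# optionally run GC more often than the default 1h (30m is only an example)
ipfs config Datastore.GCPeriod 30m
```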
@aschmahmann
Ok, thanks for --enable-gc. I was exceeding the limit over roughly a 6h period (so 1h is enough), but what is the motivation for this not being the default? This is a dangerous space hog (for me at least).
> but what is the motivation for this not being the default? This is a dangerous space hog (for me at least).
The short version is that the way we clean up blocks that are no longer pinned (whether directly or indirectly) has some significant downsides. The two biggest are that GC takes a long time on large repos and that the blockstore is locked while it runs, which combined can severely disrupt usage. There are a number of issues describing the existing GC problems and some proposed solutions (e.g. taking a reference-counting approach), but solving this has not yet made it to the top of the todo list.
One issue with some info on this is https://github.com/ipfs/go-ipfs/issues/4382
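Not something suggested in this thread, but as an illustration of that trade-off: since automatic GC locks the blockstore while it runs, one option is to leave --enable-gc off and schedule the manual ipfs repo gc at a quiet time instead. The schedule, binary path, and log path below are arbitrary placeholders:

```sh
# hypothetical crontab entry (crontab -e to edit): run a manual GC every night at 04:00
# --quiet keeps the output minimal; set IPFS_PATH first if the repo is not at the
# cron user's default ~/.ipfs
0 4 * * * /usr/local/bin/ipfs repo gc --quiet >> /var/log/ipfs-gc.log 2>&1
```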