
My build takes less than 30 seconds, but as you can see, I have files that are old.
System:
OS: Windows 10
CPU: x64 Intel(R) Core(TM) i7-5600U CPU @ 2.60GHz
Binaries:
Yarn: 1.9.4 - C:\Program Files (x86)\Yarn\bin\yarn.CMD
npm: 5.5.1 - C:\Program Files\nodejs\npm.CMD
Browsers:
Edge: 42.17134.1.0
npmPackages:
gatsby: ^2.0.19 => 2.0.21
gatsby-image: ^2.0.17 => 2.0.17
gatsby-plugin-catch-links: ^2.0.4 => 2.0.4
gatsby-plugin-emotion: ^2.0.5 => 2.0.5
gatsby-plugin-google-analytics: ^2.0.6 => 2.0.6
gatsby-plugin-manifest: ^2.0.6 => 2.0.6
gatsby-plugin-offline: ^2.0.5 => 2.0.6
gatsby-plugin-react-helmet: ^3.0.0 => 3.0.0
gatsby-plugin-sharp: ^2.0.8 => 2.0.8
gatsby-plugin-sitemap: ^2.0.1 => 2.0.1
gatsby-plugin-typescript: ^2.0.0 => 2.0.0
gatsby-plugin-typography: ^2.2.0 => 2.2.0
gatsby-remark-images: ^2.0.4 => 2.0.4
gatsby-source-filesystem: ^2.0.3 => 2.0.3
gatsby-transformer-remark: ^2.1.7 => 2.1.7
gatsby-transformer-sharp: ^2.1.5 => 2.1.5
This is by design: if you deploy often and delete old build files, then people who opened the site during a previous deploy will get 404s for resources that were deleted.
So basically the public folder is stateful? So I need to check it in my git repository?
Looks like a fragile idea.
So what does the deployment process look like?
Do you delete the public folder every now and then?
Or do you never delete it and let the folder grow huge? I have over 1,000 files; after a clean build, only 300.
I'm really confused.
No, you wouldn't check it into your git repo. These are generated files. On each build, most files don't change. You can delete files if you'd like, but generally new files accumulate pretty slowly.
So if you don't check it in, your model is completely broken.
Some cases where it will break:
Let's say Dev A and Dev B each build and deploy from their own machine. Dev B's build will not include the public folder from Dev A's build, so Dev B will deploy a version missing the files built by Dev A, and so on. Now imagine Dev A deploys again: his deploy will contain his previous changes and the new ones, but not Dev B's. And if Dev A ever needs to reset his machine, all that history is lost.
This can't be a proper process; it's very fragile. I hope I misunderstood what you said.
Note: I'm not saying you should check in generated files, since the repo would grow huge. I'm just saying there is no proper solution here.
No. If you don't preserve the public directory, Gatsby just recreates the files. Where you went wrong was deleting only the public directory and not .cache as well. If you delete one of them, you need to delete both.
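The advice above can be sketched as a couple of commands, assuming a standard Gatsby project layout where both generated directories live at the project root:

```shell
# Delete both generated directories together; removing only one leaves
# Gatsby's build cache out of step with the public output.
rm -rf .cache public
```

The Gatsby CLI also ships a `gatsby clean` command that clears both directories in one step.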
Ok, I think I didn't explain myself properly; in any case it's not related to this issue. It relates to this one: https://github.com/gatsbyjs/gatsby/issues/9676 I will put the comment there.
This is by design: if you deploy often and delete old build files, then people who opened the site during a previous deploy will get 404s for resources that were deleted.
Is there a way to periodically clear out old files, say after 7 days, upon a new build, and force the site to refresh for users who still have the tab open when they return? My gh-pages branch often gets huge, with tons of old files left over from previous deploys.
I'm not really sure what the best practice is for cleaning public without breaking the site for stale users.
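One hedged sketch for the cleanup half of the question, assuming the deploy directory preserves file modification times and that no current page still references the older assets (the 7-day window and the `public` path are placeholders, not a recommendation):

```shell
# Delete files in the deploy output older than 7 days (by mtime),
# then prune any directories the deletion left empty.
find public -type f -mtime +7 -delete
find public -type d -empty -delete
```

This doesn't address forcing open tabs to refresh; that part depends on your service-worker / cache-header setup and is a separate concern.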
A quick solution would probably be to trigger a scheduled build with a service like Zapier. It has an integration with Netlify and some other hosts as well. Take a look and see if it fits what you intend to accomplish.