@paulmelnikow I am unhappy with the increased strain that you put on the servers in 4b5bf03fea2fd7622140327760aa80060df95783 (#1273), and I'd like us to discuss it before I deploy a new version of the server.
Increased load means both increased costs and increased maintenance, and I disagree with the idea that we need two HTTP servers to run an SVG image service, a glorified string concatenator.
I'd like to discuss our options here.
Equally importantly, that change caused a significant regression that users would be bothered by:
- Search is suddenly very slow,
- Images get reloaded on every keystroke. Resetting the search bar reloads all images. It probably hurts the server, too.
- It is not even using maxAge caching, which we were using.

Also, what is this at the end?

Hmm, the maxAge caching should be there as long as `LONG_CACHE` is set at build time. I didn't set `LONG_CACHE` on server debug builds, since it makes for better debugging. Though that could be changed.
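For the record, here's a minimal sketch of how a build-time `LONG_CACHE` flag can gate the `max-age` header, assuming a plain Node HTTP server. Only the flag name comes from this thread; everything else is illustrative, not the project's actual code:

```js
// Minimal sketch (not the real code): gate Cache-Control max-age on LONG_CACHE.
const http = require('http')

const maxAge = process.env.LONG_CACHE === 'true'
  ? 3600 // long cache for production builds
  : 0 // no caching on debug builds, so changes show up immediately

http
  .createServer((req, res) => {
    res.setHeader('Cache-Control', `max-age=${maxAge}`)
    res.setHeader('Content-Type', 'image/svg+xml')
    res.end('<svg xmlns="http://www.w3.org/2000/svg"/>')
  })
  .listen(8080)
```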
Completely agreed, the second HTTP server definitely should not run on the server; it's intended for dev only. There were a couple of changes to environment variables on the server (badges/ServerScript#2), but otherwise the server startup process is the same as before: `node server`.
> - Search is suddenly very slow,
> - Images get reloaded on every keystroke. Resetting the search bar reloads all images. It probably hurts the server, too.
Can certainly work on these. I'm puzzled, though; it seems like something is not caching correctly.
Are you looking at https://shields.io/ or the build on one of the servers?
The Dynamic section was added in #820! It's a powerful badge that lets you pull dynamic content from JSON endpoints.
> Are you looking at https://shields.io/ or the build on one of the servers?
I checked locally before testing with one server in production, and stopped there because of what I saw.
How would I go about having a production build locally? The readme suggests `npm run build:production`, but it fails with `missing script: build:production`.
> The Dynamic section was added in #820! It's a powerful badge that lets you pull dynamic content from JSON endpoints.
That part's a good idea, and I saw it in the PR, but the focus of my screenshot was on the lost padding between badges, fields and the button. Sorry if I was unclear.
(Unrelated to this issue, and not a blocker: could you avoid printing all the GitHub tokens to the logs, please? I saw that you added an endpoint to fetch them, which is a better solution, especially since the tokens are too numerous and get partially cut off by `console.log` with a `... 80 more items`. Also, could you use the password field, instead of the username, as the place to put the password for that endpoint?)
> How would I go about having a production build locally? The readme suggests `npm run build:production`, but it fails with `missing script: build:production`.
Looks like those docs were missed. Sorry about that. It's `LONG_CACHE=true npm run build`.
> That part's a good idea, and I saw it in the PR, but the focus of my screenshot was on the lost padding between badges, fields and the button. Sorry if I was unclear.
Gotcha. Probably there was whitespace there before which is getting stripped out in JSX. I will look into it.
> (Unrelated to this issue, and not a blocker: could you avoid printing all the GitHub tokens to the logs, please? I saw that you added an endpoint to fetch them, which is a better solution, especially since the tokens are too numerous and get partially cut off by `console.log` with a `... 80 more items`. Also, could you use the password field, instead of the username, as the place to put the password for that endpoint?)
That's configurable in the environment with `GITHUB_DEBUG_ENABLED=false`. How about setting that in production? Totally understand if you don't want it running on the server, though I feel like it's a useful default for self-hosting, for one of the more complex parts of the server. Heh, if it's cut off, though, it's pretty useless… should fix that.
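For instance, assuming the server reads a dotenv-style environment file (the variable name is from this thread; the file itself and its location are assumptions):

```sh
# In the server's environment configuration (exact location is deployment-specific)
GITHUB_DEBUG_ENABLED=false
```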
For what it's worth: while the API endpoint gets the real tokens, the logging output is sanitized. You're seeing hashes of the GitHub tokens, not the actual tokens.
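Sanitized logging along those lines could look like this sketch, assuming Node's built-in `crypto` module and made-up token values (not the project's actual code):

```js
// Sketch: log short hashes of tokens instead of the tokens themselves.
const crypto = require('crypto')

function sanitizeToken(token) {
  // A truncated SHA-256 digest identifies a token in logs without leaking it.
  return crypto.createHash('sha256').update(token).digest('hex').slice(0, 8)
}

const tokens = ['ghp_exampleToken1', 'ghp_exampleToken2'] // made-up values
console.log(tokens.map(sanitizeToken))
```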
> - Search is suddenly very slow,
> - Images get reloaded on every keystroke. Resetting the search bar reloads all images. It probably hurts the server, too.
This is puzzling. I would think they'd be pulled from the browser cache for the life of the page, but clearly that isn't the case.
Probably this is not an issue when `LONG_CACHE=true`.
Thank you for working on those issues.

I added the `GITHUB_DEBUG_ENABLED` environment variable to the setup script. That said, I feel it should be opt-in instead of on by default; the default production configuration is not a debugging setup, after all. I made a PR for that.

Did you find a pleasant compromise for the performance of the search? The trick I used was to simply hide the needless rows in the table, since React by default tends to reconstruct DOM nodes, which is costly (see the sketch below). Could you reuse that trick without too much work? It is the last thing that makes me hesitate to push the red button.
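A rough sketch of that hide-the-rows trick, assuming a React table component; all names here are illustrative, not the actual frontend code. Toggling `display` keeps the `<tr>` nodes, and the badge images inside them, mounted, whereas filtering the array would unmount and rebuild them on every keystroke:

```jsx
import React from 'react'

function BadgeTable({ rows, query }) {
  return (
    <table>
      <tbody>
        {rows.map(row => (
          // Hide non-matching rows instead of removing them, so React keeps
          // the existing DOM nodes (and their loaded badge images) alive.
          <tr
            key={row.id}
            style={{ display: row.name.includes(query) ? '' : 'none' }}
          >
            <td>{row.name}</td>
            <td>
              <img src={row.badgeUrl} alt={row.name} />
            </td>
          </tr>
        ))}
      </tbody>
    </table>
  )
}
```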
Yea, that's certainly possible. It sounds like you feel this is a release blocker. DOM updates are part of the reason it feels frustratingly slow to start, though I think the bigger part is that the browser is loading lots of badges. Once the badges are loaded, it's slower than the current site, though not bad. It's also throttled, so the re-render fires at most once every 500 ms.
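That throttling could look roughly like the sketch below, assuming `lodash.throttle`; the 500 ms figure is from the comment above, and the function names are placeholders rather than the component's actual code:

```js
const throttle = require('lodash.throttle')

// Placeholder for whatever triggers the React re-render in the real app.
function rerenderSearchResults(query) {
  console.log('re-rendering for', query)
}

// Fire the expensive re-render at most once every 500 ms while typing.
const handleSearchInput = throttle(rerenderSearchResults, 500)

handleSearchInput('travis')
handleSearchInput('travis-ci') // coalesced into a single trailing call
```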
I would love to get this out, especially since once we have server logging, I should be able to deploy!
However, it's likely that tomorrow morning, NY time, is the soonest I will be able to solve this.
> though I think the bigger part is that the browser is loading lots of badges
So all the badges loaded and displayed on the shields.io webpage are actually freshly generated, "live" badges?
Merged an alternate fix #1393 instead.
I hope we can see a fresh deploy after all these performance improvements...
The live shields.io page is way outdated. ;)
Do you feel more comfortable now @espadrine? Or do you still think there are some roadblocks/issues?
All swell!
I deployed master to s0 for testing, tweaked some rate limiting values, and saw no major issue. So, master is now live everywhere.
Glad to hear it!