For the last few days, the example on our home page has been showing the server as unresponsive.
Their website is not working either: http://codetally.com/
Tests failing:
1) Codetally
Codetally
[ GET /triggerman722/colorstrap.json ]:
Error: Request timed out after 5000 ms
at Timeout.handleRequestTimeout [as _onTimeout] (node_modules/icedfrisby/lib/icedfrisby.js:1098:35)
Is the service gone?
I submitted a quick issue on their main GitHub repository.
The website is back up and the tests are showing a new error.
Yes, the website is back up, but things are not working properly. The API consistently returns formattedshield as plain text no matter what we query, which we obviously can't parse.
I know we're reliant on upstream support in codetally to resolve this, but I just want to call out that once it is done, all the service tests should be passing (this is the last consistently failing test)!
1171 passing (3m)
1 failing
1) Codetally
Codetally
[ GET /triggerman722/colorstrap.json ]:
ValidationError: child "value" fails because ["value" with value "invalid" fails to match the required pattern: /\b\d+(?:\.\d+)?/]
at Object.exports.process (node_modules/joi/lib/errors.js:203:19)
at internals.Object._validateWithOptions (node_modules/joi/lib/types/any/index.js:764:31)
at module.exports.internals.Any.root.validate (node_modules/joi/lib/index.js:147:23)
at Object.pathMatch.matchJSONTypes (node_modules/icedfrisby/lib/pathMatch.js:303:9)
at _expect (node_modules/icedfrisby/lib/icedfrisby.js:563:10)
at IcedFrisbyNock._invokeExpects (node_modules/icedfrisby/lib/icedfrisby.js:1261:26)
at start (node_modules/icedfrisby/lib/icedfrisby.js:1244:12)
at Request.runCallback [as _callback] (node_modules/icedfrisby/lib/icedfrisby.js:1131:16)
at Request.self.callback (node_modules/request/request.js:185:22)
at Request.<anonymous> (node_modules/request/request.js:1161:10)
at IncomingMessage.<anonymous> (node_modules/request/request.js:1083:12)
at endReadableNT (_stream_readable.js:1064:12)
at _combinedTickCallback (internal/process/next_tick.js:139:11)
at process._tickDomainCallback (internal/process/next_tick.js:219:9)
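For context on that failure: the check is a Joi pattern match on the badge's value, and the plaintext body the broken API produces can't satisfy it. Here is a minimal sketch of the kind of validation involved (an assumed example with a hypothetical schema, not the actual test code; it just mirrors the pattern shown in the error above):

const Joi = require('joi')

// Hypothetical schema mirroring the pattern from the error above:
// the badge value is expected to contain a number like "0.02".
const value = Joi.string().regex(/\b\d+(?:\.\d+)?/)

console.log(value.validate('0.02').error)    // no error: a numeric value passes
console.log(value.validate('invalid').error) // ValidationError: fails to match the required pattern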
I'm getting a 503 error @ https://www.codetally.com/formattedshield/triggerman722/colorstrap
503 ERROR
The request could not be satisfied.
The Lambda function associated with the CloudFront distribution is invalid or doesn't have the required permissions.
If you received this error while trying to use an app or access a website, please contact the provider or website owner for assistance.
If you provide content to customers through CloudFront, you can find steps to troubleshoot and help prevent this error by following steps in the CloudFront documentation (http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/http-503-service-unavailable.html).
Generated by cloudfront (CloudFront)
Request ID: j0JEh2gisYbLE7_o2WLpryarRPn1p7vjGhoLjdssAOW1jAVQnIzuww==
Tests are passing for me in my local environment.
(edited, as the 503 error disappeared and the test started passing)
Confirmed!
All green :smile: https://circleci.com/gh/badges/daily-tests/38

Amazing! :beers:
I think that's probably the first time since we started running the daily tests that I've seen all of them pass at the same time :D
Out of interest, how have we got round the timeouts? We used to have a bunch of tests failing with timeouts on a full test run. I probably didn't read something I should have...
There were a handful of live tests that would intermittently take longer than the default 5-second timeout, so on those tests we added an increased timeout period (by adding a .timeout(10000) to the test).
Some of those discussions took place here
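For reference, a rough sketch of what that looks like in an IcedFrisby test (hypothetical test name and URL, not the actual daily-tests code):

const frisby = require('icedfrisby')

frisby
  .create('Codetally: colorstrap badge')  // hypothetical test name
  .get('https://img.shields.io/codetally/triggerman722/colorstrap.json')  // assumed badge endpoint
  .timeout(10000)  // allow up to 10 s instead of the default 5000 ms
  .expectStatus(200)
  .toss()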
It's amazing to have everything passing! Getting a count of the number of failing tests would be nice, though. It might help us reward ourselves for fixing one or two of these. It seems pretty likely that, out of 100 services, someone will be having downtime.
@paulmelnikow do you mean a count of the number of tests that failed with timeout issues, or a different total?
I just did a quick count and got 13 individual service tests that had an increased timeout period, across 7 services.
Codetally - 1 test
dub - 1 test
f-droid - 1 test
github - 3 tests
jenkins - 1 test
librariesio - 2 tests
wheelmap - 4 tests
In addition to the handful of increased timeouts, I know there were some efforts like this that updated the tests to make them more stable. Some tests were failing due to upstream service issues (like Codetally :smile:), some of the failures were coming from upstream services being deprecated (like nsp), and a few tests started passing again after the service was updated and/or migrated to the new service model.
That's 2 in a row --> https://circleci.com/gh/badges/daily-tests/39