I'm creating this as a place to discuss how to create tests for this project, so as not to dominate #37185
@ojeytonwilliams For URL Shortener, I need to POST a URL (google, for example) to shorten it, and it should return { "original_url": "www.google.com", "short_url": 1 }.
When I access $.get(getUserInput('url') + '/api/shorturl/1'), I should be redirected to google.com. I will need to either 1) store the original_url as a variable OR 2) POST and then GET in one function. Is there a way to do the first one, because that would be cleaner and more readable? And where does the first getUserInput get called from, actually?
Here's an excerpt of my tests (the second test is incomplete):
tests:
  - text: 'It should handle a URL as parameter and return shortened URL'
    testString: 'getUserInput => $.post(getUserInput('url') + '/api/shorturl/new').then(data => assert.exists(data.short_url), xhr => { throw new Error(xhr.responseText)})'
  - text: 'It should handle a shortened URL and redirect to original link'
    testString: 'getUserInput => $.get(getUserInput(''url'') + ''/api/shorturl/3'').then()
Noted. I was wondering if anyone has a clue. If not, I'll just make a PR first with what I have including the test to ensure the url given is not the example url.
I put a link on the main issue to some examples, or possibly stuff that can just be copy/pasted - looks like whoever made them stores some variables on the window object.
And where does the first getUserInput get called from actually?
That's called by the browser as the test is run. Basically what happens is, for each test the user's code is evaluated and then the test code is evaluated. All getUserInput('url')
does is grab the url that the user has entered as a solution.
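In other words, roughly (a simplified sketch, not the actual test-runner code; the names below are made up for illustration):
// Hypothetical sketch of how a testString gets exercised - not the real freeCodeCamp runner.
// $ is jQuery, which is available in the browser environment the tests run in.
const testString = 'getUserInput => $.get(getUserInput("url") + "/api/shorturl/1")';
// The test code is evaluated into a function...
const testFn = eval(testString);
// ...and called with a getUserInput that returns whatever the camper entered as their
// solution (e.g. the URL of their Glitch project). solutionFormValues is a made-up name.
const getUserInput = key => solutionFormValues[key];
testFn(getUserInput);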
looks like whoever made them stores some variables on the window object.
That or declare a variable in the test and set it in the then
of the first promise. Either should be fine.
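Something along these lines, with both requests inside one test (just a sketch of the pattern; the endpoint payload and example URL are placeholders, not the final test):
getUserInput => {
  // Variable declared in the test, set inside the then of the first promise.
  let shortUrl;
  return $.post(getUserInput("url") + "/api/shorturl/new", { url: "https://example.com" })
    .then(data => {
      shortUrl = data.short_url;
      // The second request can now use the value from the first response.
      return $.get(getUserInput("url") + "/api/shorturl/" + shortUrl);
    });
}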
declare a variable in the test and set it in the then of the first promise.
Would that allow for use of the variable in the next test?
So the first test would be to post a url, get a short url number in the response -> set it to something in the first then
-> use it for the get
in the next test?
You said it should be fine - just trying to confirm that's exactly what you mean. I suppose I could go test it out.
Would that allow for use of the variable in the next test?
It shouldn't - the tests should be entirely independent. If the tests are influencing each other, that's a nasty bug!
Well, I gave it a test locally - and it looks like it works.
test1: window.test = 'test';
test2: console.log(window.test); //logs 'test'
So, for this challenge, you kind of need the short_url
from the response of one test to make a GET
request in another test (unless you string it all into one test - which isn't that great). Looks like attaching it to the window object will make it available there. What do you think about this approach @ojeytonwilliams ?
There may be multiple projects on the back end that need to make requests like this - meaning using a response from one in another test.
Thanks for testing that. So, what I think is that the window
object should be off-limits as the potential for creating bizarre and unpredictable bugs is rather high.
Right now we run the tests sequentially, but what about if we decide to run them in parallel to speed things up? After all, waiting for multiple 5 second timeouts is a pain. Suddenly these kinds of tests can fail, more or less at random. Even if we keep the test-runner synced, it makes it hard to maintain as you can't simply change one test - you might have to change all the dependent tests.
So, while we can do this, I think we'd regret it.
I think running the tests in parallel is a good idea, but how would it help us with this problem?
I also just thought of something: the 5 second limit might not actually be enough with these, since Glitch projects have to "wake up"/restart the server and install dependencies. Not sure if that is an issue or not. Probably fine for now.
I think running the tests in parallel is a good idea, but how would it help us with this problem?
It wouldn't help here. I was just talking about what might go wrong if we started running them in parallel.
I also just thought of something: the 5 second limit might not actually be enough with these, since Glitch projects have to "wake up"/restart the server and install dependencies. Not sure if that is an issue or not. Probably fine for now.
It'll be fine most of the time, since they'll be working on the projects and keeping them awake. Could be worth popping a warning somewhere, though.
Okay, I have finished the test for shortening the URL and redirecting it:
- text: 'It should handle a shortened URL and redirect to original link'
testString: 'getUserInput => {
const data = $.post(getUserInput('url') + '/api/shorturl/new').then(data => { if (data) { return data },
xhr => { throw new Error(xhr.responseText)});
return $.get(getUserInput('url') + `/api/shorturl/${data.short_url}`).then(res => { if (res.url) { return assert.strictEqual(res.url, `${data.original_url}`) },
xhr => { throw new Error(xhr.responseText)}); }'
How do I test if it is working before doing a PR? I have looked at the package.json and couldn't figure it out. I'm assuming there's a parser that converts the yml code into tests, but where can I test my test cases locally?
@jenlky npm run test:curriculum
and npm run develop
to see the actual site live.
@RandellDawson I ran npm install in the root folder, and npm run develop threw me:
Cannot find module '../build/Release/sharp.node'
- Remove the "node_modules/sharp" directory, run "npm install" and look for errors
- Consult the installation documentation at https://sharp.pixelplumbing.com/en/stable/install/
- Search for this error at https://github.com/lovell/sharp/issues
⠋ load plugins
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! @freecodecamp/[email protected] develop: `node --inspect=0.0.0.0:9228 node_modules/gatsby-cli develop`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the @freecodecamp/[email protected] develop script
When I ran npm run test:curriculum in the root folder, it threw me:
0 passing (5ms)
1 failing
1) Uncaught error outside test suite:
Uncaught Error: unhandledRejection: YAMLException, bad indentation of a sequence entry at line 5, column 15:
testString: assert($(“style”).text().repla ...
^ in file C:\Users\jenss\Documents\Open Source\freeCodeCamp-1\curriculum\challenges\chinese\01-responsive-web-design\responsive-web-design-principles\create-a-media-query.chinese.md
at process.on.err (test\test-challenges.js:86:11)
at emitPromiseRejectionWarnings (internal/process/promises.js:119:20)
at process._tickCallback (internal/process/next_tick.js:69:34)
Is it just me? Or does anyone else get the same problem?
@jenlky To avoid taking this issue off-topic, join the collaborator's room on Gitter and we can discuss your issue further.
@jenlky there are a few things needing fixing. As the code is represented as a string you have to escape '
or just use "
inside the string. So $.post(getUserInput('url')
-> $.post(getUserInput("url")
and so on.
Once you've done that there are some syntax errors, so I recommend first creating the test inside a .js file, so your editor can flag these things up for you.
Also $.post
needs to send a URL to the api endpoint, so it has to have this form:
$.post(getUserInput("url") + "/api/shorturl/new", {url: "example-url-goes-here"})
Finally, jQuery's ajax functions (like get and post) return jqXHR objects, since they're asynchronous. Those objects are fancy Promises and you can get the results out using .then
or via async
and await
.
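Putting that together in a plain .js file first (a sketch only; the example URL, the function name, and the shape of the GET response are assumptions, not the final test), the test body might look something like:
// Sketch of the test body as a plain JS function, before it gets wrapped into a YAML testString.
// Double quotes are used for the JS strings so the surrounding YAML single quotes won't need escaping.
async function shortUrlRedirectTest(getUserInput) {
  // POST the URL to shorten in the request body (placeholder URL - an assumption).
  const postData = await $.post(getUserInput("url") + "/api/shorturl/new", {
    url: "https://www.google.com"
  });
  // jqXHR objects are thenable, so await resolves them like Promises.
  const getData = await $.get(getUserInput("url") + "/api/shorturl/" + postData.short_url);
  // Assumes the GET response exposes the redirect target as getData.url, as in the excerpt above.
  assert.strictEqual(getData.url, postData.original_url);
}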
Hi. I've just been doing this project as a learner. It also needs a test/user story that requires the use of a persistent database (e.g. Mongo), because I can currently pass all the tests using just an in-memory data store.
I've added tests to this Microservice project in this PR #39311