Describe your problem and how to reproduce it:
I have been working on #37185 (Add Tests for Backend Projects) as I've worked through the QA curriculum. I have evaluated the user stories for the Sudoku Solver and American-British Translator and determined that it is not possible to create an FCC hosted test for these projects with any degree of coverage.
The problem is that these two projects are effectively "frontend" projects. Neither has an API nor any significant server-side content. All of the user interaction takes place on the UI, typically via event handlers.
During the time I spent attempting to write my personal unit/functional tests for Sudoku Solver I explored a number of possibilities to enable virtual DOM/headless testing of functional interactions.
JSDOM (current solution)
Lacks the ability to trigger events or, if it's possible, I was unable to get it to work with my code. I tried manually injecting events, but they never fired my handlers. Even if this did work, while it is theoretically possible to run JSDOM natively in the client, doing so opens up the very real possibility of XSS attacks on a user's fCC profile, assuming we could overcome the CORS issues.
On the server hosted tests I was able to make this work by exporting and manually executing individual handler functions, but only for the unit testing. I was unable to make the functional tests run with JSDOM, as they require events to trigger.
Zombie.js
I had no better luck getting events to trigger with Zombie.js. The same client-side security issues potentially exist.
Puppeteer (Server Side Only)
I ultimately used Puppeteer for functional testing - this launches a headless version of Chrome and programmatically simulates user interaction. I don't think this would scale well for fCC hosted testing.
Another option would be to hit the `/_api/get-tests` endpoint and parse for `passed` on various tests. This would require changing the user stories which are displayed on the fCC landing pages to more closely match the verbiage found in the `1_unit-tests.js` and `2_functional-tests.js` files. This might create some confusion? It is also only slightly better than the "honor system", although it's slightly harder to fake that output if we're counting assertions.

An aside about the front-end projects and the "honor system":
I've been wondering if it would be possible to digitally "sign" the front-end tests in some meaningful way. Since the code is hosted by fCC, it might be possible to dynamically generate it with a user-specific id.
Imagine this scenario:
When the script is fetched there is a GET parameter on the filename, maybe an e-mail address?
<script src="https://cdn.freecodecamp.org/testable-projects-fcc/v1/[email protected]"></script>
or a unique ID which is provided by the site?
<script src="https://cdn.freecodecamp.org/testable-projects-fcc/v1/bundle.js?camperId=54a3b21a4d246f"></script>
These scripts could be dynamically generated with some sort of camper specific signature.
Then, when all functional tests complete, a checksum based on a combination of the passing tests and camper signature could be created. The user is given the option to SUBMIT to the fCC site, from the client-side script. The signature could be compared against the currently logged in user and the signed checksum compared against the computed checksum on the server side.
It would certainly be POSSIBLE to hack this, but my expectation is that it would require much more effort than just completing the tests.
Frankly, this may be overkill. It adds a lot of overhead to the server side to dynamically create a script every time, compute checksums, etc.
The lesser alternative might just be to provide the option to submit the completed tests from the script, once all of the functional tests pass. Then we just do a POST to a known fCC endpoint per-project.
https://github.com/freeCodeCamp/freeCodeCamp/issues/39692#issuecomment-698950043
Thanks for this recommendation, you are correct. It would be difficult to scale it on our available resources. So, sadly, I am going to rule it out, at least for now.
Let's discuss our options here:
Write two new modules for the front-end testing framework. ...
This one seems like a good interim solution and the path of least resistance? I would like @mot01 and @RandellDawson to leave some feedback though.
Rewrite these projects as backend with an API. ...
I am actually in favor of this, because if we had at least one backend component it would be useful, and not to forget that we are presenting both these projects as backend projects!
Rewrite these projects as backend with an API. ...
I can see this getting complicated soon enough as contributors try to improve the tests or modify the user stories for whatever reason.
You can see an example of what I'm talking about here:
https://github.com/freeCodeCamp/freeCodeCamp/pull/39615/files#diff-40bcd8c9d801a2a7814de509807dfa2dR177
I agree, it's unsatisfactory.
In terms of putting a backend API on either of the projects, I think it would be fairly straightforward and would improve our external test coverage.
Thinking out loud -
- I can POST to `/api/solve` with form data containing `puzzle`, which will consist of the text representation of a puzzle. The returned object will contain `puzzle` with the submitted puzzle and `solution` with the solved puzzle.
- If `puzzle` contains values which are not numbers or periods, the returned value will be `{ error: 'invalid characters in puzzle' }`.
- If `puzzle` is greater or less than 81 characters, the returned value will be `{ error: 'expected puzzle to be 81 characters long' }`.
- If `puzzle` is invalid or cannot be solved, the returned value will be `{ error: 'puzzle cannot be solved' }`.

There would need to be at least 4 more API-specific functional tests.
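A rough sketch of what that validation might look like - the field name and error strings follow the proposal above, while the function name and route wiring are assumptions:

```javascript
// Illustrative validation for the proposed /api/solve endpoint.
// Returns one of the error objects from the draft user stories,
// or null when the puzzle string passes validation.
function validatePuzzle(puzzle) {
  // "numbers or periods" is assumed to mean the digits 1-9 plus '.'
  if (/[^1-9.]/.test(puzzle)) {
    return { error: 'invalid characters in puzzle' };
  }
  // A Sudoku grid is 9x9, so the text representation is 81 characters
  if (puzzle.length !== 81) {
    return { error: 'expected puzzle to be 81 characters long' };
  }
  return null; // no validation error; hand off to the actual solver
}
```

The route handler would call this first and short-circuit with the error object, only invoking the solver on a clean string.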
There is the question of if the front-end for this project should be re-written to take advantage of the API. That would require some additional changes to the user stories, but would probably simplify them somewhat. For example, we could treat the grid/text area transactions as completely client side and all solving activity would be via the API.
Validation of the text string would happen server side during a solve operation. The returned error message would be echoed to the `#error-msg` box. It might make sense to reformat the errors I described above to be properly capitalized and expanded so they could be dropped right into the error output.
The project should be restructured to move the solving logic server-side, keeping only the client-side grid input and the translation from grid to text and vice versa.
- I can POST to `/api/translate` with a body containing `text` with the text to translate, `direction` with either `american-to-british` or `british-to-american`, and `highlight` with a Boolean indicating if the translated terms should be wrapped in a `span` with the highlight class. The returned object should contain the submitted `text` and `translation` with the translated text.
- If one or more of the required fields is missing, return `{ error: 'required fields missing: <comma separated list of missing fields (in the above order)>' }`.
- If `text` is empty, return `{ error: 'no text to translate' }`.
- If `direction` does not match one of the two specified directions, return `{ error: 'invalid value for direction field' }`.
- If `highlight` is not a Boolean, return `{ error: 'invalid value for highlight field' }`.

At least 5 more functional tests would be added.
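A hedged sketch of the request validation described above - the field names and error messages mirror the draft user stories, while the function name and body handling are assumptions:

```javascript
// Illustrative validation for the proposed /api/translate endpoint.
// Returns one of the error objects from the draft user stories,
// or null when the request body passes validation.
function validateTranslateRequest(body) {
  // Required fields, in the order given in the draft story
  const required = ['text', 'direction', 'highlight'];
  const missing = required.filter((field) => !(field in body));
  if (missing.length > 0) {
    return { error: `required fields missing: ${missing.join(', ')}` };
  }
  if (body.text === '') {
    return { error: 'no text to translate' };
  }
  const directions = ['american-to-british', 'british-to-american'];
  if (!directions.includes(body.direction)) {
    return { error: 'invalid value for direction field' };
  }
  if (typeof body.highlight !== 'boolean') {
    return { error: 'invalid value for highlight field' };
  }
  return null; // hand off to the translator
}
```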
This project is easier to recast as a largely back-end project. It might make sense to have a "highlight translation" checkbox on the front-end. The translation libraries would need to be converted to CommonJS modules to be included server-side.
As with Sudoku Solver above, returned errors would go in the `#error-msg` container. Again, it might make sense to format the returned errors for direct display on the client side.
For both of these projects the user stories would need to be significantly rewritten and restructured to make sense with the new backend requirements. It might make sense to break the user stories into "Front End" and "API" blocks to simplify the explanation.
I do think the added complexity is commensurate with these being the last two projects for the cert.
Rewrite these projects as backend with an API. ...
This seems to make the most sense to me. It's going to be more work initially - but being able to run the tests on the website makes it worth it, I think. I think we should always try to do this when we can. As Mrugesh said, "we are presenting both these projects as backend projects!" - making these adjustments would satisfy that and more closely align them with the other projects in this section. Do we have any concerns about backwards compatibility here? Users have likely already completed these projects and, although there weren't any actual tests, certainly those projects wouldn't pass if we make these adjustments.
Write two new modules for the front-end testing framework. ...
Although this would be an easier solution, I don't think it's better. If we just want to have tests somewhere, this would work. But I vote to adjust the projects.
Just an FYI @SaintPeter - we are looking into ways to simplify some things with the projects to make them a little easier to maintain. The user stories, for instance, are in many places (example projects, boilerplates, on the /learn page), and we will likely be moving most of them to just be on the website so we don't need to worry about updating them in all these places.
Just out of curiosity, do you have any interest in adjusting these projects @SaintPeter?
Do we have any concerns about backwards compatibility here? Users have likely already completed these projects and, although there weren't any actual tests, certainly those projects wouldn't pass if we make these adjustments.
That is correct, so anyone who was in the middle of a project before we make the change would be affected. The users who have submitted it before the date of change should not be affected.
I think this is a small inconvenience that is fine?
Just out of curiosity, do you have any interest in adjusting these projects @SaintPeter?
Yeah, I think I'd be willing to make the changes. I have some time over the next month before I start my new job full time and this seems like an entertaining diversion. I've already completed both projects, so I'd likely convert the projects then work backwards to the boilerplate.
. . . we are looking into ways to simplify some things with the projects to make them a little easier to maintain. The user stories, for instance, are in many places (example projects, boilerplates, on the /learn page), and we will likely be moving most of them to just be on the website so we don't need to worry about updating them in all these places.
There will always need to be a bit of duplication - I don't feel like the presentation of the user stories in the tests on the /learn page is the best, so having them cleanly formatted in the body of the project description on the same page would probably be necessary. There is also not always a 1:1 correspondence between the tests and the user stories. The predicate of this entire conversation is that we CAN'T test (and won't be able to test) some of the client-side-only features.
Just let me know what you'd like and I should be able to accommodate it.
I am wondering what we should do about the "functional" tests on the Sudoku Solver. As I mentioned in my original message, I ultimately went to Puppeteer in order to get a "true" functional test and that was non-trivial. Using JSDOM to achieve similar results means that, typically, we need to manually "trigger" our event handlers. In some cases that's not even possible.
For example, in order to prevent non-numeric input in the grid I added a handler on the `keydown` event on input boxes (technically I put it on the document and used `e.target` to determine if it was a cell). I used `e.preventDefault()` to disallow certain characters. It's not possible to mock that on JSDOM (that I am aware of; I'd love to be corrected).
In short: should we have a user story that requires a test that neither we nor the user can easily perform? The new "invalid character" error should capture bad inputs if they just naïvely accept all inputs to the grid. Or mark the story as optional?
@raisedadead, @moT01 -
We seem to have consensus on these and no serious objections to my proposed user stories. I'll probably change and reduce the scope of the front-end facing stories as well.
If you give me the go-ahead then I'll start work on these this weekend.
Here is a rough plan:
I'd like to move on this somewhat soon, as I will not have much free time in the future.
Sorry @SaintPeter, I'm hesitant here because the projects we have work pretty well - we just can't test them on the fCC side. If you want to just go for it, we can review what you come up with - but I can't promise they will be integrated. Like I said, I do like the idea - and @raisedadead seemed to be on board, as well - so we can likely make the changes. I would be sure to add user stories for the tests... e.g. All # functional tests should pass
You could also possibly break up some of the user stories you came up with...
I can POST /api/solve with form data containing puzzle which will consist of the text representation of a puzzle. The returned object will contain puzzle with the submitted puzzle and solution with the solved puzzle.
to
I can POST /api/solve with form data containing puzzle which will consist of the text representation of a puzzle in the form of numbers and periods
A solvable puzzle will return the solution to that puzzle in the form of numbers and periods
Something along those lines maybe, but probably worded a little better. And something similar for that first test of the translator.
You could add a user story for a specific puzzle to try and make it more clear as well...
Posting the puzzle "1..2..3..4..55..." should return "123414151234"
^^ Same with the translator
This is a large return message:
If one or more of the required fields is missing, return { error: 'required fields missing: <comma separated list of missing fields (in the above order)>' }
Perhaps something more generic would be easier - "required fields missing" or something.
These are some ideas, they're not set in stone.
I think the projects would be an improvement so go for it. As for the deployments - please work with the current state of master.
The new hosted versions can't go live as of now because of the deployment freeze; when they do, it would be our responsibility to audit and update the links, etc.
The deployment freeze had to be extended because of an oversight with the parser that we had not thought of.
You could also possibly break up some of the user stories you came up with...
Sure, I'll break the user stories up and make them a bit more atomic. The wording was just off the cuff, so we can polish them a bit.
I'll see what I can do over the weekend.
This morning I met with @nhcarrigan and @Sky020 on Discord and we had a detailed discussion about the Sudoku Solver project, in an effort to move it to a more back-end focused project, as discussed in this thread.
We came up with a new concept for the solver which will focus more on understanding the algorithm enough to test it properly, by adding a new endpoint `/api/check` which takes a puzzle, coordinate, and value and determines if it is a valid placement. This seems to be in line with the sort of functionality that an interactive Sudoku solver might have and aligns closely with the goals of the "Quality Assurance" section.
This would also result in a more minimalist front-end which checks the two routes separately. It would have a bit more fCC supplied code to enable the grid->text and text->grid movement, but that's more of a convenience for the tester than a "feature" of the project and we won't test for it.
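A minimal sketch of the placement check described above, assuming the puzzle travels as an 81-character row-major string with periods for blanks and the value arrives as a single-character string - the function name and exact shapes are illustrative, not the reference implementation:

```javascript
// Illustrative placement check for the proposed /api/check endpoint.
// puzzle: 81-char row-major string, '.' for blanks
// coordinate: letter A-I (row) followed by a number 1-9 (column)
// value: digit '1'-'9' as a string
function checkPlacement(puzzle, coordinate, value) {
  const row = coordinate.toUpperCase().charCodeAt(0) - 65; // 'A' -> 0
  const col = parseInt(coordinate.slice(1), 10) - 1;
  const conflict = [];

  // Scan the row and column, skipping the target cell itself
  for (let i = 0; i < 9; i++) {
    if (i !== col && puzzle[row * 9 + i] === value) conflict.push('row');
    if (i !== row && puzzle[i * 9 + col] === value) conflict.push('column');
  }

  // Scan the 3x3 region containing the target cell
  const regionRow = Math.floor(row / 3) * 3;
  const regionCol = Math.floor(col / 3) * 3;
  for (let r = regionRow; r < regionRow + 3; r++) {
    for (let c = regionCol; c < regionCol + 3; c++) {
      if ((r !== row || c !== col) && puzzle[r * 9 + c] === value) {
        conflict.push('region');
      }
    }
  }

  const unique = [...new Set(conflict)]; // report each conflict type once
  return unique.length === 0
    ? { valid: true }
    : { valid: false, conflict: unique };
}
```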
User Stories:
- I can POST to `/api/solve` with form data containing `puzzle`, which will consist of the text representation of a puzzle. The returned object will contain `solution` with the solved puzzle.
- If the object submitted is missing `puzzle`, the returned value will be `{ error: 'Required field missing' }`.
- If `puzzle` contains values which are not numbers or periods, the returned value will be `{ error: 'Invalid characters in puzzle' }`.
- If `puzzle` is greater or less than 81 characters, the returned value will be `{ error: 'Expected puzzle to be 81 characters long' }`.
- If `puzzle` is invalid or cannot be solved, the returned value will be `{ error: 'Puzzle cannot be solved' }`.
- I can POST to `/api/check` an object containing `puzzle`, `coordinate`, and `value`, where the `coordinate` is the letter A-I followed by a number 1-9 and the `value` is a number from 1-9. The returned object will contain `valid`, which is `true` if the number may be placed at the provided coordinate and `false` if it may not. If `false`, the returned object will also contain `conflict`, an array containing the strings `"row"`, `"column"`, and/or `"region"` depending on which makes the placement invalid.
- If the puzzle submitted to `/api/check` contains values which are not numbers or periods, the returned value will be `{ error: 'Invalid characters in puzzle' }`.
- If the puzzle submitted to `/api/check` is greater or less than 81 characters, the returned value will be `{ error: 'Expected puzzle to be 81 characters long' }`.
- If the object submitted is missing `puzzle`, `coordinate`, or `value`, the returned value will be `{ error: 'Required field(s) missing' }`.
- If the `coordinate` is invalid, the returned value will be `{ error: 'Invalid coordinate' }`.
- If the `value` is not a number between 1 and 9, the returned value will be `{ error: 'Invalid value' }`.
- All unit tests are complete and passing. See `/tests/1_unit-tests.js` for the expected behavior you should write tests for.
- All functional tests are complete and passing. See `/tests/2_functional-tests.js` for the functionality you should write tests for.

Boilerplate Functionality:
- `/check` route

@nhcarrigan and I also poked at the American British Translator stories and came up with the following:
- I can POST to `/api/translate` with a body containing `text` with the text to translate and `locale` with either `american-to-british` or `british-to-american`. The returned object should contain the submitted `text` and `translation` with the translated text.
- See `/worldsbestfoldername` for the different spelling and terms your application should translate.
- The `/api/translate` route should handle the way time is written in American and British English. For example, ten thirty is written as "10.30" in British English and "10:30" in American English.
- The `/api/translate` route should also handle the way titles/honorifics are abbreviated in American and British English. For example, Doctor Wright is abbreviated as "Dr Wright" in British English and "Dr. Wright" in American English. See `/public/american-to-british-titles.js` for the different titles your application should handle.
- Translated words or terms should be wrapped in `<span class="highlight">...</span>` tags so they appear in green.
- If one or more of the required fields is missing, return `{ error: 'Required field(s) missing' }`.
- If `text` is empty, return `{ error: 'No text to translate' }`.
- If `locale` does not match one of the two specified directions, return `{ error: 'Invalid value for direction field' }`.
- All unit tests are complete and passing. See `/tests/1_unit-tests.js` for the sentences you should write tests for.
- All functional tests are complete and passing. See `/tests/2_functional-tests.js` for the functionality you should write tests for.
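The time and title conventions above could be sketched with a couple of regex substitutions. These rules are illustrative only - the real project would draw its terms from the supplied dictionary files, and the function names here are assumptions:

```javascript
// Illustrative handling of the time and title conventions described
// above; not the reference implementation.
function americanToBritish(text) {
  return text
    .replace(/(\d{1,2}):(\d{2})/g, '$1.$2')          // 10:30 -> 10.30
    .replace(/\b(Mr|Mrs|Ms|Dr|Prof)\./g, '$1');      // Dr. -> Dr
}

function britishToAmerican(text) {
  return text
    .replace(/(\d{1,2})\.(\d{2})/g, '$1:$2')         // 10.30 -> 10:30
    .replace(/\b(Mr|Mrs|Ms|Dr|Prof)(?=\s)/g, '$1.'); // Dr -> Dr.
}
```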
@SaintPeter Awesome. Congratulations. These user stories seem exhaustive and are crisply-worded.
Thanks again for everything you've been doing to help the community recently. I feel really grateful that you have a bit of downtime before your next job starts, and are investing that time and energy into helping improve the community.
@QuincyLarson Thank you! I am always glad to give back to this community which has given me so much knowledge and so much joy!
As a matter of fact, I just got the Data Visualization cert last month and it turns out that I'll be using it in my new job almost immediately.
I'll start work on refactoring the Sudoku Solver with these updated user stories soon. I'll be reaching out to other folks on the Discord for sanity checking.
@nhcarrigan and I spent the morning refactoring the Sudoku Solver boilerplate.
Here is a Repl.it of it:
https://repl.it/@SaintPeter/fcc-sudoku-solver-boilerplate-refactor
Live Code:
https://boilerplate-project-sudoku-solver.saintpeter.repl.co
Git Repo Branch:
https://github.com/SaintPeter/boilerplate-project-sudoku-solver/tree/feat/backend-refactor
We would appreciate any feedback before we move forward with making a sample project and also proceeding with American / British Translator.
@SaintPeter One thing
<input
readonly
type="text"
size="1"
maxlength="1"
id="A1"
class="sudoku-input"
/>
- Probably best to remove all of the inputs and just change the contents, because it is not acting as an input:
Hmm, we forgot about that. I think we had intended to make that work. I'll talk with Nicholas and figure it out one way or the other.
Thanks @nhcarrigan for removing the input elements. The Repl.it has been updated, as has my branch.
Once we get some consensus we'll move on to the American / British Translator rework. Additionally, both Nicholas and I will likely rework our existing Sudoku projects to work with the new boilerplate.
Overall, this looks really good @SaintPeter @nhcarrigan 🎉 Here are some observations and things to consider...
The user stories on the boilerplate don't match what you have on replit as far as I can tell (the boilerplate still has the old user stories). Perhaps that was intentional since we are going to be moving all those to /learn.
Correct me if I'm wrong - I believe this is how it is supposed to work...
- `api/solve` takes a `puzzle` and returns a solution or one of those error messages
- `api/check` takes a `puzzle`, `coordinate` and `value`, and returns `true`, `false` with an array, or one of those error messages.

The user stories on the replit project:
I can POST `/api/solve` with form data containing a string of numbers and periods representing a Sudoku puzzle. The returned object will contain `solution` that is a string representing the solved puzzle.
The first two of those error stories are for `api/solve` and the second two are for `api/check`. Seems like there could be some clarification there. You could possibly change the start of each of those to "If the puzzle submitted to `api/check`" / "If the puzzle submitted to `api/solve`". This may be something to add to others as well to limit potential confusion... "If the object submitted to `api/solve` is missing...". There's also a story containing `the` twice in a row.

This sudoku solver is not expected to be able to solve every incomplete puzzle. See `/public/puzzle-strings.js` for a list of puzzle strings it should be able to solve along with their solutions.
I wonder if we should add in this user story from the original? If not, should we delete that file? I don't think it gets used anywhere.
Do you think we need to add any of the additional notes in there? I'm thinking we could probably move those over to /learn as well.
If I correctly build the routes and click solve (POST to `api/solve`), will the puzzle fill in? What happens if I build all the routes and submit to `api/check`? Does it say true or false on the interface or something like that? No need to answer these.
In general, it looks great :tada: and I don't think you need to change any of this to continue on if you don't want - just my findings.
@moT01 -
The user stories on the boilerplate don't match what you have on replit as far as I can tell (the boilerplate still has the old user stories). Perhaps that was intentional since we are going to be moving all those to /learn.
I'm not quite sure what you mean by this. We did temporarily put the user stories in the README.md file, but since we understood the plan was to ultimately move them just to `/learn`, they'll likely be removed before I submit a PR. The user stories should match what we shared a few messages above.
User story 1 - I wonder if this should be removed since the front end will be built out? And we won't really be able to test it? (I don't think)
We were not really sure where to put this. This is more "meta" information about how the puzzles are represented by the two API endpoints. I suppose we could include it in the `/learn` body text? While I understand the rationale behind wanting the user stories to only exist in one place, it's challenging to add non-user-story information. Ditto for the `/public/puzzle-strings.js` comment and the "additional notes".
We can certainly fold it into the second user story.
If I correctly build the routes and click solve (POST to api/solve), will the puzzle fill in? What happens if I build all the routes and submit to api/check? Does it say true or false on the interface or something like that?
Nick (mostly) and I did some minimalist AJAX on the front-end that submits the values to the `solve` endpoint and, if it gets a valid puzzle back, updates the grid. For the `check` endpoint we just display the returned JSON object. It's about the same level as the Imperial/Metric project, just enough rigging to do basic endpoint testing.
The user stories on the boilerplate don't match what you have on replit as far as I can tell (the boilerplate still has the old user stories). Perhaps that was intentional since we are going to be moving all those to /learn.
My bad, not sure what I was looking at there.
We were not really sure where to put this. This is more "meta" information about how the puzzles are represented by the two API endpoints. I suppose we could include it in the /learn body text? While I understand the rationale behind wanting the user stories to only exist in one place, it's challenging to add non-user-story information. Ditto for the /public/puzzle-strings.js comment and the "additional notes".
Yea, I'm not sure. We may have to write up some instructions for the body of the challenge page to include some of those things. For now, maybe just leave it in there and move on.
Today @nhcarrigan and I updated the user stories on the Sudoku Solver as you had suggested @moT01. Our expectation is that these will ultimately end up in the `/learn` text, but for the moment they'll live in the `README.md`. The Repl.it and GitHub links above have been updated.
We also refactored the American/British Translator project.
Repl.it with code:
https://repl.it/@SaintPeter/fcc-american-british-translator-boilerplate-refactor
GitHub Branch:
https://github.com/SaintPeter/boilerplate-project-american-british-english-translator/tree/feat/backend-refactor
Note: We do have additional text for the `/learn` body, which is currently parked in the `README.md` file.
Please advise what our next steps should be. We are prepared to update the `/learn` user stories and write the functional tests. Additionally, we can produce an updated example project which will pass those tests using the new boilerplate framework.
Nice job @SaintPeter @nhcarrigan 🎉
Things left to do:
You can do them however you want. I feel like creating the examples before tests might be a good idea so you have something to run against the tests.
We are ready to push out the Sudoku Solver update. Linked above are the updates to the boilerplate, the tests for the site, and the updated demo project.
I am hosting my version of the demo project here:
https://sudoku-sample.pearlygatesoftware.com/
Ok, everything is in place to roll this change out.
[x] Boilerplate Updated, PR Created
[x] fCC Tests Updated, PR Created
[x] New Project Created, PR Created against demo-projects
Nice job again @SaintPeter and @nhcarrigan 🎉 I think we have this stuff rolled out, so we should be good to close this 😄 Let me know if I am mistaken.