A bunch of our tests contain hardcoded expected output that we match against. This is OK for simple cases, but we've got fairly complex outputs to test, so it'd be much better to separate them out into their own files.
Let's follow the basic pattern of "golden files" used in stdlib - keep the expected output in a pre-named file that's committed, and have a flag we can use to regenerate that file when the expected output changes.
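For concreteness, here's a minimal sketch of that pattern. The package name, the `renderStatus` helper, the `-update` flag name, and the `testdata/status.golden` path are all illustrative assumptions, not dep's actual code:

```go
package dep_test // illustrative package name

import (
	"bytes"
	"flag"
	"os"
	"path/filepath"
	"testing"
)

// Passing -update to `go test` rewrites the golden files from the
// current output instead of comparing against them.
var update = flag.Bool("update", false, "update golden files")

// renderStatus is a hypothetical stand-in for whatever code under
// test produces the complex output.
func renderStatus() []byte {
	return []byte("PROJECT  CONSTRAINT  VERSION\n")
}

func TestStatusGolden(t *testing.T) {
	got := renderStatus()

	golden := filepath.Join("testdata", "status.golden")
	if *update {
		if err := os.WriteFile(golden, got, 0644); err != nil {
			t.Fatalf("updating golden file: %v", err)
		}
	}

	want, err := os.ReadFile(golden)
	if err != nil {
		t.Fatalf("reading golden file: %v", err)
	}
	if !bytes.Equal(got, want) {
		t.Errorf("output does not match %s:\ngot:\n%s\nwant:\n%s", golden, got, want)
	}
}
```

With this in place, a plain `go test` compares against the committed file, while `go test -update` regenerates it; the resulting diff can then be reviewed and committed like any other change.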
Ok - I'll take this one on, being a newbie in the area (but having a pretty hardcore verification background in other languages & technical fields). My plan, to mimic stdlib, would be to create `testdata` (if needed) and `cmd/dep/testdata` directories (definitely needed) and move all hard-wired files in current tests to those directories. In the process, I'll learn the integration test scheme you all have set up, and can then start adding more e2e tests based on the learnings and the spec. Is this okay? Steer me as needed.
sounds exactly like what i had in mind! awesome!