Drake: spring_mass_system directory doesn't match its unit test directory

Created on 22 Nov 2016 · 10 comments · Source: RobotLocomotion/drake

Right now, we have:

  • drake/systems/plants/spring_mass_system/spring_mass_system.h
  • drake/systems/plants/spring_mass_system/spring_mass_system.cc
  • drake/systems/plants/test/spring_mass_system_test.cc

The unit test should live in a subdirectory of the tested class's directory, so either:

  • drake/systems/plants/spring_mass_system.h
  • drake/systems/plants/spring_mass_system.cc
  • drake/systems/plants/test/spring_mass_system_test.cc

... or ...

  • drake/systems/plants/spring_mass_system/spring_mass_system.h
  • drake/systems/plants/spring_mass_system/spring_mass_system.cc
  • drake/systems/plants/spring_mass_system/test/spring_mass_system_test.cc

... or some other directory tree entirely, so long as the unit test directory is a subdirectory of the system under test.

Labels: medium, manipulation, cleanup

All 10 comments

I'm not sure that it even belongs in systems/plants. I think drake/examples/spring_mass_system?

It started off in drake/examples, but see #3254 and #3653, where it has already moved homes twice.

I think this is a systemic problem. Examples are useful as tests, and not being able to reference them is a real pain.

The debate there is whether the file structure of the source tree should be organized by

  • (1) dependencies, which means that cross-cutting semantic classifications like "list of example systems" have to be solved via documentation tags instead, or
  • (2) semantic classification, which means that dependencies are tracked via annotations in the build system instead.

To me, (1) is the superior choice, because it makes it easier to reason about pieces of the software in isolation, and it is unambiguous. Because there will always be different semantic ontologies with which to label our code, it is difficult to turn (2) into a recipe for code layout.

At the end of the day, either way can be made to work, but (1) matches the way I'm most comfortable working with software. Others' opinions may vary.
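To make option (2) concrete, here is a hypothetical Bazel-style build fragment. The target names and paths are invented for illustration and are not Drake's actual build rules; the point is that a semantically placed target records its dependency edge as a build annotation rather than through directory nesting.

```python
# Hypothetical BUILD fragment illustrating option (2): the source tree is
# organized semantically (e.g. an examples directory), and the dependency
# on the core framework is recorded explicitly in the build annotation.
cc_library(
    name = "spring_mass_system",
    srcs = ["spring_mass_system.cc"],
    hdrs = ["spring_mass_system.h"],
    deps = ["//drake/systems/framework"],  # dependency tracked here, not by layout
)

cc_test(
    name = "spring_mass_system_test",
    srcs = ["test/spring_mass_system_test.cc"],
    deps = [":spring_mass_system"],
)
```

Under option (1), by contrast, no such annotation is needed: the directory tree itself would encode the dependency structure.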

While I appreciate that view, the examples (if they are good) will have many dependencies into the code. They are effectively tests. But some of the files/classes associated with those tests might be useful in other, smaller tests. For example, the urdf files associated with an example (acrobot, etc) could be useful in a test for inverse kinematics.

In your model, perhaps you would propose that we promote the URDF to systems/plants, but leave in examples the methods that, e.g., develop a controller for the robot described in that URDF?

I think promoting broadly useful and stable SDFs to lower in the dependency stack is great (e.g., systems/plants/models).

I also think that having several (possibly even divergent) copies of, e.g., an Acrobot SDF in the tree (for different purposes) is also fine.

Taking the "Acrobot SDF is used in IK unit testing" scenario: the SDF used in IK should be appropriately curated for that purpose (minimally phrased so as to scope the testing, well documented so that developers can maintain it, and tested so that we know the SDF is correct even without trying to do IK on it), which may sometimes be at odds with the needs of an example.

Possibly the IK unit test shouldn't use an Acrobot SDF at all, but should instead construct a multibody tree in memory through code, which would let it synthesize related variants, e.g. different inertias, for an even broader range of coverage. In general, I am skeptical of "unit test requires a data file", except perhaps for parsers. Code offers many levers for abstraction; SDFs do not.
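The in-memory approach can be sketched as follows. This is a hypothetical illustration, not Drake's actual API: `PendulumModel` and `inertia_variants` are invented names standing in for whatever model-building code a test would use.

```python
# Hypothetical sketch (not Drake's actual API): building a simple pendulum
# model in code instead of parsing an SDF file, so that a unit test can
# synthesize variants (e.g. different inertias) without any data file.
from dataclasses import dataclass


@dataclass
class PendulumModel:
    """Minimal in-memory stand-in for an SDF-described pendulum."""
    mass: float = 1.0       # kg
    length: float = 0.5     # m
    inertia: float = None   # kg*m^2 about the pivot; derived if None

    def __post_init__(self):
        if self.inertia is None:
            # Point mass at the end of a massless rod: I = m * l^2.
            self.inertia = self.mass * self.length ** 2


def inertia_variants(base, scales):
    """Synthesize model variants with scaled inertias for broader coverage."""
    return [PendulumModel(base.mass, base.length, base.inertia * s)
            for s in scales]


# A test can then sweep the variants with no file on disk:
models = inertia_variants(PendulumModel(), [0.5, 1.0, 2.0])
```

Because the variants are generated in code, adding a new axis of coverage (mass, length, damping) is a one-line change rather than a new curated data file.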

For big examples with big SDFs (like Atlas or something), tests that share those files or controller software or etc. aren't really unit tests anyway, and are best left as something like regtests near the big example itself, not as part of the IK library in systems or more central.

What's the conclusion? I'm personally in favour of reverting #3254 and #3653 to return this basic system to examples. In either case, the test nesting required by this issue can be done.

I would vote against having divergent SDFs. For something as simple as the Acrobot it seems benign, but even there it looks like a short path to confusion and errors; we have been burned by multiple versions of URDFs on other projects.

Taking the URDF/SDF example further, a number of the examples have base classes (like pendulum_plant.h/.cc) that would be useful in tests. Your model would be to promote those, too.

In both cases the downside is, IMHO, a loss of tutorial value in the examples directory, but I agree it might be cleaner from a dependency point of view.

I think models should be promoted to a dependency of Drake so they can be referenced from anywhere within Drake, because programmatically instantiating a model is tedious, error-prone, and verbose. In #3257 I proposed storing models in separate repositories, which are then configured as submodules in drake-distro/externals. In #3986 I added the HSRb model as a submodule in drake-distro/ros, since it is structured as a ROS package.

I am also against having multiple divergent copies of an SDF. #4074 mentions the use of embedded Ruby to avoid needing multiple divergent copies.

@naveenoid Regardless of the final choice of home, moving the test to match the device-under-test is obvious and well-agreed, and would solve the issue in the OP.
