I came across a design question when working on elm-test recently:
What if you could write tests in the same file as the code they're testing?
There are a few testing systems that support this:
- clojure.spec (to vastly oversimplify) lets you define tests inline in the same file as your business logic.
- Rust allows putting tests in the same file as the code being tested, although the documentation recommends putting them in a separate file (without elaboration as to why). See the sketch just after this list.
- Doctests (such as in Python, Elixir, Go, and Haskell) allow writing tests in the same file, theoretically for documentation purposes, although potentially for testing purposes instead.
- EUnit in Erlang runs all functions in a given module that end in _test as tests, including business logic modules.
- Racket supports inline tests.
- Pyret supports "testing blocks" which can live in any source file.
- Test::Inline adds support for this to Perl.
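For concreteness, here's roughly what the Rust version looks like; a minimal sketch, with a placeholder function:

```rust
// lib.rs: production code and its tests in one file.
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}

// This module is compiled only when running `cargo test`,
// so it never ends up in a release build.
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn adds_two_numbers() {
        assert_eq!(add(2, 2), 4);
    }
}
```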
I've never used any of these, but I'm wondering how the experience would be in practice. Is it amazing? Unexciting? An outright bad idea? What are the pros and cons compared to putting tests in a separate file?
If you've tried writing tests like this before, what did you think?
Top comments (17)
I'm a pretty big fan of doctest for simplistic testing (as you mentioned, it's good for documentation, showing a working example and simple edge-case behavior) and for having those tests integrated during builds. However, for many testing scenarios (I largely deal with parallelizing and parsing), the amount of code around tests makes keeping them inline cumbersome for people trying to focus on the actual production code. By moving the test code to separate files you can separate production and test changes both during writing (it's harder to get confused about what's a test function and what's a production function) and during code review (it's easier to parse the behavior of the change, and then verify that the tests have been updated correctly). Additionally, depending on your language, separate files let you decrease your release size by excluding tests from production pushes.

I am working mainly in Scheme these days, and I include testing code in the source files. This serves several purposes: it makes it quite easy to check that changes to code don't have unintended consequences, it provides in situ examples of the tested uses of the code, and the tests throughout the code can be incorporated into more complex testing routines.
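As a concrete illustration of the doctest idea, here's a minimal sketch using Rust's documentation tests, which `cargo test` compiles and runs (the crate name `my_crate` and the function are made up):

````rust
/// Returns the larger of two values.
///
/// The example below doubles as documentation and as a test:
/// `cargo test` compiles and runs it, so it can't silently go stale.
///
/// ```
/// assert_eq!(my_crate::max2(1, 5), 5);
/// ```
pub fn max2(a: i32, b: i32) -> i32 {
    if a > b { a } else { b }
}
````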
Personally, I like to keep the code and its tests close together: Keep your friends close, but keep your rivals closer ;-)
Are you referring to unit tests or E2E?
I have some experience with a lib called CatJS by HP. The challenge is that it's hard to reuse things when doing them inline (they had annotations on the HTML).
Six years ago (an Ember project) I had a folder comprising everything related to a component (the JS, the Sass, the template, and the unit test).
As @billperegoy stated, most people at the time had a parallel tree, but my guess is that for compiled languages (e.g. Java) it was easy to compile a complete folder, and not intuitive for people to filter out the unit tests, which they didn't want to see in the deployed JAR (also, my intuition is that it's doable and not that hard).
Nowadays, I'm even more extreme, especially for E2E tests, where I like the tests NOT to be in the repository itself; only a link to them is. Why? Because I want even support teams to be able to create automated E2E tests to file along with a bug report. If a bug isn't reproducible, it's much harder to find, and there's always the back-and-forth friction of figuring out how to reproduce it.
My reasoning is that I also have baseline screenshots, videos, and other files, which I don't want under my code project (e.g. git doesn't handle large files nicely; NOTE: I still want them under version control).
In all my Java projects, tests live in a separate tree - src/main vs src/test. In my node projects, tests live in separate files next to the file being tested.
In the Java case, it's trivial to have extra test code that is there to support your tests. It's also easy to see what is test code and what is production code.
In the Node case, it originally started because of the require syntax - when I did it with separate trees I had hellish require paths. However, it's kinda nice having them so close together. You can see what is and isn't tested trivially, and flip between them really easily.
I like the idea of having them in the same file, but haven't tried it out properly yet.
Basically I agree with the "it depends" argument that was given already. However, that approach can only work with a 1:1 mapping, and hence not with integration/system tests.
I think it's much more interesting to research better tooling for production/test code. One thing I'm particularly working on is a ReSharper plugin (github.com/matkoch/TestLinker) that provides navigation between the two entities, plus a few other things, like generating test stubs and detecting affected tests through production code changes.
I personally love doctests in Elixir. They're not good for things that require a lot of setup, because they're REPL-based. I think they're great because they help you remember to write tests and keep them up to date. They also help act as documentation, especially if they are example-based tests.
The first answer that pops into my mind is: wherever you prefer, but at least in your mind. If "test location" is somehow something that might stimulate good testing behavior, fine, but if testing is not on your mind, the location doesn't matter that much. Anyway, to stay on track, I'd rather keep tests away from the code, in order to avoid being influenced by it, especially if you are adding tests to existing code.
I've always worked on projects where the tests were in a separate parallel tree from the code. This makes it easy to run the tests but figuring out where a test for a particular method lives can be a challenge. We always try to make the two trees as similar as possible, but they always diverge and that makes working on legacy code difficult.
I've recently seen some JavaScript projects putting the tests in the same directory as the code. I liked this, as it's really clear which test file goes with which code file. I imagine after a while this would encourage one to add more levels of hierarchy to the code base to avoid too much clutter. So I wonder if this would solve one problem but create another for large codebases.
I've never actually put code and tests in the same file except for code katas and demos. On one hand, it feels like it might make files too big. On the other hand, maybe it would also encourage breaking the code up into more reasonable module sizes. I probably wouldn't know how it felt until I tried it on something large.
In the end, I'm all for getting the tests closer to the code rather than in a separate parallel tree. Most of the test frameworks out there seem to encourage a separate tree, so I find myself still working in that way.
Kept in separate source files, but I always put them right next to where my source code lives. They should be immediately discoverable right there, not relegated to some far away distant corner of the project.
I write a lot of Rust (and Elm!) and having the option to write tests in the same file comes in handy sometimes, even if it's just temporary. I tend to move my tests into a separate file eventually, but if it's just a test or two, sometimes the extra file/boilerplate isn't justified.
One of the nice things about putting tests in the same file is that you naturally have access to all of the module's non-exported symbols and can test internals if desired. You get this with Rust either way, since the test module is typically a conditionally compiled child module of the thing you're testing, but it seems that with Elm this wouldn't be possible otherwise.
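To make the internals point concrete, here's a minimal sketch of that pattern; `normalize` is a made-up private helper:

```rust
// A private helper: not reachable from outside this module.
fn normalize(s: &str) -> String {
    s.trim().to_lowercase()
}

#[cfg(test)]
mod tests {
    // As a child module, the tests can see the parent's private items.
    use super::*;

    #[test]
    fn normalize_trims_and_lowercases() {
        assert_eq!(normalize("  Hello "), "hello");
    }
}
```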
Speaking of conditional compilation, how would putting tests in the same file work in Elm? It seems that test logic would be included in the compiler output unless some sort of conditional compilation mechanism was added.