I'm trying to learn the TDD approach. I have no trouble with simple code where I do not need to mock or stub any external methods or dependencies, b...
I run integration tests against a separate (empty) database. Maintaining fixtures is enough work even when you don't have to worry about whether they actually reflect your current data model. Node doesn't really have a formal separation between unit and integration tests the way Maven-style Java projects do, so it all winds up being "just tests" anyway.
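Something roughly like this is enough to point the test run at that separate database; the host, user, and database names below are placeholders, not anything from the post:

```ts
// db.ts -- hypothetical connection helper; all names and credentials are placeholders.
import mysql from "mysql2/promise";

// Tests run with NODE_ENV=test, so they always connect to the separate,
// empty test database instead of the development one.
export const pool = mysql.createPool({
  host: "localhost",
  user: "root",
  database: process.env.NODE_ENV === "test" ? "app_test" : "app_dev",
});
```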
I'm pretty sure there is a big distinction between unit tests and integration tests, in any language, framework or paradigm.
With every JS test runner I've used, the way you can tell which is which is that some fail when your external dependencies aren't available.
Ah sorry, my bad. Now I think I understand what you said: you meant what that framework calls them.
Basically yes, they are all automated tests.
I replied at an abstract/conceptual level: how you write them, what they are for, and when they are run (and how long they should take).
And to answer the post's question: unit tests are not supposed to hit any dependency outside of that function, so it is OK. But you may want to check out the other two types of tests.
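To illustrate, here is a minimal Jest-style sketch; getUserName and the repository interface are made-up names, and the point is that the dependency is a stub, so the test never touches a real database:

```ts
// getUserName.ts -- hypothetical function under test; the repository is injected.
export interface UserRepository {
  findById(id: number): Promise<{ id: number; name: string } | null>;
}

export async function getUserName(repo: UserRepository, id: number): Promise<string> {
  const user = await repo.findById(id);
  return user ? user.name : "unknown";
}

// getUserName.test.ts -- unit test: the repository is a stub, no DB involved.
import { getUserName } from "./getUserName";

test("returns the user's name without hitting the database", async () => {
  const stubRepo = {
    findById: async (id: number) => ({ id, name: "Ada" }),
  };
  await expect(getUserName(stubRepo, 1)).resolves.toBe("Ada");
});
```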
Hi,
I have the same issue.
I'm new to writing test cases for my project (TypeScript, Node.js, MySQL).
Integration Testing:
From my own experience, it is easier to set up a separate database for testing and run a few scripts to prepare it before the tests start (see the sketch below).
If you really need to run without a database and you are using MongoDB and Mongoose, there is a library called mockgoose which is pretty good for mocking the DB.
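A minimal sketch of that prepare step with Jest and mysql2, assuming a scripts/schema.sql file and local credentials that you would replace with your own:

```ts
// integration-setup.test.ts -- sketch of preparing a separate test database
// before the integration tests run. Schema file and credentials are placeholders.
import { promises as fs } from "fs";
import mysql from "mysql2/promise";

let connection: mysql.Connection;

beforeAll(async () => {
  connection = await mysql.createConnection({
    host: "localhost",
    user: "root",
    database: "app_test",          // separate, disposable test database
    multipleStatements: true,      // allow running a whole .sql script
  });
  const schema = await fs.readFile("scripts/schema.sql", "utf8");
  await connection.query(schema);  // (re)create tables before the suite
});

afterAll(async () => {
  await connection.end();
});

test("users table starts empty", async () => {
  // Assumes schema.sql creates a fresh users table.
  const [rows] = await connection.query("SELECT COUNT(*) AS n FROM users");
  expect((rows as any[])[0].n).toBe(0);
});
```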
I'm fairly new to testing. I joined an existing project where the guy who wrote the code has left the company, and those of us who maintain it don't have complete knowledge of the system. And of course, there are no tests to make it easy to understand what's happening in the code. So now, when we change a part of the code for a bugfix or a feature change, we first reverse-engineer tests so we have confidence we don't introduce regressions.
I find that for this existing system it's better to use the real DB for testing, i.e. write integration tests. I write down what feature X does, then test each piece of functionality instead of worrying mainly about a particular function.
For example, I have a function that deletes a user, but the user model in the database has cascade deletes set up on different tables. I need to run it against the database to make sure the related records get deleted as a "side effect"; just reading the tested code won't make it obvious what is happening at the database level. For each test, I first create the test records in the database, then call the function under test, then assert the results and clean up the records. With this setup, if an assertion fails the data isn't cleaned up; I didn't want to create nested describe scopes with before/after hooks for each separate test. But when looking at the test I can easily see what is going on without scrolling.
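A stripped-down version of one of those tests looks roughly like this (using Jest and mysql2 here); deleteUser, the table names, and the credentials are all made up for the example, and the cleanup deliberately sits at the end of the test body so a failed run leaves the data in place for inspection:

```ts
// deleteUser.integration.test.ts -- sketch of testing a cascade delete
// against the real test database. deleteUser and the tables are hypothetical.
import mysql from "mysql2/promise";
import { deleteUser } from "./deleteUser";

let db: mysql.Connection;

beforeAll(async () => {
  db = await mysql.createConnection({ host: "localhost", user: "root", database: "app_test" });
});

afterAll(async () => {
  await db.end();
});

test("deleting a user also removes their posts via the cascade", async () => {
  // 1. Arrange: create the test records directly in the database.
  const [userResult] = await db.query("INSERT INTO users (name) VALUES ('temp-user')");
  const userId = (userResult as any).insertId;
  await db.query("INSERT INTO posts (user_id, title) VALUES (?, 'temp-post')", [userId]);

  // 2. Act: call the function under test.
  await deleteUser(db, userId);

  // 3. Assert: the cascade removed the dependent rows as a side effect.
  const [posts] = await db.query("SELECT * FROM posts WHERE user_id = ?", [userId]);
  expect(posts).toHaveLength(0);

  // 4. Clean up only after the assertions pass, so failures leave data to inspect.
  await db.query("DELETE FROM users WHERE id = ?", [userId]);
});
```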
But mocking has its place when working with 3rd-party libraries that you can't actually call.
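For example, with Jest you can jest.mock the third-party module so the test never makes the real call; "payment-sdk", chargeCard, and checkout below are invented names, not a real SDK:

```ts
// checkout.test.ts -- sketch of mocking a 3rd-party module with Jest.
import { checkout } from "./checkout";
import { chargeCard } from "payment-sdk";

jest.mock("payment-sdk"); // the real module is never loaded or called

const mockedChargeCard = chargeCard as jest.MockedFunction<typeof chargeCard>;

test("checkout charges the card without hitting the real payment API", async () => {
  mockedChargeCard.mockResolvedValue({ status: "ok" });

  await checkout({ amount: 1000, card: "tok_test" });

  // Assert against the mock instead of the external service.
  expect(mockedChargeCard).toHaveBeenCalledWith(
    expect.objectContaining({ amount: 1000 })
  );
});
```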
I've seen that the new trend is to run integration tests against real databases that are spawned for each test run, using Docker.
Power of the containers to the rescue!
How can one do that?
A good place to start is learning Docker
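After that, one common way to wire containers into Node tests is the testcontainers package. A rough sketch, with the caveat that the image, the credentials, and the exact builder method names (e.g. withEnvironment vs. withEnv) depend on the version you install:

```ts
// mysql-container.test.ts -- sketch using the "testcontainers" npm package to
// spin up a disposable MySQL instance for the test run.
import { GenericContainer, StartedTestContainer } from "testcontainers";
import mysql from "mysql2/promise";

let container: StartedTestContainer;
let db: mysql.Connection;

beforeAll(async () => {
  container = await new GenericContainer("mysql:8")
    .withEnvironment({ MYSQL_ROOT_PASSWORD: "test", MYSQL_DATABASE: "app_test" })
    .withExposedPorts(3306)
    .start();

  // NOTE: MySQL can take a moment to accept connections after the container
  // starts; in practice a wait strategy or a small retry loop may be needed.
  db = await mysql.createConnection({
    host: container.getHost(),
    port: container.getMappedPort(3306),
    user: "root",
    password: "test",
    database: "app_test",
  });
}, 60_000); // pulling and starting the image can take a while

afterAll(async () => {
  await db?.end();
  await container?.stop();
});

test("can talk to the containerized database", async () => {
  const [rows] = await db.query("SELECT 1 AS ok");
  expect((rows as any[])[0].ok).toBe(1);
});
```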
Time has gone by, and I'm now familiar with the tools. Thanks.