Code Testing: A Technical Guide to Testing Everything vs. Focusing on Services

Testing is at the heart of software quality, but deciding what to test can be tricky. This article explores the pros and cons of testing every component versus focusing just on services, with detailed insights into each phase and component.


🧊 What Does Testing Everything Mean?

Testing everything involves covering all system components: enums, DTOs, controllers, services, repositories, traits, and utilities. This exhaustive approach brings safety but also complexity.

✅ Benefits

  1. Maximum coverage: Each component is validated individually.
    • Example: Testing the OrderStatus enum ensures invalid values like COMPLETED_WRONG are rejected.
  2. Confidence in changes: You can refactor knowing that regressions in seemingly unrelated parts of the system will be caught.

❌ Drawbacks

  1. High cost: Writing and maintaining all these tests takes significant time.
  2. Test redundancy: Logic might be tested multiple times (e.g., in the model and service).

🛠️ What Does Testing Only Services Mean?

This approach assumes that services, which orchestrate other components, adequately represent system behavior.

✅ Benefits

  1. Time-saving: Focuses on critical functionality.
    • Example: A test for OrderService ensures the entire order creation process works correctly.
  2. Essential coverage: Prioritizes the tests that matter most for end-users.

❌ Drawbacks

  1. Hidden bugs: Issues in DTOs or validations may go unnoticed.
    • Example: A mapping error in a DTO might not be caught until it’s too late.
  2. Harder debugging: Finding the source of failures becomes more complex.

🧩 How to Test Each Component: A Technical Guide

1. Enums and Models

  • What to test? Behavior, validations, and data integrity.
  • Example: An OrderStatus enum should only accept values like PENDING, COMPLETED, and CANCELED.
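
The article doesn't prescribe a stack, so here is a minimal sketch in TypeScript with Jest; the OrderStatus enum and the parseOrderStatus helper are hypothetical, purely to illustrate the kind of assertion this level calls for:

```typescript
// Hypothetical enum and parser, used only to illustrate the test.
enum OrderStatus {
  Pending = "PENDING",
  Completed = "COMPLETED",
  Canceled = "CANCELED",
}

// Parses a raw string into an OrderStatus, rejecting unknown values.
function parseOrderStatus(value: string): OrderStatus {
  if (Object.values(OrderStatus).includes(value as OrderStatus)) {
    return value as OrderStatus;
  }
  throw new Error(`Invalid order status: ${value}`);
}

describe("OrderStatus", () => {
  it("accepts the known statuses", () => {
    expect(parseOrderStatus("PENDING")).toBe(OrderStatus.Pending);
    expect(parseOrderStatus("CANCELED")).toBe(OrderStatus.Canceled);
  });

  it("rejects unknown values such as COMPLETED_WRONG", () => {
    expect(() => parseOrderStatus("COMPLETED_WRONG")).toThrow();
  });
});
```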

2. DTOs and ViewModels

  • What to test? Data consistency during serialization/deserialization.
  • Example: An OrderDTO should correctly map JSON data into an expected model.
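
A sketch of a DTO mapping test in the same assumed TypeScript/Jest setup; the snake_case input shape and the toOrderDTO mapper are invented for illustration:

```typescript
// Hypothetical DTO and mapping function, for illustration only.
interface OrderDTO {
  id: number;
  total: number;
  status: string;
}

// Maps raw (already-parsed) JSON into the shape the application expects.
function toOrderDTO(raw: unknown): OrderDTO {
  const data = raw as { order_id: number; order_total: number; status: string };
  return {
    id: data.order_id,
    total: data.order_total,
    status: data.status,
  };
}

describe("OrderDTO mapping", () => {
  it("maps snake_case JSON fields onto the expected model", () => {
    const raw = JSON.parse('{"order_id": 42, "order_total": 99.5, "status": "PENDING"}');
    expect(toOrderDTO(raw)).toEqual({ id: 42, total: 99.5, status: "PENDING" });
  });
});
```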

3. Controllers

  • What to test? Routing, HTTP responses, and basic validations.
  • Example: A GET /orders endpoint should return correctly formatted data with a 200 status.
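
For HTTP-level tests, one common option (an assumption here, not something the article mandates) is Express plus supertest, which lets the test drive the route without starting a server:

```typescript
import express from "express";
import request from "supertest";

// A minimal, hypothetical app exposing GET /orders for the sake of the test.
const app = express();
app.get("/orders", (_req, res) => {
  res.status(200).json([{ id: 1, status: "PENDING", total: 25 }]);
});

describe("GET /orders", () => {
  it("returns a 200 status and correctly shaped data", async () => {
    const response = await request(app).get("/orders");

    expect(response.status).toBe(200);
    expect(Array.isArray(response.body)).toBe(true);
    expect(response.body[0]).toHaveProperty("status");
  });
});
```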

4. Services and Repositories

  • What to test? Business rules and data persistence.
  • Example: The OrderService should calculate an order total correctly, including discounts.
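
A sketch of a service test that fakes the repository so the business rule (discount calculation) is exercised in isolation; OrderService, OrderRepository, and the discount rule are illustrative, not a prescribed design:

```typescript
// Hypothetical repository interface and service, just to show the shape of the test.
interface OrderItem {
  price: number;
  quantity: number;
}

interface OrderRepository {
  findItems(orderId: number): OrderItem[];
}

class OrderService {
  constructor(private readonly repository: OrderRepository) {}

  // Total = sum of line items, minus a percentage discount.
  calculateTotal(orderId: number, discountPercent: number): number {
    const subtotal = this.repository
      .findItems(orderId)
      .reduce((sum, item) => sum + item.price * item.quantity, 0);
    return subtotal * (1 - discountPercent / 100);
  }
}

describe("OrderService.calculateTotal", () => {
  it("applies the discount to the order subtotal", () => {
    // In-memory fake standing in for the persistence layer.
    const repository: OrderRepository = {
      findItems: () => [
        { price: 50, quantity: 2 },
        { price: 10, quantity: 1 },
      ],
    };
    const service = new OrderService(repository);

    // Subtotal is 110; a 10% discount brings it to 99.
    expect(service.calculateTotal(1, 10)).toBeCloseTo(99);
  });
});
```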

5. Factories and Traits

  • What to test? Reusable functionality.
  • Example: A factory creating Order objects should populate default values as expected.
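
A small sketch of a factory test; makeOrder and its defaults are hypothetical, and the point is simply asserting that defaults are populated and overrides are respected:

```typescript
// Hypothetical factory used to illustrate testing default values.
interface Order {
  id: number;
  status: string;
  items: { price: number; quantity: number }[];
  createdAt: Date;
}

// Builds an Order with sensible defaults, allowing overrides per test.
function makeOrder(overrides: Partial<Order> = {}): Order {
  return {
    id: 1,
    status: "PENDING",
    items: [],
    createdAt: new Date(),
    ...overrides,
  };
}

describe("Order factory", () => {
  it("populates default values", () => {
    const order = makeOrder();
    expect(order.status).toBe("PENDING");
    expect(order.items).toHaveLength(0);
  });

  it("allows overriding individual fields", () => {
    const order = makeOrder({ status: "COMPLETED" });
    expect(order.status).toBe("COMPLETED");
  });
});
```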

🏗️ The Testing Pyramid: Structuring Your Approach

A balanced approach follows the testing pyramid:

  1. Unit Tests (Base): Cover isolated components like enums, DTOs, or small services.
  2. Integration Tests (Middle): Ensure components interact properly.
  3. End-to-End Tests (Top): Validate complete system behavior from the user’s perspective.
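
To make the lower layers concrete, here is a compact sketch contrasting a unit-level and an integration-level test (end-to-end tests are omitted since they need a running system); the applyDiscount helper and the in-memory Catalog are invented for illustration:

```typescript
// A pure helper: an ideal target for a fast unit test at the base of the pyramid.
function applyDiscount(subtotal: number, percent: number): number {
  return subtotal * (1 - percent / 100);
}

// A tiny "integration": pricing logic wired to an in-memory catalog.
class Catalog {
  private prices = new Map<string, number>([["book", 20]]);
  priceOf(sku: string): number {
    return this.prices.get(sku) ?? 0;
  }
}

function quote(catalog: Catalog, sku: string, quantity: number, percent: number): number {
  return applyDiscount(catalog.priceOf(sku) * quantity, percent);
}

describe("unit level", () => {
  it("applies a percentage discount", () => {
    expect(applyDiscount(100, 25)).toBe(75);
  });
});

describe("integration level", () => {
  it("combines catalog lookup and discount logic", () => {
    // 2 books at 20 each = 40; a 10% discount brings it to 36.
    expect(quote(new Catalog(), "book", 2, 10)).toBeCloseTo(36);
  });
});
```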



🎯 Conclusion: Striking the Ideal Balance

The decision to "test everything" or "only services" depends on your project’s complexity and goals. A pragmatic balance includes:

  1. ✅ Detailed tests for critical components.
  2. 🛠️ Service-focused tests for global functionality.
  3. 🎯 Avoiding redundancy and prioritizing quality over quantity.

Final Question:

How do you structure your tests? Share your thoughts in the comments! 🚀

Top comments (1)

Nuno Saraiva

First of all, great article!
I completely agree that writing tests should always be a cornerstone of software quality.

In my experience, I usually aim for 80-90% coverage, as I believe the amount of testing should balance the cost-benefit tradeoff. Aiming for 100% coverage often results in diminishing returns, requiring significant effort to test edge cases or less impactful parts of the code. While 100% coverage sounds ideal, for me it’s not always practical, especially in real-world projects with tight deadlines.

Regarding testing components, I don’t see much value in testing simple data structures, like the individual values of an Enum, for example. Instead, I prefer testing the outcome of the methods that use the Enum. If there’s a bug in the Enum, it will naturally surface in the relevant test case. This approach reduces test redundancy.

About the testing pyramid, I also like an alternative version where unit tests form the majority at the base, while end-to-end tests are at the top, representing a smaller but critical portion of the overall tests.

Unit tests, being fast and covering isolated components, offer great value for validating most of the application’s functionality. End-to-end tests, while essential, are often slower and more complex, so I reserve them for high-level and critical workflows.

So, in summary, I focus on tests that validate core functionality, as I feel that strikes the best balance between value and the time invested.
Either way, and like you said, this balance will always depend on our project's complexity and goals!