It’s been a long and hopefully interesting journey so far. We have learnt how SObjectizer works and which of its essential abstractions enable us to craft message passing-styled applications. We have covered topics like:
- agents and cooperations
- subscriptions, messages and signals
- agent states
- message boxes and message chains
- delayed and periodic messages
- delivery filters
- message sinks
and there are still missing pieces, some of which we’ll meet in the future. However, the topic of this post is a bit different: we can’t postpone any longer another essential topic: testing. As software engineers, we usually introduce tests at the beginning of a new project, but this time we are doing it a bit late because some concepts of SObjectizer were kind of preparatory.
Testing agents
This discussion will be limited to unit and integration tests. In the former, testing is confined to a single component, whereas the latter comes into play when testing involves collaborators. In our context, we can approximate these two forms as follows:
- unit tests focus on the behavior of a single agent;
- integration tests focus on the functionality of a group of agents.
Since agents are built on messaging, we can easily simulate behaviors by sending messages. For example, unit testing the stream_detector can be accomplished by manually producing and sending fake images to it from a test case. Never mind if images are real or fake, we only check its behavior.
An example of integration test is one involving both the virtual_image_producer and the stream_detector: when virtual_image_producer produces (real) images, the stream_detector should work as expected.
On first exposure, testing agents is more difficult than testing normal objects for a few reasons:
- asynchronicity: sending messages is asynchronous and so it’s difficult to know when a certain message has been delivered. Also, agents are meant to run concurrently and possibly on multiple threads. For such reasons, testing agents requires some synchronization to place assertions properly and to avoid deadlocks and concurrency issues;
- statelessness: agents expose behaviors by sending messages and do not allow accessing their internal state;
- dependency on the framework: agents run in SObjectizer’s environment that needs to be created (and maybe configured properly) for the test. Also, message boxes can’t be subscribed to by non-SObjectizer agents.
Often, message passing frameworks offer some support for testing. For example, Akka provides the akka-testkit module containing tools that make testing actors a lot easier, like TestActorRef that allows access to the underlying actor instance.
SObjectizer has some support for testing but, since it is experimental, we have not used it much. Instead, we are introducing a test suite based on GoogleTest (using any other testing framework would be fine) and we’ll discuss some issues you might encounter when testing agents without support from the framework, along with some possible mitigations.
Another thing to mention about testing is that sometimes agents require a specific context to work, for example a database or a gRPC service. Mocking or faking such a context is often doable but can be very complicated and error-prone. For this reason, in many such cases, adding integration tests is usually easier and preferable. Well, this observation actually applies to any software component (objects, micro-services, etc).
Hence, from this post, calico is split into three projects:
- calico_core, the library containing all the core components (the folder’s name is still calico);
- calico_tests, the test project;
- calico, the executable (the folder’s name is calico_main).
Our first unit test
First of all, we add a test suite to verify that virtual_image_producer works properly. The first story under test is the essential capability of that agent to send images found in a certain folder.
To begin, we introduce a GoogleTest case that instantiates a SObjectizer wrapped_env_t at the beginning:
TEST(virtual_image_producer_tests, should_send_images_from_folder_after_receiving_start_acquisition_command)
{
    const so_5::wrapped_env_t sobjectizer;
    // ...
}
As you probably know, test fixtures are also an option here to avoid duplicating the environment in every test. However, as a matter of readability, we tend to limit the use of that feature unless complex initialization is needed.
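For completeness, a fixture-based variant might look like the following minimal sketch (the fixture and test names here are arbitrary):

// a fixture that provides one environment per test: constructed before each
// test body runs and shut down right after it
class sobjectizer_fixture : public testing::Test
{
protected:
    const so_5::wrapped_env_t sobjectizer;
};

TEST_F(sobjectizer_fixture, some_test_relying_on_the_environment)
{
    // 'sobjectizer' is available here, e.g. sobjectizer.environment().create_mbox()
}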
At this point, we need to introduce a cooperation and add a virtual_image_producer that requires three ingredients to be created:
- image destination channel
- command input channel
- path to find images to replay
However, we need some control over such things in order to set up test expectations properly. In particular:
- we need to receive messages from the image channel within the test;
- we need to send commands to the command channel from the test;
- we need to use images we can verify somehow.
The second bullet is straightforward since messages can be sent to a message box from anywhere. Likewise, we can cover the third point by using images committed in the repository. We might load and compare them (expected) – pixel-wise – with the ones received from the agent (actual).
The first point is actually more interesting: we can’t receive from message boxes directly but we need to be a SObjectizer agent (more formally, a message sink). However, we don’t need to add a fake agent for that…you got what I mean? We can use a message chain! Again, message chains unleash their multi-purpose power.
Indeed, a message chain can show itself as a message box (calling as_mbox()) and can be received from anywhere. Then, we give shape to the test:
TEST(virtual_image_producer_tests, should_send_images_from_folder_after_receiving_start_acquisition_command)
{
    const so_5::wrapped_env_t sobjectizer;
    const auto fake_input = create_mchain(sobjectizer.environment());
    const auto commands = sobjectizer.environment().create_mbox();
    sobjectizer.environment().introduce_coop([&](so_5::coop_t& c) {
        c.make_agent<virtual_image_producer>(fake_input->as_mbox(), commands, R"(test_data/replay)");
    });
    // ...
}
Now, to be strict, we should verify that nothing is sent to fake_input until the start command is sent. Unfortunately, we can only approximate this test by receiving from fake_input for a certain amount of time, although awaiting in tests is usually bad. At this point, another feature of message chains comes to the rescue. We never used the result of receive that contains information about the receiving operation:
const auto rec_result = receive(from(fake_input).handle_all().empty_timeout(10ms), [&](const cv::Mat&) {
    // ... handle image
});
rec_result contains the outcome of the receive operation including: the extraction status (no messages, at least one message extracted, chain closed), the number of **handled** messages, and the number of **extracted** messages. The difference between the two counters is essential:
- every message in the chain is extracted, regardless of its type;
- on the other hand, messages are handled only if a handler for that message type is found and called.
For example, in the receive above, sending a std::string to fake_input would not increase the counter of handled messages but only that of extracted messages, assuming it’s received within 10 milliseconds. Thus, the test expectation could be written as follows:
EXPECT_EQ(receive(from(fake_input).handle_all().empty_timeout(10ms)).extracted(), 0);
This way, if any message is sent to fake_input by the agent, the test fails.
The common (and good) practice of unit testing is to follow the rule “one story, one test”. Hence, we might isolate this check into another self-contained test:
TEST(virtual_image_producer_tests, should_not_produce_images_when_no_commands_are_sent)
{
    const so_5::wrapped_env_t sobjectizer;
    const auto fake_input = create_mchain(sobjectizer.environment());
    sobjectizer.environment().introduce_coop([&](so_5::coop_t& c) {
        c.make_agent<virtual_image_producer>(fake_input->as_mbox(), sobjectizer.environment().create_mbox(), R"(test_data/replay)");
    });

    EXPECT_EQ(receive(from(fake_input).handle_all().empty_timeout(10ms)).extracted(), 0);
}
This relieves us of the need to check this condition in the other tests.
Getting back to the original test, we can finally send the start command and receive some images from the fake input channel:
TEST(virtual_image_producer_tests, should_send_images_from_folder_after_receiving_start_acquisition_command)
{
    const so_5::wrapped_env_t sobjectizer;
    const auto fake_input = create_mchain(sobjectizer.environment());
    const auto commands = sobjectizer.environment().create_mbox();
    sobjectizer.environment().introduce_coop([&](so_5::coop_t& c) {
        c.make_agent<virtual_image_producer>(fake_input->as_mbox(), commands, R"(test_data/replay)");
    });

    // send start command
    so_5::send<start_acquisition_command>(commands);

    // wait until 5 images are received
    std::vector<cv::Mat> actual;
    receive(from(fake_input).handle_n(5).empty_timeout(100ms), [&](cv::Mat img) {
        actual.push_back(std::move(img));
    });

    ASSERT_THAT(actual.size(), testing::Eq(5)) << "expected exactly 5 images";

    // images are strictly the same as the baselines (and are read in the same order)
    EXPECT_THAT(sum(actual[0] != cv::imread(R"(test_data/replay/1.jpg)")), testing::Eq(cv::Scalar(0, 0, 0, 0)));
    EXPECT_THAT(sum(actual[1] != cv::imread(R"(test_data/replay/2.jpg)")), testing::Eq(cv::Scalar(0, 0, 0, 0)));
    EXPECT_THAT(sum(actual[2] != cv::imread(R"(test_data/replay/3.jpg)")), testing::Eq(cv::Scalar(0, 0, 0, 0)));

    // next images are just sent cyclically
    EXPECT_THAT(sum(actual[0] != actual[3]), testing::Eq(cv::Scalar(0, 0, 0, 0)));
    EXPECT_THAT(sum(actual[1] != actual[4]), testing::Eq(cv::Scalar(0, 0, 0, 0)));
}
Some details:
- we receive exactly 5 images and we give the producer at most 100 milliseconds to send each image (that is, the chain can’t stay empty for more than 100ms, otherwise the receive operation stops);
- another option is to wait until exactly 5 images arrive; however, if for some reason the file system is stuck or some other bad condition happens on the machine, the test might hang (yes, another good reason to avoid accessing system resources from a test). Similarly, if the system is under pressure, 100ms might not be enough. Well, we don’t expect such conditions but you never know what happens on a build agent…pros and cons, as always;
- in OpenCV, checking if two images are equal (pixel-by-pixel) can be done in different ways. The one chosen here consists in summing the “delta” of differing pixels of the two images, which is expected to be strictly zero (see the sketch right after this list);
- to test that images are sent cyclically, we just compare the first with the fourth and the second with the fifth.
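As mentioned in the third bullet, the pixel-wise check might be wrapped into a tiny helper like the following sketch (the function name is just an example):

// two images are considered identical when they have the same geometry and type
// and no pixel differs: the per-channel sum of the mask "a != b" must be zero
bool images_are_identical(const cv::Mat& a, const cv::Mat& b)
{
    return a.size() == b.size()
        && a.type() == b.type()
        && cv::sum(a != b) == cv::Scalar(0, 0, 0, 0);
}

Then each expectation would read, for example, EXPECT_TRUE(images_are_identical(actual[0], cv::imread(R"(test_data/replay/1.jpg)"))).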
One critical issue we protect against here is infinite waiting. It’s common to make mistakes in agents, such as sending a message to the wrong channel or not sending a certain message at all. In such cases, as explained, the test can hang and this is critical both locally and on build agents. In our test, we mitigated the problem by setting a timeout on the receive function but, in general, we should set a time limit on the whole test execution. Since GoogleTest does not support that mechanism, we can resort to some custom implementation that, eventually, aborts the test program. For example, SObjectizer’s tests make extensive use of this helper.
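As a rough idea of what such a custom implementation might look like (this is only a sketch, not SObjectizer’s actual helper), a scoped watchdog that aborts the program when a time budget elapses could be enough:

#include <chrono>
#include <condition_variable>
#include <cstdlib>
#include <mutex>
#include <thread>

// scoped watchdog: if it is not destroyed within 'budget', the whole test program is aborted
class test_watchdog
{
public:
    explicit test_watchdog(std::chrono::steady_clock::duration budget)
        : m_worker{ [this, budget] {
            std::unique_lock lock{ m_mutex };
            // either the destructor disarms us in time or we kill the program
            if (!m_cv.wait_for(lock, budget, [this] { return m_disarmed; }))
                std::abort();
        } }
    {
    }

    ~test_watchdog()
    {
        {
            const std::scoped_lock lock{ m_mutex };
            m_disarmed = true;
        }
        m_cv.notify_one();
        m_worker.join();
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_cv;
    bool m_disarmed = false;
    std::thread m_worker; // declared last: it must start after the other members are ready
};

Declaring, for example, const test_watchdog watchdog{std::chrono::minutes(1)}; at the top of a test (or in a fixture) ensures the program gets aborted if the test takes longer than expected.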
Finally, you might ask why we don’t send the stop command before the series of expectations. Well, it’s not part of the test. We only verify that 5 images arrive after sending the start command, that’s all. The stop condition might be tested in another case.
An approach to testing failure conditions
Another useful test should verify that passing a nonexistent folder breaks the agent’s startup. Indeed, so_evt_start() throws an exception if the replay folder is invalid. Testing this condition gives us the opportunity to see another tool of SObjectizer.
Think about what happens when an exception escapes from so_evt_start(): by default, the program is aborted. However, we have learnt in the second post that we can customize that SObjectizer reaction. This means, just for this test, we can set the reaction to deregister_coop_on_exception that deregisters the cooperation in case of an uncaught exception. Then we can combine that with another feature of cooperations: a cooperation notificator. We can set a lambda that gets called when the cooperation is deregistered. One of the arguments of the lambda is the reason why the deregistration happened. The test should just expect that the reason is so_5::dereg_reason::unhandled_exception, which is exactly what we need:
TEST(virtual_image_producer_tests, when_folder_is_nonexistent_should_throw_exception_after_startup)
{
    const so_5::wrapped_env_t sobjectizer;
    std::atomic cooperation_deregistered_reason = 0;
    sobjectizer.environment().introduce_coop([&](so_5::coop_t& c) {
        c.set_exception_reaction(so_5::deregister_coop_on_exception);
        c.make_agent<virtual_image_producer>(sobjectizer.environment().create_mbox(), sobjectizer.environment().create_mbox(), R"(C:/geppo)");
        c.add_dereg_notificator([&](so_5::environment_t&, const so_5::coop_handle_t&, const so_5::coop_dereg_reason_t& why) noexcept {
            cooperation_deregistered_reason = why.reason();
            cooperation_deregistered_reason.notify_one();
        });
    });

    cooperation_deregistered_reason.wait(cooperation_deregistered_reason.load());
    EXPECT_THAT(cooperation_deregistered_reason, testing::Eq(so_5::dereg_reason::unhandled_exception));
}
A few details:
- c.set_exception_reaction(so_5::deregister_coop_on_exception); sets the exception reaction on the cooperation;
- c.add_dereg_notificator(lambda) sets a lambda that gets called when the cooperation is deregistered, passing, among other things, the reason;
- since the test body and the notificator run in different threads, using an atomic is a simple way to keep things synchronized;
- hopefully, C:/geppo is not an existing folder on your system 🙂
Well, this test is not perfect either: the type of the exception is not stored anywhere and we rely on a feature of the cooperation to test the (failing) behavior of the agent. The difficulty of testing this condition tells us something about the design of the agent: the error is considered unrecoverable. Is it really like that? Well, for the virtual camera it probably is. However, when using a real camera, a start failure is quite common for plenty of reasons. Thus, the test is actually revealing a possible design flaw that needs some attention. This is another reason why tests are useful, in addition to other more blatant selling points.
It’s not the purpose of this post to discuss the design further but we’ll get back to that in the future.
Testing through fake messages
As said, agents have this splendid property of not being coupled directly to other agents but only to message channels. This means agents work regardless of who is on the other side of a channel: it might be another agent, as in the real program, or a test function. Never mind.
The stream_detector is a perfect fit for putting this into practice. This agent is a state machine that generates two signals: when it receives an image for the first time, it sends stream_up; if no other images are received for a certain amount of time, it sends stream_down. We wrote this agent in the past here.
As before, we can be more or less strict: a thorough test should also verify that no messages are sent before the first image is received and after the stream is closed. But bear in mind that this is not the only way to go:
TEST(stream_detector_tests, should_send_stream_up_and_stream_down_properly)
{
    const so_5::wrapped_env_t sobjectizer;
    const auto input = sobjectizer.environment().create_mbox();
    const auto output = create_mchain(sobjectizer.environment());
    sobjectizer.environment().introduce_coop([&](so_5::coop_t& c) {
        c.make_agent<stream_detector>(input, output->as_mbox());
    });

    // ensure no messages arrive at this point...
    EXPECT_EQ(receive(from(output).handle_all().empty_timeout(10ms)).extracted(), 0);

    // should detect new stream
    so_5::send<cv::Mat>(input, cv::Mat{});
    bool stream_up_received = false;
    receive(from(output).handle_n(1).empty_timeout(100ms), [&](so_5::mhood_t<stream_detector::stream_up>) {
        stream_up_received = true;
    });
    EXPECT_THAT(stream_up_received, testing::IsTrue());

    // nothing should be sent because of these
    so_5::send<cv::Mat>(input, cv::Mat{});
    so_5::send<cv::Mat>(input, cv::Mat{});
    so_5::send<cv::Mat>(input, cv::Mat{});

    // stream down should arrive after some time of inactivity...
    bool stream_down_received = false;
    receive(from(output).handle_n(1).empty_timeout(700ms), [&](so_5::mhood_t<stream_detector::stream_down>) {
        stream_down_received = true;
    });
    EXPECT_THAT(stream_down_received, testing::IsTrue());

    // ensure no messages arrive at this point...
    EXPECT_EQ(receive(from(output).handle_all().empty_timeout(10ms)).extracted(), 0);
}
As you see, lines like so_5::send<cv::Mat>(input, cv::Mat{}); correspond to the generation of fake images. Again, there is some “paranoid code” to verify that no data is produced at certain moments.
Frankly speaking, this test inherently depends on time: indeed, stream_detector has its own timeout to detect that a stream has ended. This is nasty. A tiny improvement is making this timeout configurable, but it does not eradicate the issue at its core.
An example of integration test
At this point, since virtual_image_producer
and stream_detector
should work together as we expect, setting up an integration test makes some sense. Basically, the test should be very similar to the last one except for the fake images which will be truly sent by virtual_image_producer
:
TEST(integration_tests, stream_detector_should_detect_stream_activity_when_virtual_image_producer_sends_images)
{
    const so_5::wrapped_env_t sobjectizer;
    const auto channel = sobjectizer.environment().create_mbox();
    const auto commands = sobjectizer.environment().create_mbox();
    const auto output = create_mchain(sobjectizer.environment());
    sobjectizer.environment().introduce_coop([&](so_5::coop_t& c) {
        c.make_agent<virtual_image_producer>(channel, commands, R"(test_data/replay)");
        c.make_agent<stream_detector>(channel, output->as_mbox());
    });

    so_5::send<start_acquisition_command>(commands);

    bool stream_up_received = false;
    receive(from(output).handle_n(1).empty_timeout(100ms), [&](so_5::mhood_t<stream_detector::stream_up>) {
        stream_up_received = true;
    });
    EXPECT_THAT(stream_up_received, testing::IsTrue());

    so_5::send<stop_acquisition_command>(commands);

    bool stream_down_received = false;
    receive(from(output).handle_n(1).empty_timeout(700ms), [&](so_5::mhood_t<stream_detector::stream_down>) {
        stream_down_received = true;
    });
    EXPECT_THAT(stream_down_received, testing::IsTrue());
}
As explained before, it’s possible to make the test a bit more defensive to avoid hanging in case of file system problems.
Contributing
Some agents are harder to test, or it does not make much sense to test them, in particular:
- those printing something (e.g. error_logger, image_tracer, stream_heartbeat),
- those using GUI functions (e.g. the image_viewer family, remote_control),
- “real” camera producers.
On the other hand, others can be unit tested and/or made part of an integration test:
- face_detector,
- image_resizer,
- image_saver,
- the maint_gui::image_viewer family (thanks to the fact that they do not depend on OpenCV anymore).
Help wanted here!
If you feel like giving a hand here, please open a pull request on the repository page or create an issue for any discussion or idea!
Takeaway
In this episode we have learned:
- testing agents is more difficult than testing ordinary objects because of asynchronous behavior, statelessness, and some details of the framework;
- since agents are not coupled with each other, we can easily simulate behaviors by sending fake messages from a test case;
- message chains can be passed to agents that take message boxes through the function as_mbox(); this way, we can receive their output from a test case;
- the result of receive can be used to check if messages are sent to a certain channel, regardless of their type;
- cooperation dereg notificators can be used to catch exceptions escaping from handlers;
- issues caused by asynchronous behavior can be softened by adding a receive timeout or, more generally, by setting a time limit on the test execution (if the testing framework does not support that, we should resort to custom implementations).
What’s next?
While we add some tests, Ekt gets back to us for a new request: he wants to command the camera from other remote programs. He should have something in mind…
In the next episode we’ll make calico able to communicate over the network!
Thanks to Yauheni Akhotnikau for having reviewed this post.