moto is an alternative to the boto Stubber which mocks actual AWS services instead of stubbing out simple response JSON. It can also handle credentials provisioning without needing to set up mock clients. To show how moto works, I'll primarily be demonstrating through tests against AWS SDK sample code.
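As an aside, even though moto handles credentials itself, a common belt-and-suspenders pattern is to point the environment at throwaway credentials before any boto3 client is created, so no call can ever reach a real account. A minimal sketch (the variable values are arbitrary dummies):

import os

# Dummy credentials so nothing can accidentally touch a real AWS account
os.environ["AWS_ACCESS_KEY_ID"] = "testing"
os.environ["AWS_SECRET_ACCESS_KEY"] = "testing"
os.environ["AWS_DEFAULT_REGION"] = "us-east-1"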
Basic moto usage
moto will need to be installed on the system first, either with a simple pip install moto or by adding it to whatever package management setup you use. Then it can be used to mock specific services. To take the example from the project's README.md:
import boto3


class MyModel:
    def __init__(self, name, value):
        self.name = name
        self.value = value

    def save(self):
        s3 = boto3.client("s3", region_name="us-east-1")
        s3.put_object(Bucket="mybucket", Key=self.name, Body=self.value)
import boto3
from moto import mock_s3

from mymodule import MyModel


@mock_s3
def test_my_model_save():
    conn = boto3.resource("s3", region_name="us-east-1")
    # We need to create the bucket since this is all in Moto's 'virtual' AWS account
    conn.create_bucket(Bucket="mybucket")

    model_instance = MyModel("steve", "is awesome")
    model_instance.save()

    body = conn.Object("mybucket", "steve").get()["Body"].read().decode("utf-8")

    assert body == "is awesome"
The first thing to notice here is the @mock_s3 decorator. This lets moto know you want to mock the S3 service. If you had another service besides S3 in MyModel, you would also need to add a decorator for it:
@mock_s3
@mock_sns
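For instance, a hedged sketch of a test for a hypothetical backend that touches both S3 and SNS might stack the decorators like this (the bucket and topic names here are made up):

import boto3
from moto import mock_s3, mock_sns


@mock_s3
@mock_sns
def test_backend_that_uses_s3_and_sns():
    s3 = boto3.client("s3", region_name="us-east-1")
    sns = boto3.client("sns", region_name="us-east-1")

    # Both services are mocked, so setup calls work the same way as with S3 alone
    s3.create_bucket(Bucket="mybucket")
    topic_arn = sns.create_topic(Name="my-topic")["TopicArn"]

    assert topic_arn.endswith("my-topic")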
If you really need it, there's also a @mock_all decorator which mocks every service. Given how much extra mocking that pulls in, it's not recommended though. Next there is a setup phase:
conn = boto3.resource("s3", region_name="us-east-1")
# We need to create the bucket since this is all in Moto's 'virtual' AWS account
conn.create_bucket(Bucket="mybucket")
It's important to note that, unlike the boto Stubber, moto tends to be more particular about having a region defined, so you'll want to make sure the backend code provides a way to set one. This setup process can be thought of as using IaC (infrastructure as code) to stand up testing infrastructure, except you're setting up the AWS services via API calls instead of templates.
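For the region point, a minimal sketch of a hypothetical helper that lets tests control the region (get_s3_client and its default region are made up, not part of the sample code):

import boto3


def get_s3_client(region_name="us-east-1"):
    # Letting callers (and tests) pass the region keeps moto happy
    # without hard-coding it inside the backend code
    return boto3.client("s3", region_name=region_name)

With the setup phase done, the actual backend code is called: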
model_instance = MyModel("steve", "is awesome")
model_instance.save()
Note that the client didn't have to be passed in. Everything is kept in the moto backend for any connection to utilize. Finally, another call is made by the same client that set up the mock service state:
body = conn.Object("mybucket", "steve").get()["Body"].read().decode("utf-8")
assert body == "is awesome"
This reads the data after all the calls are done to ensure that the write was indeed made.
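To illustrate the shared-backend point, here's a minimal sketch (the bucket and key names are made up) where a completely separate client still sees the state created by the first one:

import boto3
from moto import mock_s3


@mock_s3
def test_separate_clients_share_state():
    # First client creates the bucket and writes an object
    writer = boto3.client("s3", region_name="us-east-1")
    writer.create_bucket(Bucket="shared-bucket")
    writer.put_object(Bucket="shared-bucket", Key="greeting", Body=b"hello")

    # A brand new session/client still sees the same mocked state
    reader = boto3.session.Session().client("s3", region_name="us-east-1")
    body = reader.get_object(Bucket="shared-bucket", Key="greeting")["Body"].read()
    assert body == b"hello"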
Service Implementation
One thing to keep in mind with moto is that mocking AWS services programmatically is a considerably difficult task, and the project is run on a volunteer basis. The moto documentation has an overview of the supported AWS services that can be mocked. From there you can drill down into a service you're interested in and see which calls it supports. The list for AutoScaling, for example, shows attach_instances as supported but not attach_traffic_sources.
Another issue is state transitions. Because operations are done in memory, an EC2 instance will be up nearly instantaneously. Moto does come with a State Transition Manager, but easy support for it is only available in a few select services. Otherwise you'll need to create your own state transition handling.
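As a quick illustration of that instant-state behavior (a minimal sketch with a made-up table name): a mocked DynamoDB table reports itself as ACTIVE immediately after creation, with no CREATING phase to wait through:

import boto3
from moto import mock_dynamodb


@mock_dynamodb
def test_table_is_active_immediately():
    dynamo = boto3.resource('dynamodb', region_name='us-west-2')
    dynamo.create_table(
        TableName='InstantTable',
        KeySchema=[{'AttributeName': 'id', 'KeyType': 'HASH'}],
        AttributeDefinitions=[{'AttributeName': 'id', 'AttributeType': 'S'}],
        ProvisionedThroughput={'ReadCapacityUnits': 1, 'WriteCapacityUnits': 1}
    )
    # Against real AWS this would likely report CREATING; in moto it's ACTIVE right away
    status = dynamo.meta.client.describe_table(TableName='InstantTable')['Table']['TableStatus']
    assert status == 'ACTIVE'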
Mocking dynamodb
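Before diving in: the Movies class used in these tests comes from the AWS SDK sample code (python_sample.movies). The real sample is longer, but a rough sketch of the shape the tests rely on, inferred from the calls made below rather than copied from the actual source, looks something like this (only a few methods shown):

from decimal import Decimal


class Movies:
    """Rough approximation of the AWS sample class; the real code differs in details."""

    def __init__(self, dynamo_resource):
        self.dynamo_resource = dynamo_resource
        self.table = None  # populated by create_table (or assigned directly, as in the fixture later)

    def create_table(self, table_name):
        self.table = self.dynamo_resource.create_table(
            TableName=table_name,
            KeySchema=[
                {'AttributeName': 'year', 'KeyType': 'HASH'},
                {'AttributeName': 'title', 'KeyType': 'RANGE'}
            ],
            AttributeDefinitions=[
                {'AttributeName': 'year', 'AttributeType': 'N'},
                {'AttributeName': 'title', 'AttributeType': 'S'}
            ],
            ProvisionedThroughput={'ReadCapacityUnits': 10, 'WriteCapacityUnits': 10}
        )
        return self.table

    def add_movie(self, title, year, plot, rating):
        # Parameter order inferred from the test calls below
        self.table.put_item(Item={
            'year': year,
            'title': title,
            'info': {'plot': plot, 'rating': Decimal(str(rating))}
        })

    def delete_movie(self, title, year):
        self.table.delete_item(Key={'year': year, 'title': title})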
For this exercise I'll start out with something to test table creation:
import boto3
from moto import mock_dynamodb
from python_sample.movies import Movies
@mock_dynamodb
def test_movie_table_create():
    dynamo_resource = boto3.resource('dynamodb', region_name='us-west-2')
    movies = Movies(dynamo_resource)
    movies.create_table('MoviesTable')
    assert dynamo_resource.Table('MoviesTable').name == 'MoviesTable'
The mock_dynamodb decorator is applied to ensure all DynamoDB calls are mocked. Then a DynamoDB ServiceResource is created, which the Movies constructor requires as an argument. region_name is passed in since regions are necessary in moto mocking. The call to create_table does the actual backend boto calls. After this comes the validation process:
assert dynamo_resource.Table('MoviesTable').name == 'MoviesTable'
The assertion on .name is needed because just assert dynamo_resource.Table('MoviesTable') wouldn't work as intended, since that still returns a valid object even if the table doesn't exist. In general I prefer making boto calls for validation purposes even if the backend code has an operation to do so. Now to add more tests covering most of the Movies methods:
Note: the code duplication is on purpose
import boto3
from decimal import Decimal
from moto import mock_dynamodb
from python_sample.movies import Movies
TABLE_NAME = 'MoviesTable'
MOVIE_LIST = [
    {'year': 2000, 'title': 'Test Movie', 'info': {'plot': 'Something', 'rating': Decimal(str('20'))}},
    {'year': 2001, 'title': 'Test Movie2', 'info': {'plot': 'Something', 'rating': Decimal(str('20'))}},
    {'year': 2002, 'title': 'Test Movie3', 'info': {'plot': 'Something', 'rating': Decimal(str('20'))}}
]


@mock_dynamodb
def test_movie_table_create():
    dynamo_resource = boto3.resource('dynamodb', region_name='us-west-2')
    movies = Movies(dynamo_resource)
    movies.create_table(TABLE_NAME)
    assert dynamo_resource.Table(TABLE_NAME).name == TABLE_NAME


@mock_dynamodb
def test_movie_write():
    dynamo_resource = boto3.resource('dynamodb', region_name='us-west-2')
    movies = Movies(dynamo_resource)
    movies.create_table(TABLE_NAME)
    movies.add_movie('Test Movie', 2000, 'Some Plot', '13')
    item = dynamo_resource.Table(TABLE_NAME).get_item(Key={'year': 2000, 'title': 'Test Movie'})
    assert item['Item']


@mock_dynamodb
def test_movie_write_batch():
    dynamo_resource = boto3.resource('dynamodb', region_name='us-west-2')
    movies = Movies(dynamo_resource)
    movies.create_table(TABLE_NAME)
    movies.write_batch(MOVIE_LIST)
    for movie_entry in MOVIE_LIST:
        item = dynamo_resource.Table(TABLE_NAME).get_item(Key={'year': movie_entry['year'], 'title': movie_entry['title']})
        assert item['Item']


@mock_dynamodb
def test_movie_update():
    dynamo_resource = boto3.resource('dynamodb', region_name='us-west-2')
    movies = Movies(dynamo_resource)
    movies.create_table(TABLE_NAME)
    dynamo_resource.Table(TABLE_NAME).put_item(Item={
        'year': 2000,
        'title': 'Test Movie',
        'info': {'plot': 'Something', 'rating': Decimal(str('20'))}
    })
    movies.update_movie('Test Movie', 2000, '21', 'Something2')
    item = dynamo_resource.Table(TABLE_NAME).get_item(Key={'year': 2000, 'title': 'Test Movie'})
    assert item['Item']
    assert item['Item']['info']['rating'] == 21
    assert item['Item']['info']['plot'] == 'Something2'


@mock_dynamodb
def test_movies_scan_and_query():
    dynamo_resource = boto3.resource('dynamodb', region_name='us-west-2')
    movies = Movies(dynamo_resource)
    movies.create_table(TABLE_NAME)
    table = dynamo_resource.Table(TABLE_NAME)
    with table.batch_writer() as writer:
        for movie in MOVIE_LIST:
            writer.put_item(Item=movie)
    items = movies.scan_movies(year_range={'first': 2000, 'second': 2003})
    assert len(items) == len(MOVIE_LIST)
    items = movies.scan_movies(year_range={'first': 3000, 'second': 3003})
    assert not items
    item = movies.query_movies(2000)
    assert item
    item = movies.query_movies(3000)
    assert not item


@mock_dynamodb
def test_movie_delete():
    dynamo_resource = boto3.resource('dynamodb', region_name='us-west-2')
    movies = Movies(dynamo_resource)
    movies.create_table(TABLE_NAME)
    table = dynamo_resource.Table(TABLE_NAME)
    table.put_item(Item={
        'year': 2000,
        'title': 'Test Movie',
        'info': {'plot': 'Something', 'rating': Decimal(str('20'))}
    })
    movies.delete_movie('Test Movie', 2000)
    item = table.get_item(Key={'year': 2000, 'title': 'Test Movie'})
    assert 'Item' not in item.keys()
As you can see, the process is fairly simple:
- Create the table
- Perform an operation
- Validate the operation with a boto call
I'm also able to test operations such as the batch writer, which would not have been as feasible with the boto Stubber method given the nature of the call. Since state is held, I'm even able to combine scan and query into a single test given the similarities in their functionality. However, there are a few things that aren't quite efficient here:
- The create_table method from the backend code is being used to create tables, when the preference should be to use raw boto calls
- There's a lot of duplication in the resource creation
Fixtures
Thankfully pytest has a feature called fixtures which makes dealing with this a lot easier. Let's see how this will work:
TABLE_NAME = 'MoviesTable'
MOVIE_LIST = [
{'year': 2000, 'title': 'Test Movie', 'info': {'plot': 'Something', 'rating': Decimal(str('20'))}},
{'year': 2001, 'title': 'Test Movie2', 'info': {'plot': 'Something', 'rating': Decimal(str('20'))}},
{'year': 2002, 'title': 'Test Movie3', 'info': {'plot': 'Something', 'rating': Decimal(str('20'))}}
]
@pytest.fixture
def movies():
with mock_dynamodb():
dynamo_resource = boto3.resource('dynamodb', region_name='us-west-2')
table = dynamo_resource.create_table(
TableName=TABLE_NAME,
KeySchema=[
{'AttributeName': 'year', 'KeyType': 'HASH'}, # Partition key
{'AttributeName': 'title', 'KeyType': 'RANGE'} # Sort key
],
AttributeDefinitions=[
{'AttributeName': 'year', 'AttributeType': 'N'},
{'AttributeName': 'title', 'AttributeType': 'S'}
],
ProvisionedThroughput={'ReadCapacityUnits': 10, 'WriteCapacityUnits': 10}
)
table.meta.client.get_waiter('table_exists').wait(TableName=TABLE_NAME)
movies = Movies(dynamo_resource)
movies.table = table
yield movies
A fixture essentially provides something that can be passed into test cases as an argument, like so:
def test_movie_write(movies):
    movies.add_movie('Test Movie', 2000, 'Some Plot', '13')
    item = movies.table.get_item(Key={'year': 2000, 'title': 'Test Movie'})
    assert item['Item']
In this case, movies in the test's arguments maps to the fixture function of the same name. Inside the fixture, the mock_dynamodb() context manager is used instead of the decorator to reduce decorator noise. Otherwise the movies fixture would look like:
@pytest.fixture
@mock_dynamodb
def movies():
Not only that, but the order of the decorators matters as well. Next the table is created using a straight boto API call. A Movies instance is created, its table property is set, and finally the instance is handed back to the caller via a generator yield. Setting movies.table is necessary because create_table is no longer being called, so self.table would otherwise be left as None and the other underlying methods that work off self.table would break. Something else of importance:
def test_movie_write(movies):
    movies.add_movie('Test Movie', 2000, 'Some Plot', '13')
is that the test methods no longer use the @mock_dynamodb decorator. They rely on the fixture instead, and the call order with fixtures ends up being:
movies_fixture()
create movies instance
call test_movie_write(movies instance)
This means we only need the mock at the fixture level. That said, the create_table test doesn't use the fixture and does things the old way, since the fixture code would simply duplicate what it's testing:
@mock_dynamodb
def test_movie_table_create():
    dynamo_resource = boto3.resource('dynamodb', region_name='us-west-2')
    movies = Movies(dynamo_resource)
    movies.create_table(TABLE_NAME)
    assert dynamo_resource.Table(TABLE_NAME).name == TABLE_NAME
The resulting final test code looks like:
import boto3
import pytest
from decimal import Decimal
from moto import mock_dynamodb
from python_sample.movies import Movies
TABLE_NAME = 'MoviesTable'
MOVIE_LIST = [
{'year': 2000, 'title': 'Test Movie', 'info': {'plot': 'Something', 'rating': Decimal(str('20'))}},
{'year': 2001, 'title': 'Test Movie2', 'info': {'plot': 'Something', 'rating': Decimal(str('20'))}},
{'year': 2002, 'title': 'Test Movie3', 'info': {'plot': 'Something', 'rating': Decimal(str('20'))}}
]
@pytest.fixture
def movies():
with mock_dynamodb():
dynamo_resource = boto3.resource('dynamodb', region_name='us-west-2')
table = dynamo_resource.create_table(
TableName=TABLE_NAME,
KeySchema=[
{'AttributeName': 'year', 'KeyType': 'HASH'}, # Partition key
{'AttributeName': 'title', 'KeyType': 'RANGE'} # Sort key
],
AttributeDefinitions=[
{'AttributeName': 'year', 'AttributeType': 'N'},
{'AttributeName': 'title', 'AttributeType': 'S'}
],
ProvisionedThroughput={'ReadCapacityUnits': 10, 'WriteCapacityUnits': 10}
)
table.meta.client.get_waiter('table_exists').wait(TableName=TABLE_NAME)
movies = Movies(dynamo_resource)
movies.table = table
yield movies
@mock_dynamodb
def test_movie_table_create():
dynamo_resource = boto3.resource('dynamodb', region_name='us-west-2')
movies = Movies(dynamo_resource)
movies.create_table(TABLE_NAME)
assert dynamo_resource.Table(TABLE_NAME).name == TABLE_NAME
def test_movie_write(movies):
movies.add_movie('Test Movie', 2000, 'Some Plot', '13')
item = movies.table.get_item(Key={'year': 2000, 'title': 'Test Movie'})
assert item['Item']
def test_movie_write_batch(movies):
movies.write_batch(MOVIE_LIST)
for movie_entry in MOVIE_LIST:
item = movies.table.get_item(Key={'year': movie_entry['year'], 'title': movie_entry['title']})
assert item['Item']
def test_movie_update(movies):
movies.table.put_item(Item={
'year': 2000,
'title': 'Test Movie',
'info': {'plot': 'Something', 'rating': Decimal(str('20'))}
})
movies.update_movie('Test Movie', 2000, '21', 'Something2')
item = movies.table.get_item(Key={'year': 2000, 'title': 'Test Movie'})
assert item['Item']
assert item['Item']['info']['rating'] == 21
assert item['Item']['info']['plot'] == 'Something2'
def test_movies_scan_and_query(movies):
with movies.table.batch_writer() as writer:
for movie in MOVIE_LIST:
writer.put_item(Item=movie)
items = movies.scan_movies(year_range={'first': 2000, 'second': 2003})
assert len(items) == len(MOVIE_LIST)
items = movies.scan_movies(year_range={'first': 3000, 'second': 3003})
assert not items
item = movies.query_movies(2000)
assert item
item = movies.query_movies(3000)
assert not item
def test_movie_delete(movies):
movies.table.put_item(Item={
'year': 2000,
'title': 'Test Movie',
'info': {'plot': 'Something', 'rating': Decimal(str('20'))}
})
movies.delete_movie('Test Movie', 2000)
item = movies.table.get_item(Key={'year': 2000, 'title':'Test Movie'})
assert 'Item' not in item.keys()
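As a final aside that goes beyond the original example: if several test modules need the same fixture, pytest will discover it automatically when it lives in a conftest.py next to the tests, so nothing has to import it. A sketch of that layout (the file split is hypothetical, the fixture body is just the one above moved over):

# conftest.py -- pytest automatically makes fixtures defined here available
# to every test module in this directory, no imports required
import boto3
import pytest
from moto import mock_dynamodb

from python_sample.movies import Movies

TABLE_NAME = 'MoviesTable'


@pytest.fixture
def movies():
    with mock_dynamodb():
        dynamo_resource = boto3.resource('dynamodb', region_name='us-west-2')
        table = dynamo_resource.create_table(
            TableName=TABLE_NAME,
            KeySchema=[
                {'AttributeName': 'year', 'KeyType': 'HASH'},
                {'AttributeName': 'title', 'KeyType': 'RANGE'}
            ],
            AttributeDefinitions=[
                {'AttributeName': 'year', 'AttributeType': 'N'},
                {'AttributeName': 'title', 'AttributeType': 'S'}
            ],
            ProvisionedThroughput={'ReadCapacityUnits': 10, 'WriteCapacityUnits': 10}
        )
        movies = Movies(dynamo_resource)
        movies.table = table
        yield movies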
Conclusion
This concludes a look into how moto can be used for testing boto calls. While it covers a considerable amount, limitations on service and API call availability, as well as state transitions, should be taken into consideration. Fixtures can also help centralize the bulk of the setup work to further simplify the testing process.