[Photo by Possessed Photography on Unsplash, modified (cropped)]
Last time we showed the importance of data persistence with the help of a database.
This time we will explore an alternative "DB" that I think fits a micro-service architecture very well.
The code for this tutorial can be found in this repository: github.com/davidedelpapa/rocket-tut, and has been tagged for your convenience:
git clone https://github.com/davidedelpapa/rocket-tut.git
cd rocket-tut
git checkout tags/tut4alt
Redis
We will use Redis as the database for this tutorial. Redis is not a database in the strict sense, yet it is not entirely different from one either.
What?
Properly speaking, Redis is a key-value store: it stores data, just as a DB does, but the data is stored as (key = value) pairs. That is, data is retrieved mostly by key.
The Redis server is meant to run in-memory, BUT it syncs (almost instantaneously) with persistent storage on the hard drive. Since it is in-memory, Redis can run on the same machine (or virtual machine, or container) as the micro-service that uses it, and be blazing fast with a small footprint; yet it acts as a persistent store, so the data is saved.
Redis is a good choice for fast response times, and it can be used as a (persistent) cache or a message broker. It is quite flexible too, but it cannot compare to a proper database when it comes to user management and scalability (yes, it can be scaled, but it is nothing compared to a proper DBMS).
It can be a good solution if the API server does not need to store much data, or if it just needs a simple cache to exchange data with other services. We will explore it as an alternative to MongoDB, in order to have more choices when architecting the solution that best fits our problem.
A new branch
Notice that, in order to use Redis as a replacement for MongoDB, we need to branch off from the old commit, before the changes made to adapt to MongoDB. If you have been following along on your own repo, here are the instructions to branch off from that older commit.
First let's check the ID of the commit with
git log --pretty=oneline
When you have found it, you have to branch off from the older commit (fill in your own commit ID)
git checkout -b tut04alt <Commit-ID>
With checkout -b we create the branch and switch to it at the same time; branch alone would just create the new branch, so that we could switch to it later on.
Now we are at the same point we were at after tutorial 3 (or 3 part II).
Prerequisites
We need Redis, in case it's not already installed on our system.
sudo apt install redis-server
Now we need to let Redis interface with systemctl. We need to modify /etc/redis/redis.conf, for example using nano:
sudo nano /etc/redis/redis.conf
If you have never used nano before (shame on you; ahahah, just kidding), to search the text we just need ctrl+w (at the bottom, the instructions say ^W Where Is). We search for "supervised", which is the configuration we need to change. Once found, we need to pass from
supervised no
to
supervised systemd
We save with ctrl+o, keeping the same file name (just press enter). Now we can quit nano with ctrl+x.
Let's restart the Redis server:
sudo systemctl restart redis
Now, whenever we need it, we can start it with sudo systemctl start redis and stop it with sudo systemctl stop redis.
We can check that all went well with the cli (installed by default with the server):
redis-cli
127.0.0.1:6379>
If we get the prompt with the local address and Redis port, everything went smoothly. We can even check whether the server is alive with the ping command:
127.0.0.1:6379> ping
PONG
Perfect! (Use the quit command to exit...)
You can play with Redis and familiarize yourself with its data structures if you want. Take a look at the interactive tutorial and the list of all commands to learn some more.
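For example, a quick session with plain keys and hashes (the key names here are made up, just for illustration) could look like this:
127.0.0.1:6379> set greeting "hello"
OK
127.0.0.1:6379> get greeting
"hello"
127.0.0.1:6379> hmset user:1 name "Jane" email "jane@m.com"
OK
127.0.0.1:6379> hgetall user:1
1) "name"
2) "Jane"
3) "email"
4) "jane@m.com"
We will use hashes much like this one to store our users.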
Let's get going with the Rust part now:
cargo add dotenv r2d2 r2d2_redis anyhow
We'll see the use of anyhow later on.
Now we can use Redis on our Rust server.
Let's put Redis to use
As with MongoDB, we need to change src/data/db.rs in order to let update_password() and update_user() return an instance of the changed User:
pub fn update_password(&mut self, password: &String) -> Self {
    self.hashed_password = hash_password(password, &self.salt);
    self.updated = Utc::now();
    self.to_owned()
}

pub fn update_user(&mut self, name: &String, email: &String) -> Self {
    self.name = name.to_string();
    self.email = email.to_string();
    self.updated = Utc::now();
    self.to_owned()
}
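Since both methods hand back an owned copy of the modified struct, they can be used inline wherever we need to persist the new state; a minimal sketch (the variable and the values are made up):
// assuming `user` is a mutable User we already retrieved;
// update_user() refreshes the fields and the `updated` timestamp,
// and returns an owned copy of the new state
let updated_copy = user.update_user(&"Jane Doe".to_string(), &"jane@m.com".to_string());
// `updated_copy` can now be serialized to storage, as we will see shortly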
In src/data/ we need a redis_connection.rs:
use std::ops::{Deref, DerefMut};
use std::env;
use dotenv::dotenv;
use r2d2;
use r2d2::PooledConnection;
use r2d2_redis::RedisConnectionManager;
use rocket::http::Status;
use rocket::request::{self, FromRequest};
use rocket::{Outcome, Request, State};

type Pool = r2d2::Pool<RedisConnectionManager>;
type PooledConn = PooledConnection<RedisConnectionManager>;

pub struct Conn(pub PooledConn);

impl<'a, 'r> FromRequest<'a, 'r> for Conn {
    type Error = ();

    fn from_request(request: &'a Request<'r>) -> request::Outcome<Conn, ()> {
        let pool = request.guard::<State<Pool>>()?;
        match pool.get() {
            Ok(database) => Outcome::Success(Conn(database)),
            Err(_) => Outcome::Failure((Status::ServiceUnavailable, ())),
        }
    }
}

impl Deref for Conn {
    type Target = PooledConn;

    fn deref(&self) -> &Self::Target {
        &self.0
    }
}

impl DerefMut for Conn {
    fn deref_mut(&mut self) -> &mut Self::Target {
        &mut self.0
    }
}

pub fn init_pool() -> Pool {
    dotenv().ok();
    let redis_address = env::var("REDIS_ADDRESS").expect("REDIS_ADDRESS missing");
    let redis_port = env::var("REDIS_PORT").expect("REDIS_PORT missing");
    let redis_db = env::var("REDIS_DB").expect("REDIS_DB missing");
    //let redis_password = env::var("REDIS_PASSWORD").expect("REDIS_PASSWORD missing");
    let manager = RedisConnectionManager::new(format!("redis://{}:{}/{}", redis_address, redis_port, redis_db)).expect("connection manager");
    // Otherwise, with password:
    //let manager = RedisConnectionManager::new(format!("redis://user:{}@{}:{}/{}", redis_password, redis_address, redis_port, redis_db)).expect("connection manager");
    match r2d2::Pool::builder().max_size(15).build(manager) {
        Ok(pool) => pool,
        Err(e) => panic!("Error: failed to create database pool {}", e),
    }
}
As you can see, it is mostly similar to mongo_connection.rs: we define a Pool, a PooledConn, and a Conn; we impl FromRequest for Conn, as well as Deref. This time we also need a DerefMut, because all the redis commands are implemented as a trait on the connection, and they all take a &mut self as first argument. If the connection cannot be dereferenced as mutable, it won't work.
NOTE: see the full list of Commands; most of Redis' commands are implemented.
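Just to illustrate the point, here is a minimal sketch (the key name is invented) of calling a Commands method straight through our guard; it compiles only because Conn can be dereferenced mutably:
use r2d2_redis::redis::{self, Commands};

// set() takes &mut self, so the guard must yield a &mut Connection
fn touch(connection: &mut Conn) -> redis::RedisResult<()> {
    let _: () = connection.set("last_seen", "now")?;
    Ok(())
}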
In init_pool() we are still taking the info from our .env file, but this time, if we want a secure login, we only need the password... Redis is not a proper database management system, and until recently it did not have a way to register users, only to authenticate against a password...
Here is an example .env file, which, as usual, I did not commit:
REDIS_ADDRESS=127.0.0.1
REDIS_PORT=6379
REDIS_DB=1
REDIS_PASSWORD=password
Speaking of the DB... Redis uses integers as DB names, and the cli connects directly to DB 0. We are going to use a separate DB (1) in order to have a series of keys unique to the server we are using, and not pollute the name-space.
Another alternative that gets used a lot is to just prefix each key, in the form "prefix:actual_key". I'm going for the less complicated option code-wise: a different DB.
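For completeness, here is a sketch of what the prefix alternative could look like (the prefix and helper are hypothetical, not part of the repo):
// Hypothetical helper for the prefix strategy: every key gets
// namespaced as "rocket_tut:<actual_key>" inside the shared DB
const PREFIX: &str = "rocket_tut";

fn prefixed(key: &str) -> String {
    format!("{}:{}", PREFIX, key)
}

// e.g. prefixed("email_lookup") yields "rocket_tut:email_lookup"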
Time to move on to src/lib.rs:
#![feature(proc_macro_hygiene, decl_macro)]
#![allow(unused_attributes)]
#[macro_use] use rocket::*;
use rocket_contrib::serve::StaticFiles;
use rocket_contrib::helmet::SpaceHelmet;

pub mod routes;
pub mod data;

pub fn rocket_builder() -> rocket::Rocket {
    rocket::ignite().attach(SpaceHelmet::default())
        .mount("/", routes![routes::ping::ping_fn])
        .mount("/api", routes![
            routes::user::user_list_rt,
            routes::user::new_user_rt,
            routes::user::info_user_rt,
            routes::user::update_user_rt,
            routes::user::delete_user_rt,
            routes::user::patch_user_rt,
            routes::user::id_user_rt
        ])
        .mount("/files", StaticFiles::from("static/"))
        .manage(data::redis_connection::init_pool())
}
This too is almost identical to the one for MongoDB. Using r2d2 clearly has its advantages, and the similarity between the various interfaces is striking.
What is absolutely different between redis and mongodb is the handling of the commands.
As we have seen, the mongodb crate relied heavily on the pattern of returning Result<Option<T>, E>, whereas redis relies on the sole Result<T, E>. This means there's less to match for us.
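To see the difference concretely, here is a small self-contained sketch (with stand-in functions, not the real crate APIs) of what unpacking the two styles looks like:
// Stand-ins mimicking the two crates' return styles
fn mongo_style_find(_key: &str) -> Result<Option<String>, String> {
    Ok(None) // pretend the key was not found
}

fn redis_style_get(_key: &str) -> Result<String, String> {
    Err("key not found".to_string()) // a miss surfaces as an Err here
}

fn main() {
    // Result<Option<T>, E>: two layers to unpack
    match mongo_style_find("user") {
        Ok(Some(v)) => println!("found: {}", v),
        Ok(None) => println!("not found"),
        Err(e) => println!("db error: {}", e),
    }
    // Result<T, E>: a single layer
    match redis_style_get("user") {
        Ok(v) => println!("found: {}", v),
        Err(e) => println!("miss or error: {}", e),
    }
}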
The user routes
Some architectural considerations are due before we even take a look at the code of src/routes/user.rs:
Since the crate's interface requires less matching, and since the Redis commands are simpler and less nuanced than those of MongoDB (for one, we do not have to use BSON docs!), we will resort to a strategy that for MongoDB would have been much more complicated and would have required many more exceptions: we will implement two methods on User, one to retrieve a user from Redis, and one to save the User to Redis. That is, we will serialize and deserialize our Rust structs using the data types available in Redis.
We will use the Redis data type called hashes to serialize the User. In practice, for each hash key the value is an object with many fields: it is like a nested key-value store. The number of keys each hash stores can be variable, like MongoDB documents, but we will use it just to serialize a struct conveniently (that is, for now we will know the number of fields at compile time).
We will also use an index lookup; that is, we will build a sort of index where checking for an email returns the corresponding user. This was absolutely not needed in MongoDB. Notice, however, that this technique is heavily used in relational databases to speed up the response times of intensive queries. Here we will use it for the same purpose: to avoid searching inside every user to check whether it corresponds to the email we need to find (that would be a dumb and slow process).
A note here: since we were using BSON to serialize the User to MongoDB, in reality we used this strategy with MongoDB as well... But due to the complexity of the communication, we could not take it to the lengths we will see with Redis.
Serializing and deserializing the User
Before everything, the use section:
use std::collections::HashMap;
use chrono::{DateTime, Utc};
use rocket::*;
use rocket_contrib::json::{Json, JsonValue};
use rocket_contrib::json;
use rocket_contrib::uuid::Uuid;
use uuid::Uuid as Uuid2;
use rocket::response;
use rocket::http::{ContentType, Status};
use rocket::response::{Responder, Response};
use r2d2_redis::redis as redis;
use redis::Commands;
use anyhow::Result as AnyResult;
use anyhow::anyhow;
use crate::data::db::{User, InsertableUser, ResponseUser, UserPassword};
use crate::data::redis_connection::Conn;
const LOOKUP: &str = "email_lookup";
We also define the lookup key here.
Notice that we rename redis and uuid, the first for convenience, the second so as not to pollute the name-space; for this same reason we rename anyhow's Result as well... (in reality there should be no need to rename it, but I prefer it this way).
We use the standard collection HashMap to get a hash back from Redis.
Next we implement a default error response for the "Internal server error", as we did in the MongoDB tutorial; nothing new there, so check the repo if you need a reminder on how to do it.
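For reference, here is a minimal sketch of that responder, assuming it has the same shape as in the MongoDB tutorial (the status codes are my assumption; the repo holds the authoritative version):
pub struct ApiResponse {
    pub json: JsonValue,
    pub status: Status,
}

impl ApiResponse {
    pub fn ok(json: JsonValue) -> Self {
        ApiResponse { json, status: Status::Ok }
    }
    pub fn err(json: JsonValue) -> Self {
        ApiResponse { json, status: Status::NotFound } // assumed status
    }
    pub fn internal_err() -> Self {
        ApiResponse { json: json!("Internal server error"), status: Status::InternalServerError }
    }
}

impl<'r> Responder<'r> for ApiResponse {
    fn respond_to(self, req: &Request) -> response::Result<'r> {
        Response::build_from(self.json.respond_to(req)?)
            .status(self.status)
            .header(ContentType::JSON)
            .ok()
    }
}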
What interests us most is the impl User {} part.
impl User {
    fn to_redis(self, connection: &mut Conn) -> AnyResult<()> {
        let id = self.id.to_string();
        let email = self.email.clone();
        let r_user = [
            ("name", self.name),
            ("email", self.email),
            ("hashed_password", self.hashed_password),
            ("salt", self.salt),
            ("created", self.created.to_string()),
            ("updated", self.updated.to_string())
        ];
        let _: () = connection.hset_multiple(&id, &r_user)?;
        // Add email lookup index
        let _: i32 = connection.zadd(LOOKUP, format!("{}:{}", email, id), 0)?;
        Ok(())
    }
}
In order to serialize the User, we first convert the id from Uuid to String, and we clone the email in order to borrow it twice.
We construct an array out of the info, the array being a list of tuples in the form (key: String, value: String).
Then we pass the array to hset_multiple(), setting a hash whose key corresponds to the User's id and whose value is the array we constructed.
After this, we also add the email and id to the lookup index with zadd(). The lookup index is a Redis sorted set; you can read more on its use as a lookup index in this article: Secondary indexing with Redis.
The index works like this: the key is the lookup index name, the value is the pair email:id, and we have to attach to it a score of 0; this is because Redis has a function to search by value among all the values that share the same score. In this way, by searching for the email (the first part of the value), we also get the id (the second part).
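To make the trick concrete, here is what the index looks like from redis-cli (values made up; we will meet zrangebylex again in the last route):
127.0.0.1:6379[1]> zadd email_lookup 0 "me@mail.com:9f3d9ab7-7d8f-4369-9861-90b2d14aaa66"
(integer) 1
127.0.0.1:6379[1]> zrangebylex email_lookup "[me@mail.com" "(me@mail.com\xff"
1) "me@mail.com:9f3d9ab7-7d8f-4369-9861-90b2d14aaa66"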
All along we use anyhow to manage the errors, so we can use the ? operator instead of matching or (never to be done in production) unwrap()-ing.
Now we also need to deserialize:
impl User {
    fn from_redis(connection: &mut Conn, id: &String) -> AnyResult<Self> {
        let r_user: HashMap<String, String> = connection.hgetall(id)?;
        let r_user_id = Uuid2::parse_str(&*id)?;
        let r_user_name: &String = r_user.get(&"name".to_string()).ok_or(anyhow!(""))?;
        let r_user_email: &String = r_user.get(&"email".to_string()).ok_or(anyhow!(""))?;
        let r_user_hashed_password: &String = r_user.get(&"hashed_password".to_string()).ok_or(anyhow!(""))?;
        let r_user_salt: &String = r_user.get(&"salt".to_string()).ok_or(anyhow!(""))?;
        let r_user_created: &String = r_user.get(&"created".to_string()).ok_or(anyhow!(""))?;
        let r_user_updated: &String = r_user.get(&"updated".to_string()).ok_or(anyhow!(""))?;
        let created: DateTime<Utc> = r_user_created.parse()?;
        let updated: DateTime<Utc> = r_user_updated.parse()?;
        Ok(User {
            id: r_user_id,
            name: r_user_name.to_owned(),
            email: r_user_email.to_owned(),
            hashed_password: r_user_hashed_password.to_owned(),
            salt: r_user_salt.to_owned(),
            created,
            updated,
        })
    }
}
hgetall() returns a HashMap<String, String>.
From it we get all the info we serialized; for error handling we use the anyhow!() macro, which lets us create a String error as a one-off error message.
We parse the id string into a Uuid with the uuid crate, since the one provided by rocket_contrib does not have the parse_str() function. It would be nice if rocket_contrib exported the current version of uuid, but for now let's get the best out of it...
We also have to parse the two timestamps, because in the HashMap we only have String values.
Once we have everything in order, we can return the well-constructed User.
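Putting the two methods together, a hypothetical round trip (not a function from the repo, just an illustration) would read:
// Save a user as a Redis hash, then read the same hash back by id
fn round_trip(connection: &mut Conn, user: User) -> AnyResult<User> {
    let id = user.id.to_string();
    user.to_redis(connection)?;       // serialize
    User::from_redis(connection, &id) // deserialize
}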
Finally, the routes
The GET /users route will not use the serialization/deserialization mechanism. It will not even use the regular redis Commands trait, because the command we need is not implemented there, so we have to make a manual call.
#[get("/users")]
pub fn user_list_rt(mut connection: Conn) -> ApiResponse {
let connection = &mut *connection;
let connection_raw: &mut r2d2_redis::redis::Connection = &mut *connection;
let users_keys: Result<i32, _> = redis::cmd("DBSIZE").query(connection_raw);
match users_keys {
Ok(mut user_size) => {
if user_size >= 2 {user_size -=1 };
ApiResponse::ok(json!([user_size]))
},
Err(_) => ApiResponse::internal_err(),
}
}
In order to perform a manual command, we need to get a Connection object out of the Conn we created, which involves two levels of dereferencing. Once we have it, we construct the command with cmd(), which is lower-level and is used with a builder pattern. So we also have to pass it the Connection we will use, through the query() method. If the command had arguments, those would be passed with arg().
Notice that we need to decrease the number we get by 1 when it is 2 or more. This is because, as soon as even 1 user is set, the lookup key gets set as well: with no users the DB holds 0 keys, while with n users it holds n+1 keys (for example, 1 user means 2 keys, so we report 1).
We have to think of all the edge cases.
To create a new user with POST /users, we will first get a User from the InsertableUser in the body of the request, as usual; then we will just serialize it with our custom serialization method:
#[post("/users", format = "json", data = "<user>")]
pub fn new_user_rt(mut connection: Conn, user: Json<InsertableUser>) -> ApiResponse {
let ins_user = User::from_insertable((*user).clone());
match ins_user.clone().to_redis(&mut connection){
Ok(_) => ApiResponse::ok(json!(ResponseUser::from_user(&ins_user))),
Err(_) => ApiResponse::internal_err(),
}
}
That was easy.
The GET /users/<id> is easy as well:
#[get("/users/<id>")]
pub fn info_user_rt(mut connection: Conn, id: Uuid) -> ApiResponse {
let id = id.to_string();
match User::from_redis(&mut connection, &id){
Ok(user) => ApiResponse::ok(json!(ResponseUser::from_user(&user))),
Err(_) => ApiResponse::err(json!(format!("id {} not found", id))),
}
}
In fact, it is just a matter of deserialization.
Next, the PUT /users/<id>:
#[put("/users/<id>", format = "json", data = "<user>")]
pub fn update_user_rt(mut connection: Conn, user: Json<InsertableUser>, id: Uuid) -> ApiResponse {
let id = id.to_string();
match User::from_redis(&mut connection, &id){
Ok(user_from_redis) =>{
let mut user_to_redis = user_from_redis.clone();
if user_to_redis.match_password(&user.password) {
let _res_lookup: Result<i32, _> = connection.zrem(LOOKUP, format!("{}:{}", user_from_redis.email, id));
let insert_user = user_to_redis.update_user(&user.name, &user.email);
match insert_user.clone().to_redis(&mut connection) {
Ok(_) => ApiResponse::ok(json!(ResponseUser::from_user(&insert_user))),
Err(_) => ApiResponse::internal_err(),
}
}
else { ApiResponse::err(json!("user not authenticated")) }
},
Err(_) => ApiResponse::err(json!(format!("id {} not found", id)))
}
}
This requires us first to deserialize a user, then to check that the password we got in the request body is correct, and then to update the user. Finally, we serialize the updated user. Notice that if the key already exists, Redis will just overwrite it with the new information.
Notice, though, that each time before we re-serialize the user we need to remove its email from the lookup table, otherwise there will be more than one email registered. Worst of all is if the user updates the email field: in that case we would have the old and the new email both pointing to the same uuid.
The method we employ is prone to failure, and there is no check that it actually failed. However, this is a problem that matters more in bigger database settings.
We could go about this in a more robust way here, but let's face it: if you think this will be a big problem, because it can actually happen many times, then maybe you should consider that Redis is not the right solution in the first place. Time to switch to a full-fledged DBMS.
In any case, you could write a script that once in a while (a weekly cron job, maybe?) goes over the whole DB and prunes it, signalling incorrectness, inconsistency, missing or fragmented data, and so on... It's actually good practice; see the sketch below.
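Just to give the idea, here is a hedged sketch of such a pruning pass (hypothetical, not in the repo; it reuses LOOKUP, Conn, and AnyResult from above): it walks the lookup index and drops every entry whose user hash has disappeared.
use r2d2_redis::redis::Commands;

// Illustrative pruning pass over the lookup index
fn prune_lookup(connection: &mut Conn) -> AnyResult<()> {
    let entries: Vec<String> = connection.zrange(LOOKUP, 0, -1)?;
    for entry in entries {
        // entries are "email:uuid"; the uuid is the user's hash key
        if let Some(id) = entry.split(':').nth(1) {
            let alive: bool = connection.exists(id)?;
            if !alive {
                let _: i32 = connection.zrem(LOOKUP, &entry)?;
            }
        }
    }
    Ok(())
}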
As for the DELETE /users/<id> route, we extract and match the password as we did for the PUT:
#[delete("/users/<id>", format = "json", data = "<user>")]
pub fn delete_user_rt(mut connection: Conn, user: Json<UserPassword>, id: Uuid) -> ApiResponse {
let id = id.to_string();
match User::from_redis(&mut connection, &id){
Ok(user_from_redis) =>{
if user_from_redis.match_password(&user.password) {
let res: Result<i32, _> = connection.del(&id);
let _res_lookup: Result<i32, _> = connection.zrem(LOOKUP, format!("{}:{}", user_from_redis.email, id));
match res {
Ok(_) => ApiResponse::ok(json!(ResponseUser::from_user(&user_from_redis))),
Err(_) => ApiResponse::internal_err(),
}
}
else { ApiResponse::err(json!("user not authenticated")) }
},
Err(_) => ApiResponse::err(json!(format!("id {} not found", id)))
}
}
As you can see, if the password matches, we send a del(&id) over the connection to delete the key altogether (with all its values). The lookup entry is taken care of as well.
The PATCH /users/<id> route is very similar to the PUT /users/<id> route:
#[patch("/users/<id>", format = "json", data = "<user>")]
pub fn patch_user_rt(mut connection: Conn, user: Json<UserPassword>, id: Uuid) -> ApiResponse {
match &user.new_password {
Some(passw) => {
let id = id.to_string();
match User::from_redis(&mut connection, &id){
Ok(mut user_from_redis) =>{
if user_from_redis.clone().match_password(&user.password) {
let insert_user = user_from_redis.update_password(&passw);
let _res_lookup: Result<i32, _> = connection.zrem(LOOKUP, format!("{}:{}", user_from_redis.email, id));
match insert_user.clone().to_redis(&mut connection) {
Ok(_) => ApiResponse::ok(json!("Password updated")),
Err(_) => ApiResponse::internal_err(),
}
}
else { ApiResponse::err(json!("user not authenticated")) }
},
Err(_) => ApiResponse::err(json!(format!("id {} not found", id))),
}
},
None => ApiResponse::err(json!("Password not provided"))
}
}
The only difference from the PUT route is that in this case we do not call update_user() but update_password(). Here too, since we serialize the User again, we need to clean up the lookup index. Of course, we start off by checking whether the new_password was given at all; otherwise, we abort the procedure with an error, without even making a call to the Redis service.
The last route, the rank-2 GET /users/<email> route, which gets us a user based on the email, finally uses the lookup index to retrieve things:
#[get("/users/<email>", rank = 2)]
pub fn id_user_rt(mut connection: Conn, email: String) -> ApiResponse {
let get_item: Result<Vec<String>, _> = connection.zrangebylex(LOOKUP, format!("[{}", &email), format!("({}\\xff", &email));
match get_item {
Ok(lookup_vector) => {
if lookup_vector.is_empty(){
return ApiResponse::err(json!(format!("user {} not found", &email)));
}
let split = lookup_vector[0].split(":").collect::<Vec<&str>>();
let id = split[1].to_string();
match User::from_redis(&mut connection, &id){
Ok(user) => ApiResponse::ok(json!(ResponseUser::from_user(&user))),
Err(_) => ApiResponse::err(json!(format!("user {} not found", &email))),
}
},
Err(_) => ApiResponse::internal_err()
}
}
At the start we make a call to the lookup key. We use zrangebylex(), which takes a range of sub-strings to search for. It is somewhat similar to a regex, but it has a min argument, which sets the beginning of the range, and a max argument, which says where the range stops.
We start with the email. This way it will search for all keys starting with the email; for example, [me@mail.com will get all keys that start with those characters, so it can match, for example:
me@mail.com
me@mail.com:A
me@mail.com:AA
me@mail.com:B
me@mail.com:BA
(...)
The second parameter sets the stop. If we gave it the value (me@mail.com:B, it would get the following subset of the above:
me@mail.com
me@mail.com:A
me@mail.com:AA
If we set it with a \xff, as in our case (backslash escaped), it will get all the keys starting with [me@mail.com, whatever the characters afterwards. This means it will search for all the uuids that are set for that email (remember, each email is unique, so it will get us the pair email:uuid we are searching for). It will get it as a String, though, not yet split up; but we will take care of that.
The actual zrangebylex() method gets us a Result wrapping a Vec<String>, because there might be more than one result.
The rest is easy: if we do not have a (connection) error, we either have an empty vector, which means no user corresponding to that email was found, or we have a user in the vector. We pick up the String and split it at the colon; we find the uuid in the second sub-string and use it to get the user, just as in the GET /users/<id> route. At the end of the whole process, we return that user.
And that closes our routes, and our tour of integration between Rocket and Redis.
What is Wrong with our server so far...
Now we should build, then run the tests and see that everything works correctly!
Well, actually no: the tests FAIL!!!
How come?
There are two reasons these tests fail:
First of all, the tests do not delete the database entries they create, so we always need to do it by hand. We should really delete each User we insert, after testing it.
But more importantly, our system requires the email to be unique, yet it does not enforce this, rendering everything useless.
Apart from this, since we used some repeated users in both basic_test.rs and failures_test.rs (my bad copypasta!), we already have users with duplicated info. In MongoDB this is not a problem, since we pick up the first user with the correct info... but with Redis it is, since we are enforcing an email-based index...
Therefore, if we simply run:
cargo test
we see a series of failures. The point is that every time to_redis() runs with an email that has already been used, it keeps adding a new record with the same email, plus another index entry with the same email but a different id. This leads to the failure of get_rank_fail and id_user_rt_fail, since the system finds a user when it actually should have found none!
We can see it easily in the redis-cli:
127.0.0.1:6379[1]> ZRANGE email_lookup 0 -1
1) "j.doe@m.com:9f3d9ab7-7d8f-4369-9861-90b2d14aaa66"
2) "j85@m.com:fb886446-256d-4723-8745-ad6369c0d212"
3) "jack.doe@m.com:76cc101c-74e3-439a-ae72-520faf31a391"
4) "jane.doe@m.com:472921bb-4cf7-4cdc-b9dc-4ba3263b34fb"
5) "jane.doe@m.com:63db7cce-1c0c-4111-96a9-38ce1b3f706a"
6) "janet.doe@m.com:08fc9080-cf8c-4acd-baf4-6156b1d8b7dc"
7) "janet.doe@m.com:c375b7dd-006b-44f9-ac93-8eb973643ba0"
8) "jkd@m.com:b0dbca02-e6a4-4b45-87f8-1d7ac71e623f"
9) "jondon@m.com:647f8aa3-4f57-4c29-8fb4-3d750b6c1726"
10) "jondon@m.com:683b7520-3fd1-42fe-8239-bf13b2ce4a22"
A ZRANGE on the email lookup reveals that there are at least three different emails, each used twice!
To run the tests and make them pass, we should run each suite separately, flushing the db with redis-cli in between:
127.0.0.1:6379[1]> flushdb
OK
So we run separately and flush:
cargo test --test basic_test # now FLUSHDB on redis-cli
cargo test --test failures_test # now FLUSHDB on redis-cli
cargo test --test persistency_test
Every test passes. Yeah!
But this is not the desired result!
We should change the tests to remove each user inserted after testing it; but also, we should enforce in our code the rejection of already-used emails, as is usually done in these cases.
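Just to anticipate, a hedged sketch of such an email check (hypothetical; the real fix comes next time) could interrogate the lookup index before serializing a new user:
// Hypothetical guard: is this email already in the lookup index?
fn email_taken(connection: &mut Conn, email: &str) -> AnyResult<bool> {
    // same lexical range as in id_user_rt()
    let hits: Vec<String> = connection.zrangebylex(
        LOOKUP,
        format!("[{}", email),
        format!("({}\\xff", email),
    )?;
    Ok(!hits.is_empty())
}
new_user_rt() could then call it first, and refuse the insertion with an error response when it returns true.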
However, this will be a matter for the next tutorial, together with another useful feature we will implement... so stay tuned!!!