I have a lot of conceptual doubts about the microservices architecture and the implementation on an MVC framework, in this case Laravel.
Suppose that I have a Blog platform with the given ER Database.
And I want to change this system to a microservices architecture. Based on what I have been reading, I could go with something like this:
In this case each Microservice is a Laravel installation (or it can be Node.js, .net-core, etc).
Now my question is about how to access each service. For example, suppose the Blog App is a Laravel application and a user visits http://blog.test/1, meaning they want to see the post with id 1.
How should the controller behave?
From what I understand, the controller might look something like this:
<?php

namespace App\Http\Controllers;

use GuzzleHttp\Client;
use Illuminate\Http\Request;
use Illuminate\Routing\Controller as BaseController;

class BlogController extends BaseController
{
    public function show(Request $request, $id)
    {
        $client = new Client();

        // Get the blog post (decode as an associative array so
        // $blog['users_id'] works below)
        $blogResponse = $client->request('GET', "http://post-api.blog.test/api/blog/{$id}");
        $blog = json_decode($blogResponse->getBody(), true);

        // Get the comments
        $commentsResponse = $client->request('GET', "http://comments-api.blog.test/api/comments/{$id}");
        $comments = json_decode($commentsResponse->getBody(), true);

        // Get the author
        $userResponse = $client->request('GET', "http://users-api.blog.test/api/user/{$blog['users_id']}");
        $user = json_decode($userResponse->getBody(), true);

        return view('blog.show', [
            'blog'     => $blog,
            'comments' => $comments,
            'author'   => $user,
        ]);
    }
}
My questions are:
- Is this the correct way to implement it?
- I know I can cache some responses in order to access the resources faster, but are HTTP APIs the way to implement it, or is there another piece of technology I'm not seeing?
I know you probably don't need microservices for a blog, but this is just an example of how to implement this paradigm.
Thanks!
Top comments (7)
For a really basic implementation that should be OK.
Usually, though, each microservice will publish some client (as a package/library) that the other services will use.
As far as caching goes, yes - HTTP is a good start. You could also play around with Redis etc. But many times HTTP does the trick, and it's nice because caching is built into the protocol you are using.
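As a minimal sketch of what "caching built into the protocol" means in practice (plain PHP, framework-agnostic; the payload is made up), a service response can carry standard caching headers that every client, proxy, and CDN already understands:

```php
<?php
// Sketch: let HTTP itself do the caching via response headers.
$body = json_encode(['id' => 1, 'title' => 'First post']); // hypothetical payload
$etag = '"' . md5($body) . '"';

header('Cache-Control: public, max-age=60'); // clients/proxies may cache for 60s
header('ETag: ' . $etag);                    // enables conditional revalidation

// A repeat request can send If-None-Match; answer 304 with no body if unchanged.
if (($_SERVER['HTTP_IF_NONE_MATCH'] ?? null) === $etag) {
    http_response_code(304);
} else {
    echo $body;
}
```

The nice part is that the calling service needs zero extra code: any HTTP client that honors `Cache-Control`/`ETag` gets the caching for free.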
Also, consider that each service should define its own view components. So just like each service exposes a client, they would expose some view library as well (there are many ways to do that depending on the language, framework, etc.)
It's hard stuff!
I personally am a fan of an approach of Udi Dahan's, which is to make everything run in the same run-time. But that's another topic for another day. If you're interested, check out this primer done by Simon Brown.
Thank you very much for your answer. I was wondering if I was missing a piece of technology in the stack, but as far as I can see the communication with the services can be done through HTTP requests.
Yes - just using HTTP works fine. You will find that microservices which are built with statically typed languages - like Java or C# - will typically expose client libraries as I had mentioned.
That way, any specific service doesn't need to know the URL of another service and all the parameters, etc. They can just use the client, which acts as a contract for accessing it.
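A published client for the Post service might look roughly like this (the class name, base URI, and endpoint path are assumptions taken from the question's example; a real one would ship as a Composer package):

```php
<?php
// Hypothetical PostClient the Post service could publish.
// Callers depend on this contract instead of hard-coding URLs everywhere.
class PostClient
{
    public function __construct(
        private \GuzzleHttp\Client $http,
        private string $baseUri = 'http://post-api.blog.test'
    ) {}

    public function find(int $id): array
    {
        // The URL and response format live here, in one place.
        $response = $this->http->request('GET', "{$this->baseUri}/api/blog/{$id}");
        return json_decode((string) $response->getBody(), true);
    }
}
```

The controller then just calls `$postClient->find($id)` and never touches the URL or the HTTP details; if the Post service changes its routes, only the client package changes.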
Using HTTP calls directly in code like you're doing is OK for smaller projects. But then, microservices aren't really useful for smaller projects... lol
Other issues that come up are service discovery (usually services don't have a hard-coded IP or machine name, and are on a local network rather than exposed at public URLs), dealing with failures (when service A is down and service B is trying to call it - what should it do?), using a pub/sub or message bus, etc.
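For the "service A is down" case, one minimal option is to fail fast and degrade gracefully. A sketch (the timeouts, URL, and empty-list fallback are all assumptions, not a prescription):

```php
<?php
// Sketch: call the comments service defensively. If it's down or slow,
// render the post without comments instead of taking the whole page down.
function fetchComments(\GuzzleHttp\Client $client, int $postId): array
{
    try {
        $response = $client->request('GET', "http://comments-api.blog.test/api/comments/{$postId}", [
            'connect_timeout' => 1, // give up quickly if the service is unreachable
            'timeout'         => 2, // don't let a slow service hang the page
        ]);
        return json_decode((string) $response->getBody(), true) ?? [];
    } catch (\GuzzleHttp\Exception\GuzzleException $e) {
        return []; // graceful degradation: an empty comment list
    }
}
```

Real systems usually go further (retries, circuit breakers), but even this keeps one dead service from cascading into a dead page.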
Okay, well, there are basically two major ways of inter-communication between your microservices.
The first is RPC (Remote Procedure Call), which is what's being used here in yours - making HTTP requests to other microservices. I would advise this method of inter-communication when what you need from the other microservice is synchronous and needed immediately, e.g. fetching all posts from the Post microservice.
The second is AMQP (Advanced Message Queuing Protocol), which I would advise when one microservice needs to communicate with another asynchronously. For example, your User microservice needs to send an email to a newly registered user; that email should be sent in the background (asynchronously) by the Email microservice, so you can use AMQP to pass the new registration to the Email microservice (there is a library for this for PHP, and a wrapper package for Laravel/Lumen).
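A rough sketch of that AMQP hand-off using the php-amqplib package (the queue name, broker host, credentials, and payload shape are all assumptions for illustration):

```php
<?php
// Hypothetical: the User service queues an email job; the Email service
// consumes it in the background. Requires the php-amqplib package.
use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

function buildEmailJob(string $email, string $template): string
{
    // Serialize the job the Email microservice will pick up.
    return json_encode(['email' => $email, 'template' => $template]);
}

function publishEmailJob(string $payload): void
{
    $connection = new AMQPStreamConnection('rabbitmq.local', 5672, 'guest', 'guest');
    $channel = $connection->channel();
    $channel->queue_declare('email.jobs', false, true, false, false); // durable queue
    // delivery_mode 2 = persistent message, so it survives a broker restart
    $channel->basic_publish(new AMQPMessage($payload, ['delivery_mode' => 2]), '', 'email.jobs');
    $channel->close();
    $connection->close();
}
```

Registration then just calls `publishEmailJob(buildEmailJob($email, 'welcome'))` and returns immediately; the Email service sends the mail whenever it gets around to it.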
But in general: if your application is small, just go for the regular monolith; if it's complex, then consider the microservice architecture.
Your problem is that you've gone in the wrong direction when designing microservices. In fact, you're trying to solve the very problem that most microservice system architects try to avoid: inter-communication among microservices.
Why do they try to avoid this? Let me be clear: there is more than one way to implement this communication, BUT doing so makes your system less resilient, as one microservice depends on the other(s). Another important reason is that no matter what kind of solution we have, IT IS SLOW. One should not assume that the network is reliable or that the servers are highly available and responsive. And that's before mentioning that you're talking in the PHP context, where every external service call requires your application framework, i.e. Laravel/Lumen, to boot up - and that's really, really slow.
Now, what communication options do we have? What you did is HTTP request/response communication, which is one of the two options. HTTP is synchronous, and the communication is one-to-one. In some cases this is the right choice, e.g. when you rely on an external service to query data. But, again, avoid it as much as possible. In your case, your main service is trying to compose data from multiple sources, put it in a big bag, and throw it back to the client. If you rethink the situation, your client could instead make 3 calls (or 1 composite call) to 3 separate APIs.
Each microservice has to be self-contained and autonomous.
The other option is sending asynchronous messages. This communication pattern is used when you need to propagate data update through the system. Kafka or RabbitMQ are popular message brokers used to facilitate this kind of communication.
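The consuming side of that pattern might look like this with php-amqplib (the queue name, event shape, and handler are made up; a real handler would persist the change locally):

```php
<?php
// Hypothetical consumer: react to 'post.updated' events pushed through the
// broker, instead of being called synchronously by another service.
use PhpAmqpLib\Connection\AMQPStreamConnection;

function applyPostUpdate(string $payload): array
{
    // Decode the propagated change; a real handler would write it to
    // this service's own datastore.
    return json_decode($payload, true);
}

// Guarded so the script is harmless when the library/broker is absent.
if (class_exists(AMQPStreamConnection::class)) {
    $connection = new AMQPStreamConnection('rabbitmq.local', 5672, 'guest', 'guest');
    $channel = $connection->channel();
    $channel->queue_declare('post.updated', false, true, false, false);
    $channel->basic_consume('post.updated', '', false, true, false, false,
        fn($msg) => applyPostUpdate($msg->body));
    while (count($channel->callbacks)) {
        $channel->wait(); // block until the next event arrives
    }
}
```

The key property is that the producer doesn't know or care whether this consumer is up right now; the broker buffers events until it is.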
Did you finish this project? Could you share the code with me?
You can also implement this on Kubernetes, as stateless applications behind an ingress load balancer. I use this approach as well, but response time can be the downfall. For example, hitting the comments API directly returns in milliseconds, but going through the blog service (like the one in your example above) may take 2-4 seconds.
You may also try creating an API gateway, though it may defeat the purpose of microservices. I haven't tried Apigee before, but I think it's worth a shot.