Table of Contents
- How Simple Engineering Works
- What Happens When We Decide to Scale or Mature the Project
- What the Heck is BFF?
- How BFF Helps with Scaling the Project
- Using BFF in the Real World
- Benefits of Using BFF
How Simple Engineering Works
- Frontend makes a call to Backend APIs.
- Backend APIs process the request and send a response back.
- The Frontend renders the response accordingly.
This works pretty well when you're building an MVP.
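As a minimal sketch, assuming a hypothetical `/api/orders` endpoint and `Order` shape, the two-tier flow could look like this on the client:

```typescript
// Minimal two-tier flow: the frontend calls the backend API directly.
// The endpoint and Order shape here are hypothetical.
type Order = { id: string; total: number };

async function loadOrders(): Promise<Order[]> {
  // 1. Frontend makes a call to a backend API.
  const response = await fetch("/api/orders");
  if (!response.ok) throw new Error(`Request failed: ${response.status}`);

  // 2. Backend processes the request and sends the response back.
  const orders: Order[] = await response.json();

  // 3. Frontend renders the response accordingly (left to the UI layer).
  return orders;
}
```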
What Happens When We Decide to Scale or Mature the Project 🚀
- Backend APIs start following the Single Responsibility Principle.
- APIs can be grouped together in services, or microservices can be introduced.
- Client-side code becomes more complex.
- Frequent changes in request-response structures become common between frontend and backend.
- More data gets loaded into the frontend.
- When addressing multiple client types (e.g., web and mobile), APIs become more bloated.
- Multiple concurrent requests go to the backend to address page UI demands.
This can become as overwhelming as my girlfriend’s never-ending list of demands. So, to help, one of my senior team-mates introduced me to the concept of BFF. Whether in real life or code, BFFs always have your back.
What the Heck is BFF? 🤔
A Backend for Frontend (BFF) is a mediator that tailors backend services to the specific needs of different clients. Think of it as the best friend who filters all the chaos and serves you only what you need. The frontend calls a BFF server API endpoint, and then the BFF does the heavy lifting, calling all the backend services required.
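A minimal sketch of such a mediator, assuming an Express-based BFF and two hypothetical backend services (`user-service` and `order-service`):

```typescript
import express from "express";

const app = express();

// Hypothetical backend service URLs; in a real setup these would come from config.
const USER_SERVICE = "http://user-service/api/users";
const ORDER_SERVICE = "http://order-service/api/orders";

// One BFF endpoint tailored to one page: the frontend makes a single call here.
app.get("/bff/profile-page/:userId", async (req, res) => {
  const { userId } = req.params;

  // The BFF does the heavy lifting: it fans out to the backend services it needs.
  const [user, orders] = await Promise.all([
    fetch(`${USER_SERVICE}/${userId}`).then((r) => r.json()),
    fetch(`${ORDER_SERVICE}?userId=${userId}`).then((r) => r.json()),
  ]);

  // ...and responds with only what this page actually needs.
  res.json({ name: user.name, recentOrders: orders.slice(0, 5) });
});

app.listen(4000);
```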
How BFF Helps with Scaling the Project 🛠️
Why maintain another mediator server (the BFF, in this case)? Why not stick to the old and proven two-tier architecture?
Because now we care about user experience! We’re no longer an MVP and can’t be running around like headless chickens.
Our backend APIs follow the Single Responsibility Principle, but our UI needs a lot of data, which means calling many backend APIs. The catch is that most browsers only allow 6 concurrent HTTP/1.1 connections per domain. You can read more about this limitation in this Stack Overflow thread. So, if you have a dashboard showing data from 3-4 services, each with several APIs, the browser can only handle 6 requests at a time; once those finish, it processes the next 6. And that's before counting the initial script loading!
Enter BFF: The client makes a single call to the BFF, and the BFF makes all the calls to all the services without the parallel connection limitations. Problem solved!
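On the client side, the difference boils down to one request instead of many; the `/bff/dashboard/...` endpoint below is hypothetical:

```typescript
// Before: many requests competing for the browser's per-domain connection pool.
// After: a single request to the BFF, which fans out to the services server-side.
async function loadDashboard(teamId: string): Promise<unknown> {
  const response = await fetch(`/bff/dashboard/${teamId}`);
  if (!response.ok) throw new Error(`BFF request failed: ${response.status}`);
  return response.json(); // everything the page needs, in one payload
}
```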
But what if the number of concurrent connections wasn’t the issue? Should you still use the BFF pattern? My answer is a resounding yes!
Imagine you have a dashboard showing data from 3-4 services. You’ll likely be doing some client-specific computation, adaptation, or transformation of data before showing it to the end user. For small chunks, doing this on the client works well. But as the data transformations get more complex, it becomes a bottleneck for user experience. JavaScript is single-threaded, and the UI starts to feel laggy or jittery because the browser is busy transforming data instead of serving the UI.
A simple fix: Move all the presentation, adaptation, or transformation logic to the BFF. This way, the client's browser is free to focus on rendering content. Plus, the BFF adapts to any changes in backend API response structures, so the frontend sees a consistent response structure.
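As a rough sketch (the deployment shape and field names are made up for illustration), the transformation the browser used to do can live in the BFF instead:

```typescript
// Raw shape returned by a hypothetical deployments service.
type Deployment = { date: string; durationMs: number };

// Shape the UI actually renders: one point per day, already averaged.
type ChartPoint = { day: string; avgDurationMs: number };

// Runs in the BFF, so the browser only has to render the result.
function toChartPoints(deployments: Deployment[]): ChartPoint[] {
  const byDay = new Map<string, number[]>();
  for (const d of deployments) {
    const day = d.date.slice(0, 10); // "YYYY-MM-DD"
    byDay.set(day, [...(byDay.get(day) ?? []), d.durationMs]);
  }
  return [...byDay.entries()].map(([day, durations]) => ({
    day,
    avgDurationMs: durations.reduce((sum, ms) => sum + ms, 0) / durations.length,
  }));
}
```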
But what about the BFF API becoming too generalized or bloated? Doesn’t it need to follow the Single Responsibility Principle too?
The Single Responsibility Principle works a bit differently for a BFF. A single BFF server endpoint should ideally load or update data related to a single UI page. Different client types (like mobile and web) can have their own BFF servers, and each BFF server sends back a thin, tailored response to its client. Mobile has less real estate than web, so its BFF can easily trim the response.
So, while BFF server APIs might seem bloated and generalized compared to backend service APIs, they still follow the Single Responsibility Principle.
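For instance, a web BFF and a mobile BFF could project the same aggregated data differently; the shapes below are purely illustrative:

```typescript
// Data aggregated from backend services (illustrative shape).
type TeamMetrics = {
  teamName: string;
  deployments: { date: string; status: string }[];
  incidents: { openedAt: string; resolvedAt?: string }[];
};

// Web BFF: the page has room for detail, so it passes most fields through.
function webResponse(metrics: TeamMetrics) {
  return metrics;
}

// Mobile BFF: less screen real estate, so it trims down to headline numbers.
function mobileResponse(metrics: TeamMetrics) {
  return {
    teamName: metrics.teamName,
    deploymentCount: metrics.deployments.length,
    openIncidents: metrics.incidents.filter((i) => !i.resolvedAt).length,
  };
}
```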
Using BFF in the Real World 🌍
Setting Up and Exploring an Open-Source Project
- Project Setup:
  - Follow the steps in the project's README to set up the project.
  - Once the setup is complete and the server is running, visit https://localhost:3000/dora-metrics.
- Open Network Tab:
  - Open your browser's developer tools (press `F12` or `Ctrl+Shift+I`).
  - Go to the 'Network' tab to monitor API calls.
- Observe API Calls:
  - Notice that the page loads most of its data from the response of `/internal/team/${team_id}/dora_metrics`.
  - This is a BFF API endpoint responsible for loading all the required data for the `/dora-metrics` route page.
- Examine the Codebase:
  - Navigate to the code for the `/internal/team/[team_id]/dora_metrics` module.
  - Observe the concurrent invocation of multiple backend services using `Promise.all`.
- Understand the Benefits:
  - Under the hood, we are calling 5 different backend services and making a total of 18 API calls.
  - If the same number of calls were made from the frontend, it could take 3x more time than using the BFF approach.
Following these steps will help you understand how the BFF pattern is implemented in a real-world project and appreciate the benefits it brings in terms of performance and user experience.
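To give a feel for the fan-out, here is a simplified illustration of a page-level BFF handler; this is not the actual code from the repository, and the service helpers below are hypothetical:

```typescript
// Hypothetical service clients; in a real project these would call separate backend services.
const fetchJson = (url: string) => fetch(url).then((r) => r.json());

const fetchTeamInfo = (teamId: string) =>
  fetchJson(`http://team-service/teams/${teamId}`);
const fetchDeployments = (teamId: string) =>
  fetchJson(`http://deployments-service/teams/${teamId}/deployments`);
const fetchLeadTimes = (teamId: string) =>
  fetchJson(`http://code-service/teams/${teamId}/lead-time`);
const fetchIncidents = (teamId: string) =>
  fetchJson(`http://incident-service/teams/${teamId}/incidents`);
const fetchChangeFailures = (teamId: string) =>
  fetchJson(`http://deployments-service/teams/${teamId}/change-failures`);

// The BFF endpoint for the dashboard page: one client call, concurrent fan-out server-side.
async function getDoraMetricsForTeam(teamId: string) {
  const [teamInfo, deployments, leadTimes, incidents, changeFailures] =
    await Promise.all([
      fetchTeamInfo(teamId),
      fetchDeployments(teamId),
      fetchLeadTimes(teamId),
      fetchIncidents(teamId),
      fetchChangeFailures(teamId),
    ]);

  // Combine everything into a single payload tailored to the page.
  return { teamInfo, deployments, leadTimes, incidents, changeFailures };
}
```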
Benefits of Using BFF 🌟
- Improved User Experience: By offloading complex data transformations and aggregations to the BFF, the client's browser can focus on rendering, resulting in a smoother UI experience.
- Optimized API Calls: The BFF can aggregate multiple backend API calls into a single call, reducing the number of requests the client needs to make.
- Tailored Responses: Different clients (web, mobile) can have their specific BFF servers, ensuring that each client receives data tailored to their needs.
- Adaptability: The BFF can adapt to changes in backend API structures, providing a consistent interface to the frontend.
In conclusion, adopting the BFF pattern can significantly improve the scalability, maintainability, and user experience of your application as it evolves from an MVP to a mature product. And remember, whether in code or life, a good BFF always has your back!
If you found this article helpful and enjoyed exploring the BFF pattern, please consider starring the middleware repository. Your support helps us continue improving and sharing valuable content with the community. Thank you!
middlewarehq / middleware — ✨ Open-source DORA metrics platform for engineering teams ✨
Middleware is an open-source tool designed to help engineering leaders measure and analyze the effectiveness of their teams using the DORA metrics. The DORA metrics are a set of four key values that provide insights into software delivery performance and operational efficiency.
They are:
- Deployment Frequency: The frequency of code deployments to production or an operational environment.
- Lead Time for Changes: The time it takes for a commit to make it into production.
- Mean Time to Restore: The time it takes to restore service after an incident or failure.
- Change Failure Rate: The percentage of deployments that result in failures or require remediation.
Top comments (26)
Didn't know about the concurrent requests limit at browser level. Every day one learns something new. Thanks.
Thanks @aloisseckar, glad I was able to help 🙂
I remember joining as an intern and asking why the heck we have a backend server like a BFF apart from the services that serve data using business logic. I wish I had this article back then. This is simply the best article explaining Backend for Frontend-style architecture!
Absolutely! This is awesome stuff. 👌
HTTP has a limit on concurrent requests, around 6-8 at most, depending on the browser.
HTTP/1 has this limitation, but HTTP/2 allows up to 100 concurrent requests by default, and since it supports multiplexing, many say it's effectively unlimited.
HTTP 2 has the same limitations too.
Except that due to multiplexing you're unlikely to create multiple separate connections and hence hit that limit.
But I'm being pedantic.
You're not wrong.
Pure class 🔥
Thanks @sankha_fb73cb9670f857fb60, learning from you!!!
This definitely works if you have just one frontend app but most orgs that have multiple ones typically use an API Gateway pattern, where it's a separate service, not tied to the frontend code. In those cases BFF usually becomes redundant.
@rcls Interesting perspective! API Gateways are indeed robust solutions for managing multiple frontends by providing a centralized entry point for API requests, handling tasks like routing, authentication, and rate limiting. They work well for general API management across different applications.
However, a BFF can still be very beneficial. By managing specific frontend logic and delivering tailored responses, a BFF customizes data and services for each client type, whether it's a web app, mobile app, or another interface. This helps optimize performance by reducing data transmission and minimizing client-side processing.
For instance, a mobile app might need different data or formats compared to a web application. A BFF can preprocess and transform data to fit these needs, ensuring each frontend gets precisely what it requires. This offloads specific logic and data aggregation tasks from the frontend, simplifying development.
Moreover, a BFF can handle client-specific use cases like caching, error handling, and response shaping, enhancing user experience and application performance. While an API Gateway provides a strong foundation for API management, incorporating a BFF can offer optimized and specific support for each frontend, improving overall system performance and scalability.
Epic read ser!
This image made it pretty intuitive to understand. Thanks!
yes, exactly!
Interesting. But on moving presentation logic to the BFF for different classes of devices: for responsive design, where you resize the screen on a desktop, I would imagine that logic has to remain outside the BFF. Otherwise you'd be sending events over to the BFF constantly.
Would you consider something like AWS API Gateway a BFF?
Great point! @bobbied For responsive design, presentation logic for resizing should stay on the client side to avoid constant communication with the BFF. The BFF should handle data aggregation and transformation, not UI-specific adjustments.
AWS API Gateway can act as a BFF by managing, scaling, and securing APIs, and aggregating data from backend services. However, a dedicated BFF often includes more tailored logic specific to client needs, which might be beyond what an API Gateway alone handles.
Thanks
What about the drawbacks, guys?
Hi @nghia0coder,
Thanks for asking. Here are a few drawbacks I have personally observed when working with BFF:
Increased Complexity: Introducing a BFF adds an additional layer to the architecture, which can increase the overall complexity of the system. Managing and maintaining this layer requires additional effort and resources.
Duplication of Logic: Different BFFs for different clients (web, mobile, etc.) might lead to duplication of business logic. Ensuring consistency across these BFFs can be challenging. You can fix this by creating packages for shared business logic.
Additional Maintenance: Each BFF requires regular updates and maintenance, especially when backend APIs change. This can increase the workload for the development team.
Latency: Although the BFF can reduce the number of requests from the client, it introduces an additional network hop. This can potentially increase latency for some operations, although the impact is usually minor.
thank you so much
Thanks a lot for asking that question :)