Table of Contents
- Introduction
- Inheriting a Mess
- Reinventing from the Ground Up
- Ports and Adapters
- The Technology Behind Our Overhaul
- Did SOLID Principles Guide Our Design?
- Building Confidence with the Testing Pyramid Strategy
- Ensuring Testability with Pure Functions
- Deployment on Google Cloud Platform
- In Summary: Was the Backend Transformation Successful?
Introduction
The day we took over the operations of a legacy e-commerce backend system from the global protection brand POC, one thing was certain: this was going to be a formidable challenge.
The codebase we inherited from the previous development team was riddled with issues:
- It was fragile, often breaking with (or even without) the slightest modification.
- Changes couldn’t be made with confidence, as the system was completely untested.
- It lacked any coherent design principles, leaving us without a solid foundation to build on.
Given the state of the system, it became clear that a simple cleanup wouldn’t suffice. What we needed was a complete overhaul — a new application designed from the ground up, drawing inspiration and guidance from various timeless software engineering principles. This approach would allow us to address every flaw we encountered, laying a solid foundation for the future.
Inheriting a Mess
The use case that led to the now-legacy solution involved transferring data—such as stock, orders, and tracking events—between the client's ERP (Enterprise Resource Planning) system and their Shopify e-commerce platform.
Let’s take a look at the inventory flow as an example:
The problem was that their ERP, Microsoft Dynamics AX, is a relic from the stone age, offering none of the modern amenities like a REST or GraphQL API. Instead, it resorts to dropping literal XML files onto an SFTP server, to be picked up later for processing.
This processing was handled by a no-code platform called Make. While Make offered a nifty solution for simpler workflows, its limitations became painfully obvious when dealing with complex business logic and advanced use cases.
On top of that, the technology chosen as the "database" for data on its way to and from Shopify was Google Sheets. A spreadsheet, of course, lacks the robustness needed for workflows and storage of this complexity.
The system also relied on Matrixify, a third-party Shopify app, for data imports and exports. While functional, the app's awkward interface and the dependency on yet another external tool introduced additional risk, underscoring the fragility of the entire legacy setup.
Reinventing from the Ground Up
To solve these challenges, we first asked if the client was open to switching to a more modern ERP. They were initially on board, but their IT team estimated the cost at nearly 1 million euros, so that option was off the table. Rather than dwelling on this obstacle, we came up with an idea 💡:
How about we encapsulate the whole legacy system in a new backend service—an ERP adapter—which would then be able to offer a simple API interface for the E-com engine to interact with?
This way, we could deal with the issues inside once, and then no one on the outside would ever again have to think about quirky XML file syntax, Google Sheets going down because it can't process more than 50 000 rows, or unstable SFTP server interactions.
So we did a major architectural overhaul. Here are a few of the main changes:
- Adopted a proper Postgres database, with Prisma as the ORM.
- Got rid of the dependency on an import/export SaaS product and built the functionality ourselves (mutation batching, Centra rate-limit handling, logging).
- Added strong typing with TypeScript for every entity and interaction.
- Exposed a GraphQL API. (See the sketch after this list for how these pieces fit together.)
- For security, storage, cron jobs, hosting, and more, we used Google Cloud Platform.
- Set up an independent QA environment in GCP, to be able to safely test new features before deploying to production.
- (For the E-com engine we switched from their old Shopify setup that used Liquid templating and barely readable checkout scripts, to a headless setup with Centra and a Next app for the frontend.)
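To make the new stack concrete, here is a hypothetical sketch of a typed Prisma query exposed through a GraphQL resolver. StockLevel, the stockLevel Prisma model, and the import path are illustrative, not from the actual codebase:

import { Args, Field, Int, ObjectType, Query, Resolver } from "@nestjs/graphql"
import { PrismaService } from "./prisma.service" // hypothetical path

// Illustrative GraphQL type; not from the actual codebase.
@ObjectType()
export class StockLevel {
  @Field() sku: string
  @Field(() => Int) qty: number
  @Field() warehouse: string
}

@Resolver(() => StockLevel)
export class StockResolver {
  constructor(private readonly prisma: PrismaService) {}

  @Query(() => [StockLevel])
  stockLevels(@Args("warehouse") warehouse: string): Promise<StockLevel[]> {
    // Postgres via Prisma replaces the old Google Sheets "database".
    return this.prisma.stockLevel.findMany({ where: { warehouse } })
  }
}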
The Technology Behind Our Overhaul
One of our primary goals was to ensure that different parts of the codebase were independent (decoupled), so changes in one area wouldn't affect another. With the legacy system, we never felt free to change something that worked, because we had no idea what would break. This is a very bad situation to be in, as new features can't be added easily, if at all.
To achieve our goal, we chose what we believe is the best backend framework for TypeScript: NestJS.
It’s like Express, but more fleshed out with built-in features that developers from languages like Java or C# will recognize, such as a modular architecture, middleware, and tools for request interception and validation.
Most importantly, it provides a robust Dependency Injection (DI) system, making the code scalable and easier to test by preventing different parts of the codebase from becoming entangled.
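As a minimal sketch of what this looks like (the token and class names mirror ones that appear later in this post; the exact wiring in our codebase may differ):

import { Module } from "@nestjs/common"
import { AXInventoryService } from "./ax-inventory.service" // hypothetical path

// A DI token lets consumers depend on an abstraction instead of a class.
export const INVENTORY_SERVICE_TOKEN = Symbol("INVENTORY_SERVICE")

@Module({
  providers: [
    // Swap useClass and every consumer receives the new implementation,
    // without a single import changing elsewhere.
    { provide: INVENTORY_SERVICE_TOKEN, useClass: AXInventoryService }
  ],
  exports: [INVENTORY_SERVICE_TOKEN]
})
export class InventoryModule {}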
Armed with this framework, we were now ready to implement the Hexagonal, or Ports and Adapters, software architecture.
Ports and Adapters
The point of this architecture is to keep the core business logic decoupled from external systems, like third-party services, databases, or file transfers. By organizing the system around interfaces (or "ports") and separating the external integrations into distinct implementations (or "adapters"), we ensure the business logic remains stable even as external dependencies change. This separation also makes testing easier by allowing us to inject fake/mock adapters without touching the core logic.
To enforce this separation, we split the system into public modules (business logic) and private modules (adapters). Public modules contain stable core logic, while private modules handle external dependencies, which can evolve without affecting the core.
Adapters (Red)
Adapters connect the core application to external systems, such as SFTP services, XML processing, and network batching. They are part of the private modules, meaning they can change without touching the stable core logic in the public modules. This keeps external system changes isolated.
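As an example, here is a sketch of what an SFTP adapter might look like, assuming the ssh2-sftp-client library and the ISftpConnectorFileGet port sketched in the next section; the connection details are placeholders:

import { Injectable } from "@nestjs/common"
import SftpClient from "ssh2-sftp-client"

@Injectable()
export class SftpConnectorService implements ISftpConnectorFileGet {
  // The only place in the system that knows SFTP exists.
  async getFile(remotePath: string): Promise<Buffer> {
    const client = new SftpClient()
    await client.connect({ host: "…", username: "…", password: "…" })
    try {
      return (await client.get(remotePath)) as Buffer
    } finally {
      await client.end()
    }
  }
}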
Ports
Ports define interfaces that the business logic both implements and invokes to interact with external systems. For example, the ISyncInventory port, implemented by the InventoryService, handles inventory synchronization, while the ISftpConnector port, invoked by the business logic, deals with file transfers. Using these ports, the business logic remains decoupled from external system details, ensuring the application is flexible and adaptable to changes.
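For illustration, here are minimal sketches of two such ports (the interface names are from our codebase; the method signatures are assumptions for this example):

// Driving port: the business logic implements this, and the outside
// world (e.g. a resolver or a cron job) invokes it.
export interface ISyncInventory {
  syncInventory(market: Market): Promise<void>
}

// Driven port: the business logic invokes this, and an adapter
// implements it.
export interface ISftpConnectorFileGet {
  getFile(remotePath: string): Promise<Buffer>
}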
Application Business Logic (Green)
The business logic lives in the public modules and handles the core rules and processes, such as the inventory service managing data synchronization. By depending only on ports, the business logic stays decoupled from external systems, ensuring it remains stable, maintainable, and easy to test, even when external systems change.
Did SOLID Principles Guide Our Design?
To ensure our architecture is robust, let's review it against the SOLID principles laid out by Robert C. Martin, famous for his books Clean Code and Clean Architecture. Does our system hold up to these timeless software engineering guidelines?
S - Single-responsibility Principle
Each module has one clear purpose. For example, our InventoryService only handles inventory logic, while adapters deal with external interactions like SFTP or APIs.
import { Module } from "@nestjs/common"
import { ConfigModule } from "@nestjs/config"
import { GraphQLModule } from "@nestjs/graphql"
import { ApolloDriver, ApolloDriverConfig } from "@nestjs/apollo"
import { EventEmitterModule } from "@nestjs/event-emitter"
import { join } from "path"
// (feature-module imports omitted for brevity)

@Module({
  imports: [
    ConfigModule.forRoot({
      isGlobal: true
    }),
    GraphQLModule.forRoot<ApolloDriverConfig>({
      driver: ApolloDriver,
      playground: false,
      autoSchemaFile: join(process.cwd(), "src/schema.gql"),
      sortSchema: true
    }),
    AuthModule,
    EventEmitterModule.forRoot(),
    PrismaModule,
    CloudStorageModule,
    InventoryModule, // <--- Here
    XmlModule,
    GraphQLBatchModule,
    NetworkRequestRetryModule,
    FetchModule,
    CentraIntegrationModule,
    WebhookModule,
    SftpConnectorModule,
    OrderModule,
    ExceptionModule,
    ErrorModule,
    LoggerModule,
    TrackingModule,
    PricingModule,
    ProductModule
  ],
  controllers: [AppController],
  providers: [
    AppService,
    ...appConfig
  ],
  exports: [ConfigModule]
})
export class AppModule {}
O - Open-closed Principle
Our modules are open for extension but closed for modification. We can add new features, like additional adapters, without altering the existing core logic.
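For example, supporting a new file source would mean writing a new adapter against the existing port and re-pointing the DI token. A sketch, assuming a hypothetical GcsFileConnector and an assumed download method on ICloudStorageService:

@Injectable()
export class GcsFileConnector implements ISftpConnectorFileGet {
  constructor(
    @Inject(CLOUD_STORAGE_SERVICE_TOKEN)
    private readonly cloudStorage: ICloudStorageService
  ) {}

  // New capability added purely by extension; no existing class changes.
  getFile(remotePath: string): Promise<Buffer> {
    return this.cloudStorage.download(remotePath) // assumed method
  }
}

// Wiring it in is one provider line — zero changes to the core logic:
// { provide: SFTP_CONNECTOR_TOKEN, useClass: GcsFileConnector }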
L - Liskov Substitution Principle
This principle ensures that different implementations of an interface can be swapped without breaking the system. Adhering to this, we can replace an adapter like ISftpConnector with another SFTP implementation, and it will work seamlessly as long as it follows the expected behavior defined by the interface. This way, adapters can be switched out without affecting the business logic.
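For instance, a hypothetical in-memory fake (not from our codebase) can stand in for the real connector, because it honors the same contract:

// Same contract, same observable behavior, no network. Only getFile is
// sketched; the other ISftpConnector methods would follow suit.
export class InMemorySftpConnector implements ISftpConnectorFileGet {
  private readonly files = new Map<string, Buffer>()

  async getFile(remotePath: string): Promise<Buffer> {
    const file = this.files.get(remotePath)
    if (!file) throw new Error(`No such file: ${remotePath}`)
    return file
  }
}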
I - Interface Segregation Principle
We create small, focused interfaces that each handle a single responsibility, and then compose them into larger ones, like ISftpConnector, ensuring that modules only rely on the specific functionality they need. This prevents the tight coupling often caused by inheritance and keeps dependencies clean and maintainable.
export interface ISftpConnector
extends ISftpConnectorFileGet,
ISftpConnectorFilesGet,
ISftpConnectorFileAdd,
ISftpConnectorFileDelete,
ISftpConnectorIsDirEmpty,
ISftpConnectorPurgeDir {}
ISftpConnector is composed of smaller interfaces, allowing us to separate concerns and avoid the bloated, monolithic interfaces that can lead to the infamous "God object".
D - Dependency Inversion Principle
As we have seen, our system relies on abstractions (interfaces) rather than concrete implementations. The core logic depends on ports (interfaces), while the adapters implement those ports, keeping the layers decoupled.
export interface IInventory
extends ISyncAxInventoryToAdapterInventory,
ISyncCentraInventoryToAdapterInventory,
ISyncAdapterInventoryToCentraInventory,
IGetWarehouse,
// etc ...
IDeleteInventoryRecord {}
@Resolver("AXInventoryResolver")
export class AXInventoryResolver {
  private readonly logger: LoggerService

  constructor(
    @Inject(INVENTORY_SERVICE_TOKEN)
    private readonly inventoryService: IInventory,
    private readonly exception: ExceptionService
  ) {
    this.logger = LoggerService.withContext(AXInventoryResolver.name)
  }

  // etc ...
}
The resolver depends only on the IInventory abstraction, and thus remains decoupled from any concrete inventory implementation.
@Injectable()
export class AXInventoryService implements IInventory {
  private readonly logger: LoggerService

  constructor(
    private readonly prisma: PrismaService,
    @Inject(XmlService)
    private readonly xml: IXMLService,
    @Inject(SFTP_CONNECTOR_TOKEN)
    private readonly sftp: ISftpConnector,
    @Inject(CLOUD_STORAGE_SERVICE_TOKEN)
    private readonly cloudStorageService: ICloudStorageService,
    // etc ...
    private readonly config: ConfigService
  ) {
    this.logger = LoggerService.withContext(AXInventoryService.name)
  }

  public syncAxInventoryToAdapterInventory =
    (market: Market) =>
    (): TE.TaskEither<
      InventoryError,
      SyncAxInventoryToAdapterInventorySuccessMessage
    > =>
      pipe(
        TE.Do,
        TE.bind("inventoryFileIdentifier", () =>
          this.getInventoryFileIdentifier(market)
        ),
        // etc ...
      )
}
Above, note the ISftpConnector being injected into the AXInventoryService. This illustrates the "inversion": at compile time the high-level service depends on an abstract interface, while the concrete implementation is injected only at runtime. This keeps the system flexible and adaptable to changes in external services.
Building Confidence with the Testing Pyramid Strategy
Our system does indeed follow the SOLID principles! That's great, but how does it hold up in testing? One of our main goals was to ensure that changes could be made confidently, with good test coverage.
Fortunately, by adhering to SOLID and the Ports and Adapters architecture, testing becomes much easier as a natural side effect. Like a bonus! The clear separation of concerns allows us to test each layer independently, as shown in the diagram below:
E2E Testing
Simulates real user interactions by calling the server over HTTP, using a test database, and seeding data before tests. It's thorough, but slower because it involves real system components.
Integration Testing
Mocks most adapters to avoid calling real external systems. It’s faster and ensures modules work well together without involving full system dependencies.
Unit Testing
Mocks all adapters, ensuring no external systems are touched. It’s ultra-fast, focusing on testing isolated logic within a single module.
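As a sketch of what such a unit test can look like (assuming jest; AXOrderService is shown later in this post, the tokens are the ones used throughout, and the mock set is abbreviated):

import { Test } from "@nestjs/testing"

it("creates an order without touching external systems", async () => {
  const moduleRef = await Test.createTestingModule({
    providers: [
      AXOrderService,
      { provide: SFTP_CONNECTOR_TOKEN, useValue: { addFile: jest.fn() } },
      // ... mocks for every other adapter token the service injects
    ]
  }).compile()

  const service = moduleRef.get(AXOrderService)
  // Invoke the returned TaskEither to run it, then assert on the Either.
  const result = await service.createOrder({ orderNo: 123 })()
  expect(result._tag).toBe("Right")
})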
Ensuring Testability with Pure Functions
In addition to our testing strategy, we ensure that each service in our Nest modules—whether public or private—follows a functional programming style using the fp-ts library.
To illustrate this, let's take a look at the typical structure of a Nest module.
This approach allows us to write pure functions—functions that 1) always return the same output for the same input, and 2) don’t produce side effects. Side effects occur when a function interacts with the world outside of itself (e.g., calling an API or modifying a global state), making testing and debugging more difficult.
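A tiny illustrative example of the distinction (not from our codebase):

// Pure: the output depends only on the input; trivially testable.
const totalQty = (rows: { qty: number }[]): number =>
  rows.reduce((sum, row) => sum + row.qty, 0)

// Impure: reads and mutates state outside itself, so the same call
// can produce different results — much harder to test.
let counter = 0
const nextId = (): number => ++counter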
To avoid this, we use TaskEither, a type from fp-ts that represents an asynchronous operation that can either succeed or fail. Here's an example from our IOrder interface:
import { taskEither as TE } from "fp-ts"
type OrderNumber = number
export interface IOrderServiceCreate {
createOrder<T>(axOrderJson: T): TE.TaskEither<OrderError, OrderNumber>
}
IOrder composes interfaces like IOrderServiceCreate, where TaskEither is used for async operations that could fail.
@Injectable()
export class AXOrderService implements IOrder {
  // ... (code)

  public createOrder = <T>(data: T): TE.TaskEither<OrderError, OrderNumber> => {
    // ... (code)

    // A value "data" of generic type T goes into the pipeline:
    return pipe(
      E.Do,
      E.bind("data", () => E.right(data)),
      E.bind("validatedData", validateData),
      E.bind("orderNrAndShipmentId", getOrderNrAndShipmentId),
      E.bind("market", getMarket),
      TE.fromEither,
      TE.bind("id", persist),
      TE.bind("xml", createXml),
      TE.bind("gcsBucketName", getGCSBucketName),
      TE.bind("cloudUploadSuccess", performCloudUpload),
      TE.bind("sftpUploadSuccess", performSftpUpload),
      TE.chain(persistSuccess)
    )
    // And comes out transformed on the other side,
    // as a type: TaskEither<OrderError, OrderNumber>
  }
}
The pipeline above is written in fp-ts's do notation, where each step binds its result to a name that subsequent steps can use.
This entire declarative flow in the service, from beginning to end, is lazy and pure. Laziness ensures that nothing happens until exactly when the function is invoked, and purity guarantees that the function’s behavior is deterministic. This predictability makes our services easier to test, as every input will consistently return the same result without causing hidden side effects.
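Concretely, because a TaskEither is just a lazy function returning a Promise of an Either, nothing runs until we invoke it (orderService and axOrderJson are assumed here for illustration):

import { either as E } from "fp-ts"

// Building the program performs no work yet — no I/O, no side effects.
const program = orderService.createOrder(axOrderJson)

// Work happens only when the task is invoked:
const result = await program() // Either<OrderError, OrderNumber>

if (E.isRight(result)) {
  console.log("Created order", result.right)
}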
Deployment on Google Cloud Platform
Finally, for deployment, we run the app in a Docker container on Google Cloud Run, which manages the infrastructure and scales automatically to meet demand. We also rely on Google Cloud's built-in authentication, so security is handled behind the scenes, letting us focus on building the app instead of worrying about access control.
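For instance, with the service locked down to IAM-authenticated callers, a client can use google-auth-library to obtain a Google-signed ID token. A sketch — the service URL is a placeholder, and the caller's service account is assumed to hold the run.invoker role:

import { GoogleAuth } from "google-auth-library"

const serviceUrl = "https://erp-adapter-abc123-ew.a.run.app" // placeholder

const auth = new GoogleAuth()
const client = await auth.getIdTokenClient(serviceUrl)

// The client attaches an ID token to each call; Cloud Run verifies it
// before the request ever reaches our application code.
const response = await client.request({
  url: `${serviceUrl}/graphql`,
  method: "POST",
  data: { query: "{ __typename }" }
})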
In Summary: Was the Backend Transformation Successful?
So, this all sounds great on paper, but what’s been the real outcome? We’re proud to say the system has been running smoothly since deployment, doing its job without a hitch.
For an engineer, there’s little more satisfying than seeing a system you’ve built work seamlessly, reliably, and without constant intervention.
By taking the time to apply timeless software engineering principles, we’ve built a stable backend platform that the client has been highly satisfied with — one that lets them focus on adding new features instead of constantly fixing things. They can finally innovate with confidence, knowing their backend will keep up with whatever comes next.
Dawid Dahl is a full-stack developer at UMAIN | EIDRA. In his free time, he enjoys metaphysical ontology and epistemology, analog synthesizers, consciousness, techno, Huayan and Madhyamika Prasangika philosophy, and being with friends and family.