The conventional way of building a Data API through GraphQL involves building everything from scratch. That includes tasks such as:
- writing boilerplate code for basic CRUD functionalities
- implementing and maintaining GraphQL resolvers
- optimizing the resolvers for performance
- adding authentication and authorization
- implementing data validation and error checking mechanisms
The most complex and time-consuming of these tasks is implementing the GraphQL resolvers. Since resolvers are the functions that produce the response for a GraphQL operation, it's critical to implement them properly.
However, as the complexity of the API increases, so does the effort and difficulty of coding the resolvers. Manually implementing the resolvers can be difficult for several reasons:
- Boilerplate code: It requires a significant amount of boilerplate code for the CRUD operations
- Complexity: The resolvers become complex once you add filtering, aggregations, authorization, and custom business logic
- Performance: Inefficient implementation of resolvers can lead to an underperforming API
- Maintenance: Maintaining the resolvers and keeping them up-to-date with changes in the schema and data sources is time-consuming
Moreover, you must ensure that the API is secure and can scale appropriately. All of this requires significant effort and resources. What if you could eliminate these challenges and time-consuming work? This article presents an alternative way of building GraphQL Data APIs without the difficulties of manually built APIs.
Writing the GraphQL Resolvers manually
In this section, we'll go over a custom-built GraphQL API that represents a digital media store. The API is connected to the Chinook database, which includes tables for artists, albums, and tracks.
The figure below illustrates the GraphQL API.
Since the GraphQL resolvers are the main focus of this article, we'll concentrate on them and skip the details that aren't relevant here. However, you can see the complete application code on GitHub.
The schema of the GraphQL API looks as follows:
schema {
query: Query
}
type Query {
Album(id: Int!): Album
Albums: [Album!]!
Artist(id: Int!): Artist
Artists: [Artist!]!
Track(id: Int!): Track
Tracks: [Track!]!
}
type Album {
AlbumId: Int!
Artist: Artist
ArtistId: Int!
Title: String!
Tracks: [Track!]
}
type Artist {
ArtistId: Int!
Name: String
Albums: [Album!]
}
type Track {
Album: Album
AlbumId: Int
Bytes: Int
Composer: String
TrackId: Int!
MediaTypeId: Int!
Milliseconds: Int!
Name: String!
}
Looking at the schema, you can see that the API only allows querying the database for artists, albums, and tracks. It doesn't enable mutating data, such as inserting, updating, and deleting records.
As a result, this GraphQL API needs a query resolver for retrieving:
- a specific album
- all albums
- a specific artist
- all artists
- a specific track
- all tracks
The "resolvers" code looks as follows:
import { readFileSync } from "node:fs";
import { createServer } from "node:http";
import { createSchema, createYoga } from "graphql-yoga";
import { GraphQLError } from "graphql";
import DataLoader from "dataloader";
import { useDataLoader } from "@envelop/dataloader";
import type { Album, Artist, Resolvers, Track } from "./generated/graphql";
import sql from "./db";
import { genericBatchFunction } from "./dataloader";
import { keyByArray } from "./utils";
const typeDefs = readFileSync("../schema.graphql", "utf8");
const resolvers: Resolvers = {
Query: {
Album: async (parent, args, context, info) => {
return (context.getAlbumsById as DataLoader<string, Album>).load(args.id.toString());
},
Albums: async (parent, args, context, info) => {
const albums = await (
context.getAllAlbums as DataLoader<string, Album[]>
).load('1');
for (const album of albums) {
(context.getAlbumsById as DataLoader<string, Album>).prime(
album.AlbumId.toString(),
album
);
}
const albumsByArtistId = keyByArray(albums, 'ArtistId');
for (const [ArtistId, albums] of Object.entries(albumsByArtistId)) {
(context.getAlbumsByArtistId as DataLoader<string, Album[]>).prime(
ArtistId,
albums
);
}
return albums;
},
Artist: async (parent, args, context, info) => {
return (context.getArtistsById as DataLoader<string, Artist>).load(
args.id.toString()
);
},
Artists: async (parent, args, context, info) => {
const artists = await (
context.getAllArtists as DataLoader<string, Artist[]>
).load('1');
if (!artists) {
throw new GraphQLError(`Artists not found.`);
}
for (const artist of artists) {
(context.getArtistsById as DataLoader<string, Artist>).prime(
artist.ArtistId.toString(),
artist
);
}
return artists;
},
Track: async (parent, args, context, info) => {
return (context.getTracksById as DataLoader<string, Track>).load(args.id.toString());
},
Tracks: async (parent, args, context, info) => {
const tracks = await (
context.getAllTracks as DataLoader<string, Track[]>
).load('1');
for (const track of tracks) {
(context.getTracksById as DataLoader<string, Track>).prime(
track.TrackId.toString(),
track
);
}
const tracksByAlbumId = keyByArray(tracks, 'AlbumId');
for (const [AlbumId, tracks] of Object.entries(tracksByAlbumId)) {
(context.getTracksByAlbumId as DataLoader<string, Track[]>).prime(
AlbumId,
tracks
);
}
return tracks;
},
},
Album: {
async Artist(parent, args, context, info) {
return (context.getArtistsById as DataLoader<string, Artist>).load(
parent.ArtistId.toString()
);
},
async Tracks(parent, args, context, info) {
const tracks = await (context.getTracksByAlbumId as DataLoader<string, Track[]>).load(
parent.AlbumId.toString()
);
for (const track of tracks) {
(context.getTracksById as DataLoader<string, Track>).prime(
track.TrackId.toString(),
track
);
}
return tracks;
},
},
Artist: {
async Albums(parent, args, context, info) {
const albums = await (context.getAlbumsByArtistId as DataLoader<string, Album[]>).load(
parent.ArtistId.toString()
);
if (Array.isArray(albums)) {
for (const album of albums) {
(context.getAlbumsById as DataLoader<string, Album>).prime(
album.AlbumId.toString(),
album
);
}
}
return albums || [];
},
},
Track: {
async Album(parent, args, context, info) {
return (context.getAlbumsById as DataLoader<string, Album>).load(
parent.AlbumId!.toString()
);
},
}
};
export const schema = createSchema({
typeDefs,
resolvers,
});
const server = createServer(
createYoga({
schema,
plugins: [
useDataLoader(
"getAlbumsById",
(context) =>
new DataLoader((keys: Readonly<string[]>) =>
genericBatchFunction(keys, { name: "Album", id: "AlbumId" })
)
),
useDataLoader(
"getAllAlbums",
(context) =>
new DataLoader(async (keys: Readonly<string[]>) => {
const albums = await sql`SELECT * FROM ${sql("Album")}`;
return keys.map((key) => albums);
})
),
useDataLoader(
"getAlbumsByArtistId",
(context) =>
new DataLoader((keys: Readonly<string[]>) =>
genericBatchFunction(keys, { name: "Album", id: "ArtistId" }, true)
)
),
useDataLoader(
"getArtistsById",
(context) =>
new DataLoader((keys: Readonly<string[]>) =>
genericBatchFunction(keys, { name: "Artist", id: "ArtistId" })
)
),
useDataLoader(
"getAllArtists",
(context) =>
new DataLoader(async (keys: Readonly<string[]>) => {
const artists = await sql`SELECT * FROM ${sql("Artist")}`;
return keys.map((key) => artists);
})
),
useDataLoader(
"getTracksById",
(context) =>
new DataLoader((keys: Readonly<string[]>) =>
genericBatchFunction(keys, { name: "Track", id: "TrackId" })
)
),
useDataLoader(
"getTracksByAlbumId",
(context) =>
new DataLoader((keys: Readonly<string[]>) =>
genericBatchFunction(keys, { name: "Track", id: "AlbumId" }, true)
)
),
useDataLoader(
"getAllTracks",
(context) =>
new DataLoader(async (keys: Readonly<string[]>) => {
const tracks = await sql`SELECT * FROM ${sql("Track")}`;
return keys.map((key) => tracks);
})
),
],
})
);
server.listen(4000, () => {
console.info("Server is running on http://localhost:4000/graphql");
});
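The `keyByArray` helper imported above groups rows by a foreign-key column so they can be primed into the per-parent DataLoaders. Its implementation isn't shown in the snippet; a minimal sketch (the version in the repository may differ) could look like this:

```typescript
// Group an array of rows by the (stringified) value of one column,
// e.g. all albums that share the same ArtistId.
function keyByArray<T>(rows: T[], key: keyof T): Record<string, T[]> {
  const grouped: Record<string, T[]> = {};
  for (const row of rows) {
    const k = String(row[key]);
    (grouped[k] ??= []).push(row);
  }
  return grouped;
}
```

With albums grouped by `ArtistId`, the `Albums` resolver can prime `getAlbumsByArtistId` in one pass, so later `Artist.Albums` lookups hit the DataLoader cache instead of the database.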
This GraphQL API has a single purpose - to read the data from the database. Although the API is quite simple, it requires a considerable amount of code. Now imagine a complex enterprise application and the amount of code you would have to write.
The amount of code you have to write is only one of many concerns. There are other things you need to think about, such as:
- the N+1 problem
- extending security rules per field in the schema
- adding gateway features (caching, rate limiting, etc.)
All of the above is difficult and time-consuming to implement. Hasura solves issues such as the N+1 problem and provides features such as caching, rate limiting, and authorization by default.
N+1 problem
GraphQL query execution usually involves executing a resolver for each field. Say you are fetching artists and their tracks: each "artist" record returned from the database would invoke a function to fetch its tracks, so for N artists the total number of round trips becomes N+1. This can become a serious performance bottleneck, and the number of queries grows multiplicatively with the depth of the query.
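To make the problem concrete, here is a small, hypothetical simulation that only counts round trips: fetching 3 artists naively costs one query for the artist list plus one query per artist for their tracks, while a DataLoader-style batch collapses the per-artist lookups into a single query.

```typescript
// Hypothetical round-trip counter illustrating N+1 vs. batched loading.
const artistIds = [1, 2, 3];

// Naive: one query for the list, then one query per artist (N+1 total).
function naiveFetch(): number {
  let queries = 0;
  queries++; // SELECT * FROM "Artist"
  for (const _id of artistIds) {
    queries++; // SELECT * FROM "Track" WHERE "ArtistId" = $1
  }
  return queries;
}

// Batched: one query for the list, one IN-clause query for all tracks.
function batchedFetch(): number {
  let queries = 0;
  queries++; // SELECT * FROM "Artist"
  queries++; // SELECT * FROM "Track" WHERE "ArtistId" IN (1, 2, 3)
  return queries;
}

console.log(naiveFetch());   // 4 round trips for 3 artists
console.log(batchedFetch()); // always 2 round trips, no matter how many artists
```

The batched version stays at two round trips however many artists there are, which is exactly what the DataLoader instances in the resolver code above achieve.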
Hasura mitigates the N+1 problem because, at its core, the server is a compiler. That means it compiles all GraphQL queries to a single SQL query, thus reducing the number of hits to the database and improving the performance.
Read more about how Hasura solves the GraphQL N+1 Problem.
Security Rules
Most applications require an authorization system to answer questions such as "Does this (authenticated) user have permission to access this resource or perform this action?". It specifies what data a particular user can access and what actions that user can perform. Implementing such a critical system is a challenging task.
Hasura allows you to define granular role-based access control rules for every field in your GraphQL schema. It's granular enough to control access to any row or column in your database.
With row-level access control, users can access a table without having access to all of its rows. This is particularly useful for protecting sensitive personal data stored in that table: you can allow all users to query the table while restricting each user to a specific subset of its rows.
Column-level access control lets you restrict access to specific columns in the table. This is useful for hiding data that is irrelevant, sensitive, or used only for internal purposes.
Combining these rules gives you a flexible and powerful way to control data access for the different stakeholders involved.
How does it work? Hasura uses the role/user information from the session variables and the actual request to validate the request against the rules defined by you. If the request/operation is allowed, it generates a SQL query, which includes the row/column-level constraints from the access control rules. It then sends it to the database to perform the required operation (fetch the necessary rows for queries, insert/edit rows for mutations, etc).
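As a hypothetical example, a row-level select permission for a `user` role on the `Album` table might be expressed in Hasura metadata roughly as follows. The `OwnerId` column and the values here are illustrative (the Chinook `Album` table has no such column); the `columns` list implements column-level control, and `filter` implements row-level control using the `X-Hasura-User-Id` session variable:

```json
{
  "role": "user",
  "permission": {
    "columns": ["AlbumId", "Title"],
    "filter": {
      "OwnerId": { "_eq": "X-Hasura-User-Id" }
    }
  }
}
```

With this rule in place, a user can only read the `AlbumId` and `Title` columns, and only for rows whose `OwnerId` matches their session's user ID.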
Read more about Authorization and Access Control in the documentation.
Adding gateway features
A production-ready API requires features such as caching, rate limiting, monitoring, and observability. When you build the API manually, you must consider and implement all these features.
There can be latency issues and slow response times for GraphQL queries because of the response size, the location of the server, and the number of concurrent requests. Hasura has metadata about the data models across data sources and the authorization rules at the application level. This enables Hasura to provide end-to-end application caching. Cached responses are stored for a period of time in an LRU (least recently used) cache and removed from the cache as needed based on usage.
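For example, Hasura Cloud lets you opt a query into this cache with the `@cached` directive; the TTL below is an assumed value of 120 seconds:

```graphql
query AlbumTitles @cached(ttl: 120) {
  Album {
    Title
  }
}
```

Subsequent identical requests (with the same session authorization) are served from the cache until the TTL expires.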
Read more about caching in the documentation.
Save time and resources
For enterprise-level Data APIs, the above features are a must: the API has to be secure and performant. Implementing just the basic CRUD functionality already requires massive effort, and adding authorization, caching, rate limiting, and other features demands even more resources.
What if you could skip all these tedious and time-consuming tasks? In the next section, you'll see how you can have a production-ready GraphQL API within minutes without writing code, plus all the above features (and more).
Building GraphQL APIs without writing resolvers
Hasura is a GraphQL Engine that makes your data instantly accessible over a real-time GraphQL API. It connects to your data source and automatically generates the GraphQL schema, queries, mutations, subscriptions, CRUD operations, and authorization.
It doesn't require writing any code unless you want to add custom business logic. You get everything out of the box.
Let's continue by creating a Data API with Hasura to demonstrate the power of Hasura. There are two ways to use Hasura:
- using Hasura Cloud, which is the easiest way to use and build Hasura applications
- using Docker to self-host it
This article uses Hasura Cloud. Navigate to your Hasura Cloud dashboard and create a new project by clicking the "New Project" button.
In the next step, click the "Create Free Project" button, and you should have a Hasura project up and running within seconds. You should see the console after launching the project.
The next step involves connecting the Hasura app to your database. Navigate to the "DATA" tab, then to "Create New Database", and click the "Create Neon Database" button.
The database should be created & connected within seconds.
Now click the "Create Table" button to create a new table. For this example, create a users table with the following columns:
- id of type UUID (Primary Key)
- username of type Text
- country of type Text
Select id as the Primary Key and save the table.
If you navigate back to the "API" tab, you should see all the available GraphQL operations. You get all the operations needed to access and mutate data without writing code (or resolvers). On top of that, you also get real-time capabilities.
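Following Hasura's naming conventions, tracking the users table generates root fields such as `users`, `users_by_pk`, and `insert_users`. A query and a mutation against the new table would look roughly like this (the inserted values are illustrative):

```graphql
query {
  users {
    id
    username
    country
  }
}

mutation {
  insert_users(objects: [{ username: "jane", country: "NL" }]) {
    affected_rows
  }
}
```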
This simple example demonstrates how Hasura streamlines the process of building and shipping Data APIs to production. It unblocks developers and lets them move extremely fast, since they automatically get all the critical features.
How does that work?
A GraphQL Resolver is a function that specifies how to process a specific GraphQL operation and turn it into data. A conventional GraphQL API can't exist without resolvers because they are an essential part of it.
Hasura takes a different approach, though. Instead of using the conventional GraphQL resolvers, it uses a GraphQL Engine. The engine, which is actually a compiler, compiles the GraphQL operation into an efficient SQL query.
Let's take the Chinook database as an example again. The Chinook data model represents a digital media store, including tables for artists, albums, media tracks, invoices, and customers. The example below illustrates a couple of untracked tables.
When the tables are untracked, they are not exposed over the GraphQL API. To expose them through the API, you need to track them. When you track the tables, the GraphQL Engine automatically generates a number of things, such as:
- a GraphQL type definition
- queries
- mutations
- subscriptions
Check the docs for the complete list of things it generates.
Once the engine tracks the tables, it knows how to respond when users request data from the database to which these tables belong. You can also see the available operations in the "Explorer" tab.
As mentioned earlier, Hasura compiles the GraphQL operation into a SQL query. Consider the following query from the above image:
query {
Album {
Title
}
}
Click the "Analyze" button to see what happens under the hood when you run the query. Clicking it opens a pop-up with the "Generated SQL" and the "Execution Plan", as shown in the image below.
The generated SQL statement for the query is as follows:
SELECT
coalesce(json_agg("root"), '[]') AS "root"
FROM
(
SELECT
row_to_json(
(
SELECT
"_e"
FROM
(
SELECT
"_root.base"."Title" AS "Title"
) AS "_e"
)
) AS "root"
FROM
(
SELECT
*
FROM
"public"."Album"
WHERE
('true')
) AS "_root.base"
) AS "_root"
That's how Hasura works. Instead of using GraphQL resolvers, it generates SQL statements at runtime and then runs them against your database.
The video illustrates how Hasura doesn't need to use resolvers for data interop and what the key differences are with other systems.
Additional resources to learn more:
- Architecture of a high performance GraphQL to SQL engine
- Blazing fast GraphQL execution with query caching and Postgres prepared statements
Adding custom business logic
Even though Hasura doesn't use resolvers, you can join your existing GraphQL APIs with your Hasura application. That's possible with the help of Remote Schemas.
Remote Schemas
Hasura can merge remote GraphQL schemas and provide a unified GraphQL API. Think of it like automated schema stitching. All you need to do is build your GraphQL service and provide the HTTP endpoint to Hasura. Your GraphQL service can be written in any language or framework.
That enables users to connect their existing GraphQL API to their Hasura application. For instance, if you have a GraphQL payment API, you can join it to your Hasura application with the help of Remote Schemas.
Read more about Remote Schemas in the documentation.
Additionally, there are other ways of adding custom business logic, such as Actions and Event Triggers.
Hasura Actions
Actions are a way to extend Hasura's schema with custom business logic using custom queries and mutations. Actions can be added to Hasura to handle various use cases such as data validation, data enrichment from external sources, and any other complex business logic.
Additionally, Actions in Hasura allow for integrating new and existing REST APIs with your Hasura application. For example, if you have a Node.js REST API for sending emails, you can plug it into Hasura without making any changes to the existing API.
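When an Action is called, Hasura forwards the operation to your webhook as a JSON payload containing the action name, the input arguments, and the session variables. A minimal, hypothetical handler for a `sendEmail` action might look like this; the payload fields follow the shape Hasura POSTs to Action handlers, while the email logic itself is a placeholder:

```typescript
// Shape of the request body Hasura POSTs to an Action webhook.
interface ActionPayload {
  action: { name: string };
  input: { to: string; subject: string };
  session_variables: Record<string, string>;
}

// Hypothetical handler: validate the input, "send" the email, and
// return an object matching the Action's declared output type.
function handleSendEmail(payload: ActionPayload): { status: string } {
  if (!payload.input.to.includes("@")) {
    // Throwing here would be translated into an error response,
    // which Hasura surfaces to the client as a GraphQL error.
    throw new Error("Invalid recipient address");
  }
  // In a real handler this would call your email provider's API.
  return { status: `queued email to ${payload.input.to}` };
}
```

The handler would be wrapped in an HTTP endpoint (with any framework you like) and registered as the Action's webhook URL in the Hasura console.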
Read more about Actions in the documentation.
Event Triggers
Hasura can be used to create Event Triggers on tables in the database. Event Triggers reliably capture events (such as insert, update, and delete) on specified tables and invoke HTTP webhooks to carry out any custom logic.
A simple example would be triggering an API endpoint to send a welcome email once a user is added to the database.
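The webhook behind such a trigger receives a JSON payload describing the event. For an insert on a users table it looks roughly like this (the row values are illustrative, and the full payload includes a few more fields, such as the trigger name and a timestamp):

```json
{
  "event": {
    "op": "INSERT",
    "data": {
      "old": null,
      "new": {
        "id": "f7b9a1e2-0c3d-4e5f-8a6b-1c2d3e4f5a6b",
        "username": "jane",
        "country": "NL"
      }
    }
  },
  "table": { "schema": "public", "name": "users" }
}
```

Your webhook can read `event.data.new` to get the freshly inserted row and, for this example, send the welcome email to that user.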
Read more about Event Triggers in the documentation.
Summary
Exposing your data via a GraphQL API doesn't mean you need to spend resources and effort building it manually. You can let Hasura do all the heavy lifting for you. In the use cases where you require custom business logic, you can add it with the help of Actions, Remote Schemas, and Event Triggers.