tRPC, despite its short history, has gained much popularity in the Node.js/TypeScript community. One of the main reasons for its fast adoption is its brilliantly lightweight design - there's no schema to write and no generator to run. Everything works magically, leveraging TypeScript's powerful type inference. It's one of the API toolkits that offer the best developer experience.
However, its power is also limited by the upper bound of what type inference can do. Let's look at an example. Suppose I have a backend service function for fetching a blog post, with a signature like this:
export type Post = { id: number; title: string }
export type User = { id: number; name: string }

export type LoadPostArgs = {
  id: number;
  withAuthor?: boolean;
};

export type LoadPostResult<T extends LoadPostArgs> =
  T['withAuthor'] extends true ? Post & { author: User } : Post;

export function loadPost<T extends LoadPostArgs>(args: T): LoadPostResult<T> {
  // ...
}
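For reference, here's a minimal sketch of what the body could look like (the article elides the implementation; the in-memory data below is purely illustrative):
// illustrative in-memory data - a real implementation would query a database
const posts: Post[] = [{ id: 1, title: 'Hello World' }];
const users: User[] = [{ id: 1, name: 'Alice' }];

export function loadPost<T extends LoadPostArgs>(args: T): LoadPostResult<T> {
  const post = posts.find((p) => p.id === args.id);
  if (!post) throw new Error(`post ${args.id} not found`);
  const result = args.withAuthor ? { ...post, author: users[0] } : post;
  // TypeScript can't check a concrete value against the still-unresolved
  // conditional type, so implementations of this pattern typically assert it
  return result as LoadPostResult<T>;
}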
What's unique about this generic function is that its return type "adapts" to the input type:
// p1 is typed `Post`
const p1 = loadPost({id: 1});
// p2 is typed `Post & { author: User }`
const p2 = loadPost({id: 1, withAuthor: true});
This "dynamic" typing makes for a pleasant auto-completion experience and helps catch errors at compile time.
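For example, building on p1 and p2 above, a mistaken access is rejected by the compiler:
// p1 is just `Post`, so this is a compile-time error:
// "Property 'author' does not exist on type 'Post'"
console.log(p1.author);

// p2 is `Post & { author: User }`, so this compiles and is fully typed
console.log(p2.author.name);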
Let's expose this function via a tRPC router:
// routers.ts
const appRouter = router({
  loadPost: publicProcedure
    .input(z.object({ id: z.number(), withAuthor: z.boolean().optional() }))
    .query(({ input }) => loadPost(input)),
});

export type AppRouter = typeof appRouter;
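For completeness, the router and publicProcedure helpers used above come from the usual tRPC v10 initialization, roughly like this:
// trpc.ts - standard tRPC initialization
import { initTRPC } from '@trpc/server';

const t = initTRPC.create();

export const router = t.router;
export const publicProcedure = t.procedure;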
Then consume it from the client side:
const trpc = createTRPCProxyClient<AppRouter>({...});
const p1 = await trpc.loadPost.query({ id: 1 });
const p2 = await trpc.loadPost.query({ id: 1, withAuthor: true });
Both p1 and p2 are typed as Post. The dynamicity is lost.
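Concretely, the author field is present in p2's response at runtime (more on that later), but the compiler won't let us touch it:
// compile-time error: "Property 'author' does not exist on type 'Post'",
// even though the server did return the author in the payload
console.log(p2.author);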
Why does that happen?
Let's take a look at the generic function again:
export function loadPost<T extends LoadPostArgs>(args: T): LoadPostResult<T> {
  // ...
}
When it's called, the generic type parameter T is inferred from the type of the concrete input argument (as long as it satisfies the LoadPostArgs type). After that, the TypeScript compiler can further infer the return type based on the inferred T. The key is that everything happens inside the context of a function call.
Although tRPC gives the illusion of a simple function call when invoking a remote API, the situation is very different. During server-side router registration, the procedure's input type is determined once from the zod schema, and there's no way to define a "generic" router that you can instantiate with concrete types on the client side.
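We can see the collapse concretely by applying LoadPostResult to the single input type that the router procedure sees:
// Inside the router, `input` is typed from the zod schema as one widened
// type - the resolver is no longer generic per call.
type RouterInput = { id: number; withAuthor?: boolean };

// `RouterInput['withAuthor']` is `boolean | undefined`, which does not
// extend `true`, so the conditional resolves to the plain `Post` branch.
type RouterOutput = LoadPostResult<RouterInput>; // = Post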
To make such "dynamic" generic typing work, tRPC would need to hold "uninstantiated" generic function types internally and instantiate them in a different context. This requires a language feature called "Higher-Kinded Types", which TypeScript hasn't implemented yet. In fact, the feature request was created back in 2014, and we can soon celebrate its 10-year anniversary.
Allow classes to be parametric in other parametric classes #1213
This is a proposal for allowing generics as type parameters. It's currently possible to write specific examples of monads, but in order to write the interface that all monads satisfy, I propose writing
interface Monad<T<~>> {
  map<A, B>(f: (a: A) => B): T<A> => T<B>;
  lift<A>(a: A): T<A>;
  join<A>(tta: T<T<A>>): T<A>;
}
Similarly, it's possible to write specific examples of cartesian functors, but in order to write the interface that all cartesian functors satisfy, I propose writing
interface Cartesian<T<~>> {
  all<A>(a: Array<T<A>>): T<Array<A>>;
}
Parametric type parameters can take any number of arguments:
interface Foo<T<~,~>> {
  bar<A, B>(f: (a: A) => B): T<A, B>;
}
That is, when a type parameter is followed by a tilde and a natural arity, the type parameter should be allowed to be used as a generic type with the given arity in the rest of the declaration.
Just as is the case now, when implementing such an interface, the generic type parameters should be filled in:
class ArrayMonad<A> implements Monad<Array> {
  map<A, B>(f: (a:A) => B): Array<A> => Array<B> {
    return (arr: Array<A>) => arr.map(f);
  }
  lift<A>(a: A): Array<A> { return [a]; }
  join<A>(tta: Array<Array<A>>): Array<A> {
    return tta.reduce((prev, cur) => prev.concat(cur));
  }
}
In addition to directly allowing compositions of generic types in the arguments, I propose that typedefs also support defining generics in this way (see issue 308):
typedef Maybe<Array<~>> Composite<~> ;
class Foo implements Monad<Composite<~>> { ... }
The arities of the definition and the alias must match for the typedef to be valid.
Just like "Higher-Order Functions" are functions that return other functions, "Higher-Kinded Types" are types that create other types. It's probably one of the most obscure areas of typing and language design, but if you're interested, here are a few pointers to follow:
Encoding HKTs in TypeScript (Once Again), by Michael Arnaldi for Effect, Dec 19 '21
How does this limitation hurt us?
A careful reader might have already spotted the clue: the loadPost function's typing pattern is used extensively by Prisma ORM. It's where Prisma's best features come from: it doesn't just type things; it types them perfectly.
// post is typed `Post & { author: User }`
const post = await prisma.post.findFirst({
  where: { id: postId },
  include: { author: true },
});
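Under the hood, Prisma exposes this pattern through its generated payload types as well; for example (assuming a Prisma schema with matching Post and User models, so the client has been generated accordingly):
import { Prisma } from '@prisma/client';

// The query arguments drive the exact result type, just like `LoadPostResult`
type PostWithAuthor = Prisma.PostGetPayload<{ include: { author: true } }>;
// => Post plus a fully-typed `author` relation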
We're building a toolkit called ZenStack, which extends Prisma's schema and runtime to add access control capability to the awesome ORM. It also provides plugins to generate different styles of APIs from its schema (powered by the access-control-enabled Prisma), and tRPC is one of them. The generated routers allow you to call Prisma's CRUD methods via tRPC with identical signatures:
const post = await trpc.post.findFirst.query({
  where: { id: postId },
  include: { author: true },
});
However, the generic typing limitation prevents our users from enjoying Prisma's best at the tRPC API level.
The brute-force fix
When type inference hits its limit, we can always fall back to code generation. The key insight is that, although the tRPC router's typing is lossy, the behavior is correct at runtime: calling loadPost with { withAuthor: true } does return an author field in the response. Only the typing is imprecise, and we can fix that with some code generation that simply "corrects" the types on the client side.
To achieve that, we generate a createMyTRPCProxyClient helper that creates a tRPC client with the type fix applied. The idea looks like the following:
// the type of the `loadPost` function
export type LoadPostFn<T extends LoadPostArgs = LoadPostArgs> = (
  args: T
) => LoadPostResult<T>;

function createMyTRPCProxyClient(opts: CreateTRPCClientOptions<AppRouter>) {
  // create a regular trpc client
  const _trpc = createTRPCProxyClient<AppRouter>(opts);

  // cast it to fix typing of the `query` function of the `loadPost` API
  return _trpc as Omit<typeof _trpc, 'loadPost'> & {
    loadPost: {
      query: <T extends Parameters<LoadPostFn>[0]>(
        input: T
      ) => Promise<ReturnType<LoadPostFn<T>>>;
    };
  };
}
Now the client-side typing is all good. In ZenStack's case, the generated helper applies the same fix to every generated route, so the Prisma-style calls get precise types too:
const trpc = createMyTRPCProxyClient({ ... });

// post is typed as `Post & { author: User }`
const post = await trpc.post.findFirst.query({
  where: { id: postId },
  include: { author: true },
});
Type-inference vs. code generation
Type inference is light and fast. Your changes are reflected instantly in the IDE without running a code generation step. When possible, it should be the preferred approach. But when you hit its limit, don't shy away from falling back to code generation. For ZenStack, this fallback is especially natural because the tRPC routers already come from code generation - it doesn't hurt to generate a bit more.
ZenStack is our open-source TypeScript toolkit for building high-quality, scalable apps faster, smarter, and happier. It centralizes the data model, access policies, and validation rules in a single declarative schema on top of Prisma, well-suited for AI-enhanced development. Start integrating ZenStack with your existing stack now!