I bet everyone here has heard of, used, and loved GraphQL over REST APIs. I also know the reasons why GraphQL is better than REST.
BUT
Every...
I think the biggest con of GraphQL is the loss of network caching. You can't use decades-old HTTP caching and the various proxies built on it, though depending on the case that might not be a problem. This is a great explanation: phil.tech/api/2017/01/26/graphql-v...
I don't know if I agree with this. You can have a caching setup similar to a REST API's as long as you, say, hash the GraphQL query and hit the cache server-side when the hashes match.
Most clients aren't sending a variable number of different graphql queries anyway. Most have a fixed set of queries that get sent from different pages.
It may not be as straightforward as REST caching, but it's not very complicated either.
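The hashing idea above can be sketched roughly like this; a plain `Map` stands in for a real cache like Redis, and all the names are illustrative:

```typescript
// Derive a stable cache key from the query text plus its variables,
// so identical GraphQL POSTs can hit a server-side response cache.
import { createHash } from "crypto";

function cacheKey(query: string, variables: Record<string, unknown> = {}): string {
  // Normalize whitespace so formatting differences don't split the cache.
  const normalized = query.replace(/\s+/g, " ").trim();
  return createHash("sha256")
    .update(normalized)
    .update(JSON.stringify(variables))
    .digest("hex");
}

const responseCache = new Map<string, unknown>();

function cachedExecute(
  query: string,
  variables: Record<string, unknown>,
  execute: () => unknown
): unknown {
  const key = cacheKey(query, variables);
  if (responseCache.has(key)) return responseCache.get(key);
  const result = execute(); // only runs on a cache miss
  responseCache.set(key, result);
  return result;
}
```

Since most clients send a fixed set of queries, the key space stays small in practice.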
I think we're talking about two different types of cache. You can definitely do server side caching with either. After all if you're on the server you own the data so you can cache it however you want.
I meant network servers. Because the HTTP spec defines GET as safe and cacheable, network servers (proxies, CDNs, edge caching services) cache those responses. That can't easily happen with GraphQL by default, because all queries (even read queries) are transmitted using POST:
(taken from phil.tech/api/2017/01/26/graphql-v...)
It's not the end of the world, but it might mean that you have to get creative with your caching, given that you're suddenly losing the advantage of proxies.
There are standards popping up, but on one side you have a decades-old caching system that works (and is widely supported by clients, networks, and servers), and on the other side you have to control both the client and the server if you want caching done right.
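One of those emerging workarounds is persisted queries: the client sends only a hash of a pre-registered query, and because that request can go over GET, ordinary HTTP caches and CDNs can handle it. A rough sketch, with an in-memory registry and a URL shape loosely modeled on Apollo's persisted-query extension (treat the exact format as an assumption):

```typescript
import { createHash } from "crypto";

const persisted = new Map<string, string>(); // hash -> full query text

// At build/deploy time, known queries get registered on the server.
function register(query: string): string {
  const hash = createHash("sha256").update(query).digest("hex");
  persisted.set(hash, query);
  return hash;
}

// The client sends only the hash, in a cacheable GET request.
function buildGetUrl(endpoint: string, hash: string, variables: object): string {
  const params = new URLSearchParams({
    extensions: JSON.stringify({ persistedQuery: { version: 1, sha256Hash: hash } }),
    variables: JSON.stringify(variables),
  });
  return `${endpoint}?${params}`;
}

// The server looks the full query up by hash before executing it.
function resolveQuery(hash: string): string | undefined {
  return persisted.get(hash);
}
```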
The author was asking about the cons, and this, I think, is a con :)
Ah okay, I see what you’re saying. I was getting the network layer and application layer mixed up.
Though, while that’s true for now, I don’t see why tools like varnish couldn’t be extended to cache GraphQL responses at the network layer. If the adoption of GraphQL keeps increasing, we’ll eventually see solutions.
Just to add a bit to the discussion, I would encourage anyone interested in this thread to listen to this talk from GraphQL Summit on HTTP and caching in general with GraphQL. It was given by a senior platform engineer at GitHub. He goes pretty in depth comparing caching in GraphQL vs in a traditional REST API (Including HTTP caching). I think he does a good job of explaining the pros and cons with both patterns. youtube.com/watch?v=CV3puKM_G14
Thanks Bryan, very interesting!
thanks for the info
I know from React builds using Apollo as the connector that the per-client caching is pretty solid, and I haven't seen much in terms of drawbacks. Where I wanted to cache the actual responses to serve to multiple individual connections, abstracting the backend calls with my own cache layer and serving/invalidating through Redis wasn't any more complex than it has been with REST APIs over the years.
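A minimal sketch of that cache-and-invalidate approach, with a `Map` standing in for Redis; the key scheme and TTLs are illustrative assumptions:

```typescript
type Entry = { value: unknown; expiresAt: number };

class ResponseCache {
  private store = new Map<string, Entry>();

  set(key: string, value: unknown, ttlMs: number, now = Date.now()): void {
    this.store.set(key, { value, expiresAt: now + ttlMs });
  }

  get(key: string, now = Date.now()): unknown | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (entry.expiresAt <= now) {
      this.store.delete(key); // lazy expiry on read
      return undefined;
    }
    return entry.value;
  }

  // After a mutation, drop everything it touched, e.g. all "user:*" entries.
  invalidatePrefix(prefix: string): void {
    for (const key of this.store.keys()) {
      if (key.startsWith(prefix)) this.store.delete(key);
    }
  }
}
```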
Besides the drawbacks already mentioned, one that I see is that instead of relying purely on JSON they invented a custom DSL (wrapped inside JSON), which comes with a custom parser that adds a substantial amount of JS to your code. In our case we went from a ~380k bundle to a 600k bundle (before gzip).
Why do you have a graphql parser on your frontend?
Practically all clients (e.g., Apollo, urql) come with query evaluation, which requires a parser. It's not the schema parser, but it's still part of the graphql standard lib.
So if your client relies on the graphql package you have a parser in there, too.
I just took a look, and neither urql, apollo-client, react-apollo, nor their dependencies list the graphql package as a dependency.
Unless I just missed it, you shouldn’t have a graphql parser client side
Sorry, either you don't know these libs or you just like to trash talk.
Still, a parser is needed, as all of these libs perform validation up front and provide additional capabilities such as caching. These could be added without extra parsing, but that would be more cumbersome for the dev, making the abstraction useless.
Jeezus man chill out. I just checked their deps on npmjs.org.
Didn’t know to check peer deps.
Nor did I know that apollo shipped their own.
You can do without. The ClojureScript client just sends a string over the wire and gets a Clojure map back.
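That "just send a string" approach needs no client-side parser at all. A sketch of what such a minimal client amounts to (the endpoint is hypothetical):

```typescript
interface GraphQLRequest {
  method: "POST";
  headers: Record<string, string>;
  body: string;
}

// No parsing, no AST: the query is an opaque string in a JSON body.
function buildRequest(
  query: string,
  variables: Record<string, unknown> = {}
): GraphQLRequest {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables }),
  };
}

// Usage with fetch (not executed here):
// const res = await fetch("/graphql", buildRequest("{ viewer { login } }"));
// const { data, errors } = await res.json();
```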
That's what I was thinking, but I guess Apollo needs some info on the query for its caching solution.
That makes sense. The ClojureScript library is more low-level. It might be nice to have something similar to Apollo, but I quite like the simplicity of just binding the results of a query to some data in the db. It's also easy to combine queries and subscriptions that way.
The big draw of Apollo for me honestly isn't so much the caching, although that's a big plus, but the fact that GraphQL-codegen can generate fully typed Apollo hooks for each GraphQL query or mutation I write.
It’s magical.
ClojureScript isn't typed, so that won't work :P. Although you could build something similar using spec, and even get generative testing out of the box for components created that way.
OMG, that 600k is huge.
I've worked with GraphQL quite a bit in both personal and professional settings. In my experience working with several teams in both new and existing GraphQL APIs, the biggest challenge has been schema design and management.
Maintaining a large schema can be really hard, especially when working across multiple teams with varying levels of GraphQL experience. With REST, each endpoint is mostly isolated, which makes it easy to change an endpoint or move it to a new API version. In GraphQL, the schema is supposed to stay version-less by evolving over time. This means you need to think carefully about what you include in your GraphQL types and how you structure your type relationships. If you create complex relationships that your clients begin to rely on, it can really hurt you later on.
The advice typically is to make small incremental changes to your schema as needed, but for many teams inexperienced with GraphQL, they don't take the time to focus on really understanding how their data graph should be structured.
Not to say that you can't mess up the design of a REST API too, but I think in a lot of cases a poorly designed GraphQL API is harder to fix because of the lack of versioning.
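One concrete tool for that version-less evolution is the spec's `@deprecated` directive: the old field stays in place, marked deprecated, while clients migrate to its replacement. An illustrative sketch (field names are made up):

```graphql
type User {
  name: String @deprecated(reason: "Split into firstName/lastName.")
  firstName: String
  lastName: String
}
```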
Yes, whereas with REST you risk small differences between endpoints causing errors. We work API-first with REST, but there are at least 3 ways of doing pagination, and that's with about 50 developers.
Absolutely true, although I would mention that if you are running a federated schema for multiple services managed by multiple teams, the same thing can happen in GraphQL unless you enforce a strict pagination standard like Relay's.
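For reference, the Relay-style connection shape mentioned above looks roughly like this (the `User` type names are illustrative; `PageInfo` is the standard shape):

```graphql
type Query {
  users(first: Int, after: String): UserConnection
}

type UserConnection {
  edges: [UserEdge]
  pageInfo: PageInfo!
}

type UserEdge {
  node: User
  cursor: String!
}

type PageInfo {
  hasNextPage: Boolean!
  hasPreviousPage: Boolean!
  startCursor: String
  endCursor: String
}
```

Enforcing this one shape everywhere is what removes the "3 ways of doing pagination" problem.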
BTW, just wanted to say thanks for your great talk on using Kafka to back subscriptions at Summit. We've been looking into doing something similar to back our first subscription service so it was great to hear some of your insights :).
Interesting take. Makes complete sense.
Queries can be more expensive to run: between the multiple layers of resolvers and schema validation, nested queries in particular carry overhead. Schema validation is a big plus, don't get me wrong, but it does come with a trade-off.
thanks for the info
Queries might be more expensive, but with a data loader, one GraphQL query can become one query to the database, where REST might have needed a hundred simpler calls.
it’s just beautiful how data loaders work
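A stripped-down sketch of what a data loader does: resolvers register the keys they need, the loader dedupes them, and one batched lookup replaces N separate ones. (A real loader like graphql/dataloader dispatches automatically on the next tick and returns promises; here dispatch is explicit to keep the sketch synchronous.)

```typescript
class TinyLoader<K, V> {
  private pending: K[] = [];

  constructor(private batchFn: (keys: K[]) => Map<K, V>) {}

  // Each resolver calls load() with the key it wants; duplicates collapse.
  load(key: K): void {
    if (!this.pending.includes(key)) this.pending.push(key);
  }

  // One batched call resolves everything collected so far.
  dispatch(): Map<K, V> {
    const results = this.batchFn(this.pending);
    this.pending = [];
    return results;
  }
}
```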
I would say, as you mentioned, just the knowledge and learning curve, which is no worse than with any new tooling.
The one drawback I've had is with the query syntax itself on the consumption side: getting to grips with the query/mutation/fragment syntax is a bit of a steep learning curve, but easily surmounted. :)
Where REST does win is often with simple GET calls for small pieces of data, where mapping the fields into a schema would take twice as long. In those cases I've abstracted the calls into a simple GET endpoint alongside my app build to save the bloat on the frontend.
Very few other cons come to mind.
GraphQL isn't a framework, though; it's a protocol. It's another protocol layer on top of HTTP, sure, but it doesn't force you into any particular way of building an app like a framework does.
What makes you think it's a leaky abstraction? If anything, REST seems like a much leakier abstraction over any dataset, since modifying a REST API and its clients is much more work than correcting a single GraphQL resolver.
Another thing to keep in mind, in addition to everything already mentioned, is that instrumentation requires a little extra effort. You normally get response-time metrics almost for free with the majority of frameworks, but you're hardly going to get resolver-level granularity with GraphQL unless you build it yourself.
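A sketch of what hand-rolling that looks like: wrap each resolver so it records its own duration. The field names and the in-memory metrics sink are assumptions; in practice you'd forward to your metrics backend.

```typescript
type Resolver = (...args: unknown[]) => unknown;

const timings: Record<string, number> = {};

// Wrap a resolver so every call records how long it took.
function timed(name: string, resolver: Resolver): Resolver {
  return (...args) => {
    const start = Date.now();
    try {
      return resolver(...args);
    } finally {
      timings[name] = Date.now() - start; // ship this to your metrics backend
    }
  };
}

// Usage: resolvers.Query.user = timed("Query.user", resolvers.Query.user);
```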
Your client needs knowledge and an implementation with REST as well, since the structure of the data is static and defined server-side.
And if you change a rest endpoint, you’d likewise have to change all clients, since they have no say in the response’s structure.
The abstraction GraphQL provides allows you to make API changes server-side without affecting the data structure the clients expect.
And yeah there’s no heat here, sorry if it came off like that 😅
Hammering out these small differences can provide some good insights is all.
"And if you change a rest endpoint, you’d likewise have to change all clients, since they have no say in the response’s structure." What? No, you change the version.
I partially agree with most of what you said, but please explain how you end up with fewer static types in your code? Schema validation comes built in and works both ways (you can't receive an invalid parameter, nor can you send one).
One thing that hasn't been called out yet is that the spec itself is pretty open. In practice that's not a big problem, because most clients and servers take inspiration from how Apollo did it. For example, some servers and some clients allow queries and mutations over a WebSocket, but some don't.
I love it as well. A few downsides: consumers have to learn a new way of requesting data, over-selection is something you have to handle, and error handling is strange and everyone does it differently. I think the advantages outweigh these issues, but not everyone agrees.
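On the error-handling point: the spec does define a response shape where partial data and an `errors` array coexist, but servers vary a lot in how they use it (the `extensions` field in particular is free-form, so the `code` value below is just one common convention):

```json
{
  "data": { "user": null },
  "errors": [
    {
      "message": "User not found",
      "path": ["user"],
      "extensions": { "code": "NOT_FOUND" }
    }
  ]
}
```

A 200 status with errors buried in the body is exactly what trips up people coming from REST.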
Caching
🙌
🔥
How about using REST for write operations, and GraphQL for reads?
Inability to fetch all fields with a wildcard:
github.com/graphql/graphql-spec/is...