It's not only about whether I can have a body or not, but also about schema validation.
A querystring-based schema is harder to validate unless you use a URL-safe serialization, and even then it's awkward to describe with Swagger / OpenAPI (see the sketch below).
So, if I choose POST, which benefits will I lose, and does it even matter (perhaps for scaling up)?
I see two major scenarios here.
- Caching helps
- Caching is troublesome
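To make the trade-off concrete, here is a minimal sketch (the `/search` endpoint, the filter shape, and the base64url encoding are all hypothetical): packing a complex filter into the querystring needs a URL-safe serialization that OpenAPI can only describe as an opaque string, while a POST body can be described and validated as an ordinary JSON schema.

```typescript
interface SearchFilter {
  tags: string[];
  createdAfter?: string; // ISO date string
  limit: number;
}

const filter: SearchFilter = { tags: ["http", "rest"], limit: 20 };

// Option A: GET with the whole filter packed into one URL-safe query parameter.
// base64url keeps it URL-safe, but to Swagger / OpenAPI it is just an opaque string.
const encoded = Buffer.from(JSON.stringify(filter)).toString("base64url");
const getUrl = `https://api.example.com/search?filter=${encoded}`;

// Option B: POST with the filter as a JSON body, which OpenAPI can describe
// with a full JSON schema and validators can check field by field.
const postRequest = {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify(filter),
};

console.log(getUrl);
console.log(postRequest);
```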
Top comments (11)
POST is bad for user experience. For example:
- robots.txt (crawlers only issue GET requests, so POST-only endpoints can't be crawled)
- When you want to show a specific page of an AJAX table. If the AJAX table is POST-based, it will be a headache (see the sketch below).
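A tiny sketch of that headache (the URL and table state are made up): with GET, the table state lives in the URL, so a specific page can be bookmarked, shared, and reloaded; with POST, the state sits in the request body and the URL tells you nothing.

```typescript
// Hypothetical AJAX table showing page 3, sorted by name, filtered by "widgets".
const params = new URLSearchParams({ page: "3", sort: "name", q: "widgets" });

// GET: the link itself reproduces the exact view (bookmarkable, shareable, back-button friendly).
const shareableUrl = `https://example.com/items?${params.toString()}`;

// POST: the same state travels in the body, so reloading or sharing the URL loses it.
const postBody = JSON.stringify({ page: 3, sort: "name", q: "widgets" });

console.log(shareableUrl); // https://example.com/items?page=3&sort=name&q=widgets
console.log(postBody);
```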
I don't really get it, but thanks for trying to explain.
Anyway, the URL shown in the browser's address bar doesn't really have to correlate with the API request method, does it?
One big thing you'll be losing is en.m.wikipedia.org/wiki/Principle_.... Ignoring that principle causes far more problems in software development than is generally appreciated.
GET requests are cacheable on the server side, which helps reduce load. GET should also be used only for read actions, not for manipulating data or state.
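A minimal sketch of that (plain Node, hypothetical `/articles` endpoint): marking a GET response as cacheable with a `Cache-Control` header so browsers and shared caches such as reverse proxies can reuse it.

```typescript
import { createServer } from "node:http";

const server = createServer((req, res) => {
  if (req.method === "GET" && req.url === "/articles") {
    // Reusable for 60 seconds by the browser and by shared caches (reverse proxies, CDNs).
    res.writeHead(200, {
      "content-type": "application/json",
      "cache-control": "public, max-age=60",
    });
    res.end(JSON.stringify([{ id: 1, title: "GET vs POST" }]));
    return;
  }
  // Writes (POST/PUT/DELETE) change state, so their responses are not cached here.
  res.writeHead(405, { allow: "GET" });
  res.end();
});

server.listen(3000);
```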
From what I have found,
Also, regarding caching: what should I use for reading dynamic data (which shouldn't be cached anyway)?
Still GET. RFC 7231 documents GET as the method for non-manipulating requests. The URL may be dynamic, but the response should still be the same as long as no manipulating request happens in between. And GET can also be cached on the server side or on a reverse proxy.
And in general, a request body cannot be cached.
How would the caching engine know?
Well, nothing is more complicated than cache invalidation...
But in short: you have multiple cache-header possibilities on the client side. How Varnish and other reverse proxies work in detail, I don't know exactly, but this seems like a good explanation: developer.mozilla.org/en-US/docs/W...
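A rough sketch of why that is (this is not how Varnish or any particular proxy is implemented): an HTTP cache builds its key from the request method and URL, possibly plus headers named by `Vary`. The request body never takes part in the key, which is why GET responses cache well and POST responses generally don't.

```typescript
interface IncomingRequestInfo {
  method: string;
  url: string;
  headers: Record<string, string>;
}

// Build a cache key from method + URL (+ any Vary'd headers); return null for uncacheable methods.
function cacheKey(req: IncomingRequestInfo, varyHeaders: string[] = []): string | null {
  if (req.method !== "GET" && req.method !== "HEAD") return null; // only safe, read-only methods
  const varied = varyHeaders
    .map((name) => `${name.toLowerCase()}=${req.headers[name.toLowerCase()] ?? ""}`)
    .join("&");
  return `${req.method} ${req.url}${varied ? ` [${varied}]` : ""}`;
}

console.log(cacheKey({ method: "GET", url: "/search?q=http", headers: {} }));
// -> "GET /search?q=http"
console.log(cacheKey({ method: "POST", url: "/search", headers: {} }));
// -> null (the body would be invisible to the cache key anyway)
```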
About "URL too long": I've finally seen it happen, with a DELETE request.
431 Request Header Fields Too Large