TL;DR: Check this repository for...
Bypassing this as a hacker takes about 5 minutes. The 200 IQ hacker presses F12 then CTRL+SHIFT+F and searches for "verifySignature". Then all you need is to "return true" in the frontend javascript and all of this work serves no purpose, other than increasing the performance and complexity overhead of the entire API (RSA is costly, especially in JS). In the meantime, your API (where the data actually resides) hasn't had any improvement to security. I highly discourage people from implementing something like this.
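For illustration, this is roughly what that patch looks like - a hypothetical sketch, assuming the app funnels verification through one client-side function (the `verifySignature` name comes from my own search example above; everything else is made up):

```javascript
// Hypothetical client-side check, as patched by the attacker in devtools:
async function verifySignature(responseBody, signature) {
  return true; // original RSA verification deleted - every response now "passes"
}
```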
You're not sure how source files in the browser work, are you? Also, please read the article; improvements to the API were made.
Apparently you're the one that does not understand. When multiple people are saying the same thing which is something you disagree with, maaaybe you should be the one reconsidering it.
I did exactly what Ariel mentioned. I will point you to something interesting to read: developer.chrome.com/blog/new-in-d... - maybe send this to your pentesters as well, but it sounds like we're doing their work.
I hope you can take feedback instead of being angry about someone pointing out your obvious mistakes, especially when you make them with such sarcastic arrogance.
First, obviously multiple people saying something wrong doesn't make it right, so volume alone doesn't automatically make feedback valuable. I am interested in the content of the feedback, and I am reading all of it, answering questions and pointing out flaws in objections, when applicable.
So far the majority of negative feedback came from people who proved in their objections that they didn't understand what the article says. A majority of people who did understand provided valuable feedback, such as splitting the admin bits into a different app, fuzzing the API, etc, and agreed with the rationale that led to this implementation.
When professional pentesters say a vulnerability is critical, you better listen. As I said in the article, leave security to the experts.
About your interesting read, thank you for pointing out the all-too-familiar devtools. However, in case you haven't tried before, changing the readable React source code does not automatically compile into a new working file in the browser. The browser is not webpack. You'd have to change the compiled version. Obviously you're itching to reply "but the hackers can do that somehow". Yes, they probably can, but this is not trivial. The hired pentesters are much smarter than you or me, they've been doing this for ages. If they didn't break it in two days, it is sufficiently secured for now.
I don't see how this point matters to the discussion. Browser overrides will modify the source before it is executed. As mentioned in the other thread, I've done it using devtools and I can still bypass your protection effortlessly.
Make assumptions about your own intelligence.
You seem to fail to understand that the only thing that got you "secure for now" was securing the critical backend flaw, not the RSA obfuscation you've done here.
I somehow need to prove to you that I understood your article (even though it's the author's responsibility to make it clear), so let me summarize it and then point out why this is not what you think it is:
`isAdmin: false` - the flag that informs the client whether it should show "Admin controls" - could then be changed to `isAdmin: true` by an attacker using a man-in-the-middle tool. The attacker used Burp Suite for this. There are 2 things we can take from this:
The point other people and I are making here is that the client is under the user's control. The user can still set the flag `isAdmin` to true right before the code executes, and that has been proved by using a simple code override in Chrome devtools. This does not mean it makes your application more or less secure - but it proves the effort you took to learn and implement response signatures might have been invested into something else. What effectively made your application secure was fixing the server flaw. I don't know how I can be clearer.
So far dozens of people understood the article very well and provided useful feedback. It is you and two other guys who are bashing your heads against a strawman. The article seems to be clear enough.
The critical vulnerability was the hacker's ability to manipulate the UI as if he were an admin, which, combined with his ability to spoof requests, allowed him to use a form meant for creating regular users to create a user that was itself an admin. This new user had true admin power. Fixing the API was not what made it secure; fixing the API was merely damage control. With the admin controls, finding other vulnerabilities is almost intuitive.
This is what they marked as a critical issue. People are eager to overestimate their ability to protect endpoints against unforeseen scenarios.
"and that has been proved by using a simple code override in Chrome devtools"
By whom?
Fixing the API would have prevented the attack completely. I don't know how the pentesters brainwashed you into thinking it was the other way around, that protecting your Front-end is what actually fixed the security flaw.
I challenge you to host a similar system with the same API flaw but with the signature obfuscation in place and let me break in.
Because "fixing an endpoint" is not the same as "making the API unbreachable". It is even weird that you can't connect these two dots. The hackers would simply find another unexpected way in in minutes.
Clone the repo and do it.
Host it, make it "unbreachable" using your method, and I will post here whatever you made unbreachable by thinking your Front-end is secure.
Make an admin route and I can screenshot it. I'm determined to prove it to you if you give me the means.
I cloned the repo, ran a build locally and it is easily bypassable. There are no dots to connect.
Clone the repo, the implementation is already there and working. It even comes with a sample pair of keys, so all you need to do is install the dependencies and run.
Then prove you bypassed it. You claimed to have posted a screenshot, but I have re-read all my notifications and there are a total of zero screenshots of you breaking in. The time it took you to lie about posting the screenshot was enough for you to take an actual screenshot.
You're not disabling the signature, kid (what you said you could trivially do).
You did not prevent the signature verification. You have to disable the verification and then modify the network response to accurately represent what we're discussing.
What you did simply wouldn't work on a function that deals with all requests, your hardcoded data would instantly break the application.
But that's my fault, I set the bar too low. LoL 😂
It still proves my point, which you fail to see.
I see no evidence of what you claim in this screenshot. "John Doe" is the correct data. How does this prove the validation was bypassed?
But it was valuable. Try changing it to "false". If this works, it will probably show the error message.
Working or not (it probably won't, but it could; either way it would be nice to know), I expect you learned that someone with technical knowledge producing a mere attempt after three hours of intently messing around with it (your hurt ego is clearly a strong motivation) is comfortably outside the range of "trivial". Which ultimately proved my point: it is sufficiently secured against the profile of the potential attackers: employees with no tech skills but incentives to fiddle around.
The whole point is you don't need to change the server response. And even if you did, returning `true` from the validation function would work. Again, this took me 5 minutes - it's your terribly inefficient attitude that made this take 3 hours to understand.
If you're assuming your users are not capable of attacking you, why even bother then? It appears to me you have wasted your time.
The whole point is that you do. As I explained, your other attempt would simply break everything else.
Just checking the times on the notifications from your messages, we can clock you out at four hours (at least, since you've been interacting for several days at this point). That with full guidance, since I was here correcting every failed attempt you made, and disregarding the other measures in place. Thanks for taking the time to provide this very useful benchmark and proof of concept.
And I wasn't inefficient at all. I was constantly engaged in our conversation since ~6 in the morning, answering everything you said. If it took you four hours to do this with my constant guidance, then it does what it was designed to do: to protect the UI controls.
They have motivation to try. I'd say the only person wasting my time was you, but you also provided a valuable benchmark for me, so I thank you for that.
How can you be so presumptuous? I really should have let you stay in ignorance and denial but it goes against my principles.
It was a step by step process because you failed to extrapolate my ideas to the full solution. It's partially on me for not explaining them well enough.
I see. Your principles involve writing an article misrepresenting what this article claims, trying to make fun of me for the crime of........ shuffles cards..... asking for feedback.
You're obviously heavily invested in this. No one likes being disproven, especially with something they're proud of making. But please reconsider your attitude toward someone who is trying to help.
You got humbled by technology and facts. I think my article served its purpose.
Your article proved this measure accomplishes what it was designed to do.
I'm even tired of repeating the phrase "with enough time and effort". And voilà. It took an ego-hurt engineer half a dozen hours to do something that could work, with guidance and disregarding the other measures in place. It is sufficiently secured against our employees.
Not if they see my article 😏 don't tell them.
I am skeptical they could even if they did read it. You made lots of jumps based on knowledge assumptions (things you don't know if other people know). That's probably the whole reason you naively said it was trivial, several hours before actually managing to do it.
As someone else pointed out, this is just security through obscurity at this point.
Putting a padlock on your locker is not obscurity just because a skilled attacker can pick it open if given enough time.
As I responded to that person, obscurity would be changing the name of the "isAdmin" property to "dhASDuhVNAS132", trying to conceal what it does. So implementing something like Fractal as a security measure would be obscurity.
But OK. Thank you.
Point is you already have a padlock. What you did was to paint "TSA Certified" on it hoping nobody would attempt to pick it.
"Browser overrides will modify the source before it is executed"
And modifying the source won't compile a new working version. Devtools is not webpack. You'd have to change the compiled version. If you can't see the difference, maybe you're wasting both our time.
And you fail to understand that fixing the backend was merely damage control. With the admin UI, the hacker would quickly find some other unexpected way in. You clearly overestimate your ability to know what you don't know.
"Never discuss with an ignorant. They will get the discussion to their level and beat you with experience."
I'm definitely wasting my time trying to help you understand what is wrong with your thought process. I felt obligated to comment, as it is articles like this that hurt security: people will naively think this will protect them from anything, and it won't.
Ah, yes, one of those quotes you can turn around 180° and they still work perfectly. What will your next argument be? The one about playing chess with a pigeon? It is especially ironic, since you're the one leaving before providing evidence of your "trivial break-in". You probably tried and saw it doesn't work as you expected, right? It is likely that with enough time you can figure out a way, but this "enough time" is time I am securing the backend, so by the time you find a vulnerability, it could already have been patched.
And, finally, people will only be hurt by this article if they, as you, are unwilling to read. There is a huge disclaimer before the article starts, and I discuss my skepticism of the solution itself in the conclusion.
Good write-up of a real world security issue - thank you!
I think it's worth saying that BurpSuite cannot silently intercept TLS secured web traffic (ie: anything using https), a default browser will issue a security alert unless the user has installed a special certificate. This means that in the real world, users on default browsers are very unlikely to see any problems with your original app.
As the attacker was able to learn about your API (which they will always have the ability to do using their own tools) they could probe that to find the actual weaknesses. This is something your own in-house security testing can do in CI of course - testing both a 'happy path' and all permutations & boundary conditions for parameters (can be generated by tooling, as used by the pen-tester - no need to manually work all these out!), plus if you haven't fuzzed your public APIs, you should ;-)
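To make the fuzzing idea concrete, here is a minimal sketch of the kind of CI check I mean - the endpoint, field names and pass/fail rules are placeholder assumptions, and real tooling generates far better cases:

```javascript
// Throw boundary values at an endpoint; fail CI on crashes (5xx) or on
// requests that should have been rejected but were accepted (2xx).
const boundaryValues = [null, '', 0, -1, 2 ** 31, 'true', '<script>', '\u0000'];

for (const isAdmin of boundaryValues) {
  const res = await fetch('https://staging.example.test/api/users', {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ name: 'fuzz-user', isAdmin }),
  });
  if (res.status >= 500 || res.ok) {
    throw new Error(`API mishandled isAdmin=${JSON.stringify(isAdmin)}: got ${res.status}`);
  }
}
```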
I'm interested to know why you thought it so important to prevent the display of 'admin' controls in the UI through response tampering? The resources and logic for them is already present on the user's system and thus discoverable by interested / malicious parties even if they cannot be activated. The server side will no longer honour invalid requests if they are issued, and unless the user has modified their browser (as above), they will not be subject to any MITM tampering that could display the controls. It seems you may have spent lots of effort extrapolating new risk from the pen test report that didn't mention UI issues?
I thank you for your time reading it and leaving a very informative response.
I'm not sure how the hacker set up everything on his side, but he did mention configuring the certificate on his tool.
I'll bring up "fuzzing" to the rest of the team on our next sprint planning. Thanks!
When the team debated the report, we came to the conclusion that the exposure of the UI controls could turn the whole application into a playground for a malicious agent to quickly and easily find ways to wreak havoc. It gave visual and interactive cues about how the application works, without having to look at a single line of code.
This is why the attacker managed to break things in a matter of minutes. After that implementation, he fiddled with the system for a few days and came up with nothing new.
But I think the major reason is that we didn't want to worry about what could go wrong if the user could change what the API is saying to the application. As you said, we extrapolated potential risks out of fear of the unknown.
Still, unless your application is doing something that's on the level of national security, it seems like a cost benefit analysis should show that obfuscating the UI in order to mitigate discovery is just not worth it.
In my opinion, the time would be better spent on even more thorough investigation of the backend to make sure that it does not matter what an attacker could do on your front end.
The application is used to calculate a yearly bonus paid to company employees based on their performance so there is motivation for a potential attacker to mess around trying to get a personal advantage.
Also, the information available for admins in the system is very sensitive. We can't risk users figuring out ways of seeing things they shouldn't.
We analyzed the impact this had on performance and we concluded it had no impact, if that is what you mean by cost-benefit.
About "thorough investigation of the backend", yes, but this is "CI&CD" stuff, constant iteration and improvement, we don't know yet what we don't know, and we can't risk it.
For example, one of the points in the report, which I didn't mention in the article, is that the attacker managed to mess around with our filter feature and figured out a way to override the backend's standard filters that limit visibility of the data by access level. He used fake admin access in the browser and managed to see some restricted data because of his ability to change the request in ways we never designed the application to handle.
It's always "obvious" after a hacker explains how he broke in, but you know you can't be sure that a creative and motivated attacker won't find these bugs and break your app faster than you can find them and patch them. This uncertainty made us conclude that we should play it on the safe side and block this vector of attack first and fast, and then investigate the API. It's not either-or.
No, I meant a cost benefit analysis of the amount of time it would take to address this issue on the front end compared to just hardening your backend.
I am also referring to the maintenance cost of supporting the added complexity on the front end.
My philosophy on this is that a motivated attacker will always find a way to extract info from your front end, so it's a lost cause.
I also echo the other comments about how the attack vector mentioned here is probably not a realistic one to exploit on a VICTIM'S machine
Well... it took 2 days to address this on the front-end, mostly because I had never done it before. I could probably implement this in 15 minutes now, with the repository I created to "store" this knowledge. Recently I've found the repository "jose js", that would've saved me even more time.
Securing an API is not a "task", it is a constant, never-ending process. "Hardening" the backend takes years and it is not enough alone, since all it takes is one gap in the armor.
About maintenance cost increase; we have a function that handles all HTTP requests, and added the verification step to that function. It doesn't impact anything else, really. The whole application is working as expected, as if nothing changed. This is not a breaking change and caused no shockwaves.
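For readers following along, the shape of that single wrapper is roughly this - a sketch using the browser's Web Crypto API, where the header name, key constant and formats are my assumptions, not the article's actual code:

```javascript
const PUBLIC_KEY_B64 = '...'; // base64-encoded SPKI public key shipped with the app

async function importPublicKey() {
  const der = Uint8Array.from(atob(PUBLIC_KEY_B64), c => c.charCodeAt(0));
  return crypto.subtle.importKey(
    'spki', der,
    { name: 'RSASSA-PKCS1-v1_5', hash: 'SHA-256' },
    false, ['verify'],
  );
}

// The one function every HTTP request goes through.
async function signedFetch(url, options) {
  const res = await fetch(url, options);
  const body = await res.text();
  const sigHeader = res.headers.get('x-signature'); // assumed header name
  if (!sigHeader) throw new Error('Missing response signature');
  const sig = Uint8Array.from(atob(sigHeader), c => c.charCodeAt(0));
  const valid = await crypto.subtle.verify(
    'RSASSA-PKCS1-v1_5', await importPublicKey(), sig, new TextEncoder().encode(body),
  );
  if (!valid) throw new Error('Response failed signature verification');
  return JSON.parse(body);
}
```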
And I understand your philosophy, however, it wouldn't work in our case. The application deals with money and very sensitive information. That's plenty of motivation for even a regular company employee to become a potential attacker. We can't afford to allow it to be easy. The attacker will have to be VERY motivated, because even specialists failed to break in after this was implemented.
This doesn't mean they can't find another way, but as they said, it is "sufficiently secured for now", and this calmed down the people with the money.
Yes, because there is no "user victim". The "victim" in this case would be the company. An employee trying to escalate his access to affect his bonus, for example.
Ah ok - raising the bar above the trivial to discover threshold 😁
You pretty much took the words out of my mouth. It seems like all that was really necessary was to fix the APIs that were improperly secured.
The problem is that "fix the APIs that were improperly secured" doesn't mean much. Sure, we fixed that endpoint and a couple of others after that, but we can't operate in damage-control mode. We don't know all the vulnerabilities that we don't know about, and this is why we called the ethical hackers in the first place.
They're the experts and pointed out that this was a common vector of attack and a critical issue that needed to be fixed, I am just the developer who was tasked with fixing it. They said that being able to easily explore and modify the UI leads to security breaches in minutes, because it is very easy to overlook use-cases that "should" never happen.
Now automated "fuzzing" seems to be a good thing to implement and continuously improve upon, but the issue was critical, now it is solved, and we can implement fuzzing without fear of an attacker breaking our application in minutes.
I fail to understand why you couldn't simply use TLS. If your API has a CA-signed public certificate, the client only needs to verify the domain name of the connection after the TLS handshake is complete.
Any information from that connection will be sent by the API.
The whole setup about the RSA keys reminds me about the stuff used for SAML protocol and even SAML implementations ultimately trust the TLS instead of the keys, which they still also have to use for historical reasons to be compatible with historical mishaps in the protocol design.
As I wrote, both the application and the API are already protected with certificates. The hacker exported the certificate from his browser and imported it into his tool. The API believed that the requests coming from his tool were from his browser, and his browser believed the responses coming from his tool were from the API. And he could change anything he wanted basically using a find-replace. I suggest you take a look at the Burp Suite, even though it is a paid tool.
Only using TLS/SSL is not enough to prevent manipulation of the data.
Only using TLS is exactly enough to prevent manipulation of the data - that's basically its whole purpose. :)
If you read carefully, either using the Burp browser or installing their CA into your existing browser is a requirement to make this kind of attack work: portswigger.net/burp/documentation... - at which point you've basically completely circumvented TLS and all its benefits.
You seem to be under the dangerous illusion client side code can't be tampered with - but this is simply not the case if you have a compromised (willingly or not) client.
Or to put it another way if a user or attacker can intercept your api traffic and modify it, surely the same attack vector can be used to intercept your client side code and modify it to remove any additional validation function you may add? Or the attacker can simply duplicate your client side code and remove any function that way - It's also a mistake to assume access-control-allow-origin would prevent this kind of thing - access control is only designed to protect the browser and relies on the browser to implement this to the specification (and if the client is compromised / malicious all bets are off) - it can even simply be disabled on many browsers through a simple toggle or registry edit in much the same way as a root CA can be installed. Again: as a basic rule any client side security feature can be disabled if the client is untrustworthy.
All this is to say: you should consider client side code already compromised; adding additional validation such as this is simply a pretty trivial non-standard security mechanism that duplicates the already sufficient security of TLS and provides no real additional security other than some easily bypassed obscurity.
Time and energies would be better spent on hardening your apis, fuzzing and code reviews. This is the painful fact but this is where it counts - and finding the time and budget to do this over the long term is where most teams and companies mess up. Of course quick wins and stupid mistakes like disabling mock / initialisation endpoints are always good to check but it's a mistake to assume a client side function will prevent an attacker from finding an unprotected api or a misconfigured server rule.
Adding server side protection to protect access to some browser code can be a good idea, but again it's a mistake to rely on this, as a determined hacker will simply attempt requests based on the logical structure of your apis endpoints (and completely randomising your api behaviour isn't really viable for most sensible teams or products!). If you have a create user route, even without any client side code calling it an attacker will likely guess it's location and format it will then likely get an error message to confirm it's found the right route and then attempt to post any and all data to it in a format consistent with your application.
Spend your time protecting api endpoints, especially the high value ones like creating accounts and key transactions as beyond the basic mistakes this will be where your most critical vulnerability is outside of some external factor.
I agree 100%. Whenever you design any protocols, you should never trust client for anything. If you want to pass some data through the client, you have to use e.g. HMAC-SHA256 and sign the data before it reaches the client and check the data after you receive it from the client. If you need to prevent replay attack, you have to include a nonce to the data covered by the HMAC signature and you have to keep track of already seen nonces.
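A minimal Node.js sketch of that pattern (key handling and nonce storage are simplified assumptions, and the names are made up):

```javascript
const crypto = require('node:crypto');

const SECRET = process.env.HMAC_SECRET; // never leaves the server
const seenNonces = new Set();           // use a persistent store in practice

// Server seals data before handing it to the client.
function sealForClient(payload) {
  const data = JSON.stringify({ ...payload, nonce: crypto.randomUUID() });
  const mac = crypto.createHmac('sha256', SECRET).update(data).digest('hex');
  return { data, mac };
}

// Server checks data when it comes back; rejects tampering and replays.
function acceptFromClient({ data, mac }) {
  const expected = crypto.createHmac('sha256', SECRET).update(data).digest();
  const given = Buffer.from(mac, 'hex');
  if (given.length !== expected.length || !crypto.timingSafeEqual(given, expected)) return null;
  const payload = JSON.parse(data);
  if (seenNonces.has(payload.nonce)) return null; // replay attempt
  seenNonces.add(payload.nonce);
  return payload;
}
```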
If you need to pass data from multiple trusted parties (e.g. trusted server operated by 3rd party) you can use public key encryption to reduce the amount of keys but that doesn't reduce the requirement to have the environment generating the message trusted.
If you generate the message in the untrusted client and sign or encrypt it in that client, that client can generate any message it wants, because clients cannot be trusted.
The client code must assume that it can trust the server, and it does it by verifying that the TLS connection is fully complete and the domain name is the expected one. In case of HTML5 this is implemented with the server distributing the source code (HTML+CSS+JavaScript) to the client using public CA signed certificates. The public CA signed certificate is not the only way to do this but it's the path of least resistance given the existing client software already installed on the client system. Avoiding CA signed certificates and using a self-signed certificate would improve security if you can pre-install the certificate as trusted on all client systems.
And the fact that attacker can see that some kind of admin user interfaces do exist doesn't matter because all the data and commands to actually use those admin interfaces is checked by trusted code running in trusted environment, the server.
The old saying says that if the attacker has physical access to your server, it isn't your server anymore. The same applies to the client hardware and that's why you never ever trust the client.
Some people keep asking for DRM and there are dishonest sellers selling you DRM "solutions" which pretend to make the client trustworthy. That's only smoke and mirrors and it depends on owner of client devices believing that DRM exist. You can use TPM chips and other implementation tricks to make clients harder to manipulate but you cannot fully prevent clients from being modified by the attacker.
Unfortunately, DRM cannot exist even in theory because it basically requires Alice being able to send a secret message to Bob without Eve being able to see or modify the message. And a fully functioning DRM would require that Bob and Eve are the same person! That's impossible for very simple reasons but DRM believers and sellers think otherwise.
"Only using TLS is exactly enough to prevent manipulation of the data [...] "installing their CA into your existing browser is a requirement to make this kind of attack work"
Only that, as explained in this article, it is not. It prevents data being manipulated by third parties eavesdropping on the communication but does NOT prevent the end user himself from manipulating the data. I think you're failing to see that the potential attackers in this case are otherwise legitimate users. The application deals with the employees' bonuses, so they have motivation to attack from the inside.
"at which point you've basically completely circumvented TLS"
Yes. Hopefully you can understand that your sentence literally means "TLS alone is not enough".
"other than some easily bypassed obscurity"
We hired professionals to "bypass it" and they said it was "sufficiently secured for now". And it's not like this was "obscurity", since we thoroughly explained the mechanism to them before their attempt.
"Time and energies would be better spent on hardening your apis"
Hardening is an iterative process of improvement, one that we never stopped and will never stop doing, but it is definitely not an "either-or" with closing other vectors of attack. All it takes is one gap in the armor, so closing gaping holes like the one described in this article is extremely cost effective. This was relatively quick to implement, and sufficiently closed this critical vector of attack for now.
Thank you for the extended reply :)
"It prevent's data being manipulated by third-parties eavesdropping the communication but does NOT prevent the end-user himself to manipulate the data"
True, but this is the fundamental nature of client-server systems. You are never going to be able to trust the client and nothing you could add will change this. Nothing can prevent the end-user himself from manipulating the data or your frontend code - they are the owner of their client system and can never be trusted (as you have discovered they can be the attacker). Any client side security you may try to add to circumvent this fundamental fact can simply be disabled because as a user, I can do anything on my system up to the limits of rolling my own compromised CA / browser / OS.
What makes you think adding an extra function to the source code you send to a compromised client will prevent that user from editing that exact same JS code to simply remove such a function? The only solution to such a problem would be to secure every computer you wish to consume your api with a secret key that is 100% isolated from the users of the system and could be used to decrypt your signed code before it was run on their computer. You would also have to prevent this secret from ever being read, as well as the decrypted code from being extracted after decryption. This is largely considered a pointless and impossible pursuit to undertake even in the cases where you have complete control of a system, or a proprietary protocol such as in a large corporation or for closed platforms like app-stores, bluray etc. To attempt this using open standards and uncompiled, unsigned JS code is simply not possible.
The best you can do is to go down the route of signing your JS, but this is basically called TLS, as it already protects the integrity of your source code. - "The principal motivations for HTTPS are authentication of the accessed website and protection of the privacy and integrity of the exchanged data while it is in transit. It protects against man-in-the-middle attacks"..." en.wikipedia.org/wiki/HTTPS
All your professional has done is remove this security the entire internet relies upon and manipulate some api calls by getting in the middle of what would otherwise be a secure channel. There is no logical protection against this kind of attack because he has compromised the client (which is basically why they can't class it as a MITMA -- you're not in the middle of a secure communication, you've replaced 1/2 of the system to make the whole thing insecure).
I don't see a reason why "you are never going to be able to trust the client" should be translated to "let the client-side application be easy to break since it is impossible to make it impossible to break".
I think that if our employees are having to develop their own browsers to break into our application, they deserve the extra bucks they'll hack into their bonuses lol
The fact that the same professionals were unable to remove this security afterwards calmed down the people upstairs. Any attack of this sort is non-trivial at this point.
"You are never going to be able to trust the client and nothing you could add will change this."
I totally agree. The point is that you don't trust the client but you check if the command that the client did send is allowed to be executed by the credentials used for the session that submitted the command.
If the attacker has taken control of the client system after the session has been initialized, there's nothing you can do about that. Adding public key encryption on top will not help.
However, a client system controlled by the user who has logged in with correct credentials is not a problem as long as you don't trust any logic executed on the client. And if you don't trust any logic on the client, you don't need to sign anything by the client.
The communication between the client and the server is protected by TLS, which gives secrecy and authenticity guarantees for the client (assuming no client certificates, as is typical). As a result, you provide service from the server to clients, and clients connect using a TLS connection and then pass data that is used to identify the session and the command. Then the trusted environment (server) verifies if the data is valid for the session (e.g. the session has not expired), and then the trusted environment (server) verifies if the given session is allowed to execute the requested command. None of this requires any trust in any logic on the client.
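In code, the check being described is small - an Express-style sketch, where the session lookup, route and capability names are placeholders:

```javascript
// Only the server decides what a session may do; nothing from the client is trusted.
function requireCapability(capability) {
  return (req, res, next) => {
    const session = getSession(req.cookies.sessionId); // assumed server-side lookup
    if (!session || session.expiresAt < Date.now()) return res.status(401).end();
    if (!session.capabilities.includes(capability)) return res.status(403).end();
    next();
  };
}

app.post('/users', requireCapability('user:create'), createUserHandler);
```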
"I think that if our employees are having to develop their own browsers to break into our application, they deserve the extra bucks they'll hack into their bonuses lol"
You shouldn't design or implement "security" which depends on lack of skills in the user base. Have you heard about GPT-4? That's only a start. And hopefully the business prefers to hire only dumb people, to allow the "security" to keep working.
If you want to prevent the employees from giving themselves extra bonuses, the only correct way to avoid the security vulnerability is to compute the actual action ("give bonus X to person Y") in a trusted environment only, namely on server. Then the only question is who is the current session owner and does that session have required capabilities to grant the bonus. No amount of client modifications can bypass that check.
If you do something else, you have to be brutally honest and say that there's no security, only security by obscurity – as in a key under the doormat, absolutely safe as long as nobody notices or guesses it. And make sure to communicate this to the decision makers, too. Sometimes that may be enough but it shouldn't be mixed with real security.
Public key encryption is designed for use case where you want to send messages over untrusted medium and do not want to handle connection specific encryption keys. It cannot fix the problem where the message sender (client logic) is untrusted. And signing or encrypting the message after it has been generated in untrusted environment will not make the message trustworthy.
I am kind of confused by your reply.
The messages are not generated in untrusted environment. They are generated and signed in our trusted server. The client side can't sign messages. I think you missed something in the article.
Also, this is not an either-or. Continuous improvement of back-end security is not something you stop doing. Ever. Neither will we stop. The first action we took was fixing the API and doing a sweep on other endpoints.
However, as pointed out by the professional pentesters, this IS a problem, a critical problem, and as I can see from some of the replies to this article, a very ignored problem. People are way overconfident in their ability to perfectly secure their backend; as I was "pretty sure" we secured ours.
The majority of potential attackers will try to break something for a few hours or days, fail, and give up. This is protection (as opposed to security, I guess).
Imagine not putting a padlock on your locker because you know all locks can be picked by a sufficiently skillful lockpicker with sufficient time. What the padlock does is both raising the bar (a majority of people won't even try, and a majority of those who try will fail) and giving you time (if the lock takes 5 minutes to pick, you now have 5 extra minutes to react to the thief). Time we are using now to implement measures such as fuzzing (recommended to me in another response in this article) that will improve the strength of the back-end.
Yeah, it seems like I've misunderstood something if you create the signatures on the server. However, if the server creates the signature using private key and the client is verifying the data using the public key, how does this improve anything over simply sending the data over TLS connection?
As I understood the article, it seemed like the client was signing the data using the public key and the server was verifying the results using its private key. That would be an unsafe protocol.
The hacker used a specialized tool to bypass TLS connection (for himself only) and manipulate the responses from the server.
What we do is verify the signature from the server (made with the private key) on the client (verified with the public key), and reject the data if it doesn't match.
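The server-side half of that looks roughly like this - a sketch with Node's crypto module, where the header name, key path and Express-style response helpers are my assumptions, not the article's actual code:

```javascript
const { createSign } = require('node:crypto');
const { readFileSync } = require('node:fs');

const privateKey = readFileSync('./keys/private.pem', 'utf8'); // stays on the server

function sendSigned(res, data) {
  const body = JSON.stringify(data);
  // Sign the exact bytes the client will verify against the public key.
  const signature = createSign('RSA-SHA256').update(body).sign(privateKey, 'base64');
  res.set('x-signature', signature);
  res.type('application/json').send(body);
}
```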
As others pointed out, this doesn't make it impossible to manipulate the data (some suggesting things that...... aren't possible, which made me take what they say with a grain of salt), but the pentesters concluded it is sufficiently secure for now. For now being the keyword, they'll come back later this year, and I'll try to provide some follow up on what went down.
Why do you bother verifying the server signed data on the client if the data come through TLS connection? The attacker that can modify the TLS connection can also change the computed results of that verification.
Do you have some reason to believe that the client software would be intact but the attacker can MITM the TLS connection? I'm asking this because the way you describe the signature seems like this is the only attack that your method would prevent. All the situations I can think of allow modifying the client logic if TLS connection is not safe either.
If he tries to change the response from the API, the verification will fail, he can't fake a signature for the modified data because he only has the public key.
There are other mechanisms in place, such as SRI and CSP to name two, to help mitigate the attacker's ability to modify the source files (they were there for different reasons, but they helped during the second round of attacks where the hackers failed to break in after two days).
Mitigate being the keyword here, we are aware that they can puzzle their way into disabling those as well.
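For anyone unfamiliar with those two acronyms, this is roughly what they look like in practice - an illustrative Express-style sketch; the policy and hash are placeholders, not the application's real configuration:

```javascript
const express = require('express');
const app = express();

app.use((req, res, next) => {
  // CSP: the browser refuses to load scripts from anywhere but our own origin.
  res.set('Content-Security-Policy', "default-src 'self'; script-src 'self'");
  next();
});

app.get('/', (req, res) => {
  // SRI: the browser refuses to run main.js if its hash doesn't match.
  res.send(`<script src="/main.js" integrity="sha384-PLACEHOLDER" crossorigin="anonymous"></script>`);
});

app.listen(3000);
```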
Both SRI and CSP depend on TLS for their security so if you don't trust TLS, you cannot trust SRI or CSP either. (This is because both SRI and CSP are optional features which are enabled with the data passed over TLS. If you think TLS is not safe, you cannot expect to be able to successfully pass the data to enable these features either.)
I have major trouble understanding the exact vulnerability class you're trying to combat here. Do you think TLS is safe or not?
And yes, CSP with the reporting feature turned on may help catch less skilled attackers while they try to attack the system. A skilled attacker will use tools that have CSP and SRI checks disabled so they will never trigger. As an alternative, they may be using setup where CSP and SRI do trigger but never leak that result to remote server.
It appears to me like you're thinking that you can trust the client (browser engine) but you cannot trust TLS. It doesn't seem like a reasonable assumption to make. For all cases where TLS can be bypassed the server submitted client logic can also be modified at will. For example, you can use the Burp Suite to also remove SRI and CSP from the headers and HTML just fine. You can also replace your custom JS code in place of the server provided code. Even a good adblocker such as uBlock Origin can do this.
Calling this setup mitigation instead of obfuscation seems incorrect to me. Typically mitigation would be about reducing the effects of a successful attack (e.g. sandboxing) and obfuscation is about making the attack harder without actually preventing it. This blog describes an obfuscation method, if I've understood it correctly.
Had the blog post been titled "Using public key encryption to obfuscate SPA client logic" or "Smoke and mirrors: DRM implementation for your SPA" I would have no problem because then the post wouldn't give false impression what's actually happening.
I hope you're able to see how your objections prove my point when they all start with "a skilled attacker". A skilled attacker can hack NASA.
You would understand the exact vulnerability if you read the article again with the renewed understanding of our exchanges. The hackers said that the ability to effortlessly interact with admin controls was what allowed them to find vulnerabilities in minutes instead of several days, as it takes now.
They recommended that mitigating this was critically important.
Also, your definitions are... a bit off. An example of obfuscation would be changing the "isAdmin" property to something like "hadhau1863an", so that the attacker wouldn't know what it is from simply looking at it. The purpose of the attribute would be >obfuscated<, so implementing something like Fractal as a security measure would be obfuscation.
Putting a wall around your castle is not obfuscation. Yes, it doesn't make it impossible for sufficiently experienced climbers to get in, if they have enough time to climb before we knock them down (the time it takes the attacker to get in is time we are finding and patching vulnerable endpoints), but it does protect the castle against the majority of attackers.
This measure wasn't designed against professional hackers (even though it helped against them in discernible ways) but against curious fiddlers, who are the likely attackers, since company employees are the only ones with access to the application.
I would argue that putting a wall around your castle is similar to obfuscation because it assumes that the attacker is moving on the ground. Whenever you're building secure software, you should start with the assumption that the attacker does the best move, not the move that is easy to prevent. This is not different from e.g. playing chess: if you make a move and opponent can make 5 moves of which 4 mean that you win the game and one means that you'll lose the game, you'll not win the game with 80% probability.
And yes, I used expression "a skilled attacker" to refer any attacker that is not blinded by the obfuscation a.k.a. smoke and mirrors. It seems like a pretty low bar for me, but I used word "skilled" to leave out script-kiddies.
How does public key encryption help when the message/command is generated by client? Remember that all clients are untrusted by definition because the attacker controls the hardware. Clients have all the data and keys you send to them and may or may not follow any logic you submitted to the client.
You cannot generate trusted data in an untrusted environment, so it doesn't matter if you then encrypt or sign that client-generated, now-untrusted data.
I think you got it backwards.
The message is generated and signed in the API.
I know they have access to any key we send them, that's why we only give them the public key, they can't sign messages with the public key, so they can't fake the data.
If the API (trusted server) signs the data, why do you need a signature at all? Wouldn't TLS already provide all the authenticity you need? The client can verify the connection (TLS + domain name) to the trusted server and anything it receives from the TLS protected connection is trusted.
I explain in the article that the attacker is able to bypass TLS by installing his certificate on his tool.
Yes, and that only affects that specific client. And as the client is always untrusted anyway, that doesn't change what the server can or should do.
If you run a service that sends HTML+CSS+JS to the client to implement the interface, you should think that as default implementation of the client and the end user not installing TLS bypass allows the end user to trust that he or she is actually running the default client implementation. The TLS connection is a guarantee to the end user that he or she is running the original data and software provided by the server.
TLS connection cannot prevent the client from running a non-standard implementation (that is, executing some other logic than the default implementation provided by the server). And using public key encryption running on client hardware cannot prevent that either! That's the whole point. The only way you could pretend to prevent the client from running non-default logic is some kind of DRM implementation, which cannot exist even in theory because it would be a similar thing to a perpetual motion machine.
You can pretend to have a working DRM implementation similar to pretending you have a perpetual motion machine. If that's what you want to do, fine. But never ever think that it's a real thing or real security.
"Yes, and that only affects that specific client"
It doesn't have to affect other clients. I understand what you're saying, but it really doesn't apply to what the article is about. I think you're missing the point made by the pentesters: they marked this ability to easily manipulate responses as critical and recommended preventing it because it was the only reason they were able to break-in in the first place.
You also seem to be mistaking "security" for "protection" (and "protection" is what is claimed in the article). You don't put a padlock on your locker for "security", since any sufficiently skillful lockpicker with sufficient time will be able to break in. You put it there for "protection". The majority of potential attackers won't even try to pick the lock, the majority of those who try will fail, and even so, the time it takes for the lockpicker to pick it open can be enough for you to catch the thief in the act.
So the silly objections like "but this doesn't do anything because the attacker can roll his own CA, create his own browser, run it on his own operating system, running on hardware he hand-made in his garage" are not proper objections to the solution implemented.
If you simply leave your locker without a padlock, people will open it and take your stuff. Big surprise.
The reason people use e.g. pin tumbler padlocks is either ignorance or cost. For software, implementing the correct stuff (that is, checking capabilities/permissions on server) requires about the same effort as doing it incorrectly (running trusted logic in untrusted environment, e.g. client).
My point is that with the effort spent on "protection" you could have also implemented real security instead. If you already had the incorrect implementation, sure, it requires more work to fix the whole implementation.
This "protection" will make attack a bit more complex but it cannot prevent it, unlike real security which requires doing the correct implementation.
(And yes, in case of digital security, you could argue that the attacker can brute force e.g. AES-128 encryption, but physicists would then argue that the total energy needed would exceed the total energy of the Sun over its whole lifetime. That's a much better level of security than the best mechanical lock you can get. And if you want a high quality mechanical lock, the best options I'm aware of are "Abloy Protec" and the "Kromer Protector" safe lock. And of those, an unmodified Abloy Protec has actually been picked in real life, but that's really really hard. I know of three people in the whole world that can pick an Abloy Protec.)
"will make attack a bit more complex but it cannot prevent it"
Then it serves its purpose. I don't buy the argument "the effort spent on it would've been more useful elsewhere", because the effort to implement this was minuscule compared to the hundreds of hours already spent on implementing security measures on the API, and the hundreds (or maybe thousands) more that it will take to make it technically impenetrable.
I read the article, the comments and even the simple repo, and I still don't understand the point of all this.
First, not related to the security problem but to the implementation of this "fix" - so you basically did some form of JWT; why didn't you just use the JWT protocol in the first place, which you said you already have for authorization? Your server can send a signed JWT token (the payload of which can be whatever your server needs; it's not restricted to auth use cases only - in this case JSON.stringify(responseData)). And your client can just decode/verify it. If the current user-hacker tries to change this JWT token or its payload, it will fail. That's 2 lines of code, one on the server and one on the client, using the right libs, which apparently you already use for the authentication part.
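Something like this, using the 'jose' library - a sketch where the key PEMs and payload are placeholders; a tampered token makes jwtVerify throw:

```javascript
import { SignJWT, jwtVerify, importPKCS8, importSPKI } from 'jose';

const PRIVATE_KEY_PEM = '-----BEGIN PRIVATE KEY-----...'; // server only
const PUBLIC_KEY_PEM = '-----BEGIN PUBLIC KEY-----...';   // shipped to the client
const responseData = { isAdmin: false };                  // example payload

// Server: sign the response payload with the private key.
const privateKey = await importPKCS8(PRIVATE_KEY_PEM, 'RS256');
const token = await new SignJWT({ data: responseData })
  .setProtectedHeader({ alg: 'RS256' })
  .sign(privateKey);

// Client: verify with only the public key; throws if the payload was changed.
const publicKey = await importSPKI(PUBLIC_KEY_PEM, 'RS256');
const { payload } = await jwtVerify(token, publicKey);
```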
Second, it's best to describe what your app is doing, but from what I figured it's something like:
If this is the case and you (or your bosses) think that you've "secured" it with what you've done, then obviously there's no need for anyone to convince you otherwise. If this is not the situation, then just explain what you are trying to protect, and people will be happy to provide guidance and help.
I need to verify the signature on the client, and JWT verifies it on the server (at least, that is how I learned it). This doesn't help in this case, because the hacker can intercept any attempt to contact the server to validate the signature and fake the response saying it passed.
I came across the repository "jose js" recently and it seems there is something "like" what I did there, but I couldn't make the time to get to know it yet.
I can't disclose details about the application. But it is like a 360-evaluation tool, and people's final score is related to their bonus. If, by messing around, they find a way to modify their scores, this could impact their bonus.
The hackers reported this as a critical issue because of the profile of the potential attackers; employees with low tech skills and good incentives to mess around. Looking back, maybe I should have made it clearer on the article. I expected people to just "get it", but I guess I shouldn't. Lesson learned.
Many people have provided helpful guidance, and I gathered a lot of useful information to discuss with the team. We're fuzzing the API to battle test our validations, for example.
The JWT's payload can be verified anywhere; successfully decoding it is actually the verification. If the payload is tampered with, then decoding/parsing it will fail. It is most likely what you already do with the auth JWT: you receive from the server a JWT with, let's say, payload claims like "user:xxx", "admin:false", "prop:value", so the client verifies it by successfully decoding it and sees "Aha, the payload says user:xxx, prop:value, ..." and so on. If someone - doesn't matter who, a man-in-the-middle or the same user - tampers with it and tries to put "user:yyy", "admin:true", then the decoding will just not be possible. Read more about it properly on jwt.io/ ; I'm not a native English speaker.
Thanks, I'll read, but as I understand it, decoding a JWT is simply parsing its content as base64; it would still need the secret to validate it, and that's why it happens on the backend... perhaps I'm missing something, so I'll look into it. It is possible that JWT accomplishes what I needed, but we simply didn't know at the time.
Thank you very much.
There are two main types of JWT, and within those there's a selection of cryptographic ciphers you can use.
You can sign a JWT with an RSA private key on your backend and verify it using a public key on your frontend, or any on any API endpoint.
This type is JWS, and as you mentioned, this version is just base64 encoded data, but with exactly the sort of cryptographic signature you're after.
The other type is a JWE, and in this form the entire payload is not only signed but encrypted, so you cannot see the payload in flight.
Again, this can be decoded and verified on both the front and backend.
Cool. JWS seems to work like what I did. Could've saved me some time, but I still enjoyed building this as I learned a lot.
With JWE I suppose the front end would need to have the secret, so it wouldn't really help. But I guess it can be good for server to server communication?
Thanks for the info.
Both JWS and JWE can work either with PSK or public private keys.
It depends on the crypto chosen.
Using RSA or Elliptic curve would work with public/private keys, just as your solution did. With these the front end would only need the public key to (decode JWEs &) verify the JWT.
Nothing about JWTs is limited to backend, it's just as applicable to frontend.
If admin elements were embedded in the front-end, the API interception to reveal them didn't matter; a hacker could just look in the HTML to find the form, or simply use Chrome dev tools to customize the API response with 'isAdmin=true' to reveal your form. Your main issue lies in your backend.
A good rule of thumb is never trust the front end because it can be anything. It can even be the Postman instance I just started up.
Now when you went on about the RSA, you completely lost me. It's a lot of work for little benefit, work I see as not worth it. A hacker can still send malformed requests; it just takes a little more effort and you're right back at step 1.
Secure your backend!
It wouldn't be so simple in the case of a React app; the elements are not simply hidden in the HTML. But yes, with infinite time an attacker can figure out anything - they just don't have infinite time.
The hacker cannot manipulate responses because he can't re-sign them in the front-end with the public key, which is the only key he has.
This is not either-or. Secure both. You shouldn't make it easy to break just because you can't make it impossible to break.
I don’t mean to be rude, but I can’t understand what you’re trying to say.
The RSA signing code is in the front-end, right? That means a hacker can malform and create their own api requests or inject a payload to modify the response, since they have the signing code. So it's not a matter of them having "infinite time" - it can be done in a matter of 5 minutes. That's what I'm trying to say.
For the reasons stated above is why I say secure your backend. You say it's not one or the other; I don't have to use your web application. Like I said, I can spin up an HTTP client, extract your RSA code and you're right back at step 1, but there's only your 1 backend.
You get what I’m saying? Your RSA is useless.
"I don’t mean to be rude, but I can’t understand what you’re trying to say"
Neither am I, but why bother replying in such an affirmative manner if you didn't even understand? That's not only rude, it's pedantic. Read the article before engaging, please.
"The RSA signing code is in the front-end right"
No. Read the article, please. The front-end VERIFIES the signature. The signing code is in the BACK END. The front-end only has the PUBLIC key.
"extract your RSA code and you’re right back at step 1"
You can VERIFY messages, you CAN'T SIGN them, which means you CAN'T CHANGE them.
"You get what I’m saying?"
Do you?
No I didn’t mean I didn’t understand your article. I understand your article that’s why I was replying affirmatively. I didn’t understand your initial reply, which seemed like abstract ideas, that’s what I was saying I didn’t understand, asked for clarification then asked you to see my side by saying “you get what I’m saying” but you took it in an entirely different direction.
My last points:
Cheers
"No I didn’t mean I didn’t understand your article"
But you didn't, you claimed twice that I was signing messages on the front end, which in the article itself I explain is a bad idea.
About your points:
Yes, that is why securing the API is important. This is not what the article is about. The article is about the attackers faking the responses from the API.
I have never seen this being done, but I won't say it can't be done, it probably can. But so what? The application will immediately stop working as soon as you try to change the response.
You're not the first to make this claim, and I'm not saying it can't be done, it probably can, given enough time, but how? The professional pentesters couldn't break it, and they had two full days to try, and full knowledge of how the solution was implemented. You can't simply change the source files in devtools in your browser and have the new code be executed (you can change it, but it won't reflect on the code that is actually running. Test it), that's not how any of this works.
If it can be done, it is not as trivial as you're probably thinking. Which brings us to the report's conclusion: "sufficiently secured for now".
Inserting modified code into a web application is very easy to implement using almost any proxy software. For example, we can take the same Burp Suite, intercept the js file response and replace it with our modified version.
Application stops working? It's my browser, my client. Once my client downloads your application I can do whatever I want no matter what you think. If I visit your application from my browser, it will not stop working because I won't allow it.
Anyone could change the api response to anything they want, no matter what encryption or whatever fancy thing your api is sending back because I CONTROL THE CLIENT not you. I can change your API response to whatever I want.
Yes, you can change source files to whatever you want. I don't know why you think you can't; where is that idea coming from? I just did it right now for dev.to, just because I can, as I would do with your site.
Again, I'm not trying to be rude, but you seem to have gaps in your knowledge of the browser based on your other responses, and you seem to put too much faith in this backend API signing function while underestimating how much control users really have. I'm trying to tell you it's trivial BECAUSE IT IS.
I want you to have a secure application at the end of the day; that's why I'm saying focus your energy where it needs to be, NOT ON THE CLIENT WHERE I HAVE FULL CONTROL and you can't do anything to stop me...
Unless... you have a secure backend 😊.
Report "Sufficiently secured for now" is more like a false sense of security.
This was one of the things the hackers tried. This was, if not prevented, at least mitigated by SRI, CSP, and other measures that were already in place.
I am sure with enough time and effort they could eventually overcome the security layers. Eventually. In any case, the client is sufficiently secured for now.
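For reference, the CSP part of that looks roughly like this (a sketch assuming an Express back end with the `helmet` package; the directives here are illustrative, not our production policy). SRI is the complementary piece: an `integrity` hash on the script tags, so a swapped-out bundle fails the browser's check.

```javascript
const express = require('express');
const helmet = require('helmet');

const app = express();
// Restrict where scripts may be loaded from; combined with SRI hashes
// in the HTML, a replaced bundle is refused by the browser.
app.use(
  helmet.contentSecurityPolicy({
    directives: {
      defaultSrc: ["'self'"],
      scriptSrc: ["'self'"], // no inline or third-party scripts
    },
  })
);
```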
Yeah......... you haven't read the article. Nor my responses, for that matter.
We largely neutralized the damage you think you could cause with your "full control". Sure, you can try to change something, but then it won't work. Enjoy your "full control" over a non-working application.
Enjoy the fake sense of security which is easily defeated by a right click and inspect element! Trust me, you haven't read my responses or anyone else's, otherwise you would understand the flaw by now. It's been pointed out like 3 times by previous commenters.
To each their own, Cheers!
I am almost tempted to give you access to the development environment of the application just to watch you fail. Sadly, it would break company rules.
You haven't read the article, you haven't read the responses, but you're 100% confident you could break this doing something you don't even know you can't do (at least not in any way remotely as trivial as you're suggesting), probably because you haven't tried.
Likewise to you, my friend. Just remember you haven't properly refuted any claims that I've made, nor anyone else has made. You just keep repeating the same thing thinking it covers all your bases, and it doesn't; your change is next to useless. But I'm not the user (gladly), so I'll leave it at that.
I would love to get the dev environment, please do! At Google I've seen all sorts of security protocols, even broke a few myself, and seeing the details of your "front-end security" is laughable. That's why I'm warning you. But hey.
Cheers, I won't be responding after this.
This kind of sounds like security by obscurity.
Not at all. It's security by "you can't change how the application is supposed to work".
What's stopping me from making my own modified version of the client? Client side applications are not supposed to be "protected" or anything, since anyone can theoretically modify them and change the client-side behavior. If there's any secrets in the client then you're doomed already. But if everything is secured via protected API routes then there's nothing to worry about.
The post makes it sound like you're trying to protect against client-side modification through tools like Burp Suite. But that's the wrong way to look at the problem, since anything client side should basically be considered compromised. Your goal is not to block tools like Burp Suite. All that tool does is allow you to play with the requests that are made. There are many other tools out there to do things like that.
So basically it's the client side that does the signature verification, so I could simply copy your app's code using the dev tools in the browser, and then make a modified version that removes the client-side signature check. And if the client sends signed messages to the server, then I can just find the key (it has to be given to the client at some point) and make my own API requests using a custom script that adds the signature. Yes it's way more difficult, but that's the definition of security through obscurity.
In some situations it might make sense to implement something like this, say for a client-side game or something where you want to make cheating more difficult, but the moral of the story is that the client should never be trusted, and your server API is what handles security.
It is not the client sending signed messages to the API; it is the API that sends signed messages to the client. The only key the client has access to is the public key, which is not enough to sign messages. About client-signed messages: as I wrote in the article, that's exactly the reason I objected to something like HMAC. It would take no effort to find the keys, and then it would definitely be security through obscurity. Protecting the API is done by validation of access level and of data. The signature validation is there to prevent modification of the responses from the API to the client.
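To be explicit about which side holds what, the signing step has roughly this shape (a sketch using Node's built-in `crypto` module; names are illustrative, not the production code):

```javascript
const crypto = require('crypto');

// Only the server holds the private key, so only it can produce signatures.
function signResponseBody(privateKeyPem, body) {
  const signer = crypto.createSign('RSA-SHA256');
  signer.update(JSON.stringify(body));
  // The base64 signature travels alongside the response; the client can
  // check it with the public key but cannot forge a replacement.
  return signer.sign(privateKeyPem, 'base64');
}
```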
I had this exact concern, but I tested a lot to see whether it was possible to somehow remove the line (or add a "return true") in the function shown in the browser's "Sources" panel, and it never worked. How would you run this modified version? You can place breakpoints, and they'll be hit, but modified code won't run, AFAIK. You also couldn't simply copy the code, modify it, and execute it from localhost or another domain and have it working, because the access-control-allow-origin headers are tight. Unless you're aware of another tool that can do that?
The ethical hacker himself was unable to disable it, and he had a couple of days to mess around with it, and we explained to him how the mechanism worked, so it wasn't like he was operating from "obscurity". I'm sure with enough time, resources, and dedication, a motivated attacker can figure the system out from what is publicly available in the client, but with enough time, resources and dedication an attacker can hack into anything so... what's your point?
Also, this solution doesn't only block "Burp Suite"; it blocks this vector of attack as a whole, so any local proxy will fail.
You're assuming that CORS will protect your API from rogue clients. It will not. CORS protects your users from rogue clients making requests on behalf of the legitimate client. You can run a browser in unsecured context and bypass CORS. You can install a browser extension and bypass CORS. You can call the API directly from an HTTP client and fake the origin. Please read on what CORS tries to protect you from because it seems you have a misconception of it.
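To illustrate (a sketch; the host, path, and token are made up), any script running outside a browser can send whatever Origin it likes:

```javascript
// Run from Node, not a browser: CORS is enforced only by browsers.
const https = require('https');

const req = https.request(
  {
    hostname: 'api.example.com', // hypothetical
    path: '/admin/users',
    method: 'GET',
    headers: {
      Origin: 'https://app.example.com', // spoofed; the server just sees a string
      Authorization: 'Bearer <token>',
    },
  },
  (res) => {
    console.log(res.statusCode);
    res.pipe(process.stdout);
  }
);
req.end();
```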
What you implemented here is still just an extra step, but no more secure than just testing if your code is running under your domain, for example. The same way I can get your code to fake the domain I can get it to fake that the signatures are correct. You just made it slightly more difficult to access admin routes.
The real thing that gave you the "secure enough" assessment was fixing the API to not allow a rogue client to create admin accounts. The whole RSA "workaround" just made it slightly harder for an attacker to instruct your client to do what the attacker wants.
I hope you can see how even the examples you brought are evidence that the barrier to discovery was greatly raised by this simple action alone.
Your objection boils down to "with infinite time, patience, and motivation an attacker will figure it out from what is publicly available", which leads to a dangerous "better to leave the client easy to break, since it is impossible to make it impossible to break".
When you say it "just made it slightly harder", I wonder if maybe you're aware of something that the professional pentesters weren't? Would you mind cloning the repository and briefly explaining how you'd break it in practice? Because some of your suggestions won't work.
I hope you can take feedback; it looks like our attacking your ideas hurt your ego, since you were heavily invested in something you found really cool to learn and implement.
It does not seem the repo you sent is representative. Host your production client somewhere and let us play with it.
People giving bad feedback doesn't hurt.
It is representative. The code is about the same.
So it is as bad as I imagined. I literally did this in 5 minutes.
If you did, how? You obviously didn't use devtools, as I'm explaining in another comment.
Care to explain what you are trying to convey with the attached image?
Just as a test to see if the changes made to the source files are applied. They're not. The browser is not webpack. You cannot change the readable source files and have the changes create a new compiled version. This is not how the devtools work.
In the image I changed the function in the source files the way you're suggesting you can, to purposefully fail the verification. If this worked, the application on the left wouldn't show any data after a page refresh. It is still working, because these changes were not reflected.
Then it does not seem you used the tool correctly. My screenshot above was taken from devtools.
Again, the browser does not need to be Webpack for this to work - you are misinterpreting how this works.
I posted the screenshot showing the correct use.
I'll go through my notifications again, but I've seen no screenshot from you. Just texts.
The browser would have to be webpack in order for you to modify readable code and have it compile into a new working version. To do something remotely similar to what you want, you would have to modify the compiled version, which is not trivial and requires understanding of how React works under the hood. Any attempt at calling this trivial is.......... well, there are no polite ways of saying it, so I won't.
Modifying the compiled version is precisely what you need to do. But you don't need to understand how react works under the hood to do that.
It seems to me you're modifying the code translated from source maps and that won't work. As pointed, this is a misunderstanding of how the overrides tool works.
Just because you don't know how to do it doesn't mean it is not trivial, by the way.
"It seems to me you're modifying the code translated from source maps and that won't work"
Yes. I'm saying that it won't work for some 10 comments by now.
"As pointed, this is a misunderstanding of how the overrides tool works"
Yes, it IS a misunderstanding: by the people suggesting this as something that trivially works, as they naïvely thought. It doesn't. You have to change the compiled code, which is not trivial.
Thanks for the write-up, but are you implying that everyone building an API + SPA should go and add this extra encryption layer on top of HTTPS/SSL?
I feel we're then sort of duplicating things, since this is what SSL/HTTPS was meant for ... if that isn't sufficient, and we really need this kind of "extra" thing on top, then would this not already have been made more or less a "standard" recommendation in this type of architecture?
Besides, well, if you know how to use Chrome DevTools then you can already "manipulate" a lot of what's being HTTP-posted to the server - you can (with some effort, but it's not really difficult) bypass most of the "checks" done by the frontend.
That's why (as others have said) you can simply never trust the client - all of the business logic, validations, authorization checks, and so on, need to be enforced server side - and if you do that, then in most cases this extra "layer" doesn't add much value, in my book.
But anyway it's interesting, and you got me thinking, not about adding this exact solution, but about "what if" scenarios (client device being hacked) and how to mitigate risks.
I agree with everything you said, but we came to a different conclusion about the value added by this layer.
It is like putting a padlock on your locker. It won't stop highly skillful and motivated attackers for long, but it is definitely not useless, because the vast majority of people won't try, and the majority of people who try will fail, and it will still take time for even specialized attackers to get through. And this time is valuable, since we're constantly improving the security of the back-end. This time could be the difference between a vulnerability being found and being patched.
Yes sure, absolutely - as with almost everything in software development, "it depends" - I can certainly imagine that there are scenarios or use cases where this is a very useful technique ... dismissing an idea too hastily is one of the most common mistakes (and something we're almost all guilty of, including myself).
Very interesting article, thank you for sharing it!
I have only a few questions.
I mean both the frontend and backend applications are accessible only through the HTTPS protocol. They're on different domains, and each has its own certificate.
I had not heard of it before; however, I just looked up what it is, and I'm not sure it would solve the problem. The hacker has access to a certificate his browser trusts, and he somehow imported it into his tool. He is not sending a fake certificate, he is sending a trusted certificate (as far as I understood his explanation).
I think the hacker would need to "compromise" the user's browser in some way; for example, the hacker could install a fake CA root certificate in the user's browser, otherwise he would not be able to tamper with the request/response.
SSL pinning does just that. In fact, even if the hacker is able to compromise the user's browser, given that the server's SSL certificate is pinned inside your application, the response can't be tampered with without your application noticing it.
Think of this attack as a malicious user trying to break things to his advantage (the tool is used by the company to calculate a yearly bonus paid to each employee based on their performance, so there is motivation to try). In this case, the user's browser is the hacker's browser.
In a sense it is not a "man in the middle", because it is not a third-party, it's the user himself trying to mess around.
At risk of gathering more attention (!), now that we know more about the context and threat model here (ie: the legitimate users are the likely attackers), are there other risk mitigating controls that you have / could have to reduce the risk to the business? Things that come to mind (in no particular order):
These are awesome suggestions, thank you very much.
The API has exponential throttling for the same IP or same user (it helped us check the DoS box); a simplified sketch of the shape of it is at the end of this comment. We log requests responded with 403 (Forbidden). I'll talk to devops to see if they can set some sort of alert on it. It will definitely be helpful.
Some actions are auditable and revertible. Not all, though; we can definitely improve that.
Your third suggestion is excellent. We've been planning on integrating the app with the company's support platform, and having grants be handled by tickets flowing through a series of approvals. Gotta carefully secure that communication, though.
The last point is something we already do. Developers have no admin access in production.
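For the throttling point above, the shape of it is roughly this (a sketch assuming Express with the `express-rate-limit` package; our real scheme backs off progressively, and the numbers here are illustrative):

```javascript
const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();
// Simplified fixed-window stand-in for the exponential throttling
// described above: caps requests per IP within each window.
app.use(
  rateLimit({
    windowMs: 60 * 1000, // 1-minute window
    max: 100,            // requests allowed per window per IP
  })
);
```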
I spent a few hours reading the article and all the comments.
In part, I liked the article; it brings up a good discussion, and I got to stop and see what an implementation of a JWT library would look like (I agree with the person who said that: you basically implemented a JWT), and I found the way you did it interesting.
However, I see myself represented a bit in the author himself, not the "me of today", but the "me of the past". I was also a developer like that, very sure of myself, who thought that what I did was what was good, period, and didn't accept criticism, etc... But I learned from life that it's not quite like that, and that other people have a lot to contribute to my growth; I just had to let myself listen and drink from others' knowledge.
In short, there is no single right way to do something, but there are several right ways to reach the same result, and it was by observing each way, each experience, and each piece of advice that I am who I am today.
I'm not going to be the one to tell you that what you did has no value, or that it doesn't amount to much when the real solution is in the back end (and it is), if you don't want to be convinced of that. What I can really tell you is: listen to (in this case, read) people; whether they are more experienced than you or not, they will always bring you some light and relevant questions. Even my students most new to development have taught me something, so absorb it.
Recognizing that you made a "not so good" decision is not a defeat, but a lesson. Next time, you'll already know which path "not to follow", and you can grow your knowledge base from there.
Then your code simply won't work at all. You should know that the names of the functions and variables in the compiled code won't be the same as in the source code. Simply targeting a "setState" function won't accomplish what you believe it will.
You failed to see that this is exactly what JWT does. I just didn't know JWT could be used like that before, and I ended up creating my own implementation of "JWT". I will probably migrate the whole thing to JWT, but everything I learned during the process was still valuable, including that JWT is not 100% secure, for the same reasons presented against my implementation in this discussion.
It is silly and naïve to believe users can't fiddle around just because they don't know how to use hacking tools. They could modify the responses in devtools, for example. Trying to do that will break the application due to this measure.
I came here looking for feedback and criticism, and valuable feedback and criticism was provided. Just not by you. It seems you can't take feedback about your ability to provide feedback.
Your pull request ultimately would not disable the signature verification, but not only that, it would probably not do anything at all, since you're changing the React source code and not the compiled version the browser actually reads. As I said several other times, the browser is not webpack and it won't compile a new version for you. You would have to go deeper.
You completely failed to understand that the potential attackers are legitimate users with no tech skills but with incentives to fiddle around. Any hacker who could bypass the signature could also delve into the source code to find the endpoints and then continue from Postman or some other such tool. This is NOT who we are protecting against. It is silly to say things like "security should be on the backend" as an objection to this, because not only does it completely miss the point, it supposes (obviously ignoring the article, where I explain the hundreds of hours that were already invested into securing the app) that security in the backend is being ignored. Do NOT overestimate your ability to make something safe. "Stupid mistakes" like the ones described in the article are present everywhere.
MFA makes no difference in this context; it is already enforced for all users. The SPA does check the token on the server, but the communication can be intercepted and changed. You can't see how spoofing the responses is related because, as your PR suggests, you did not understand what the problem is.
I should not update the post because, if you read it, you'll see there is both a disclaimer and a conclusion about that. You simply didn't read it.
Finally, as I have learned in other comments, what I "hacked together", as you call it, is a simplified version of JWT, which is industry standard, so at this point I can't understand your position. We hired experts, and we trust them when they say this is a critical issue; as you said, this is not your job, but it is theirs.
Your definition of "fairly simple" radically misses the point. This is the difference between finding a vulnerability in a couple of minutes and finding a vulnerability in a couple of days. And this is not an exaggeration, since it is exactly what happened with the pentesters.
I am proud of what I built and of what I learned while building it. However, as I stated, I wrote this article to get criticism, and I even pointed out how I suspected that the lack of material about this suggested it could be a heterodox strategy. Some people provided valuable feedback; I learned about fuzzing and other things, and you said a couple of good things too (not all; MFA is already enforced for all users, and it doesn't make a difference in this context). Others bashed it (some after proving they didn't understand the problem, nor the solution).
What you have ended up implementing is basically JWT, which is a pretty standard pattern when you don't want the data to be modified and want a way for consumers to verify the data has not been tampered with (e.g. OAuth 2.0 tokens). You can probably look at how it is used in the OAuth world to improve your implementation (rotating the private key).
Sort of. In fact, JWT is used in the application for authentication. We store data about the user in the token, and we make a request to the API with the token in the headers to get information about his access-level. The endpoint verifies the signature, decodes the data, does some processing, and spits back the information we want.
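That endpoint's check looks roughly like this (a sketch assuming the `jsonwebtoken` package; the route, key path, and claim name are made up for illustration):

```javascript
const express = require('express');
const fs = require('fs');
const jwt = require('jsonwebtoken');

const app = express();
const publicKeyPem = fs.readFileSync('public.pem'); // hypothetical key path

app.get('/me/access-level', (req, res) => {
  const token = (req.headers.authorization || '').replace('Bearer ', '');
  try {
    // jwt.verify throws if the signature or the algorithm doesn't check out.
    const claims = jwt.verify(token, publicKeyPem, { algorithms: ['RS256'] });
    res.json({ accessLevel: claims.accessLevel }); // claim name illustrative
  } catch {
    res.status(401).end();
  }
});
```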
The problem is that it still relies on the backend to verify the signature (does it not? Legit question, this is how I learned it). Since the attacker can change the information coming back from the API, he could just make it say "yup, das legit bro, he is an admin" and be allowed into pages he wasn't supposed to be in, even if this only gives him very limited access to actual data.
Still, the guys upstairs did not like this, and I was tasked with fixing it.
This is honestly so good. Thanks for sharing dude.
Can it be done with a public key, or do I need to send the secret to the front end?
Then my implementation does EXACTLY what JWT does. Yet you're bashing it as if it's useless.
I don't; this is what my implementation does. But you're saying it is useless, when it is exactly what JWT does. I just didn't know then that it could be verified with a public key, so I built my own.
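For anyone reading along, public-key verification of a JWT in the client looks roughly like this today (a sketch assuming the `jose` library; the key and token below are placeholders):

```javascript
import { importSPKI, jwtVerify } from 'jose';

// Placeholders for illustration only.
const publicKeyPem = `-----BEGIN PUBLIC KEY-----
...
-----END PUBLIC KEY-----`;
const token = 'eyJ...'; // an RS256-signed JWT received from the API

// Verification needs ONLY the public key; no secret is shipped to the client.
const publicKey = await importSPKI(publicKeyPem, 'RS256');
const { payload } = await jwtVerify(token, publicKey); // throws if tampered with
```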
Yes: a header, the content, and the signature. But you don't need to validate the signature to decode the content; you just need to parse it as base64. That is what got me confused.
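In other words (a minimal sketch; `atob` is the browser's built-in base64 decoder):

```javascript
// Reading a JWT's payload needs no key at all: the signature proves
// integrity, it does not hide the content.
function decodeJwtPayload(token) {
  const payloadB64url = token.split('.')[1]; // header.payload.signature
  const b64 = payloadB64url.replace(/-/g, '+').replace(/_/g, '/');
  return JSON.parse(atob(b64));
}
```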
Have fun reading: dev.to/victorwm/how-i-trivially-by...