DEV Community

Jamie

Posted on

Three Steps For Increasing The Security of Your Web Apps

I recently shared the following QR code with my work's Slack group:

A QR Code I recently shared at work

Pro tip: never just scan a random QR code without checking it with a service like this first

This QR code isn't malicious; it simply links to the site for my new podcast (https://dotnetcore.show/), and I'm happy to say that everyone at work passed. That is, I saw no new requests for the site from IP addresses near where I work.

I think this proves the point that I'm about to make in this post. And that point is: we developers are the experts at development, and our opinions on development matter.

A Real-World Example

The story which prompted this post was one about Feedify, and all of their customers, being hacked. Here's a link to the first place I saw the story: https://www.bleepingcomputer.com/news/security/feedify-hacked-with-magecart-information-stealing-script/

Reading this blog post led me on an opinionated Twitter rant about security in web apps.

Unfortunately for Feedify, I picked on them during this rant, but they are far from the first to be hit with this kind of attack. Earlier this year, many of the websites for different branches of the UK Government had a crypto miner injected into them.

In both of these examples, a third party script was compromised and JS for a further third party (a sixth party?) was injected into it. Or written another way:

  • my amazing e-commerce site includes a script from Feedify
  • Feedify's script is compromised and already has some other script embedded into it
  • my amazing e-commerce site now includes the compromised script.

But how can you stop this from happening? It turns out that it's stupidly simple, and only takes a maximum of three steps.

Stop Using unsafe-eval

The way that Feedify's script was originally included was along the lines of:

// create a new script element pointing at the Feedify CDN
var s = document.createElement('script');
s.type = 'text/javascript';
s.src = 'https://cdn.feedify.net/getjs/feedbackembed-min-1-0.min.js';
// append it to <head>, which makes the browser download and run it
document.getElementsByTagName('head')[0].appendChild(s);

This should immediately be ringing alarm bells. Why are we adding a script via JS? Couldn't we do exactly the same by adding:

<script type='text/javascript'
   src='https://cdn.feedify.net/getjs/feedbackembed-min-1-0.min.js'>
</script>

Which is the accepted practice. In fact, the first code snippet does exactly this, but at runtime rather than design time.

What's Wrong With That First Snippet?

Aside from being slow (the page has to be downloaded and all of the JS has to be parsed before the new script is added to the head element), it's incredibly insecure.

By injecting scripts into the page like this, there's no way to check what has actually been added to the page until after the page has been rendered. This also means that you are allowing the JS engine in the browser to load and evaluate a script from an external source without you ever having checked it.

The same can also be said about scripts loaded using the second code snippet. But there are two big differences:

  • the second snippet is how you're meant to do it
  • you can perform a check, at runtime, that the script was loaded correctly using SRI

Subresource Integrity basically tells the browser to do a hash check on the script that it has downloaded. This requires the script author to give you a hash value for their script, but you can easily generate one at the command line or using a service like this one.
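
For example, something along these lines at the command line (this assumes you have a local copy of the script, and that openssl is installed) prints the base64-encoded digest; prefix the output with sha256- when you paste it into the integrity attribute:

# print the base64-encoded SHA-256 digest of a local copy of the script
cat feedbackembed-min-1-0.min.js | openssl dgst -sha256 -binary | openssl base64 -A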

If the hash of the delivered JS file fails the check against the supplied hash, the browser raises an error in the console and refuses to parse the delivered script.

Because the original script delivered by Feedify was altered, its hash value wouldn't have passed the SRI check.

Adding an SRI check for a JS file is as simple as the following (the hash value will be different for each script, obviously):

<script type='text/javascript'
    src='https://cdn.feedify.net/getjs/feedbackembed-min-1-0.min.js'
    integrity='sha256-3edrmyuQ0w65f8gfBsqowzjJe2iM6n0nKciPUp8y+7E='
    crossorigin='anonymous'>
</script>

I've used the SRI hash for jQuery 3.1 here, because the Feedify script is gone from the web. The crossorigin attribute is also needed: when the script comes from a different origin, the browser has to use CORS in order to verify the hash.

CSP

Content Security Policy is an HTTP header which instructs the browser where it is allowed to load resources from. As it's an HTTP header, it's delivered before the HTML for the page and sets up the rules before anything is rendered.

If the domain isn't listed in the CSP, all requests to that domain will raise errors and will be blocked.

Setting up a CSP takes a little more effort. This is because it comes from the server, rather than the HTML itself. But all webservers have the ability to add custom headers, so it's not like it's difficult to do.
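
As a rough sketch, here's what that could look like in a Node/Express app (the app and the policy value here are just placeholders, not any real site's setup; whatever server you use, the idea is the same - set the header on every response):

const express = require('express');
const app = express();

// Attach a Content-Security-Policy header to every response.
// While testing, you can send Content-Security-Policy-Report-Only instead,
// so violations show up in the browser console without blocking anything.
app.use((req, res, next) => {
  res.setHeader(
    'Content-Security-Policy',
    "default-src 'self'; script-src 'self' https://cdn.feedify.net"
  );
  next();
});

app.get('/', (req, res) => res.send('<h1>my amazing e-commerce site</h1>'));
app.listen(3000);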

The difficult part is getting it right.

Because modern webapps load resources from all over the web, you need to whitelist every single domain that your app loads from. And because the rules cover every resource type (scripts, images, frames, CSS, etc.), each resource type gets its own rule. This means that your CSP rule for scripts will look different to your CSP rule for CSS.

As an example, here is a redacted version of the CSP rule for https://dotnetcore.show/:

upgrade-insecure-requests;
default-src 'self';
connect-src 'self' https://cdn.jsdelivr.net https://api.unsplash.com;
script-src 'self' https://cdnjs.cloudflare.com https://code.jquery.com;

This reads (to the browser) something like:

  • upgrade any insecure (HTTP) requests to HTTPS
  • by default, only load resources from this site's own origin
  • only allow connections (XHR/fetch) to this origin, https://cdn.jsdelivr.net, and https://api.unsplash.com
  • only run scripts from this origin, https://cdnjs.cloudflare.com, and https://code.jquery.com

Anything else will be blocked by the browser, before the request is generated. This also covers resources requested from within those scripts.

The compromised Feedify script contained a request used by an injected MageCart script. The request was sent to info-stat[.]ws

(I've added square brackets around the '.' char so that your browser doesn't make it into a clickable link)

A CSP which whitelisted the Feedify CDN domain but NOT the info-stat domain would have caused the requests to that domain to fail, and errors to be logged in the console. These errors could have been picked up in Dev, QA, and UAT environments, so would have been fixed by developers working on the affected webapps.

However, this would have required the use of the standard way of including scripts (i.e. the removal of the unsafe-eval-style injection pointed out above).

And Finally

This is the opinionated and slightly controversial point of the post.

4000 sites were hit with this attack.

That means 4000 dev teams (assuming that each site was created by an individual team) either didn't know about the security issues with using unsafe-eval, or didn't know that CSP could be used to secure their websites.

If this is the case, that means a huge number of dev teams either didn't know about these security measures

...

OR they did know, and didn't push back hard enough when decision makers decided not to invest time in security.

We need to remember that we are the experts in the development space. We're hired specifically because we are experts at this. As such, it's our duty to know about these things and push back as hard (if not harder) when the powers that be decide not to invest in security.

When a breach or security issue happens, it'll be our butts on the line, not those of the decision makers. Simply raising these points, having them logged, and moving on quietly is not the way to do it.

Put yourself in the place of the users who were caught in the breach. Think of what it's like to have to deal with having your identity stolen - worse yet, your life savings stolen - because some developer didn't want to feel slightly uncomfortable in bringing up the possibility of a security breach.

Would you want to be that person? I know that I wouldn't.

Top comments (16)

Andreas Tiefenthaler

Thank you for sharing this post! I really enjoy seeing people picking up this topic and sharing their ideas and thoughts.

You are bringing up CSP as a measure to prevent certain kinds of attacks; this is a very powerful but also complex security feature. In my experience, if you do not start with a very strict CSP right from the beginning, you will have a hard time adding it later to a production site without breaking anything.

I really like the way that Google explains all of it here:
developers.google.com/web/fundamen...
And the Owasp site is always a good starting point as well: owasp.org/index.php/Content_Securi...

There are a few more headers that already improve the basic security of any web app quite a bit and are easier, if not trivial, to implement.

The most important and notable ones are:

  • HTTP Strict Transport Security (HSTS)
  • X-Frame-Options
  • X-XSS-Protection
  • X-Content-Type-Options

They are all explained quite well - again - on the OWASP site: owasp.org/index.php/OWASP_Secure_H...
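
As a quick sketch, setting them follows the same pattern as the CSP header (again assuming an Express-style server; the values shown are the commonly recommended ones, so check what suits your app):

const express = require('express');
const app = express();

// Commonly recommended baseline values for the headers listed above.
app.use((req, res, next) => {
  res.setHeader('Strict-Transport-Security', 'max-age=31536000; includeSubDomains');
  res.setHeader('X-Frame-Options', 'SAMEORIGIN');
  res.setHeader('X-XSS-Protection', '1; mode=block');
  res.setHeader('X-Content-Type-Options', 'nosniff');
  next();
});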

@Jamie I think you did a great job on explaining all of this, thank you again.

Sung M. Kim

Thanks Jamie for the mind-awakening post.

I have a question regarding general security.

When you program, sometimes implementing an O(n^2) or O(n log n) algorithm is just good enough compared to a possible O(n) one.

Is there any absolute minimum of security knowledge developers should know about that's good enough?

Jamie

Imagine you want to steal a car. You case a street and check out each car, one by one. You look for any visible means of entry, but you're also looking for any physical locks on the steering wheel, etc. You also need to know which models are easier to hot-wire.

Now imagine that you have to park your car along a street where a lot of thefts have taken place. To ensure that your car isn't going to be picked out, you make sure that you have put any valuables away in the glove box or trunk; locked your car; placed a physical lock on the steering wheel; engaged the immobiliser; armed your alarm; etc.

In security, you need to be looking for the ways that someone could break into your app. You want to find as many as possible and put things in place to stop others from exploiting them.

I would say that every web developer should know of the OWASP Top 10 security risks, at the very least. You could easily lose a day or two, doing a deep dive on the OWASP site (just like anyone could with TV Tropes) and still only scratch the surface.

Sung M. Kim

The way I understood it is that hackers look for vulnerable sites and tend to attack those lacking security measures.

And "a street where a lot of thefts have taken place" sounds like popular commercial sites, where security needs to be tighter.

And thanks, mate, for providing the absolute minimum (the OWASP list) one should know.

Rémi Lavedrine

Great.
I love that one: "When a breach or security issue happens, it'll be our butts on the line, not those of the decision makers."
That's true, but it is always very hard to make them responsible for what they asked.
And the ones who are going to work days and nights to solve a security breach are always the engineers. Pretty rarely the decision makers.

Jamie

but it is always very hard to make them responsible for what they asked.
And the ones who are going to work days and nights to solve a security breach are always the engineers. Pretty rarely the decision makers.

Which is precisely why we should always speak up, and make our opinions known. It can be hard to do it, but it's our job to make sure that these things are covered. No one else is going to bring it up, but us.

After all, we're the experts.

ajv • Edited

Hi Jamie,

Great post! I must admit I added an external script via JS in a similar manner recently (it's not in production yet, luckily, so granted I get some solid advice here, that'll definitely change). The reason behind that was that I want to download the lib dynamically only on a certain SPA route. How would you handle such a situation?

Jamie

External scripts aren't really a big problem.

I would recommend that you add it to your CSP, generate an SRI hash, and make sure that require-sri-for is enabled in your CSP.

That way, if the external script ever changes then the browser won't even load it.
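
Something along these lines should keep the dynamic loading while still getting the integrity check (a sketch only; the URL and hash here are placeholders, not a real library):

// Dynamically load a script on a specific route, but still ask the
// browser to verify it against a known hash before executing it.
const s = document.createElement('script');
s.src = 'https://cdn.example.com/some-lib.min.js';                  // placeholder URL
s.integrity = 'sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU='; // placeholder hash
s.crossOrigin = 'anonymous'; // SRI on a cross-origin script needs CORS
document.head.appendChild(s);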

Thomas Junkツ

I found observatory.mozilla.org/ helpful, and cspisawesome.com/ as well.

Jamie

Fantastic links, Thomas. I really like CSP Is Awesome; it looks really helpful for setting up what is an incredibly complex thing.

For those who are doing .NET stuff, I know that Paul Seal's Security Headers tool can help to generate the web.config sections, too.
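
For anyone who hasn't seen one, a web.config custom-headers section looks roughly like this (a sketch only; the policy value is a placeholder, and a real one needs every domain your app loads from):

<system.webServer>
  <httpProtocol>
    <customHeaders>
      <!-- placeholder policy value -->
      <add name="Content-Security-Policy" value="default-src 'self'" />
    </customHeaders>
  </httpProtocol>
</system.webServer>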

jmscavaleiro

Many thanks for this post.

Jamie

You're very welcome

Pavon Dunbar

This is an amazing article and a definite eye-opener for me.

Thank you for sharing Jamie. Appreciate the content.

Pavon

Andrei Gatej

Great article. Thank you!

Mike DX

"4000 sites where hit with this attack."

*were

Jamie

Doh! I always get that wrong.

Thanks, I'll amend the post