Using open-source tools to test open-source projects feels like a great match. It wasn't until the other day that I remembered that the team behind DEV had open-sourced the bones of the site as Forem. To make it an even better match, Forem's stack lines up nicely with the languages currently supported by Bearer's new free and open-source static application security testing (SAST) tool. Unlike many security tools, this one focuses on helping developers make sense of security concerns in an actionable way.
In this article, we'll install the Bearer CLI, run it against forem, and analyze the results to see what issues arise and how we might tackle them. Spoiler alert: we'll find a bunch of results—some easy to fix, some we can ignore, and some that might be larger judgment calls.
Install Bearer
If you'd like to follow along, start with the install and scan instructions below. Otherwise, you can skip down to the analyzing the results section.
The quickest way to install Bearer is with Homebrew or curl.
Homebrew:
brew install Bearer/tap/bearer
Curl:
curl -sfL https://raw.githubusercontent.com/Bearer/bearer/main/contrib/install.sh | sh
You can then confirm that everything is installed and working by running the following:
bearer version
You should see the version number and SHA value. At the time of this writing, the version is 1.1.0.
Clone forem and run a security scan
If you haven't already, go ahead and clone forem locally.
git clone https://github.com/forem/forem.git
It's a pretty big project, so cloning might take a few minutes. Once complete, navigate into the project—likely with cd forem.
We'll start by running Bearer with all of its default settings. This will give us a text-based security report.
bearer scan .
Note: The . in the line above just means "run this command on the current directory".
Depending on your machine, this could take a few minutes. For reference, the scan took 1m26s for me, with the rule analysis taking another 1s.
When the scan completes, you'll see the security report.
Analyzing the results
Here's a gist with the full output. The scan detected 0 critical failures, 44 high failures, 2 medium failures, and 36 warnings. That's 82 items that have the potential for improvement.
These failures and warnings are from Bearer's built-in rules. Rules are checks against best practices. Most are derived from the OWASP Top 10 and the CWE, and some are common best practices for languages and frameworks. At the time of this scan, there were 107 default rules.
Back to the results: 46 failures and 36 warnings seem like a lot, but if we break them down we can manage them more easily. To start, let's set the warnings aside.
You can tell the scan to only show certain severity levels:
bearer scan . --severity critical,high,medium
Don't worry, the scan itself is cached, so this won't take the full amount of time again.
Of the 46 remaining failures, only a handful of rules are actually involved—Bearer reports every location in your code where each rule fails. These rules are:
- JavaScript: open redirect
- React: dangerouslySetInnerHTML
- Assorted Ruby user input rules
- Ruby: weak encryption
- Ruby: data sent to Honeybadger
Remediating the failures
With our nice to-do list in place, let's start at the highest severity level—in this case, there were no critical items so we'll start at "high".
JavaScript: open redirect
This rule, javascript_lang_open_redirect, is one I actually hadn't heard of before. If we look at the docs link provided in the output, we have some advice on fixing the problem and a handful of resources. We can see this relates to OWASP A01:2021 and CWE-601. The OWASP open redirect guidance basically says:
Unvalidated redirects and forwards are possible when a web application accepts untrusted input that could cause the web application to redirect the request to a URL contained within untrusted input. By modifying untrusted URL input to a malicious site, an attacker may successfully launch a phishing scam and steal user credentials.
Still a bit unsure? Let’s look at the file and line output:
HIGH: Open redirect detected. [CWE-601]
https://docs.bearer.com/reference/rules/javascript_lang_open_redirect
To skip this rule, use the flag --skip-rule=javascript_lang_open_redirect
File: app/javascript/runtimeBanner/RuntimeBanner.jsx:63
63 window.location.href = targetLink;
Looking at the RuntimeBanner component and heading down to line 63, we can then make a judgment on whether this should change. In this case, if we go a bit earlier in the file, we'll see this:
...
const urlParams = new URLSearchParams(window.location.search);
const targetPath = urlParams.get('deep_link');

if (currentOS() === 'iOS') {
  // The install now must target Apple's AppStore
  installNowButton.href = FOREM_APP_STORE_URL;

  // We try to deep link directly by launching a custom scheme and populate
  // the retry button in case the user will need it
  const targetLink = `${APP_LAUNCH_SCHEME}://${window.location.host}${targetPath}`;
  retryButton.href = targetLink;
  window.location.href = targetLink;
...
I don't know enough about their codebase to make a valid judgment here. It may be safe since it's only used for app store links, but it also accepts query parameters as input for the redirect—something you normally want to avoid without proper validation. Have a thought? Leave a comment below.
React: dangerouslySetInnerHTML
If you've done any React work, you know all about dangerouslySetInnerHTML. It's a necessary evil sometimes, but there is in fact a way to make it safer. That's why the rule javascript_react_dangerously_set_inner_html exists. Checking the doc link, we see the following advice:
Sanitize data when using dangerouslySetInnerHTML
<div dangerouslySetInnerHTML={{__html: sanitize(data)}} />
Along with a link to OWASP's advice for further context. Great! We can include the DOMPurify library and it’s an easy fix.
Assorted Ruby user input rules
I'm grouping several of the triggered Ruby rules together here because they all have one thing in common: they flag code that does things directly with user input. The following rules trigger in various places throughout the code:
- ruby_lang_http_url_using_user_input
- ruby_lang_reflection_using_user_input
- ruby_rails_redirect_to
While different in their implementation, they all share the same risk and can more or less be resolved in the same way. If we look at the remediation docs for ruby_lang_reflection_using_user_input, the advice is to compare the input against preset values, then pass those values to the method instead of passing the input directly. Like this:
method_name =
  case params[:action]
  when "option1"
    "method1"
  when "option2"
    "method2"
  end

method(method_name)
There may be some cases where the scan picks up perfectly safe redirects and reflections due to the nature of params, so it’s good to evaluate these incidents on a case-by-case basis.
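The same allowlist pattern applies to the ruby_rails_redirect_to findings. Here's a minimal sketch of the idea—the path names and helper are hypothetical, not taken from Forem's code—mapping untrusted input to known-safe destinations before redirecting:

```ruby
# Hypothetical allowlist: untrusted input maps only to paths we control.
SAFE_PATHS = {
  "dashboard" => "/dashboard",
  "settings"  => "/settings"
}.freeze

# Returns a safe path for the given parameter, falling back to the
# root path when the input doesn't match anything we expect.
def safe_redirect_path(param)
  SAFE_PATHS.fetch(param, "/")
end

# In a Rails controller this could then be used as:
#   redirect_to safe_redirect_path(params[:destination])
```

Because the user's input is only ever used as a lookup key, an attacker can't steer the redirect to an arbitrary external URL.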
Ruby: weak encryption
The weak encryption rule, ruby_lang_weak_encryption, is one of my personal favorites. Many of us have come to assume that most encryption libraries and algorithms are good enough, but it turns out that many of the old standards are showing their age. OWASP identifies algorithms it considers weak, so let's avoid using them on sensitive data.
MEDIUM: Weak encryption library usage detected. [CWE-331, CWE-326]
https://docs.bearer.com/reference/rules/ruby_lang_weak_encryption
To skip this rule, use the flag --skip-rule=ruby_lang_weak_encryption
File: app/services/mailchimp/bot.rb:153
153 Digest::MD5.hexdigest(email.downcase)
In this case, it looks like the Mailchimp service is only using MD5 to hash email addresses (MD5 is a hash rather than encryption, but it's weak either way). We can replace it with something stronger, like BCrypt, instead.
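To make the difference concrete, here's a minimal sketch that swaps the flagged MD5 call for SHA-256 from Ruby's standard Digest library. This is an illustration, not Forem's actual fix: BCrypt is the better choice when the value is a password, while a stronger deterministic digest is the drop-in option for an identifier like this. And if the hash format is dictated by a third-party API, changing it may not be possible at all.

```ruby
require "digest"

email = "User@Example.com"

# What the scan flagged: MD5 is considered cryptographically broken.
weak_hash = Digest::MD5.hexdigest(email.downcase)

# A stronger drop-in digest from Ruby's standard library.
strong_hash = Digest::SHA256.hexdigest(email.downcase)

puts weak_hash.length   # 32 hex characters
puts strong_hash.length # 64 hex characters
```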
Ruby: data sent to Honeybadger
Finally, the last rule to trigger on Forem's code, ruby_third_parties_honeybadger, is the following:
MEDIUM: Sensitive data sent to Honeybadger detected. [CWE-201]
https://docs.bearer.com/reference/rules/ruby_third_parties_honeybadger
To skip this rule, use the flag --skip-rule=ruby_third_parties_honeybadger
File: app/controllers/omniauth_callbacks_controller.rb:77
76 Honeybadger.context({
77 username: @user.username,
78 user_id: @user.id,
79 auth_data: request.env["omniauth.auth"],
80 auth_error: request.env["omniauth.error"].inspect,
...
82 })
You may wonder why this is a problem. In the case of this code, we're sending the user's username to a third-party service. While a username isn't inherently sensitive data, it certainly has the potential to be and should be treated as such. It's better to send internal IDs that can't directly identify the user if the third party—in this case, Honeybadger—is breached. You can see the full list of supported data types, sorted by category, in the docs.
What else can we do?
On top of all the rules mentioned so far, there were also a handful of warnings—mostly around setting application-wide security. This is good advice, but depending on the use case might not be as vital. The great thing is, you can customize the scan to your needs by adjusting the flags, and even add comments to your code—much like you would with a linter—to ignore parts that are intentional choices. You can even write your own custom rules to check for any code patterns or security best practices that aren't included in the default ruleset.
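For example, at the time of writing Bearer supports linter-style ignore comments (check the current docs for the exact syntax and placement); a line that's a deliberate, accepted choice can be annotated like this so future scans skip it:

```ruby
require "digest"

# bearer:disable ruby_lang_weak_encryption
# We accept MD5 here because a third-party API requires this format.
checksum = Digest::MD5.hexdigest("example@example.com")
puts checksum
```

Like linter suppressions, these are best paired with a short comment explaining why the finding is acceptable, so the decision is auditable later.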
The SAST security report is the core of the Bearer CLI app, but there are also privacy reports, secret detection, and a deeper dataflow report. You can even run it as a GitHub Action each time new code gets pushed. Give it a try on your own apps, improve the code on some of your favorite open-source projects, and help make it better.
Let me know in the comments what other guides you'd like to see on using the CLI. And, if you work on forem, let us know if we can improve the results!