In the previous episode we generated an SBOM and checked its content, but I mentioned issues with the vulnerability scan. Let's try another tool to complete that part now.
$ git clone https://github.com/nexB/vulnerablecode.git
$ cd vulnerablecode
$ make envfile
$ docker-compose build
So far, the process is the same as for ScanCode.
No surprise, we run the stack here as well:
$ docker-compose up -d
This time we have three containers
$ docker-compose ps
vulnerablecode_db_1 docker-entrypoint.sh postgres Up 5432/tcp
vulnerablecode_nginx_1 /docker-entrypoint.sh ngin ... Up 0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp
vulnerablecode_vulnerablecode_1 /bin/sh -c ./manage.py mi ... Up 8000/tcp
According to the documentation, we need to import data at this point.
$ docker-compose run vulnerablecode ./manage.py import --list
$ docker-compose exec vulnerablecode ./manage.py import --all
$ docker-compose exec vulnerablecode ./manage.py improve --all
This way we imported/updated all the data. The documentation also shows how to collect only part of the information (to save time, space, and bandwidth); however, I believe that importing everything is the only way to get full visibility of issues.
The process is long. I mean it. The cause lies mostly in my configuration, with the containers running under WSL2.
I found that some importers need additional configuration.
For example, GitHubAPIImporter:
Importing data using vulnerabilities.importers.github.GitHubAPIImporter
Cannot call GitHub API without a token set in the GH_TOKEN environment variable.
Traceback (most recent call last):
  File "/app/vulnerabilities/management/commands/import.py", line 61, in import_data
    ImportRunner(importer).run()
  File "/app/vulnerabilities/import_runner.py", line 44, in run
    count = process_advisories(advisory_datas=advisory_datas, importer_name=importer_name)
  File "/app/vulnerabilities/import_runner.py", line 54, in process_advisories
    for data in advisory_datas:
  File "/app/vulnerabilities/importers/github.py", line 171, in advisory_data
    response = utils.fetch_github_graphql_query(graphql_query)
  File "/app/vulnerabilities/utils.py", line 241, in fetch_github_graphql_query
    raise GitHubTokenError(msg)
vulnerabilities.utils.GitHubTokenError: Cannot call GitHub API without a token set in the GH_TOKEN environment variable.
Failed to run importer vulnerabilities.importers.github.GitHubAPIImporter. Continuing...
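The fix should be simple: generate a GitHub personal access token and expose it to the container as GH_TOKEN. A minimal sketch of one way to do it (the token value is a placeholder, and I haven't verified this exact invocation):
$ docker-compose exec -e GH_TOKEN=<your-github-token> vulnerablecode ./manage.py import vulnerabilities.importers.github.GitHubAPIImporter
Alternatively, GH_TOKEN could probably be added to the env file generated by make envfile, assuming the compose file passes it through to the container.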
Well, I had other issues, so I decided to take a look at the importers I could use:
$ docker-compose exec vulnerablecode ./manage.py import --list
Vulnerability data can be imported from the following importers:
vulnerabilities.importers.nginx.NginxImporter
vulnerabilities.importers.alpine_linux.AlpineImporter
vulnerabilities.importers.github.GitHubAPIImporter
vulnerabilities.importers.nvd.NVDImporter
vulnerabilities.importers.openssl.OpensslImporter
vulnerabilities.importers.redhat.RedhatImporter
vulnerabilities.importers.pysec.PyPIImporter
vulnerabilities.importers.debian.DebianImporter
vulnerabilities.importers.gitlab.GitLabAPIImporter
vulnerabilities.importers.pypa.PyPaImporter
vulnerabilities.importers.archlinux.ArchlinuxImporter
vulnerabilities.importers.ubuntu.UbuntuImporter
vulnerabilities.importers.debian_oval.DebianOvalImporter
I imported a few of them one by one.
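For reference, a single importer is run by passing its full name from the list above to the import command, for example:
$ docker-compose exec vulnerablecode ./manage.py import vulnerabilities.importers.nginx.NginxImporter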
After the import, we need to run one last command:
$ docker-compose exec vulnerablecode ./manage.py improve --all
This takes ages (I mentioned the cause earlier). Fortunately, we can run the improvers one by one, selecting the proper item from the list:
$ docker-compose exec vulnerablecode ./manage.py improve --list
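With the list printed, a single improver can then be run by its full name. A sketch (the improver name below is illustrative; use one actually shown by --list):
$ docker-compose exec vulnerablecode ./manage.py improve vulnerabilities.improvers.default.DefaultImprover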
Ok... As I lost my patience, the day after I came back to the table and started again. This time I created an EC2 instance in AWS and executed the commands above simultaneously, on my WSL2-based Docker and on EC2. And you know what? No difference.
Ok, I said. Let's try once again; this time I will not interrupt it.
5 hours later... "Something" happened and my process died.
Oh f****....
I decided to go with the improve command anyway. It also took a lot of time, so I gave up. It is not functional at all. I wonder if this happened only to me, or if your experience is the same? Share your opinion with me!
Preparations are not complete, but let's take a look at what we can get from the service.
When we enter the website, we see a very simple GUI.
We can provide a package name to find any vulnerabilities in it.
At this point we should be able to get more details by clicking the package name, but I got only errors. Not funny.
Another option is to search for a vulnerability by its ID.
We have some data there.
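By the way, the GUI is not the only interface; the service also exposes a REST API, which would be the more useful entry point for automation. A hedged example (I am assuming the /api/packages endpoint and the purl filter parameter based on my reading of the docs; the package URL is arbitrary):
$ curl -s "http://localhost/api/packages/?purl=pkg:nginx/nginx@1.20.0"
If the data collection worked reliably, this endpoint would be the natural integration point for CI/CD.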
A few more words about collecting the data. I said the process took a good part of the day; I would say around 10 hours, and it still wasn't finished. It is totally disappointing. I checked the performance a few times, and this part is also disappointing. Here is a performance snapshot:
Summary
I don't like this tool. It is very ineffective and does not add any value to the chain. And most importantly, I have no idea whether the collecting process can work efficiently or not. What I observed is useless.
The general idea of this tool is quite OK. If it can be integrated with ScanCode (by the way, both tools are part of the AboutCode suite) and serve as a service providing details about vulnerabilities, I'm in. Right now, as far as I can see, it is not the missing tool (which I mentioned in the previous episode).
To be clear, this tool doesn't generate any SBOMs; that is not its function. I didn't spend much time on it, as my approach is to find a reasonable, quickly implementable, and easy tool to add to CI/CD processes.
The project is quite young; the current version is 0.1. So, hopefully, it will work better soon :)