I've been actively trying for a while to contribute to open-source projects. However, I had to face the harsh reality that open source is not what I thought or expected, because I discovered a common pattern among open-source projects that is a bit troubling.
Project maintainers are warm and welcoming until you try to touch their code. They are fine with having developers fix their documentation typos or write unit tests for them. Otherwise, they become defensive and refuse contributions from people they are not familiar with, even if the code makes sense (God forbid you have an Arabic or Indian nickname -- sarcasm).
Open-source security tools don't allow you to post about security vulnerabilities in public. Instead, they ask you to send a private email about any critical issues, which raises a lot of questions about transparency and whether they are interested in profit or in giving us a better product.
The corporate devs vs. contributors war:
I have noticed that the bigger or more popular a project is, the more of its developers are actually employees of the company behind the product. This creates a conflict between community and corporate interests: developers working as employees give each other more favorable treatment than they give open-source contributors. Not only that, but this divide creates an outright us-vs-them mindset, especially among the corporate devs, who will be inclined to blame open-source contributors for any issues that may arise. I haven't yet seen them do the blaming, but I did notice the difference in treatment (double standards).

Project maintainers can be toxic and unpleasant to deal with. Unfortunately, as noted above, project maintainers can be extremely toxic. If you compare their attitude to that of open-source contributors, who work for free for the betterment of the community, you will quickly realize that some maintainers hate their job and were hired because they are skilled, not because they love the work.
The above issues do not apply to all projects or all project maintainers, but there are definitely bad apples, and I'd recommend avoiding any maintainer who engages in red-flag behaviors.
How can Open-Source improve?
They need to be more transparent, especially about security issues in their code and software. Lack of transparency is a major red flag that cannot be tolerated. If there are bugs or security problems, they need to be public, and people should have the right to know whether or not their product is secure from cyber threats.
Separate employees from open-source contributors. Your employees shouldn't have the last say on code changes; the community should. Closing issues and PRs you don't like doesn't promote the culture of open source that the community and consumers want! Auditors should be allowed to review which issues and PRs on GitHub your employees closed, especially when those issues/PRs raise important questions that could impact the product and customers.
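Anyone can already do a rough version of this audit with GitHub's public REST API. Here is a minimal Python sketch; the "closed with zero comments" heuristic is my own assumption about what "closed without discussion" might look like, not an established audit method:

```python
import json
from urllib.request import Request, urlopen


def fetch_closed_issues(owner, repo, token=None):
    """Fetch the most recently closed issues/PRs of a repo from the GitHub API."""
    url = f"https://api.github.com/repos/{owner}/{repo}/issues?state=closed&per_page=100"
    req = Request(url, headers={"Accept": "application/vnd.github+json"})
    if token:  # an optional personal access token raises the rate limit
        req.add_header("Authorization", f"Bearer {token}")
    with urlopen(req, timeout=10) as resp:
        return json.load(resp)


def flag_silent_closes(issues):
    """Return titles of items that were closed with no discussion at all."""
    return [i["title"] for i in issues if i.get("comments", 0) == 0]
```

For example, `flag_silent_closes(fetch_closed_issues("someorg", "somerepo"))` would list everything closed without a single comment; a community auditor could then check whether those closures were justified.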
Companies that sponsor or invest in open source need to view the developer and contributor community as more than tokenism and marketing labels: as part of the product's success. This means that if you want to hire project maintainers, they need to have the right framework, be more active, and understand the culture behind open source (instead of closing issues and PRs whose authors have Arabic or Indian names).
Despite the above setbacks, there are still decent open-source projects. I'd recommend aiming for less popular projects, as their maintainers are more likely to view your code changes as a useful contribution rather than as an insult.
Stay strong and always be an #opensource contributor!
Top comments (5)
I agree with almost everything you wrote. In my 10 years of open source (which I now hate) I found only toxic people and zero contributions. People think of open source as "I will fork/clone it and fix things as I want, then republish" instead of collaborating and improving the original project.
For example, I'm the sole author of an important Visual Studio Code extension (born on SublimeText) developed many years ago; it has almost 7 million users... but I got zero collaborations or PRs over the years. Instead I got 800 forks and many, many republications under different names (in most cases without a single line of code changed).
open-source != transparency != do what you want
Very good point you raised there, highlighting that the opposite can be true too.
I'd personally try to collaborate with the original project first. If I find them toxic, I'd move on to other projects where my contributions and I feel respected and valued.
At the end of the day, it boils down to the individuals behind the screen. However, I'd like to highlight an issue.
I don't like to use the label "toxic". While deliberate toxic behavior does exist among some individuals, sometimes it's a difference in personality or communication style.
The tricky part is distinguishing between individuals who outright don't like you and those who simply lack communication skills.
In my perspective, if I try to work with someone but they are not even giving me a chance or any constructive feedback, it means that they don't actually want to collaborate/work with me. Then, I'd move on.
There is a difference between someone who says "Your code is bad" and someone who says "Your code is good but we could do xyz".
If the definition of open source were about improving the community and continuous improvement, then lots of projects would fail to meet that definition.
Again, it's the individuals behind the screen ...
There is a reason for that. When you discover a security vulnerability, you don't want to shout it out to the public for bad actors to exploit.
Open-source security tools request private reporting of vulnerabilities to ensure responsible disclosure, a practice designed to address security issues effectively while minimizing potential harm. A typical security policy provides:
A dedicated email address (e.g., security@project.org).
PGP keys for encrypted communication, ensuring confidentiality.
Guidance on what information to include (e.g., reproduction steps, impact assessment).
After resolving the issue, maintainers may publicly acknowledge the contributor who reported it responsibly, showcasing collaboration while maintaining ethical practices.
This approach strikes a balance between transparency and security, ensuring that vulnerabilities are managed in a way that protects the community at large.
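The policy elements listed above often live in a SECURITY.md file at the repository root. A minimal sketch of what such a file might look like; the email address, key fingerprint, and response times are hypothetical placeholders, not any particular project's policy:

```markdown
# Security Policy

## Reporting a Vulnerability

Please do NOT open a public issue for security problems.
Instead, email security@example.org (placeholder address) and include:

- A description of the vulnerability and its impact
- Step-by-step reproduction instructions
- Affected versions and configuration

For confidential reports, encrypt your mail with our PGP key
(fingerprint: 0000 0000 0000 0000 -- placeholder).

We aim to acknowledge reports within 72 hours and to ship a fix and a
coordinated public advisory before any disclosure. Reporters are
credited in the release notes unless they ask to remain anonymous.
```

GitHub surfaces this file under the repository's "Security" tab, so reporters find the private channel before they reach the public issue tracker.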
BTW, have a look at a maintainer's view on why you may not receive an answer to your report.
Very good points you raised there. The question is, how do you know if bad actors haven't already discovered it?
I disagree on this point. The reasonable user is aware that there is no perfect software; but the reason someone uses an "open-source" solution is that they want transparency rather than a perfect solution!
Again, if we are interested in profit, then yes, we care about the brand and its marketing; but if we care about the customer, then we are not thinking about profit but rather about how to be transparent and responsive.
With governments' never-ending violation of our privacy and rights, how do we know that issues sent to the private email actually get addressed? If the company is secretly colluding with government agencies, the person responding to vulnerability reports could actually be one of them! I'm not being paranoid; tech companies and governments have found workarounds to spy on us without legal accountability!
I think that if a user or customer is naive enough to believe that the "other" solutions don't have vulnerabilities or problems, I wouldn't want such a user in the first place, because they don't understand the products they are using. Vulnerabilities are being discovered all the time, and they keep emerging as more features or techniques are developed. Therefore, as far as I'm concerned, I'd rather lose trust in a product than risk having something on my device that could harm me without my knowing.
I do agree with this point to an extent, but that's the problem: the moment any part of the process is centralized, the smart user should assume there is no accountability. I'm not saying there is a perfect solution to this problem, since I recognize it's never black or white. However, I'd rather have users live with the harsh reality that such things happen than give them the illusion that their data is safe, as we learned when Meta (formerly Facebook) abused our data, alongside other prominent names.
I'm just playing the devil's advocate, but I appreciate your points and the debate!
I don't get your point. Many open source projects are widely used, and disclosing security vulnerabilities prematurely before a fix is available could result in serious harm.
When you look at the policies of big open-source projects like Plone or Django, you will find that they are very transparent.
The examples you give are related to closed-source projects.