Introduction
Mark Zuckerberg recently shared a long post on his Facebook page about open source AI being the path forward. He made a pretty good argument in favor of open source AI. He explained how the shift from closed-source Unix to open-source Linux was a game-changer back in the day, and how Linux now powers the most widely used devices in the world---mobile phones. He then discussed how open-source AI tools such as the Meta-developed Llama 3.1 405B will be the best choice for developers, businesses, and the world in general. But enough of what Mr. Zuckerberg said, let's start with understanding the basics of open source AI.
So, what is Open Source AI?
In simple words, Open-Source AI means AI technologies developed under open-source licenses. This means anyone can use, modify, and share these AI technologies' source code, models, and algorithms, as they are made freely and publicly available. You knew this already, didn't you? 😛
Why is Open Source AI a Big Deal? Why do Developers and Businesses love it?
Most of us who aren't exactly living in a utopia still believe in democracy -- you know, the whole free and fair opportunity for everyone kind of thing. And guess what? Open-source AI tools are totally on board with that vision!
Open-source AI Democratizes Technology!
Just like everyone deserves the essentials of life, like food, clothing, and shelter, everyone has the right to the world's cutting-edge technologies as well.
Open-source AI tools make exactly that possible.
They level the playing field, giving everyone access to powerful AI tools, not just those with deep pockets. So, if you are a developer with some clever tricks and skills up your sleeve that can make a difference to a new technology, then open source is your thing.
Liberty to Customize
Duh! It's obvious. It's open, so anyone can customize it according to their needs. And, we engineers get our dopamine fix by delving deep into a codebase and modifying it to suit our needs. Open-source AI gives developers the freedom to build tailor-made solutions for the different needs of different organizations.
For instance, consider an e-commerce business that wants to build a personalized recommendation system.
Sounds cool, right?
But how can one achieve this?
This is where open source AI swooshes in like a savior!
With the help of an open-source AI framework like TensorFlow, developers can customize the recommendation algorithm to fit their unique product catalog and user behavior.
They can tweak the model parameters, integrate additional data sources, and optimize the system for their specific performance needs.
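To make this concrete, here's a minimal sketch of what such a customized recommender could look like in TensorFlow/Keras. The catalog sizes, embedding dimension, and the extra "session length" signal are assumptions for illustration, not a real e-commerce schema or a production architecture:

```python
# A toy, customizable recommender in TensorFlow/Keras (illustrative only).
import tensorflow as tf

NUM_USERS = 10_000       # hypothetical user base size
NUM_PRODUCTS = 5_000     # hypothetical product catalog size
EMBED_DIM = 32           # a knob you are free to tweak

user_id = tf.keras.Input(shape=(), dtype=tf.int32, name="user_id")
product_id = tf.keras.Input(shape=(), dtype=tf.int32, name="product_id")
# An extra behavioural signal -- the kind of "additional data source"
# mentioned above -- merged straight into the model.
session_length = tf.keras.Input(shape=(1,), dtype=tf.float32, name="session_length")

user_vec = tf.keras.layers.Embedding(NUM_USERS, EMBED_DIM)(user_id)
product_vec = tf.keras.layers.Embedding(NUM_PRODUCTS, EMBED_DIM)(product_id)

features = tf.keras.layers.Concatenate()([user_vec, product_vec, session_length])
hidden = tf.keras.layers.Dense(64, activation="relu")(features)
score = tf.keras.layers.Dense(1, activation="sigmoid", name="purchase_probability")(hidden)

model = tf.keras.Model([user_id, product_id, session_length], score)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
model.summary()
```

Because the whole stack is open, nothing stops you from swapping the architecture, the loss, or the input features as your catalog and user behaviour evolve.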
This level of customization would be difficult with proprietary software. Have you seen their prices? Shoot! 🔫
Transparency and Trust
We, developers, like to peek under the hood because let's accept it, coding is our superpower. And knowing exactly what our power can do helps us trust it, especially when we're in the middle of an apocalypse.
Open source AI offers exactly that -- trust and transparency. Open-source AI code is open to review, so we can check it for any sneaky biases or nefarious tricks.
Transparency is our sidekick to ensure that our AI allies are as reliable as they are powerful!
Let's take an example. Imagine you are building a financial app that uses AI to predict market trends.
If you use open-source AI to build this predictive model, you get to review the code thoroughly: you can check whether the default settings are skewing predictions, look for any hidden errors or biases, and build an app that is thorough, accurate, and, most importantly, unbiased.
Also, you can assure your stakeholders and customers that your AI-modeled app is thoroughly vetted, reliable, and adheres to industry best practices. All this is possible because you have the power to inspect, alter, and adapt the model, unlike with a proprietary AI system that offers a black-box approach where you can't see how predictions are made, which leads to a lack of visibility and, ultimately, distrust.
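As a hedged sketch of the kind of audit open source makes possible, here's what "reviewing the defaults" might look like with scikit-learn standing in as the open-source library; the feature names and toy data are made up, not a real market dataset:

```python
# Inspecting an open-source model's defaults and learned weights before trusting it.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

model = GradientBoostingRegressor()   # every default is documented and inspectable
print(model.get_params())             # check defaults that could quietly skew predictions

# Fit on toy data and inspect which inputs actually drive the forecast.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))         # e.g. [volatility, volume, sentiment] (illustrative)
y = 0.5 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(scale=0.1, size=500)
model.fit(X, y)

for name, importance in zip(["volatility", "volume", "sentiment"],
                            model.feature_importances_):
    print(f"{name}: {importance:.2f}")  # a suspiciously skewed weight is a flag to review
```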
Interoperability and Migration
Is it not enough that we, mere mortals, are tied down by our own fate that now you have to tie us down within your closed AI ecosystem? Let us breathe and build. Phew!
Unlike closed source AI, open source AI offers a flexible ecosystem that allows businesses and developers to build their own systems with seamless integration into other tools and platforms. Switching from one open source tool to another, or integrating multiple tools, is generally easier without the constraints of proprietary systems.
Let me explain the interoperability that open source AI tools offer through an interesting case study.
CERN, the European Organization for Nuclear Research, operates the world's largest particle physics laboratory. Previously, CERN relied on proprietary tools like MATLAB and SAS for data analysis and management. With these closed-source tools, they faced major challenges such as high costs, limited interoperability, and difficulties in upgrading.
To address these issues, CERN adopted open source solutions like Apache Spark for large-scale data processing and TensorFlow for machine learning.
This switch enabled CERN to process and analyze large datasets efficiently, saving on software licensing fees and ensuring continuous integration of new technologies. By leveraging the open-source community, CERN not only improved its data analysis capabilities but also ensured that its systems could easily interact with other platforms and adapt to future needs.
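As a rough illustration of the kind of large-scale processing described above, here's a minimal PySpark sketch; the dataset path and column names are hypothetical, not CERN's actual schema:

```python
# A toy Spark aggregation over a large event dataset (illustrative only).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("detector-event-summary").getOrCreate()

# Hypothetical path and columns -- stand-ins for whatever the real pipeline reads.
events = spark.read.parquet("s3://example-bucket/detector-events/")

summary = (
    events
    .groupBy("detector_id")
    .agg(
        F.count("*").alias("event_count"),
        F.avg("energy_gev").alias("avg_energy_gev"),
    )
)
summary.show()
spark.stop()
```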
This case study highlights how open-source AI can drive innovation, cost savings, and interoperability in complex, data-intensive environments.
Data Protection and Privacy - the Scary Ps
Remember when EU countries had a collective privacy freakout some time back? They were seriously spooked about how their data was being handled and shared.
But, with open source AI tools, governments and organizations got transparency and control over how their data was being processed and secured.
The open-source community actively reviews and improves code, ensuring data privacy and protection. This approach not only helps meet stringent regulations but also puts a smile on the faces of developers and businesses who can confidently navigate the privacy minefield with a clearer, more secure path.
Open Source AI is Light on the Pockets
We developers love open source AI tools as these don't cost us an arm and a leg. Most often they are available for free. For us, it's like finding out that some chic coffee shop is offering unlimited refills of our favorite coffee! ☕
No pricey licenses. No pricey Starbucks Coffee 😛
Finally! What Open source AI means for the World
Open source AI is basically the secret ingredient for a future where AI benefits everyone. Imagine AI turbocharging human productivity and creativity and making life better all around, while also giving the economy and research a serious boost. Keeping AI open source means more people get in on the action and prevents a handful of companies from hogging all the power. It also makes the tech safer.
Sure, there's chatter about open source AI being risky, but I'm betting it's actually less risky than the closed versions. Open source means more eyeballs checking out the code, which cuts down on those accidental blunders. And when it comes to intentional mischief, open source helps keep the troublemakers in check. Moreover, rigorous testing and transparency mean we catch issues before they become problems.
Middleware Open-source - A Game Changer for Your Engineering Team's Productivity
At Middleware, we're practically open source campaigners, so we have rolled out our own stellar open source DORA Metrics! This tool's got your back with all the juicy details you need on deployment frequency, lead time, MTTR, and change failure rate.
Middleware's open source version is the perfect tool for engineering managers juggling multiple projects across multiple engineering teams. Now, get ready to transform your DevOps game with some next-level predictability and visibility!
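If the four metrics are new to you, here's a hedged, toy-data sketch of what each one measures. This is not Middleware's implementation, just the basic arithmetic behind the terms:

```python
# Toy calculations behind the four DORA metrics (illustrative data only).
from datetime import datetime, timedelta

deployments = [  # (deployed_at, first_commit_at, caused_a_failure)
    (datetime(2024, 7, 1, 10), datetime(2024, 6, 29, 9), False),
    (datetime(2024, 7, 3, 15), datetime(2024, 7, 2, 11), True),
    (datetime(2024, 7, 8, 12), datetime(2024, 7, 7, 18), False),
]
restore_times = [timedelta(hours=2), timedelta(minutes=45)]  # time to restore service

weeks_observed = 2
deployment_frequency = len(deployments) / weeks_observed
lead_time = sum((d - c for d, c, _ in deployments), timedelta()) / len(deployments)
change_failure_rate = sum(failed for *_, failed in deployments) / len(deployments)
mttr = sum(restore_times, timedelta()) / len(restore_times)

print(f"Deployment frequency: {deployment_frequency:.1f} per week")
print(f"Lead time for changes: {lead_time}")
print(f"Change failure rate: {change_failure_rate:.0%}")
print(f"MTTR: {mttr}")
```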
FAQs
1. How does open source AI work?
Open source AI allows developers to access an AI tool's source code and modify it under the terms of its open-source license. Open-source AI offers a shared codebase that the global developer community can improve and extend with new features. This collaborative approach makes AI tools accessible and helps accelerate advancements.
2. How do I contribute to an open-source AI project?
To contribute to an open source AI project, first find a project that aligns with your interests and skills. Then, acquaint yourself with its codebase, documentation, and contribution guidelines. Finally, you can contribute by fixing bugs, adding features, improving documentation, or participating in discussions and reviews on platforms such as GitHub.
3. How do I stay updated with changes in an open source AI project?
In order to stay updated, keep track of the project's repository, subscribe to alerts for the updates, and engage with the community through forums or chat channels. Regularly check the project's documentation and changelogs for the latest information.
Top comments (2)
Insightful! OpenAI, or even the Googles or Microsofts of the world, might be big, popular companies... but data privacy remains a concern for me.
There's this PR that was opened recently comparing GPT-4o and Llama 3.1 405b
AI Powered DORA Report #493
Pull Request Contents
Quick Look at how it works 👀
github.com/user-attachments/assets...
Acceptance Criteria fulfillment
Evaluation and Results: GPT4o Vs LLAMA 3.1
We did the DORA AI analysis for July on the following open-source repositories: facebook/react, middlewarehq/middlware, meta-llama/llama and facebookresearch/dora.
Mathematical Accuracy
Actual DORA Metrics score (calculated by Middleware): 5/10
GPT 4o's DORA score: 5/10
LLAMA 3.1's DORA score: 8/10 (incorrect)
GPT 4o's DORA score was closer to the actual DORA score than LLAMA 3.1's in 9/10 cases, hence GPT4o was more accurate than LLAMA 3.1 in this scenario.
Data Analysis
The trend data for the four key DORA metrics, calculated by Middleware, was fed to the LLMs as input along with different experimental prompts to ensure concrete data analysis.
The trend data is usually a JSON object with date strings as keys, representing weeks' start dates mapped to the metric data.
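For illustration, a trend payload in that shape might look roughly like this; the field names and values are assumptions based on the description above, not the exact schema Middleware sends:

```python
# Hypothetical example of the weekly trend data handed to the LLMs.
trend_data = {
    "2024-07-01": {"lead_time_seconds": 1_467_936, "deployment_count": 12},
    "2024-07-08": {"lead_time_seconds": 980_400, "deployment_count": 9},
    "2024-07-15": {"lead_time_seconds": 1_123_200, "deployment_count": 15},
}
```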
Mapping Data: Both models were on par at extracting data from the JSON and interpreting it correctly. Example: both GPT and LLAMA were able to map the correct data to the input weeks without errors or hallucinations.
[Screenshots: deployment trends summarised by GPT4o and by LLAMA 3.1 405B]
Extracting Inferences: Both the models were able to derive solid inferences from data.
LLAMA 3.1 identified the week with the maximum lead time, along with the reason for the high lead time.
This inference could be verified by the Middleware Trend Charts.
GPT4o was also able to extract the week with the maximum lead time, as well as the reason: a high first-response time.
Data Presentation: Data presentation has been hit or miss with LLMs. There are cases where GPT presents data better but lags behind LLAMA 3.1 in accuracy, and there have been cases, like the DORA score, where GPT was able to do the math better.
LLAMA and GPT were both given the lead time value in seconds. LLAMA rounded it off closer to the actual value of 16.99 days, while GPT rounded it to 17 days 2 hours but presented the data in a more detailed format.
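For context, here's a small conversion sketch showing what that seconds-to-days rounding involves; the input value is reconstructed from the 16.99-day figure, since the raw number isn't shown in the PR:

```python
# Converting a lead time given in seconds into days and hours.
lead_time_seconds = 16.99 * 86_400            # ~1,467,936 s, assumed for the example
days, remainder = divmod(lead_time_seconds, 86_400)
hours = remainder / 3_600
print(f"{days:.0f} days {hours:.1f} hours")   # -> "16 days 23.8 hours"
```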
[Screenshots: lead time as presented by GPT4o and by LLAMA 3.1 405B]
Actionability
[Screenshots: actionability output from GPT4o and from LLAMA 3.1 405B]
Summarisation
To test the summarisation capabilities of the models, we asked each model to summarise each metric trend individually and then fed the outputs for all the trends back into the LLM to get an overall summary, or, in Internet slang, a DORA TL;DR for the team.
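Here's a rough sketch of that two-stage flow; call_llm is a hypothetical stand-in for whichever model endpoint (GPT4o or a self-hosted LLAMA 3.1 405B) is under test, and the prompts and trend values are illustrative:

```python
# A two-stage summarisation loop: per-metric summaries first, then a combined TL;DR.
def call_llm(prompt: str) -> str:
    # Placeholder: swap in a real client call (GPT4o, a self-hosted LLAMA endpoint, etc.).
    return f"[summary of: {prompt[:60]}...]"

metric_trends = {  # illustrative weekly values, keyed by week start date
    "lead_time": {"2024-07-01": 1_467_936, "2024-07-08": 980_400},
    "deployment_frequency": {"2024-07-01": 12, "2024-07-08": 9},
    "change_failure_rate": {"2024-07-01": 0.08, "2024-07-08": 0.11},
    "mean_time_to_restore": {"2024-07-01": 7_200, "2024-07-08": 2_700},
}

per_metric_summaries = {
    name: call_llm(f"Summarise this weekly trend for {name}: {trend}")
    for name, trend in metric_trends.items()
}

dora_tldr = call_llm(
    "Combine these metric summaries into a short DORA TL;DR for the team:\n"
    + "\n".join(f"- {name}: {summary}" for name, summary in per_metric_summaries.items())
)
print(dora_tldr)
```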
The ability to summarise large amounts of data is similar in both LLMs.
[Screenshots: DORA TL;DR output from LLAMA 3.1 405B and from GPT4o]
Conclusion
For a long time, LLAMA was trying to catch up with GPT in terms of data processing and analytical abilities. Our earlier experimentation with older LLAMA models led us to believe that GPT was way ahead, but the recent LLAMA 3.1 405B model is on par with GPT4o.
If you value your customers' data privacy and want to try out the open-source LLAMA 3.1 models instead of GPT-4, go ahead! There will be a negligible difference in performance, and you will be able to ensure data privacy if you use self-hosted models. Open-source LLMs have finally started to compete with their closed-source counterparts.
Both LLAMA 3.1 and GPT4o are super capable of deriving inferences from processed data and making Middleware’s DORA metrics more actionable and digestible for engineering leaders, leading to more efficient teams.
Future Work
This was an experiment to build an AI-powered DORA solution, and in the future we will focus on adding greater support for self-hosted or locally running LLMs from Middleware. Enhanced support for AI-powered action plans throughout the product using self-hosted LLMs, while ensuring data privacy, will be our goal for the coming months.
Proposed changes (including videos or screenshots)
Added Services
AIAnalyticsService - to allow summarising and inference of DORA data based on different models.
UI Changes
github.com/user-attachments/assets...
Added APIs
Added Compiled Summary API to take data from all the above
Fetch Models
curl --location 'http://localhost:9696/ai/models'
Further comments
Zuck went from stealing all the data to using all this data for LLMs. I'm conflicted about supporting Meta models, but that's the best shot against OpenAI. Great read!