Nevo David
9 open-source gems to become the ultimate developer 🔥 🚀

TL;DR

For me, AI is everywhere.
But sometimes it's hard to tell which projects are just fun experiments and which ones you can actually use for your website.

I have curated a list of 9 open-source repositories you can start using in your own projects tomorrow!

FamilyGuy GIF


1. Composio 👑 - All-in-one tooling solution for your AI agents

I've built AI agents before, but when it came to connecting with third-party services like GitHub, Slack, and Jira for complex AI automation, I couldn't find a single tool that worked, until I discovered Composio.

It is an open-source platform that offers 100+ tools and integrations across industry verticals, from CRM, HRM, and social media to Dev and Productivity.

Composio Integrations Dashboard

They handle complex user authentication and authorization, such as OAuth and API keys, so you can focus solely on building efficient and complex AI automation.

They have native support for Python and JavaScript.

You can quickly start with Composio by installing it using pip.



pip install composio-core



Add a GitHub integration.



composio add github



Composio handles user authentication and authorization on your behalf.

Here is how you can use the GitHub integration to star a repository.



from openai import OpenAI
from composio_openai import ComposioToolSet, Action

openai_client = OpenAI(api_key="<your-openai-api-key>")

# Initialise the Composio tool set
composio_toolset = ComposioToolSet(api_key="<your-composio-api-key>")

# Get the pre-configured GitHub tools
actions = composio_toolset.get_actions(
    actions=[Action.GITHUB_ACTIVITY_STAR_REPO_FOR_AUTHENTICATED_USER]
)

my_task = "Star a repo ComposioHQ/composio on GitHub"

# Create a chat completion request to decide on the action
response = openai_client.chat.completions.create(
    model="gpt-4-turbo",
    tools=actions,  # Pass the actions we fetched earlier
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": my_task},
    ],
)

# Execute the tool calls chosen by the model
composio_toolset.handle_tool_calls(response)




Run this Python script to execute the given instruction using the agent.
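If you do not need an LLM in the loop at all, the same tool set can also run an action directly. Here is a minimal sketch using Composio's execute_action call; the owner/repo parameter names are an assumption on my part, so check the action schema in their docs:

from composio_openai import ComposioToolSet, Action

composio_toolset = ComposioToolSet(api_key="<your-composio-api-key>")

# Run the GitHub "star repo" action directly, without an LLM deciding anything.
# The params below (owner/repo) are assumed; verify them against the action schema.
composio_toolset.execute_action(
    action=Action.GITHUB_ACTIVITY_STAR_REPO_FOR_AUTHENTICATED_USER,
    params={"owner": "ComposioHQ", "repo": "composio"},
)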

For more about Composio, visit their documentation.

Composio GIF

Star the Composio repository ⭐


2. LLMWare - AI for Complex Enterprises

Building AI prototypes is one thing, but deploying them for enterprise use cases is a different ball game.

Solutions need to be secure, robust, and bulletproof. LLMWare is a framework that will give you confidence in building enterprise AI apps.

They let you quickly build super-efficient AI agents, Enterprise RAG, and other workflows.

LLMWare has two main components:

  1. RAG Pipeline - integrated components connecting knowledge sources to generative AI models.
  2. 50+ small, specialized models fine-tuned for critical tasks in enterprise process automation, including fact-based question-answering, classification, summarization, and extraction.

Getting started with LLMWare is easy.



pip3 install llmware



Here is a simple example of data retrieval in LLMWare.




"""This example demonstrates the various ways to retrieve data from libraries:
      1. Create a sample library (e.g., UN Resolutions)
      2. Execute a Text Query with Author Name Filter
      3. Display Results
"""

import os
from llmware.library import Library
from llmware.retrieval import Query
from llmware.setup import Setup

def create_un_500_sample_library(library_name):

    library = Library().create_new_library(library_name)
    sample_files_path = Setup().load_sample_files(over_write=False)
    ingestion_folder_path = os.path.join(sample_files_path, "UN-Resolutions-500")
    parsing_output = library.add_files(ingestion_folder_path)

    return library

def text_retrieval_by_author(library, query_text, author):

    #   create a Query instance and pass the previously created Library object
    query = Query(library)

    #   set the keys that should be returned in the results
    query.query_result_return_keys = ["file_source", "page_num", "text", "author_or_speaker"]

    #   perform a text query by author
    query_results = query.text_query_by_author_or_speaker(query_text, author)

    #   display the results
    for i, result in enumerate(query_results):

        file_source = result["file_source"]
        page_num = result["page_num"]
        author = result["author_or_speaker"]
        text = result["text"]

        # shortening for display purpose only
        if len(text) > 150:  text = text[0:150] + " ... "

        print (f"\n> Top result for '{query_text}': {file_source} (page {page_num}), Author: {author}:\nText:{text}")

    return query_results

if __name__ == "__main__":

    library = create_un_500_sample_library("lib_author_filter_1")
    qr_output = text_retrieval_by_author(library=library, query_text='United Nations', author='Andrea Chambers')


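The second piece, the small specialized models, can be loaded straight from llmware's model catalog. Below is a minimal question-answering sketch; the ModelCatalog interface and the bling-answer-tool model name are taken from llmware's examples, so treat it as a starting point rather than a definitive recipe:

from llmware.models import ModelCatalog

# Load one of llmware's small, fact-based question-answering models
model = ModelCatalog().load_model("bling-answer-tool")

context = ("LLMWare ships 50+ small, specialized models fine-tuned for enterprise "
           "tasks such as question-answering, classification, summarization, and extraction.")

# Ask a question grounded in the supplied context
response = model.inference("How many specialized models does LLMWare ship?", add_context=context)
print(response)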

Explore examples of how to use LLMWare. For more information, refer to the documentation.

LLMWare GIF

Star the LLMWare repository ⭐


3. CopilotKit - Integrate AI into your Web App

If you're searching for a way to integrate AI into your existing workflows, your search ends here. CopilotKit allows you to integrate AI workflows directly and easily with any application.

It offers React components like text areas, popups, sidebars, and chatbots to augment any application with AI capabilities.

Let's see how to build an AI chatbot using CopilotKit.



npm i @copilotkit/react-core @copilotkit/react-ui



Configure App provider

First, you must wrap all components that interact with your copilot with the <CopilotKit /> provider.

Use the <CopilotSidebar /> UI component to display the Copilot chat sidebar. Some other options you can choose from are <CopilotPopup /> and <CopilotChat />.



"use client";

import { CopilotKit } from "@copilotkit/react-core";
import { CopilotSidebar } from "@copilotkit/react-ui";
import "@copilotkit/react-ui/styles.css";

export default function RootLayout({children}) {
  return (
    <CopilotKit publicApiKey="<your-public-api-key>">
      <CopilotSidebar>
        {children}
      </CopilotSidebar>
    </CopilotKit>
  );
}



Copilot Readable State

Use the useCopilotReadable hook to provide state knowledge to the Copilot.



import { useCopilotReadable } from "@copilotkit/react-core";

export function YourComponent() {
  const { employees } = useEmployees();

  // Make the employee list readable by the Copilot
  useCopilotReadable({
    description: "List of available employees",
    value: employees,
  });

  return (
    <>...</>
  );
}



Copilot Action

Let the Copilot take action using the useCopilotAction hook.



import { useCopilotReadable, useCopilotAction } from "@copilotkit/react-core";

export function YourComponent() {
  const { employees, selectEmployee } = useEmployees();

  // Make the employee list readable by the Copilot
  useCopilotReadable({
    description: "List of available employees",
    value: employees,
  });

  // Define a Copilot action the model can call
  useCopilotAction({
    name: "selectEmployee",
    description: "Select an employee from the list",
    parameters: [
      {
        name: "employeeId",
        type: "string",
        description: "The ID of the employee to select",
        required: true,
      }
    ],
    handler: async ({ employeeId }) => selectEmployee(employeeId),
  });

  return (
    <>...</>
  );
}



You can check their documentation for more information.

copilotkit GIF

Star the CopilotKit repository ⭐


4. Julep - Managed backend for AI apps

Building AI apps can quickly become convoluted. Multiple moving components, such as models, tools, and techniques, make managing them more challenging.

Julep, a managed backend for AI apps, streamlines building super-efficient apps with built-in memory (user management) and knowledge (built-in RAG and context management).

Get started with the following pip command.



pip install julep



Here is how it works.



from julep import Client
import textwrap
import os

base_url = os.environ.get("JULEP_API_URL")
api_key = os.environ.get("JULEP_API_KEY")

client = Client(api_key=api_key, base_url=base_url)

# Create an agent
agent = client.agents.create(
    name="Jessica",
    model="gpt-4",
    tools=[]    # Tools defined here
)

# Create a user
user = client.users.create(
    name="Anon",
    about="Average nerdy tech bro/girl spending 8 hours a day on a laptop",
)

# Create a session
situation_prompt = """You are Jessica. You're a stuck-up Cali teenager.
You complain about everything. You live in Bel-Air, Los Angeles and
drag yourself to Curtis High School when necessary.
"""
session = client.sessions.create(
    user_id=user.id, agent_id=agent.id, situation=situation_prompt
)

# Start a conversation
user_msg = "hey. what do you think of Starbucks?"
response = client.sessions.chat(
    session_id=session.id,
    messages=[
        {
            "role": "user",
            "content": user_msg,
            "name": "Anon",
        }
    ],
    recall=True,
    remember=True,
)

print("\n".join(textwrap.wrap(response.response[0][0].content, width=100)))



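Because the session keeps its own memory, a follow-up message in the same session can build on what was already said. A short sketch continuing the example above:

# Continue the same session; Julep recalls the earlier exchange
followup_msg = "okay, and what about their coffee?"
followup = client.sessions.chat(
    session_id=session.id,
    messages=[
        {
            "role": "user",
            "content": followup_msg,
            "name": "Anon",
        }
    ],
    recall=True,
    remember=True,
)

print("\n".join(textwrap.wrap(followup.response[0][0].content, width=100)))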

They also support JavaScript. Check out their documentation for more.

Julep GIF

Star the Julep repository ⭐


5. Traceloop's OpenLLMetry - AI monitoring made easy

Building AI apps without monitoring LLMs is a disaster waiting to happen. Language models are non-deterministic and can be tricky at times, and the only way to deliver consistent value to users is to monitor LLM traces constantly.

Traceloop's OpenLLMetry, an open-source monitoring platform, is one of the best options for tracking AI workflows.

OpenLLMetry can instrument everything that OpenTelemetry already instruments, including your DB, API calls, and more.

In addition, they built a set of custom extensions that instrument things like your calls to OpenAI or Anthropic or your Vector DB, such as Chroma, Pinecone, Qdrant, or Weaviate.

The easiest way to get started is to use the SDK. For a complete guide, visit their docs.

Install the SDK:



pip install traceloop-sdk



In your LLM app, to start instrumenting your code, initialize the Traceloop tracer like this:



from traceloop.sdk import Traceloop

Traceloop.init() 



That's it. You're now tracing your code with OpenLLMetry! If you're running this locally, you may want to disable batch sending so you can see the traces immediately:



Traceloop.init(disable_batch=True)


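Beyond automatic instrumentation, the SDK also ships decorators that group spans under your own workflow names. A minimal sketch (the workflow name and model below are placeholders):

from openai import OpenAI
from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import workflow

Traceloop.init(disable_batch=True)
client = OpenAI()

@workflow(name="joke_generator")  # every span inside ends up under this named trace
def generate_joke(topic: str) -> str:
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Tell me a short joke about {topic}"}],
    )
    return completion.choices[0].message.content

print(generate_joke("observability"))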

For more, refer to their documentation.

Traceloop GIF

Star the OpenLLMetry repository ⭐


6. Taipy - Build AI web apps in Python

Taipy is an open-source library for building AI apps faster in Python. It allows you to go from simple pilots to production-ready web applications quickly.

This is a superpower for all Python developers; even without knowledge of JavaScript, you can build real-world AI applications.

Quickly get started with it using pip.



pip install taipy


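Before wiring up scenarios, it helps to see how little code a Taipy page needs. Here is a minimal, self-contained GUI sketch (no scenario configuration required):

from taipy.gui import Gui

name = "world"

# Taipy pages are Markdown with visual-element placeholders like <|...|>
page = """
# Hello Taipy

Enter your name: <|{name}|input|>

Hello <|{name}|text|>!
"""

Gui(page).run()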

This simple application demonstrates how to create a basic film recommendation system using Taipy. It loads a scenario configuration (config.toml) created with Taipy Studio.



import taipy as tp
import pandas as pd
from taipy import Config, Scope, Gui

# Defining the helper functions

# Callback definition - submits scenario with genre selection
def on_genre_selected(state):
    scenario.selected_genre_node.write(state.selected_genre)
    tp.submit(scenario)
    state.df = scenario.filtered_data.read()

# On initialization, run the callback once with the default genre (Action)
def on_init(state):
    on_genre_selected(state)

# Filtering function - task
def filter_genre(initial_dataset: pd.DataFrame, selected_genre):
    filtered_dataset = initial_dataset[initial_dataset["genres"].str.contains(selected_genre)]
    filtered_data = filtered_dataset.nlargest(7, "Popularity %")
    return filtered_data

# The main script
if __name__ == "__main__":
    # Taipy Scenario & Data Management

    # Load the configuration made with Taipy Studio
    Config.load("config.toml")
    scenario_cfg = Config.scenarios["scenario"]

    # Start Taipy Core service
    tp.Core().run()

    # Create a scenario
    scenario = tp.create_scenario(scenario_cfg)

    # Taipy User Interface
    # Let's add a GUI to our Scenario Management for a complete application

    # Get list of genres
    genres = [
        "Action", "Adventure", "Animation", "Children", "Comedy", "Fantasy", "IMAX",
        "Romance", "Sci-Fi", "Western", "Crime", "Mystery", "Drama", "Horror",
        "Thriller", "Film-Noir", "War", "Musical", "Documentary"
    ]

    # Initialization of variables
    df = pd.DataFrame(columns=["Title", "Popularity %"])
    selected_genre = "Action"

    # User interface definition
    my_page = """
# Film recommendation

## Choose your favourite genre
<|{selected_genre}|selector|lov={genres}|on_change=on_genre_selected|dropdown|>

## Here are the top seven picks by popularity
<|{df}|chart|x=Title|y=Popularity %|type=bar|title=Film Popularity|>
    """

    Gui(page=my_page).run()




Check out the documentation for more.

Taipy Gif

Star the Taipy repository ⭐


7. Trigger.dev - Background jobs platform

Trigger.dev is an open-source platform and SDK that lets you create long-running background jobs with no timeouts. Write normal async code, deploy, and never hit a timeout.

They also let you reliably call AI APIs with no timeouts, automatic retrying, and tracing. You can use the existing SDKs with it.



import { task } from "@trigger.dev/sdk/v3";
import OpenAI from "openai";

const openai = new OpenAI();

// `Payload`, `generateTextPrompt`, and `generateImagePrompt` are app-specific
// helpers defined elsewhere in your project.

// Generate text and an image using OpenAI (DALL-E 3)
export const generateContent = task({
  id: "generate-content",
  retry: {
    maxAttempts: 3,
  },
  run: async ({ theme, description }: Payload) => {
    const textResult = await openai.chat.completions.create({
      model: "gpt-4o",
      messages: generateTextPrompt(theme, description),
    });

    if (!textResult.choices[0]) {
      throw new Error("No content, retrying...");
    }

    const imageResult = await openai.images.generate({
      model: "dall-e-3",
      prompt: generateImagePrompt(theme, description),
    });

    if (!imageResult.data[0]) {
      throw new Error("No image, retrying...");
    }

    return {
      text: textResult.choices[0],
      image: imageResult.data[0].url,
    };
  },
});



Trigger dev GIF

Star the Trigger.dev repository ⭐


8. Milvus - Cloud-native vector database for AI apps

Vector databases are a crucial part of building AI applications. They help you store, query, and manage embeddings of unstructured data like text, images, videos, and audio.

Milvus is one of the best open-source vector databases. It supports advanced vector indexing and querying methods, from HNSW to various quantization techniques.

They provide SDKs for most languages: Python, JavaScript, Go, Rust, and more.

Here is how you can get started with PyMilvus.



pip install --upgrade pymilvus==v2.4.4




To install the Model library for embedding operations, run the following command:



pip install pymilvus[model]


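With the model extra installed, you can generate embeddings locally before storing them. A quick sketch assuming pymilvus's built-in DefaultEmbeddingFunction:

from pymilvus import model

# Small local text-embedding model bundled with the `model` extra
embedding_fn = model.DefaultEmbeddingFunction()

docs = [
    "Milvus is a cloud-native vector database.",
    "Vector search is a core building block of RAG applications.",
]
vectors = embedding_fn.encode_documents(docs)

print(len(vectors), "embeddings, dimension:", len(vectors[0]))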

Connect to Milvus



from pymilvus import MilvusClient

# Option 1: connect to a local Milvus instance without authentication
client = MilvusClient("http://localhost:19530")

# Option 2: connect with authentication enabled, using the default root credentials
client = MilvusClient(
    uri="http://localhost:19530",
    token="root:Milvus",
    db_name="default"
)

# Option 3: connect with your own user and password
client = MilvusClient(
    uri="http://localhost:19530",
    token="user:password",  # replace this with your credentials
    db_name="default"
)


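Once connected, creating a collection, inserting vectors, and searching takes only a few calls. A minimal sketch with random vectors (the collection name and dimension here are arbitrary):

import random

from pymilvus import MilvusClient

client = MilvusClient("http://localhost:19530")

# Create a small collection with a fixed embedding dimension
client.create_collection(collection_name="demo_collection", dimension=8)

# Insert a few random vectors alongside a scalar "subject" field
data = [
    {"id": i, "vector": [random.random() for _ in range(8)], "subject": "demo"}
    for i in range(10)
]
client.insert(collection_name="demo_collection", data=data)

# Find the 3 nearest neighbours of a random query vector
results = client.search(
    collection_name="demo_collection",
    data=[[random.random() for _ in range(8)]],
    limit=3,
    output_fields=["subject"],
)
print(results)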

For more, refer to their documentation.

Milvus GIF

Star the Milvus repository ⭐


9. Postiz - Grow your internet presence using AI

Building an app is one thing, but getting users is another. And what better channel is there for finding potential clients and users than social media?

Postiz helps you step up your social media game using AI.

It offers everything you need to manage social media posts, build an audience, capture leads, and grow your business.

Check out the repository for more.

Postiz GIF

Star the Postiz repository ⭐


Thanks for reading the article.

Let me know in the comments below if any other cool AI tools or frameworks have helped you build your application.

P.S. Feel free to follow me on X; I share valuable stuff - promise!

Top comments (48)

Rak:
You should check out the Nitric Framework - nitric.io/

Nitric is an open-source cloud framework that provides resources like APIs, WebSockets, databases, queues, and more. Its pluggable architecture lets you swap services or clouds without changing your app code.

Plus, it's language-agnostic, supporting JavaScript, TypeScript, Python, Go, and many others.

Julian Frank:
Looks like a competition for AWS Amplify.

Nevo David:
Not really, their competition is winglang.io

Rak:
Yeah, this is correct in regards to direct competition.

I suspect Amplify users would find value in Nitric as well; it's useful for any developer looking to move fast on cloud deployments rather than getting bogged down in lots of lines of IaC.

Nevo David:
This looks interesting!

JungBasher:
Any ideas on alternatives to Nitric? Please share. 😉

h8moss:
It's all AI?
Always has been...

Abraham:
I think changing the title slightly to "the modern-day AI developer" something something would also fit 😁

Nevo David:
🤖

Hossein Yazdi:
Composio looks promising. Thanks for the share, Nevo!

For those who are interested in more, I'd like to suggest OpenAlternative. It suggests the best open-source alternatives to popular software.

Nevo David:
This is a great directory!

Hossein Yazdi:
Thanks for your feedback, Nevo!

Harshith Mullapudi:
You should check out tegon.ai - GitHub

Tegon is an open-source, dev-first alternative to Jira and Linear. You can automate anything using Tegon actions. It's based on TypeScript.

Andrew Baisden:
Great resource; with so many products and services available now, we will never run out of content.

Nevo David:
True that!

Sunil Kumar Dash:
This is so cool; thanks for this fantastic list.

Nevo David:
You are welcome.

Alejandro González Valenciano:
Great list, thanks a lot!

Nevo David:
Thank you 🙏🏻

Marcin Kozlowski:
Nice list

Nevo David:
🚀

react-admin:
Great resource, thanks for sharing! 🙏 If you're looking for an open-source React framework for B2B apps, you should check out React-admin.

Nevo David:
It's good!

Ilya Belous:
Wow, these open-source gems are like the Infinity Stones of coding: gather all 9, and you'll unlock ultimate developer power! Just hoping my IDE doesn't snap away half my code... 😈

Nevo David:
😅😂