Ayush kumar for NodeShift

Run LangTrace – Open Source Observability Tool for LLM Applications


Langtrace is an open-source observability tool licensed under AGPL-3.0 and freely available for users and the community. It captures, debugs, and analyzes traces and metrics from all your applications leveraging LLM APIs, vector databases, and LLM-based frameworks.

Langtrace enables seamless tracing with OpenTelemetry support, real-time monitoring, performance insights, detailed analytics, and effective debugging tools. It also offers a self-hosting option for full control over deployment.

Prerequisites

  • A Virtual Machine (such as the ones provided by NodeShift) with at least:
    • 8 vCPUs
    • 16GB RAM
    • 150 GB SSD (at least)
  • Ubuntu 22.04 VM
  • Access to your server via SSH

Step-by-Step Process to Install LangTrace Locally

For the purpose of this tutorial, we will use a CPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.

However, if you prefer to use a GPU-powered Virtual Machine, you can still follow this guide. LangTrace works on GPU-based VMs as well, and performance is generally better and faster than on a CPU VM. The installation process remains largely the same, allowing you to achieve similar functionality on a GPU-powered machine. NodeShift’s infrastructure is versatile, enabling you to choose between GPU or CPU configurations based on your specific needs and budget.

Let’s dive into the setup and installation steps to get LangTrace running efficiently on your chosen virtual machine.

Step 1: Sign Up and Set Up a NodeShift Cloud Account

  • Visit the NodeShift Platform and create an account.
  • Once you have signed up, log into your account.
  • Follow the account setup process and provide the necessary details and information.

Step 2: Create a Compute Node (CPU Virtual Machine)

NodeShift Compute Nodes offer flexible, scalable, on-demand resources such as NodeShift Virtual Machines, which are easy to deploy and come in general-purpose, CPU-powered, or storage-optimized configurations.

  • Navigate to the menu on the left side.
  • Select the Compute Nodes option.
  • Click the Create Compute Nodes button in the Dashboard to create your first deployment.

Step 3: Select Virtual Machine Uptime Guarantee

  • Choose the Virtual Machine Uptime Guarantee option based on your needs. NodeShift offers an uptime SLA of 99.99% for high reliability.
  • Click on “Show reliability info” to review detailed SLA and reliability options.

Step 4: Select a Region

In the “Compute Nodes” tab, select a geographical region where you want to launch the Virtual Machine (e.g., the United States).

Step 5: Choose VM Configuration

  • NodeShift provides two options for VM configuration:
    • Manual Configuration: Adjust the CPU, RAM, and storage to your specific requirements.
      • Select the number of CPUs (1–96).
      • Choose the amount of RAM (1 GB–768 GB).
      • Specify the storage size (20 GB–4 TB).
    • Predefined Configuration: Choose from predefined configurations optimized for General Purpose, CPU-Powered, or Storage-Optimized nodes.
  • If you prefer custom specifications, manually configure the CPU, RAM, and storage. Otherwise, select a predefined VM configuration suitable for your workload.

Step 6: Choose an Image

Next, you will need to choose an image for your Virtual Machine. We will deploy the VM on Ubuntu, but you can choose according to your preference; other options such as CentOS and Debian are also available and work for installing LangTrace.

Step 7: Choose the Billing Cycle & Authentication Method

  • Select the billing cycle that best suits your needs. Two options are available: Hourly, ideal for short-term usage and pay-as-you-go flexibility, or Monthly, perfect for long-term projects with a consistent usage rate and potentially lower overall cost.
  • Select the authentication method. There are two options: Password and SSH Key. SSH keys are a more secure option. To create them, refer to our official documentation.

Step 8: Additional Details & Complete Deployment

  • The ‘Finalize Details’ section allows users to configure the final aspects of the Virtual Machine.
  • After finalizing the details, click the ‘Create’ button, and your Virtual Machine will be deployed.

Step 9: Virtual Machine Successfully Deployed

You will get visual confirmation that your node is up and running.

Step 10: Connect via SSH

  • Open your terminal
  • Run the SSH command: For example, if your username is root, the command would be:
ssh root@ip

  • If SSH keys are set up, the terminal will authenticate using them automatically.
  • If prompted for a password, enter the password associated with the username on the VM.
  • You should now be connected to your VM!

Step 11: Clone the Repository

Run the following command to clone the repository:

git clone https://github.com/Scale3-Labs/langtrace.git


Then, run the following command to navigate to the main project directory:

cd langtrace


Step 12: Install Dependencies

Before we install Docker, we need to install some required dependencies.

Run the following command to update the Ubuntu package source list so you get the latest versions and security updates:

sudo apt update


Then, run the following command to install the dependency packages:

sudo apt install apt-transport-https ca-certificates curl software-properties-common 


Step 13: Add the GPG key for the Docker Repository

We use curl to add the GPG key for the Docker repository.

Run the following command to add the GPG key for the Docker repository:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg


Then, run the following command to add the Docker APT repository to the system’s sources list:

echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

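To double-check that the repository entry was written correctly, you can print the file you just created (this simply echoes back the line added above):

cat /etc/apt/sources.list.d/docker.list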

Step 14: Install Docker

Run the following command to update the package source list again:

sudo apt update

Then, run the following command to install Docker:

sudo apt install docker-ce -y

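Note: the docker compose command used in a later step is shipped as a separate plugin in Docker’s apt repository. If docker compose is not available after installing docker-ce, you can install the Compose plugin as well (package name as published by Docker; adjust if your setup differs):

sudo apt install docker-compose-plugin -y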

Step 15: Verify the Docker Installation

Run the following command to verify the Docker installation:

sudo systemctl status docker

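If the service shows as inactive, you can start Docker and enable it at boot with systemd:

sudo systemctl enable --now docker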

Step 16: Run a Test Image and Check the Docker Version

Open a new terminal and use the SSH command to connect to the VM again.

Let’s run a simple test container named hello-world as a warm-up to see if Docker is running correctly:

docker run hello-world


Check the output below in the screenshot.

Then, run the following command to check the Docker version:

docker --version


Step 17: Start the Servers

First, run the following command to change the permissions of the Docker socket file (/var/run/docker.sock) so that it is readable and writable by all users on the system.

sudo chmod 666 /var/run/docker.sock

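Alternatively, instead of loosening the socket permissions for all users, you can add your user to the docker group and then open a new SSH session for the change to take effect:

sudo usermod -aG docker $USER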

Then, run the following command to start the services with Docker Compose:

docker compose up -d


Next, run the following command to verify that Docker is installed correctly, the daemon is running, and your user has the appropriate permissions to interact with Docker:

docker ps

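If any container fails to start, you can tail the logs of the whole stack with Docker Compose (run this from the langtrace directory):

docker compose logs -f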

Step 18: Access the VM with Port Forwarding and Tunneling

To forward the LangTrace port from your CPU VM to your local machine, use this SSH port forwarding command:

ssh -i "C:\Users\Acer\.ssh\id_rsa" -L 3000:localhost:3000 root@188.227.106.10


Explanation:

  • -i "C:\Users\Acer.ssh\id_rsa": Specifies the path to your private SSH key.
  • -L 3000:localhost:3000: Forwards local port 3000 to port 3000 on the VM.
  • root@188.227.106.10: Connects to your VM as the root user at the IP 188.227.106.10.
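If you authenticate with a password rather than an SSH key, the same tunnel can be opened without the -i flag, reusing the example IP above:

ssh -L 3000:localhost:3000 root@188.227.106.10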


Once the tunnel is established, you can access LangTrace in your local browser at localhost:3000.

Step 19: Access LangTrace UI in Browser

Open any local browser and navigate to http://localhost:3000 to access the LangTrace UI.

Once the interface loads, click on the Admin Login button.


Step 20: Logging In with Admin Credentials

You can find the admin credentials, such as the username and password, in the .env file on GitHub.

Repo Link: https://github.com/Scale3-Labs/langtrace/blob/main/.env
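For reference, the entries to look for in that file are the admin login variables. The variable names and values below are placeholders only, so confirm the exact keys and defaults against the linked .env:

ADMIN_EMAIL="admin@example.com"
ADMIN_PASSWORD="your-admin-password"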


Next, enter the username and password, then click on the Sign in with Credentials button.


Step 21: Check the User Interface

Now check the user interface on localhost.


Step 22: Install the Langtrace Python SDK and OpenAI

Run the following command to install the Langtrace Python SDK along with the OpenAI package:

pip3 install langtrace-python-sdk openai


Then, run the following command to upgrade the openai package:

pip install --upgrade openai


Next, run the following command to pin openai to version 0.28, which provides the legacy openai.ChatCompletion interface used in the example script below:

pip install openai==0.28

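To confirm which versions ended up installed (the example script later in this guide assumes the 0.28-style OpenAI client), you can check with pip:

pip3 show openai langtrace-python-sdk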

Step 23: Create OpenAI API Key

To use the OpenAI API, you need to create an API key. This key will allow you to securely access OpenAI’s services. Follow these steps to generate your API key:

  • Log In to OpenAI:
    Visit the OpenAI platform and log in to your account. If you do not have an account, you will need to sign up.

  • Access the API Section:
    Once logged in, navigate to the top right corner of the page where your profile icon is located. Click on it and select API from the dropdown menu. Alternatively, you can directly access the API section by clicking on API in the main dashboard.

  • Create a New Secret Key:
    In the API section, look for an option that says Create new secret key or View API Key. Click on this option.


  • Generate the Key: After clicking on create, a new API key will be generated for you. Make sure to copy this key immediately as it will only be shown once.

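Once you have the key, one option is to export it as an environment variable on the VM so it does not have to be hardcoded in scripts (the value below is a placeholder). The example script in Step 25 hardcodes the key for simplicity, but you could read it in Python via os.environ instead:

export OPENAI_API_KEY="sk-your-openai-api-key"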

Step 24: Update the System and Install Vim

What is Vim?

Vim is a terminal-based text editor. The last line of the editor is used to enter commands and to display status information.

Note: If running vim returns a “command not found” error, install Vim using the steps below.

Step 1: Update the package list

Before installing any software, we will update the package list using the following command in your terminal:

sudo apt update


Step 2: Install Vim

To install Vim, enter the following command:

sudo apt install vim -y


This command will retrieve and install Vim and its necessary components.


Step 25: Add the Code to a Test Script

Run the following command in the terminal to create and open the test script in Vim:

vi test.py


Entering editing mode in Vim:

Follow the steps below to edit the file in Vim.

Step 1: Open the file in Vim (the vi test.py command above does this).

Step 2: Switch to insert mode. Vim starts in command mode, where keystrokes are interpreted as commands rather than text. Press i to enter insert mode so you can type; when you are finished, press Esc to return to command mode, then type :wq and press Enter to save and exit.

Add the following code to the file:

from langtrace_python_sdk import langtrace
from langtrace_python_sdk.utils.with_root_span import with_langtrace_root_span
import openai

# Initialize Langtrace with appropriate API key and host
langtrace.init(
    api_key="your openai api key",
    api_host="http://localhost:3000/api/trace",
)

@with_langtrace_root_span()
def example():
    # Set OpenAI API key
    openai.api_key = "your openai api key"

    # Create a completion request
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": "You are a helpful assistant.",
            },
            {
                "role": "user",
                "content": "What is the capital of India?",
            },
        ],
    )

    # Print the response content
    print(response["choices"][0]["message"]["content"])

# Call the example function
example()


Step 26: Run the Script

Finally, execute the following command to run the script:

python3 test.py


After running the script, you will get the following output. In the code, I am asking, “What is the capital of India?” The output should be something like “The capital of India is New Delhi,” which means both your script and server are working fine. Refer to the code screenshot above for clarification.


Step 27: Check the Metrics, Traces and Additional Settings

After running the script, your LangTrace UI will be running on localhost:3000. You can open the UI in a browser to check usage, metrics, prompts, and other settings. Refer to the screenshots below for more details.


Conclusion

In this guide, we explored LangTrace AI, an open-source, OpenTelemetry-based, end-to-end observability tool for LLM applications that provides real-time tracing, evaluations, and metrics for popular LLMs, LLM frameworks, vector databases, and more. We also walked through a step-by-step tutorial on installing LangTrace locally on a NodeShift virtual machine, covering the required software and essential tools such as Docker and Vim.

For more information about NodeShift:

Website
Docs
LinkedIn
X
Discord
daily.dev

Top comments (1)

JOSE ANGEL ALVARADO GONZALEZ:

In this example, can I use other open-source LLMs, for example Ollama, Phi, etc.?