Working at the command line is often the fastest way to get things done as a software engineer. While GUI tools can simplify some tasks, nothing beats the speed and efficiency of command-line tools for many operations.
In today's article, we will explore the `argparse` library in Python, which is essential for writing command-line tools. We'll build an intelligent log analyzer using `argparse` to see how it works in a practical context.
If you are interested in more content covering topics like this, subscribe to my newsletter for regular updates on software programming, architecture, and tech-related insights.
## Project Setup
Before we dive into coding, let's set up our project environment. You will need to have Ollama installed on your machine, along with the Mistral model.
First, pull the Mistral model:

```shell
ollama pull mistral
```
Ensure that the Ollama server is running locally. The default address is http://localhost:11434.
Next, create a simple directory for our project and set up a Python virtual environment:

```shell
mkdir smart-tool
cd smart-tool
python3 -m venv venv
source venv/bin/activate
```
Install the required packages:

```shell
pip install requests
```
With the project setup complete, we can start writing our command line tool. But first, let's understand more about `argparse`.
## What is `argparse`?
Python comes with a variety of built-in tools for system interaction, time manipulation, JSON handling, and more. Among these tools is `argparse`, a module in Python's standard library used for parsing command-line arguments. It allows you to create user-friendly command-line interfaces for your programs.
Here are some key terms to understand when dealing with command line tools:
- **Argument**: A value passed to a program via the command line. Example: in `python script.py filename.txt`, `filename.txt` is an argument.
- **Parameter**: An input to a function or command, including arguments and options. Example: in `python script.py filename.txt --verbose`, both `filename.txt` and `--verbose` are parameters.
- **Command**: An instruction given to the command-line interface. Example: `python script.py filename.txt --verbose` is a command.
- **Option**: A parameter that modifies the behavior of the command, typically prefixed with dashes (e.g., `-v` or `--verbose`). Example: `--verbose` in `python script.py filename.txt --verbose` is an option.
- **Subcommand**: A command within a larger command structure, allowing multiple modes of operation. Example: `commit` in `git commit -m "message"` is a subcommand of `git`.
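Subcommands map directly onto `argparse`'s `add_subparsers` API. Here is a minimal sketch mirroring the `git commit -m "message"` example above (the `minigit` program name and subcommands are made up for illustration):

```python
import argparse

# A toy git-like CLI with two subcommands, built with add_subparsers
parser = argparse.ArgumentParser(prog='minigit', description='Toy git-like CLI')
subparsers = parser.add_subparsers(dest='command', required=True)

# Define a `commit` subcommand with its own -m/--message option
commit_parser = subparsers.add_parser('commit', help='Record changes')
commit_parser.add_argument('-m', '--message', help='Commit message')

# Define a `status` subcommand with no extra options
subparsers.add_parser('status', help='Show working tree status')

# Parse a sample argument list instead of sys.argv, for demonstration
args = parser.parse_args(['commit', '-m', 'initial commit'])
print(args.command)   # commit
print(args.message)   # initial commit
```

Each subparser carries its own set of options, so `commit` can accept `-m` while `status` does not.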
The `argparse` library makes it easy to integrate these components into your command line tools. Let's look at a simple example where we display the user's age. Create a file called `age.py` and add the following code:
```python
import argparse


def main():
    parser = argparse.ArgumentParser(description='Example CLI')
    parser.add_argument('-n', '--name', help='Your name')
    parser.add_argument('--age', type=int, help='Your age')
    args = parser.parse_args()
    print(f"Hello, {args.name}! You are {args.age} years old.")


if __name__ == '__main__':
    main()
```
In this code, we define a `main` function to handle the command line logic. We set up the parser with `ArgumentParser`, configuring arguments for the name and age. The `-n` and `--name` flags capture the user's name, while `--age` captures their age. After parsing the arguments, we print a greeting message.
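One detail worth highlighting: because `--age` is declared with `type=int`, `argparse` converts the raw string into an integer before your code sees it. A quick sketch (the argument values here are illustrative):

```python
import argparse

# Same parser as age.py
parser = argparse.ArgumentParser(description='Example CLI')
parser.add_argument('-n', '--name', help='Your name')
parser.add_argument('--age', type=int, help='Your age')

# Parse an explicit argument list instead of sys.argv, for demonstration
args = parser.parse_args(['-n', 'Alice', '--age', '30'])
print(type(args.age).__name__)  # int
print(args.age + 1)             # 31
```

If the user passes a value that cannot be converted, such as `--age thirty`, `argparse` prints an error and exits instead of handing your code a bad string.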
Run the code from the command line:

```shell
python3 age.py -n Your_name --age Your_age
```

For example, running `python3 age.py -n Alice --age 30` prints:

```
Hello, Alice! You are 30 years old.
```
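Another thing you get for free: `argparse` builds a `--help` screen automatically from the arguments you declare. The sketch below prints the same text `python3 age.py --help` would show, using `format_help()`:

```python
import argparse

# Same parser as age.py; prog is set explicitly since we are not
# running this from the age.py file itself
parser = argparse.ArgumentParser(prog='age.py', description='Example CLI')
parser.add_argument('-n', '--name', help='Your name')
parser.add_argument('--age', type=int, help='Your age')

# format_help() returns the auto-generated usage and options text
help_text = parser.format_help()
print(help_text)
```

The output lists the usage line, the description, and every option with its help string, with no extra work on your part.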
Now that we understand the basics of `argparse`, let's create a more complex program: an intelligent log analyzer.
## Building the Intelligent Log Analyzer
In this section, we'll build an intelligent log analyzer using `argparse`, Ollama, and Mistral. The tool will take the path of a log file as an argument, read its content, and send it to the Ollama API for analysis. The API will help identify errors and provide clues on how to resolve them.
Create a file called `smartlog.py` and add the following code:
```python
import argparse
import requests
import json


def analyze_log(file_path, custom_message=None):
    # Default prompt message
    default_message = "Hi. Please, take a look at this log content and find error lines if any. If possible, give some clues on how to resolve these issues."
    prompt_message = custom_message if custom_message else default_message

    # Read the log file
    with open(file_path, 'r') as file:
        log_content = file.read()

    # Prepare the data to be sent to the Ollama API
    payload = {
        "model": "mistral",
        "prompt": f"{log_content} \n\n {prompt_message}",
    }

    # Send the content to the Ollama API and handle the streaming response
    response = requests.post(
        "http://localhost:11434/api/generate",  # Replace with your Ollama API endpoint
        headers={"Content-Type": "application/json"},
        data=json.dumps(payload),
        stream=True
    )

    # Check if the request was successful
    if response.status_code == 200:
        print("Analysis Result:")
        for line in response.iter_lines():
            if line:
                decoded_line = line.decode('utf-8')
                json_response = json.loads(decoded_line)
                print(json_response["response"], end='', flush=True)
                if json_response.get("done"):
                    break
        print()  # Ensure the final output ends with a newline
    else:
        print(f"Failed to analyze log file. Status code: {response.status_code}")
        print(response.text)


def main():
    parser = argparse.ArgumentParser(description="Intelligent Log Analyzer")
    parser.add_argument('file', type=str, help='Path to the log file')
    parser.add_argument('--message', type=str, help='Custom message to include in the prompt', default=None)
    args = parser.parse_args()
    analyze_log(args.file, args.message)


if __name__ == "__main__":
    main()
```
In this code:

- The `analyze_log` function takes a file path and an optional custom message. It reads the log file content and sends it to the Ollama API.
- The response from the Ollama API is streamed and printed line by line.
- The `main` function sets up the `argparse` parser, defining the required `file` argument and an optional `--message` argument. It then calls `analyze_log` with the parsed arguments.
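The streaming loop works because Ollama sends newline-delimited JSON: each line is a standalone JSON object with a `"response"` text fragment and a `"done"` flag. The sketch below simulates that decoding logic with hardcoded byte lines standing in for what `response.iter_lines()` would yield (the sample payloads are invented for illustration):

```python
import json

# Simulated NDJSON stream: each byte string is one line of the response
sample_lines = [
    b'{"response": "Found ", "done": false}',
    b'{"response": "2 errors.", "done": false}',
    b'{"response": "", "done": true}',
]

# Same decode-accumulate-stop pattern as the streaming loop in smartlog.py
chunks = []
for line in sample_lines:
    if line:
        json_response = json.loads(line.decode('utf-8'))
        chunks.append(json_response["response"])
        if json_response.get("done"):
            break

full_text = "".join(chunks)
print(full_text)  # Found 2 errors.
```

Printing each fragment as it arrives (as `smartlog.py` does with `end=''` and `flush=True`) is what makes the analysis appear to "type itself out" in the terminal.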
Test the program with a log file. You can find an example log file at this GitHub link. Run the program:

```shell
python smartlog.py 2023-09-09.log
```
You will get an output similar to this:

```
Analysis Result:
Based on the given log content, it appears that there is an issue with sending webhooks, specifically that they are returning a `<nil>` value. This could indicate a few different things:

1. The webhook URLs are incorrect or inaccessible. Check that the URLs are valid and reachable from your server. You can try accessing them directly in a web browser to see if you get an error.
2. Authentication is required but not being provided. Make sure that any authentication tokens or keys required by the webhooks are being passed correctly.
3. The data being sent to the webhooks is invalid. Check that the format of the data being sent matches what is expected by the webhook.
4. Network connectivity issues. Check that your server has network connectivity to the webhook servers, and that any firewalls or security groups are configured correctly.
5. Maximum retries reached. This warning indicates that the script is attempting to send the same webhook multiple times (up to a maximum of 5), but it's still failing. You may need to investigate why the webhooks are not being sent successfully, and consider implementing error handling or retry logic in your code.

To help diagnose the issue further, you could add some debug statements to your code to print out more details about what is happening when the webhook is being sent. For example, you could print out the URL being used, any authentication tokens or keys, and any error messages that are being returned. This information could provide additional clues as to what is causing the issue.
```
You can also customize the prompt message:

```shell
python smartlog.py 2023-09-09.log --message "What is the structure of these logs"
```
The output will be:

```
Analysis Result:
These logs are written in a plain text format with each line representing an event or message. The first part of each line is a timestamp indicating when the event occurred. Following the timestamp, there are one or more messages enclosed in parentheses and prefixed by their respective severity level (ERROR or WARNING).

For example:

ERROR - 2023-09-09 22:58:45 - error sending webhook: %!s(<nil>)

The above log entry indicates an ERROR event that occurred on September 9, 2023 at 22:58:45. The message associated with this event is "error sending webhook: ".

Similarly, the following line is a WARNING event that occurred on September 9, 2023 at 23:04:26:

WARNING - 2023-09-09 23:04:26 - maximum retries reached: %!s(int=5)

This warning message indicates that the maximum number of retries (5) have been reached for an operation related to a webhook.
```
And that’s how you can use `argparse` to build both simple and complex command line tools. 🚀

Here is a version of the command line tool with colored output and a loading animation:
```python
import argparse
import requests
import json
import threading
import time

from colorama import init, Fore, Style

# Initialize colorama
init(autoreset=True)


def analyze_log(file_path, custom_message=None):
    # Default prompt message
    default_message = "Hi. Please, take a look at this log content and find error lines if any. If possible, give some clues on how to resolve these issues."
    prompt_message = custom_message if custom_message else default_message

    # Read the log file
    with open(file_path, 'r') as file:
        log_content = file.read()

    # Prepare the data to be sent to the Ollama API
    payload = {
        "model": "mistral",
        "prompt": f"{prompt_message}\n\n{log_content}"
    }

    # Function to display a loading animation
    def loading_animation(stop_event):
        while not stop_event.is_set():
            for char in "|/-\\":
                print(Fore.YELLOW + f'\rLoading {char}', end='', flush=True)
                time.sleep(0.1)
                if stop_event.is_set():
                    break
        print('\r', end='', flush=True)  # Clear the loading line

    # Event to signal the loading animation to stop
    stop_event = threading.Event()

    # Start the loading animation in a separate thread
    loader_thread = threading.Thread(target=loading_animation, args=(stop_event,))
    loader_thread.start()

    # Send the content to the Ollama API and handle the streaming response
    try:
        response = requests.post(
            "http://localhost:11434/api/generate",  # Replace with your Ollama API endpoint
            headers={"Content-Type": "application/json"},
            data=json.dumps(payload),
            stream=True
        )

        # Check if the request was successful
        if response.status_code == 200:
            stop_event.set()
            loader_thread.join()
            print(Fore.GREEN + "\nAnalysis Result:")
            for line in response.iter_lines():
                if line:
                    decoded_line = line.decode('utf-8')
                    json_response = json.loads(decoded_line)
                    print(Fore.CYAN + json_response["response"], end='', flush=True)
                    if json_response.get("done"):
                        break
            print()  # Ensure the final output ends with a newline
        else:
            stop_event.set()
            loader_thread.join()
            print(Fore.RED + f"\nFailed to analyze log file. Status code: {response.status_code}")
            print(Fore.RED + response.text)
    except Exception as e:
        stop_event.set()
        loader_thread.join()
        print(Fore.RED + f"\nAn error occurred: {str(e)}")
    finally:
        # Ensure the loading animation stops in case of any unexpected errors
        stop_event.set()
        loader_thread.join()


def main():
    parser = argparse.ArgumentParser(description="Intelligent Log Analyzer")
    parser.add_argument('file', type=str, help='Path to the log file')
    parser.add_argument('--message', type=str, help='Custom message to include in the prompt', default=None)
    args = parser.parse_args()
    analyze_log(args.file, args.message)


if __name__ == "__main__":
    main()
```
You can find the complete code at https://github.com/koladev32/smartlog.
## Conclusion
In this project, we've learned how to use the `argparse` library to build a command line tool in Python. We explored the basics with a simple age display tool and then moved on to create a more complex intelligent log analyzer using `argparse`, Ollama, and Mistral. This tool demonstrates the power and flexibility of command line interfaces, enabling efficient and user-friendly interaction with your programs.
Happy coding, and good luck with your projects!
If you have any questions or feedback, feel free to leave a comment below.
If you enjoyed this article and want to stay updated with more content, subscribe to my newsletter. I send out a weekly or bi-weekly digest of articles, tips, and exclusive content that you won't want to miss 🚀