
AnhChienVu

Building a Command-Line Tool to Leverage Large Language Models for Generating READMEs

This week, I am working on the first release in my Open Source Development class. In Release 1, I built a command-line tool designed to interact with OpenAI-compatible Chat Completions API endpoints, enabling developers to transform files using the capabilities of large language models (LLMs). While many people have experienced LLMs through user-friendly applications like ChatGPT, this project let me engage with them on a deeper, programmatic level. By connecting the tool to these APIs, I was able to harness the ability of LLMs to transform and process text in various ways, such as translating languages or reformatting content. This release gave me valuable insight into how LLMs can be applied to text transformation tasks in programming, a skill set that is becoming increasingly relevant in the field.

GitHub Repository

You can find the source code, contribute to the project, and explore more about VShell by visiting VShell.

Feel free to fork the repository, open issues, or submit pull requests. Your contributions are always welcome!

What is VShell?

VShell is a command-line interface (CLI) tool that leverages a Large Language Model (LLM) to process input files and generate a README that explains the source code's functionality and how to use it. Imagine someone hands you a codebase and you want an overall idea of what it does: point VShell at it, and it will give you the picture. Under the hood, it integrates with the OpenAI-compatible Chat Completions API offered by Groq to deliver this functionality.
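Conceptually, the heart of a tool like this is a single chat-completion call. Below is a minimal sketch of that call using the openai npm package pointed at Groq's OpenAI-compatible endpoint; this is not VShell's exact source, and the model name is just an illustrative choice:

```js
// Minimal sketch (not VShell's actual code): send a source file to an
// OpenAI-compatible Chat Completions endpoint hosted by Groq and get
// back README-style prose.
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.GROQ_API_KEY,
  baseURL: "https://api.groq.com/openai/v1", // Groq's OpenAI-compatible endpoint
});

async function explainSource(sourceCode) {
  const completion = await client.chat.completions.create({
    model: "llama3-8b-8192", // illustrative model name; VShell lets you choose with -m
    temperature: 0.2,
    messages: [
      { role: "system", content: "You write README files that explain source code." },
      { role: "user", content: sourceCode },
    ],
  });
  return completion.choices[0].message.content;
}
```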

Features

  • Accepts multiple input files as command-line arguments for streamlined batch processing.
  • Streams output directly to the terminal via stdout by default.
  • Integrates seamlessly with OpenAI’s Chat Completions API (Groq) to process input data.
  • Logs detailed error and debugging information to stderr for easy troubleshooting (see the sketch after this list).
  • Supports the use of a .env file to configure API keys and other setup values automatically.
  • Optionally saves results to a specified output file instead of displaying them in the terminal.
  • Optionally configures model parameters such as temperature for chat completion processing.
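To make the stdout/stderr split concrete, here is a small sketch of the assumed shape (not VShell's exact code): results are written to stdout so they can be piped or redirected, while diagnostics go to stderr and stay visible in the terminal:

```js
// Assumed shape of the output handling (illustrative only): the generated
// README goes to stdout, diagnostics go to stderr. Running this with
// `node out.js > result.txt` captures only the README; the debug line
// still appears in the terminal.
const readmeText = "# Example\nGenerated README contents...";
process.stdout.write(readmeText + "\n");
console.error("[debug] wrote", readmeText.length, "characters");
```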

How to install VShell

To install and set up VShell, follow these steps:

  1. Ensure Node.js is installed on your system.
  2. Create a Groq API Key.
  3. Clone the VShell repository to your local machine.
  4. Navigate to the project folder in your terminal and run: npm install
  5. Link the package globally: npm link
  6. Create a .env file to store your Groq API key and other necessary configuration values:

```
# .env file
GROQ_API_KEY=your_groq_api_key
```
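With the .env file in place, the key can be loaded at startup. A minimal sketch, assuming the dotenv package handles the loading (VShell's actual loader may differ):

```js
// Load variables from .env into process.env (assumes the dotenv package).
import "dotenv/config";

// Fail fast with a clear message if the key is missing.
if (!process.env.GROQ_API_KEY) {
  console.error("GROQ_API_KEY is not set; add it to your .env file.");
  process.exit(1);
}
```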

Usage

Essentially, if you are running the tool with Node.js directly, you invoke it as:
node server.js file_name(s) <arguments>
If you would rather type vshell instead of node server.js every time, follow the instructions below:

  1. Ensure Your Project is Set Up: Make sure you have your project files ready, including server.js and .env.
  2. Add a bin Field to package.json: Ensure your package.json has a bin field that points to your server.js file. Here’s an example:
```json
{
  "bin": {
    "vshell": "src/server.js"
  },
  "scripts": {
    "start": "node src/server.js"
  }
}
```
  3. Ensure the server.js file has executable permissions (see the shebang note after these steps): chmod +x server.js
  4. Use npm link to create a global symlink for your package: npm link
  5. To run VShell, use the following command: vshell file_name(s) <arguments>
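One detail worth calling out: for the linked vshell command to work, the entry file also needs a shebang as its very first line, so the shell knows to execute it with Node:

```js
#!/usr/bin/env node
// Must be line 1 of server.js: tells the OS to run this file with Node
// when it is invoked directly as the `vshell` binary created by npm link.
```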

Options

  • -V, --version : Output the version number.
  • -d, --debug : Enable detailed debug output.
  • -u, --update : Update VShell to the latest version.
  • -m, --model : Specify the LLM model to use.
  • -T, --temperature : Set the temperature parameter for the model (Groq) (default: 0.2).
  • -o, --output : Specify an output file to save the results.
  • -h, --help : Display help for VShell commands.
  • -t, --token-usage : Show token usage for the prompt and response.
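If you are curious how a flag set like this is typically wired up, here is a hypothetical sketch using the commander library; VShell's real implementation may differ:

```js
// Hypothetical declaration of VShell-style flags with commander
// (-h/--help is generated automatically by the library).
import { program } from "commander";

program
  .name("vshell")
  .version("1.0.0", "-V, --version")
  .argument("<files...>", "input files to process")
  .option("-d, --debug", "enable detailed debug output")
  .option("-m, --model <name>", "LLM model to use")
  .option("-T, --temperature <value>", "sampling temperature", "0.2")
  .option("-o, --output <file>", "save results to a file instead of stdout")
  .option("-t, --token-usage", "report token usage for prompt and response")
  .parse();

const options = program.opts(); // e.g. { temperature: "0.2", debug: true, ... }
const files = program.args;     // the positional file arguments
```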

Example

To process README.md with a custom temperature setting and save the result to output.txt, use:
vshell ./README.md -T 0.5 -o output.txt
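To also see how many tokens the prompt and response consumed, add the token-usage flag:
vshell ./README.md -T 0.5 -o output.txt -t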
