For all projects I worked on, I used some sort of project management system where project scope was defined as a list of tasks (tickets), and progress was reported by changing task statuses.
While such project management systems offer various dashboards and reports, interpreting a long list of tasks with cryptic titles created by engineers is not trivial. To provide transparency for project sponsors and customers, I had to manually create a project report and then answer related questions.
What if we asked an LLM for help instead? The idea is simple: fetch all project tasks and feed them to the LLM, asking it for a report. Then start a chat that allows follow-up questions.
Collecting data
We will use Jira for this experiment as it's a popular tool with easy-to-use REST API. The example project, for which we'll create the report, is very technical - it's about creating a build script that can detect what the code uses and generate the required instructions for the build system. Such a project will surely have technical tasks with cryptic titles.
Let's start with fetching tasks. The project in the example setup is represented as a single parent ticket (an epic) with a list of child tickets (tasks). For each task we will also fetch the full change history, so we can see how ticket statuses changed over time. Using Python's jira client, the implementation is simple. Note that in Jira nomenclature the term issue is used instead of ticket, which is reflected in the code.
import os

from jira import JIRA

jira = JIRA(
    server=os.environ["JIRA_SERVER"],
    basic_auth=(os.environ["JIRA_USER"], os.environ["JIRA_TOKEN"]))

def fetch_issues(epic_key):
    issues = []
    print("Loading epic data...", end="", flush=True)
    issues.append(jira.issue(epic_key))
    print("done")
    print("Loading tasks...", end="", flush=True)
    child_issues = jira.search_issues(f"parent = {epic_key}")
    for issue in child_issues:
        # expand="changelog" pulls in the full history of field changes
        issues.append(jira.issue(issue.key, expand="changelog"))
    print("done")
    return issues
Since fetching all tickets with history takes a while, it is handy to store this data locally for further experiments. While playing with the implementation, I used the functions below to save tasks to, and load them from, a file:
import json

from jira.resources import Issue

def save_issues(filename, issues):
    # mode "x" fails if the file already exists, so a cached file is never overwritten
    with open(filename, "x") as file:
        file.write("[")
        file.write(",".join(
            json.dumps(issue.raw) for issue in issues))
        file.write("]")

def load_issues(filename):
    with open(filename, "r") as file:
        data = json.load(file)
    return [Issue(jira._options, jira._session, raw=raw_issue)
            for raw_issue in data]
Preparing data
The next step is to prepare the data for the LLM. Raw Jira data in JSON format is quite verbose; we don't need most of its fields. Let's extract the basic information: summary, description, type, status, priority, and creation date. From the history, we will only extract ticket status changes along with their date and author, ignoring changes to other fields.
All this information will be stored as plain text. I have seen people use JSON or XML as LLM input, but my observation is that LLMs are very good at interpreting plain-text data. Plus, with this approach I don't need to worry about making text fields JSON- or XML-compatible. The only processing I do is stripping empty lines from the description, mainly to make the results easier for me to look at.
def strip_empty_lines(s):
    # join with "\n" so the remaining lines keep their line breaks
    return "\n".join(line for line in (s or "").splitlines() if line.strip())

def issue_to_str(issue):
    return f"""
{issue.fields.issuetype}: {issue.key}
Summary: {issue.fields.summary}
Description: {strip_empty_lines(issue.fields.description)}
Type: {issue.fields.issuetype}
Status: {issue.fields.status}
Created: {issue.fields.created}
Priority: {issue.fields.priority}
"""

def changelog_to_str(changelog, changeitem):
    return f"""
Author: {changelog.author.displayName}
Date: {changelog.created}
Status change from: {changeitem.fromString} to: {changeitem.toString}
"""

def history_to_str(issue):
    if issue.changelog is None or issue.changelog.total == 0:
        return ""
    history_description = ""
    for changelog in issue.changelog.histories:
        try:
            # keep only changes to the status field, ignore edits to other fields
            statuschange = next(
                filter(lambda i: i.field == "status", changelog.items))
            history_description += changelog_to_str(changelog, statuschange)
        except StopIteration:
            pass
    return history_description

# this function assumes the first issue is an epic followed by tasks
def describe_issues(issues):
    description = "Project details:"
    description += issue_to_str(issues[0])
    description += "\nProject tasks:"
    for issue in issues[1:]:
        description += "\n" + issue_to_str(issue)
        description += f"History of changes for task {issue.key}:"
        description += history_to_str(issue)
    return description
The epic I use for this experiment has 30 tasks, each with between 1 and 15 status changes in its history. I will not quote the full output of the describe_issues function, but to give you an idea of how it looks, here is a short excerpt:
Project details:
Epic: TKT-642
Summary: Create universal build script
Description:
Type: Epic
Status: In Development
Created: 2024-05-24T10:48:33.050+0200
Priority: P4 - Low
Project tasks:
Task: TKT-805
Summary: add test reporting for e2e tests
Description:
Type: Task
Status: In Progress
Created: 2024-09-06T09:56:33.919+0200
Priority: P4 - Low
History of changes for task TKT-805:
Author: Jane Doe
Date: 2024-09-06T10:04:15.325+0200
Status change from: To Do to: In Progress
Task: TKT-801
Summary: Sonar detection
Description: * add sonar config file detection *
Type: Task
Status: In Progress
Created: 2024-08-30T13:57:44.364+0200
Priority: P4 - Low
History of changes for task TKT-801:
Author: Jane Doe
Date: 2024-08-30T13:57:58.450+0200
Status change from: To Do to: In Progress
Task: TKT-799
Summary: Add check_tests step
Description:
Type: Task
Status: Review
Created: 2024-08-29T18:33:52.268+0200
Priority: P4 - Low
History of changes for task TKT-799:
Author: Jane Doe
Date: 2024-08-29T18:40:35.305+0200
Status change from: In Progress to: Review
Author: Jane Doe
Date: 2024-08-29T18:33:57.095+0200
Status change from: To Do to: In Progress
Prompting
The prompt we'll use consists of two parts. First, it gives an instruction to create a report, detailing what specific information we want in it. Then we insert the ticket information prepared in the previous section. LLMs tend to give verbose responses, so we specifically ask it not to add any additional explanation. As the experiment is performed in the terminal, we'll also ask it to make the response terminal-friendly.
def create_prompt(issues_description):
    return f"""
Using below information from jira generate executive report that focuses on what was done and what is remaining.
Provide project progress information. Point out problematic tasks and what may be wrong.
Use status change history to point out tasks that were worked on for too long or have been reopened many times.
Check if you can detect other problems from tasks or their history.
Provide only text content that can be displayed on terminal, without any additional explanation or markdown.
{issues_description}
"""
The LLM I use for this experiment is Anthropic's Claude 3.5 Sonnet. In addition to sending the prompt message, we'll also specify an identity. The identity could just be added as another line at the beginning of the prompt, but since the plan is to follow up with questions (create an interactive chat), Claude's documentation recommends setting the identity as a separate system parameter in such a scenario.
import anthropic

def send_messages(messages):
    client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_KEY"])
    identity = "You are a seasoned IT program manager. You have very good communication skills, your answers are concise and focus on key insights."
    message = client.messages.create(
        system=identity,
        max_tokens=2048,
        messages=messages,
        model="claude-3-5-sonnet-20240620"
    )
    return message.content[0].text

def generate_report(prompt):
    messages = [
        {
            "role": "user",
            "content": prompt
        }]
    return send_messages(messages)
Let's put this together and run:
if __name__ == "__main__":
    issues = fetch_issues("TKT-642")
    description = describe_issues(issues)
    prompt = create_prompt(description)
    report = generate_report(prompt)
    print(report)
Below is the full report I received from Claude:
Executive Report: Universal Build Script Project (TKT-642)
Project Progress:
- 11 out of 24 tasks are completed (45.8%)
- 5 tasks are in progress or review (20.8%)
- 8 tasks remain to be started (33.3%)
Key Accomplishments:
- Created prototype jinja template for builder
- Implemented config extension and validation
- Added unit tests execution step
- Integrated support for sonar scanning
- Created image for building config and tech scripts
In Progress:
- Adding test reporting for e2e tests
- Implementing filtering logic for environments and Sonar detection
- Developing developer documentation
Problematic Areas:
- Task TKT-734 (Add config extension) went through multiple status changes, indicating potential issues or complexity.
- Task TKT-697 (Add support for sonar scanning) has been reopened multiple times, suggesting implementation challenges.
- Task TKT-715 (Create developer documentation) has been in progress for an extended period without completion.
Upcoming Work:
- Integration of the build system into various services (authentication, database-api, shopping-cart, reporting, localization)
- Adding support for monorepos and custom jobs
- Creating user documentation
Recommendations:
- Review the implementation of config extension and sonar scanning to address recurring issues.
- Prioritize the completion of developer documentation to support team adoption.
- Begin planning for the integration of the build system into remaining services.
- Consider reassessing the priority of tasks that have remained in the "To Do" state for an extended period.
Overall, the project is making steady progress, with nearly half of the tasks completed. However, attention should be given to resolving recurring issues and completing in-progress tasks to maintain momentum.
The result is quite good. It did miscount the tickets - across the runs I did, it gave me a different number each time, but never the correct count of 30. Well, LLMs are known for not being able to count the r's in the word strawberry. But the summary is correct, and the recommendations are also reasonable.
Asking questions
Now let's create an interactive session and ask some questions. We'll start with a pre-defined question: the original prompt we used to generate the report. Then we read the user's input and append it to the messages. Each response, along with the report itself, is also recorded in the message chain.
def interactive_session(prompt):
    messages = []
    user_input = prompt
    while user_input != "exit":
        messages.append({
            "role": "user",
            "content": user_input
        })
        response = send_messages(messages)
        print(response)
        messages.append({
            "role": "assistant",
            "content": response
        })
        user_input = input("QUESTION: ")
    return messages
To run this, just replace generate_report in the main section of the script with interactive_session. Below is a recording of an example session I did. I omitted from the quote the first response, which just prints out the report.
QUESTION: how many engineers the team counts?
Based on the information provided in the task histories, I can identify at least three distinct engineers working on this project:
- John Smith
- Jane Doe
- Mike Carpenter
These three names appear frequently in the status change histories for various tasks. However, it's important to note that this may not represent the full team size. There could be other team members who haven't made changes recorded in these specific task histories, or who are involved in the project in other capacities not reflected in this data.
To get an accurate count of the team size, you would need additional information beyond what's provided in these task histories.
QUESTION: is unit test execution task done yet?
Yes, the unit test execution task is completed. Specifically, task TKT-700 with the summary "Add unit tests execution step" is marked as Done. Here's the relevant information:
Task: TKT-700
Summary: Add unit tests execution step
Status: Done

The history of changes for this task shows that it was moved to "Done" status on 2024-08-02, after going through the typical workflow stages (To Do -> In Progress -> Review -> PREPROD VERIFICATION -> READY FOR PRODUCTION -> Done).
QUESTION: does the scope include running some sort of static code analysis tool?
Yes, the project scope includes running static code analysis. Specifically, there's a task for implementing Sonar scanning, which is a popular static code analysis tool. Here are the details:
Task: TKT-697
Summary: Add support for sonar scanning
Status: Done

This task was completed and moved to "Done" status on 2024-09-02. The implementation of Sonar scanning indicates that static code analysis is part of the universal build script project.
Additionally, there's another related task that's currently in progress:
Task: TKT-801
Summary: Sonar detection
Status: In Progress

This task mentions "sonar detection" in its summary, which further confirms that Sonar (a static analysis tool) is being integrated into the build process.
These tasks demonstrate that static code analysis, particularly using Sonar, is included in the project scope and has been largely implemented.
QUESTION: what are the 3 biggest risks of the project?
Based on the provided information, the three biggest risks for this project appear to be:
1. Integration Delays:
Several tasks for integrating the build system into various services (localization, reporting, shopping cart, authentication, database API) are still in "To Do" status. This poses a risk of delays in widespread adoption of the universal build script across different services. Failure to integrate across all intended services could limit the project's overall impact and value.

2. Documentation Lag:
The task for creating developer documentation (TKT-715) has only recently moved to "In Progress" after being in "To Do" for an extended period. Additionally, the user documentation task (TKT-714) is still in "To Do" status. This lag in documentation creation risks hindering the adoption, proper use, and maintenance of the new build system. It could lead to misuse, inefficiencies, or resistance to adoption due to lack of clear guidance.

3. Complexity in Implementation:
Several tasks, particularly those related to Sonar scanning (TKT-697) and config extension (TKT-734), have shown multiple status changes between "In Progress" and "Review". This pattern suggests potential technical challenges or uncertainties in implementation. There's a risk that these complexities could lead to bugs, performance issues, or maintenance difficulties in the future if not properly resolved.

These risks could potentially impact the project's timeline, adoption rate, and overall success if not properly managed and mitigated.
That was surprisingly good. I like the fact that the LLM uses generic knowledge in addition to the prompt data, so it was able to answer the question about a "static code analysis tool" even though no ticket data mentions those terms. The answers are a bit too verbose; more experiments with the prompt are required - probably appending instructions to each user's query would help shape better answers.
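One low-effort way to test that idea is to append a fixed style instruction to every user query before it goes into the messages chain. A minimal sketch - the instruction wording and the function name are my own, not from the original script:

```python
# Hypothetical style instruction; the exact wording is an assumption
# and would need experimentation.
ANSWER_STYLE = (
    "Answer concisely, in at most three sentences, "
    "and skip any additional explanation unless asked for it."
)

def shape_question(user_input):
    # Attach the style instruction after the user's question, so the
    # model sees it on every turn of the chat.
    return f"{user_input}\n\n{ANSWER_STYLE}"
```

In interactive_session, user_input would then be passed through shape_question before being appended to messages.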
Issues such as miscounting tickets should be easy to solve: we can calculate the basic statistics ourselves and include them in the prompt.
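A sketch of that idea: compute the counts in plain Python and prepend them to the issue description, so the model never has to count anything itself. The input shape here (a list of (key, status) pairs) is an assumption; the real script would derive it from the fetched Issue objects.

```python
from collections import Counter

def issue_statistics(tasks):
    # `tasks` is assumed to be a list of (issue_key, status) pairs
    # extracted from the fetched Jira issues.
    counts = Counter(status for _, status in tasks)
    lines = [f"Total number of tasks: {len(tasks)}"]
    lines += [f"Tasks in status '{status}': {count}"
              for status, count in sorted(counts.items())]
    return "\n".join(lines)
```

The returned text can then be inserted at the top of the prompt built by create_prompt, right before the issue descriptions.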
Top comments (3)
Greetings and thanks, Michal, for sharing this. It looks like quite a useful report generator, especially at the end of a sprint or, e.g., a quarter.
Btw, have you considered using a local LLM model? I found that using anything external can be tricky from a legal perspective in a company.
I have not tried this yet, but I do plan to run LLaMA 3 locally and see how it performs. I want to check if I can run the 70B version on the hardware I have.
You can also check jan.ai. Maybe it will be a good choice.