This week in the Topics in Open Source Development course, we were tasked with reading each other's code for the first assignment, evaluating it, and creating issues on GitHub to point out bugs, errors, or potential improvements. This is the essence of open source: collaborating and building with other developers. If I hadn't already completed my co-op work term, this experience might have felt quite different. Reading someone else’s code can feel strange at first, but with practice, it becomes second nature, and you quickly learn to distinguish between good and bad code.
What Was It Like Doing This Asynchronously?
I have to say, I really preferred the asynchronous approach to code reviews. This mirrors how it’s done in the workplace: you typically don't hop on a call with other developers, sharing your screen to point out issues or problems in their pull requests. Instead, everything is handled directly through GitHub, where comments, suggestions, and changes can be reviewed at each person's own pace.
Reviewing Someone Else's Code
As a Junior Software Engineer, I'm already quite familiar with code reviews; it's something I do regularly at my job. However, this was the first time I reviewed code written by my classmates. It was a really enjoyable experience, giving me a chance to see how others approach problems in ways that differ from my own. I didn’t encounter any major difficulties, but I did identify several issues. This week, my lab partner's project was a command-line tool that automatically generates README.md files based on source code files. I loved the concept; it’s a clever way to break down complex code into layman's terms. After trying out the program and reading the code thoroughly, I found a few issues:
Issue 1
One issue I found was that the generated README.md file was overwriting the existing README.md in the repository. This happened because the tool didn’t specify a path for the generated file or give it a unique name to distinguish it. My recommendation to fix this was to assign the generated file a unique name based on the source file it was created from, something like [FILE_NAME_README.md].
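To make the suggestion concrete, here's a minimal sketch of the naming scheme I had in mind, assuming a Node.js tool (the helper name is my own illustration, not something from my partner's code):

```js
const path = require("path");

// Derive a unique README name from the source file it was generated from,
// e.g. "src/app.js" -> "app.js_README.md", so the repo's own README.md
// is never clobbered.
function readmeNameFor(sourceFile) {
  return `${path.basename(sourceFile)}_README.md`;
}

console.log(readmeNameFor("src/app.js")); // app.js_README.md
```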
Issue 2
Uncaught error when initializing the Groq client: When I tried running the command-line tool, an error was thrown, but it wasn't very descriptive. The issue was caused by a missing API key in the .env file. I recommended wrapping the initialization in a try/catch block to provide a clear and descriptive error message, so the user immediately understands what went wrong and why.
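Roughly the shape of what I suggested, assuming the project uses the groq-sdk and dotenv packages (the exact setup in my partner's code may differ):

```js
require("dotenv").config();
const Groq = require("groq-sdk");

let groq;
try {
  // groq-sdk throws here if GROQ_API_KEY is missing or empty
  groq = new Groq({ apiKey: process.env.GROQ_API_KEY });
} catch (err) {
  console.error(
    "Could not initialize the Groq client. " +
      "Did you set GROQ_API_KEY in your .env file?"
  );
  process.exit(1);
}
```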
Issue 3
Uncaught error when attempting to process a file that does not exist: The tool attempted to process a file without first checking if it existed. To fix this issue, I recommended wrapping the file parser in a try/catch block and providing a clear message to the user if the error occurs due to a missing file.
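Something along these lines is what I had in mind (the function name is hypothetical, and I'm assuming synchronous reads for simplicity):

```js
const fs = require("fs");

function loadSourceFile(filePath) {
  try {
    return fs.readFileSync(filePath, "utf-8");
  } catch (err) {
    if (err.code === "ENOENT") {
      // Missing file: fail with a clear, human-readable message
      console.error(`Error: "${filePath}" does not exist. Double-check the path and try again.`);
      process.exit(1);
    }
    throw err; // anything else (permissions, etc.) should still surface
  }
}
```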
Issue 4
I spotted a small typo in my partner's program: the -V flag was used to print the version information, but the requirement specified using -v or --version instead. I pointed this out as a nitpick, suggesting the flag should be lowercase. Additionally, the requirement mentioned displaying both the package's name and its version, but only the version was shown.
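I don't know for certain which CLI library my partner used, but if it's Commander (whose auto-generated version flag is an uppercase -V, which would explain the mix-up), the fix could be as small as:

```js
const { program } = require("commander");
const pkg = require("./package.json");

// Print both the package name and its version, on -v or --version
// instead of Commander's default -V
program.version(`${pkg.name} ${pkg.version}`, "-v, --version");

program.parse(process.argv);
```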
Issue 5
The last issue I found highlighted the importance of effective prompt engineering. I created a non-source code file containing the text:
Once upon a time, there was a boy called Romeo. One day, Romeo met Juliet!
The program unexpectedly generated a README.md file that discussed the play, providing historical context and additional details, even though it should have only processed source code files. To address this, I provided some prompt engineering tips, such as emphasizing and reiterating important notes at both the beginning and end of the prompt, to help ensure the program correctly identifies and processes only valid source code files.
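As a rough sketch of the prompt structure I suggested, the key constraint is stated up front and then repeated at the end (the function and the NOT_SOURCE_CODE sentinel are my own illustrations, not my partner's actual prompt):

```js
// Build a prompt that states the "source code only" rule at the start
// and reiterates it at the end, so the model is less likely to drift.
function buildPrompt(fileContents) {
  return [
    "You are a README generator. You ONLY document source code files.",
    "If the input below is not source code, reply with exactly: NOT_SOURCE_CODE",
    "",
    "--- INPUT FILE ---",
    fileContents,
    "--- END OF INPUT FILE ---",
    "",
    "Reminder: if the input above was not source code, reply with exactly: NOT_SOURCE_CODE",
  ].join("\n");
}
```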
Someone Else Reviewing My Code
How did I feel about someone else testing my code? Honestly, it felt great! Knowing that I wasn't writing this code just for myself but for others too, I made sure my code was readable, well-commented, organized, and easy to understand. Since my code is public, anyone, regardless of their skill level, might read it or try to contribute, so clarity was key.
Nothing really surprised me during the review, but it was a good reminder that no matter how many times you double, triple, or even quadruple-check your work, someone else might still catch things you missed. My lab partner spotted a few details I had overlooked, like a typo in the README.md and a missing repository description. He even found and diagnosed an issue that was causing my program to fail entirely: the Groq client baseURL had stopped working! I quickly made a patch PR to fix this by removing the baseURL altogether (sketched below). Additionally, he pointed out that I hadn't discussed any of the packages used in my program in the README.md file, so I went ahead and added that information.
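The patch boiled down to a one-line change, roughly like this (the stale URL itself is omitted here):

```js
const Groq = require("groq-sdk");

// Before: a hard-coded baseURL that had gone stale
// const groq = new Groq({ apiKey: process.env.GROQ_API_KEY, baseURL: "..." });

// After: drop baseURL entirely and let the SDK use its default endpoint
const groq = new Groq({ apiKey: process.env.GROQ_API_KEY });
```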
The Aftermath
I was able to resolve all of my issues thanks to my partner's incredibly detailed and thorough feedback, complete with screenshots, explanations, and suggested fixes. It was genuinely enjoyable to address issues that others had caught—things I hadn’t noticed myself! It gave me a sense of collaboration and community, which is exactly how Open Source Development should feel. I really enjoyed this experience and am eagerly looking forward to more.
Conclusion
The level of collaboration and teamwork required for asynchronous development is incredible. It's not just the developer writing the code who must focus on code cleanliness, formatting, and clear comments; the person filing an issue also needs to provide as much detail as possible to describe the problem, reproduce it, and even suggest potential fixes. This course has been a great reassurance that I made the right career choice. If it's this enjoyable in just the second week, I can't wait to see what's in store for the weeks to come!