DEV Community

Cezar Pleșcan

My Journey in Software Development

Early days as a programmer

The first instruction I wrote on a computer was the CIRCLE command from the BASIC language. I was fascinated to see the circle drawn on the display after executing the program. I experimented further with drawing lines, rectangles, and other figures. But I fancied some animation: I wanted those figures to move on the screen. My parents borrowed an informatics book for me, and I started by simply copying the implementations of some exercises. At that moment I realized that programming wasn’t just about drawing figures on the screen but also involved more complex logic and algorithms. I discovered IF, FOR, WHILE, and other statements.

I became attracted to computers and pursued the mathematics-informatics specialization in high school. We learned Pascal, C, and C++, languages that form the foundation for many others. We were also taught more advanced and interesting topics like algorithms, data structures, object-oriented programming, classes, and pointers.

One homework assignment involved drawing buttons that could perform actions. I paid particular attention to crafting the visuals, especially the clicked state. Another assignment involved creating movable figures. While others used keyboard commands to select them and define new coordinates, I was determined to make them draggable with the mouse, a more intuitive and practical approach. I was passionate about getting that homework done; it was one of the most complex pieces I had written at that time. Beyond these exercises, I also developed a C++ version of the classic snake game from Nokia phones.

My final high school graduation project was a two-player card game. That is where I first encountered Internet Explorer and JavaScript. I chose to develop my project in a browser because it offered an easier way to manipulate the cards on the screen compared to a C++ desktop application. Apart from displaying the cards, I also programmed the computer to play against a human opponent. Once again, I focused heavily on the visuals and on instructing the computer to play as well as possible, as they brought me immense satisfaction. Creating practical and useful applications became a core principle for me.

At university, I continued my journey in Computer Science. The most challenging project was to build a physical device with a photodiode capable of tracking a light source. For that I needed a microcontroller and various electronic components. I had to write the entire source code in assembly language, which offered me the lowest possible level of interaction with the microcontroller, unlike higher-level languages. I had to control every instruction and bit of information; there were no operations executed under the hood. I worked on that project for about six months, and I managed to complete it and make it work as I expected.

Discovery of web development

First paid project

Some colleagues at university had discussed PHP, but it didn’t spark my interest at first. Later, a close friend approached me about upgrading a WordPress website he’d previously worked on. I wasn’t familiar with PHP (the language WordPress is built on), but I remembered those conversations, my curiosity was rekindled, and I decided to embrace the challenge. This project opened the door to web development for me.

The existing website presented fitness products and articles. The owner wanted to add an online sales feature for fitness magazines. The payment process involved sending an SMS to a payment provider, followed by their service accessing a link on our website to verify the payment’s success. Having studied MySQL at university, I was able to integrate it for data management: user registration, login, and tracking the purchased magazines.

We successfully implemented the new feature, which allowed the website owner to start selling fitness magazines online. This was my first paid software project.

The first personal web project

Realizing the benefits of creating web software, I began to learn more about PHP, the Zend Framework, JavaScript, jQuery, HTML, and CSS. One of the challenges in JS was writing more structured code, since the language didn’t provide classes like PHP or C++. Initially I used plain objects to group related methods and properties, to get a way of working closer to OOP principles.
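To illustrate, a minimal sketch of that plain-object style might have looked like this (the `invoiceForm` name and its fields are invented for illustration):

```javascript
// Grouping related state and behavior in one plain object: the pre-class
// way of approximating OOP in JavaScript.
var invoiceForm = {
  fields: {},
  setField: function (name, value) {
    this.fields[name] = value;
  },
  validate: function () {
    // e.g. a simple required/positive-amount rule
    return Boolean(this.fields.amount && this.fields.amount > 0);
  }
};

invoiceForm.setField('amount', 120);
console.log(invoiceForm.validate()); // true
```

There is no real encapsulation here, but related logic at least lives in a single place.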

When I started to use jQuery, I was curious to see how its source code was written. What impressed me was the way the code was structured, how the data was encapsulated, and the mechanism for defining private and public-like methods. I can say that jQuery was a turning point in developing my JS knowledge.

Even then, I believed the best way to learn was through practice, not just theory. A real-world problem I identified was the lengthy and error-prone process homeowners’ associations faced when managing invoices.

I began to sketch a solution for that problem. Initially I didn’t know where to start, so I asked myself what main functionalities I needed to cover. I realized the process wasn’t simple and straightforward. If someone had handed me an already defined problem, I could have solved it, but here I first had to define the problem, which proved to be challenging. The tools I used were a pen, a notebook, and my imagination. Writing down every idea I had helped me better visualize and clarify the requirements.

I then set out to create my own library of components, covering form submission and validation, error display, and field-specific validation rules for entering invoice data. My goal was to create a single-page application using Ajax requests for a more user-friendly experience, avoiding traditional page reloads.

My focus was on code structure too. This proved challenging, especially considering that it involved abstraction and experience, neither of which were heavily emphasized in school. There, unfortunately, we were asked to solve specific problems without delving into the broader applicability of the solutions.

Many times I found myself caught up in implementation details, which were beneficial for my technical growth but didn’t bring any valuable progress to the project outcome. A crucial step I discovered was taking a step back and clearly defining the desired outcome and functionalities. I met with a few administrators of homeowners’ associations to understand their real needs and most time-consuming activities, so my project would be valuable to them.

The project spanned several months. While I made progress, I eventually acknowledged that a clear vision for the product was lacking. However, this wasn’t wasted time. Experimenting on this project equipped me with valuable knowledge. Furthermore, I learned the limitations of working solo and the importance of collaboration and seeking guidance from experienced developers.

Beginning of my web career

I decided to enter the web development field and started looking for job opportunities. During an interview, the manager was impressed by my solid experience and keen attention to detail when I presented my previous project to him. I got the job as a full-stack developer, working with PHP on the backend.

I joined a team of five full-stack developers and a project manager who was responsible, among other things, for task definition, organization, and assignment. We used an agile methodology and Jira for task management, both new concepts for me. I was impressed by our team's organized approach. We openly debated technical topics, and I enjoyed working in this collaborative atmosphere. My initial tasks involved creating algorithms to detect different website types, such as e-commerce, forums, educational platforms, and government websites.

I improved my PHP knowledge and gained experience with databases, task organization, and prioritization. I learned that prioritizing was paramount because we had a fixed time frame in which to deliver the software. I grasped the core concepts of agile methodologies: incremental development, continuous feedback gathering, and process improvement. While not fully comprehending the power of agile frameworks like Scrum at that time, I adhered to the established practices.

Another project I worked on was a social media platform for sports players. Players could register profiles, share posts, receive comments, and exchange messages with other users. I continued working as a full-stack developer, as part of another team that ranged from two to four developers. A significant accomplishment for me was independently implementing the entire messaging system: designing and developing both the user interface for composing and displaying messages and the backend functionality for handling message delivery and database updates. Taking charge of the Facebook integration helped me feel more confident in my ability to handle tasks on my own, with less help from others.

Another feature I developed was an infinite carousel displaying various images. The carousel needed to work on both desktop and mobile, so it was the first time I had to write mobile-compatible JavaScript. I also needed to ensure compatibility across browsers like Safari, Opera, Firefox, and Internet Explorer (Google Chrome wasn’t as popular yet).

As I saw the increasing complexity of web applications, I realized the code needed to be organized in a better way, so I turned to modularization, using IIFE blocks to define and separate different pieces of application logic and components. Our large JavaScript files only partially adhered to the single responsibility principle; the code functioned well, but that wasn’t the only goal to achieve. It lacked clarity for new team members, who needed additional guidance to understand the existing code.
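The IIFE module pattern we relied on can be sketched like this (the module name and methods are hypothetical):

```javascript
// The IIFE (Immediately Invoked Function Expression) module pattern:
// each file wraps its code in a function that runs immediately, so only
// the names we explicitly return reach the global namespace.
var messagingModule = (function () {
  var messages = []; // private: invisible outside the IIFE

  function send(text) {
    messages.push(text);
    return messages.length;
  }

  // public API: only what we return is exposed
  return {
    send: send,
    count: function () { return messages.length; }
  };
})();

messagingModule.send('hello');
console.log(messagingModule.count()); // 1
```

The `messages` array stays private; only `send` and `count` leak out, which was the whole point of the pattern.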

The first large-scale single-page application

Let me tell you about a project that was a real game changer for me. I had the opportunity to build a web-based version of an existing, complicated desktop app. I was assigned full responsibility for implementing it from scratch and almost couldn’t believe I got such a chance!

I went back to my old personal project for inspiration and thought again about how to structure the application: what its main abstract parts were and how they had to interact. The main role of an HTML page was to present content, not to interact with it. I had to come up with a structure that would help me easily build a web application resembling a desktop app's look and feel.

At that time jQuery was very popular, and I used it too. It was quite straightforward to define event handlers or to show and hide specific elements on the page. Being obsessed with the idea of working with abstractions, the first pattern I noticed was that specific actions were tied only to specific elements, and other parts of the webpage were influenced by the results of some of those operations. I tried to visualize the web page as being composed of black boxes, each having inputs and outputs, inspired by how electrical circuits work. Those black boxes could be composed of other boxes as well, and they could communicate with each other.

OK, so I had an abstract concept, but how could I apply it to a web page? The immediate connection I made was to associate an HTML element with its specific logic and treat it as a black box, or to visualize it as an object in the OOP sense. If I needed some information connected to an HTML element, I would use the object, not the element itself; more specifically, I would call methods on it. The idea of being able to use OOP inside a web page was a revelation at that time, and I couldn't have imagined anything better!

JS didn’t have native support for classes, but I found a very nice implementation of them by John Resig, the author of jQuery. Every class was defined by specifying its prototype as an object of functions, and it could be extended as well. But what could those black boxes I talked about earlier actually represent? As I mentioned, an HTML element and its associated logic formed a specific type of class (or black box), which I named ‘Container’, the equivalent of today’s Component in Angular. To create an instance, I passed the jQuery element and optionally a set of options to the class constructor. Those options could include functions that were executed when internal events were triggered; the functions had to start with the ‘on’ prefix. Today they can be considered an equivalent of Angular's event binding.
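A heavily simplified sketch of that style of prototype-based classes, in the spirit of Resig's Simple JavaScript Inheritance (not his actual code), together with a toy ‘Container’ and its ‘on’-prefixed options:

```javascript
// Prototype-based "classes" with an extend mechanism, as used before ES6.
function createClass(parent, proto) {
  function Class(options) {
    if (this.init) this.init(options); // constructor convention
  }
  Class.prototype = Object.create(parent ? parent.prototype : Object.prototype);
  Object.assign(Class.prototype, proto);
  Class.extend = function (childProto) {
    return createClass(Class, childProto); // classes can be extended too
  };
  return Class;
}

// A hypothetical 'Container': an element's logic plus 'on'-prefixed callbacks
const Container = createClass(null, {
  init(options) {
    this.options = options || {};
  },
  trigger(eventName, payload) {
    const handler = this.options['on' + eventName]; // e.g. onSelect
    if (handler) handler(payload);
  }
});

const box = new Container({ onSelect: item => console.log('selected', item) });
box.trigger('Select', 'row-1'); // prints "selected row-1"
```

In the real library the constructor also received the jQuery element; it is omitted here to keep the sketch self-contained.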

Another crucial concept I introduced was about data handling: how it was retrieved from the server, how it was updated, and how to trigger those actions. This part wasn’t related at all to any HTML content, so I had to introduce new types of classes to separate this concern: Resource, Request, Command.

The Resource class represented a specific piece of data and implemented the Observer pattern: other objects could register handlers for specific data events, and when those events occurred, the Resource instance would trigger the registered handlers. I have to mention that at the time I was unaware of the Observer design pattern, which I only read about later. A special derived class was ApiResource, which provided a mechanism for communicating with the backend and updating its internal data.
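The Resource idea can be sketched as a tiny Observer implementation (method names here are illustrative, not the original library's API):

```javascript
// A piece of data that notifies registered handlers when it changes.
class Resource {
  constructor(data) {
    this.data = data;
    this.handlers = {}; // event name -> list of callbacks
  }
  on(event, handler) {
    (this.handlers[event] = this.handlers[event] || []).push(handler);
  }
  set(data) {
    this.data = data;
    (this.handlers['change'] || []).forEach(h => h(data)); // notify observers
  }
}

const user = new Resource({ name: 'Ana' });
user.on('change', data => console.log('user changed:', data.name));
user.set({ name: 'Maria' }); // prints "user changed: Maria"
```

An ApiResource-like subclass would additionally fetch data from the server and call `set` with the response.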

The Request class was responsible for the actual request to the backend. It also checked whether the server returned an authentication error, and it sent a secret token to prevent CSRF attacks. As you may expect, the ApiResource class used this class to communicate with the server.

The Command and CommandManager classes were a way to define and trigger global actions (similar to today’s NgRx actions).
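A minimal sketch of that Command idea (all names are invented for illustration):

```javascript
// Named, globally triggerable actions, registered in one place —
// conceptually similar to dispatching NgRx actions.
class CommandManager {
  constructor() { this.commands = new Map(); }
  register(name, execute) { this.commands.set(name, execute); }
  trigger(name, payload) {
    const execute = this.commands.get(name);
    if (!execute) throw new Error('Unknown command: ' + name);
    return execute(payload);
  }
}

const manager = new CommandManager();
manager.register('openPage', page => 'navigating to ' + page);
console.log(manager.trigger('openPage', 'settings')); // "navigating to settings"
```

The benefit is decoupling: the code that triggers `openPage` never needs to know which object implements it.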

Emergence of various frontend tools

The JS code was split across multiple files, most of them using IIFEs to avoid exposing variables to the global namespace. The HTML files were generated by PHP in the Zend Framework. The JS files were not easy to manage; for example, when I needed to load them in a specific order, I had to prefix their names with numbers. Later on, minification tools like JSMin appeared, and I created a PHP script to combine the multiple JS files and minify them into a single one, which reduced the page load time.

The introduction of Node.js and npm changed frontend development, and I found myself confused about which tools or libraries to use. After days of research I finally figured out how to set up and properly use CommonJS modules and Browserify to bundle the JS files into a single one. I had to adapt to the new technologies and standards, which wasn’t easy because I had gotten used to my own mindset of writing code. This was a challenge for me, but I had no alternative, so I continued to study, practice, and learn.

One of the coolest things I found was the Backbone.js framework, which had concepts similar to the library I had developed, but looked cleaner and more structured. I had the chance to use it on a real large-scale project, which also used private npm packages to separate shared components, styles, and logic. I witnessed how the complexity of web applications started to grow, and the need for specialized tools and libraries became obvious.

Write access to those internal packages was restricted to the frontend lead developers. Having proven my skills in writing good code, I was granted access to push code to those repositories and to review and approve pull requests.

Backbone.js also worked with templating engines, like Mustache, Handlebars, or Pug (formerly known as Jade), which separated the display logic from the data. I found this approach very useful; I have always loved it when things are decoupled and have well-defined responsibilities. Later on, I learned that this concept was named “separation of concerns” and that it is one of the main pillars of software design.

Frontend tools started to emerge and evolve. It was challenging for me to see their benefits and to use them properly. Sometimes I asked myself, “Do we really need this?”, as long as we had a workflow in place and the team was used to the established procedures. I realized that development was not only about writing code, but also about working with tools that automated some processes in order to avoid unintended human errors.

With the release of ES6, many developers wanted to adopt the new JS features, but browsers didn’t fully support the new standard, so we used Babel to write ES6 code and transpile it to ES5. This meant another process needed to be run, so automation became really necessary. At that time we used Grunt to run all the processes, including live reloading in the browser, which was a very cool thing to have.

These build tools changed the way web development was done. One noticeable change I faced was that the frontend became separated from the backend, and most of the communication between them happened through API requests. The backend became unaware of how the frontend rendered the data; it simply returned the requested data to the client. That implied the frontend had to be served from a separate development server, usually provided by BrowserSync or the Webpack dev server. But where did the actual backend server and its database live? The answer was offered by Docker.

This was the moment when I transitioned from being a full-stack developer to frontend development only, being more attracted to the visual part of an application, but without underestimating the work on the backend. I had to face a lot of tools; the web industry had changed compared to the time when I started my first web projects. I have to admit that back then I felt a bit overwhelmed by the multitude of tools around me. I had to learn and understand how each tool needed to be used, and I had to do that by myself; there were no mentors around to teach me. Was it easy? Not really. Did I survive? Yes, and now I can say that I made a good choice to be patient and not give up.

As humans, we have a natural tendency to stay in our comfort zone and keep a daily routine. The reality is that things around us are evolving, and I chose to adapt to changes, even if that implied some effort. What is the consequence in the present? I’m open to new things and challenges, and I really like them. It brings me satisfaction to face and deal with something new, and I’m not talking about programming only. I encourage you to try the new things you have in mind, even if your first thoughts are against doing so. Of course, you’d need to carefully weigh the benefits and consequences of your actions.

My Angular Journey

Back to my frontend career: the tools I mentioned earlier evolved quickly, and so I discovered the Angular v2 framework. I played around with its “Tour of Heroes” tutorial, and I was excited to work on a real project with this new framework. The component-based architecture with data binding and event handling really caught my interest; the principles were similar to what I had used in my own library. The way of encapsulating the view and the logic into a single entity was something I could only have dreamt about in the past.

Everything seemed straightforward when reading the documentation and the examples, but when I had to implement things that weren’t so common, I found myself a bit lost. I knew what I had to do but couldn’t easily translate it into Angular language. A lot of things happen under the hood in Angular, and I didn’t have enough knowledge to understand everything at once.

I then discovered the change detection mechanism, a powerful feature with its own caveats. The view was automatically updated when a component property changed, which was very good, but there were times when things didn’t go as expected.

Then I encountered injectable services but couldn’t figure out where the “new” keyword was used or how the class was instantiated. I was used to writing code from scratch and having more flexibility in handling the logic, but Angular was effectively saying: “don’t worry about the internal workings, I’ll handle them for you”. I felt that Angular put boundaries around me and that I couldn’t customize the implementation as much as I preferred, which was initially frustrating, but then I realized that the role of a framework is to let you build an app on top of it, following its rules.

It was another moment when I had to adapt, mostly to a new programming paradigm: declarative programming. I came from an imperative programming world and had to learn a new way of writing code; it was basically like learning a new programming language, a new instruction set. Of course, I had to embrace this shift; there were a lot of benefits to using the framework. Angular became popular, and clients wanted to adopt a standardized framework for an understandable reason: new developers joining a project would have a specific set of rules to follow and could adapt more easily, without too much training from the more experienced developers already on it.

More than this, there were times when a certain thing could be implemented in different ways. So which one to choose? That was another uncertainty. Personally, I wanted to have one way of doing something and stick to it, but I wasn’t so lucky. I didn’t know whether my implementation was good or bad. I’ve always preferred to understand the reasoning behind rules and practices, not just blindly follow them.

Taming Angular sometimes felt like a constant struggle, especially with the module system. Why should I declare so many imports and exports in a module? Why do I even need a module? I couldn’t find the answers to these questions until Angular introduced standalone components in v14, which brought much-needed relief!

Change detection

The interpolation syntax in the template looked similar to the Mustache template engine I had used before. If a property changed after a certain event, the view was automatically updated without any manual intervention, a great feature we all appreciated. However, this came with a hidden cost: the entire application view was re-rendered, and any functions anywhere in the template were executed again, even if we didn’t ask for that. For example, when I defined a click handler on a button, every time I clicked it, the entire view was recomputed, even though my button belonged to a small component. The initial excitement faded; why were other parts of the view executing when they had nothing to do with my button click? Things became clearer when I read about and experimented with how the change detection mechanism worked, the OnPush detection strategy, and the ChangeDetectorRef provider with its detectChanges and markForCheck methods.
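A toy model of why unrelated template expressions run again (this is a crude dirty-checking loop for illustration, not Angular's actual algorithm):

```javascript
// Two "template bindings": one related to the button, one completely unrelated.
let count = 0;
const evaluations = [];

const bindings = [
  () => { evaluations.push('counter'); return 'count = ' + count; },
  () => { evaluations.push('unrelatedHeader'); return 'My App'; },
];

// After every handled event, change detection re-checks EVERY binding,
// whether or not the event could have affected it.
function handleEvent(handler) {
  handler();
  bindings.forEach(b => b());
}

handleEvent(() => { count++; }); // "click" the button once
console.log(evaluations); // [ 'counter', 'unrelatedHeader' ]
```

The OnPush strategy exists precisely to prune this re-checking to components whose inputs actually changed.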

Dependency Injection

Initially this mechanism seemed very abstract and hard to grasp. When I needed a service, I had to declare a parameter with the type of that service class in the constructor of the component, and then I had access to the service instance. It looked simple to use, but how were things built under the hood? Later I realized that every component had an injector which could provide different services. These injectors were organized similarly to the component tree, in a hierarchical way. When a service was requested, the injector started the search at the component's own node, then walked up the hierarchy. This meant that different components could request the same provider (basically the same provider token) but get different service instances. Eureka! A similar concept exists in React, where providers are declared directly in the views. Compared to change detection, I find the dependency injection mechanism a very clever one, and I give full credit to those who designed it.
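The hierarchical lookup can be sketched with a toy injector chain (an illustration of the concept, not Angular's real implementation):

```javascript
// Each injector can provide tokens itself and falls back to its parent.
class Injector {
  constructor(providers, parent = null) {
    this.providers = providers;   // token -> factory function
    this.parent = parent;
    this.instances = new Map();   // each injector caches its own instances
  }
  get(token) {
    if (this.instances.has(token)) return this.instances.get(token);
    const factory = this.providers[token];
    if (factory) {
      const instance = factory();
      this.instances.set(token, instance);
      return instance;
    }
    if (this.parent) return this.parent.get(token); // walk up the hierarchy
    throw new Error('No provider for ' + token);
  }
}

const root = new Injector({ logger: () => ({ name: 'root logger' }) });
const child = new Injector({}, root);        // no own provider -> asks parent
const sibling = new Injector({ logger: () => ({ name: 'local logger' }) }, root);

console.log(child.get('logger').name);   // "root logger" (shared with root)
console.log(sibling.get('logger').name); // "local logger" (its own instance)
```

This is exactly the behavior described above: the same token resolves to different instances depending on where in the tree the request starts.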

Routing

The library I had built included the concept of pages, and only one page could be visible at a time. Opening a page was done by calling a globally accessible method, specifying the name of the page and optionally a set of parameters. Comparing my basic routing mechanism to Angular's, the similarities are the name of the page (or route), the component associated with the route, and the passing of optional parameters. Angular introduced more advanced features, like updating the URL in the browser when the route changes, listening to URL changes, lazy loading a route, and many more. Again, it mostly uses declarative programming, hiding the implementation details from the developer.

One of the unexpected things I encountered was that Angular destroys components after navigating to another route. That was really strange for me in the beginning; my own implementation of pages simply hid the components in the DOM and then showed them again, but Angular had a different approach. It was annoying to see that previously fetched data was gone after returning to the earlier route. This opened the gate to understanding state management solutions and how to persist data after a component is destroyed. It also implied extracting data handling logic from the components into separate services, so that a component would just retrieve the data from those services. This approach is called separation of concerns, which is considered good programming practice.

Reactive forms

When I started working with Angular reactive forms, I noticed some high-level similarities with the form handling from the PHP Zend Framework: associating validators with one or more fields, or reading and displaying errors. I was pleased that form fields in Angular were conceptualized as abstract objects.

A challenge I faced was displaying the errors efficiently. For that I created a custom directive to which I could pass the field's errors object. The reason I implemented such a directive was to write less code in the templates, knowing that the traditional way implied many lines for displaying different field errors. I have always been tempted to find a solution when I see repeating code or code that follows a specific pattern.

A very useful feature of forms and form fields has been the representation of change events as observables, which makes total sense and adheres to RxJS concepts.

RxJS

The reactive programming introduced by RxJS can bring both satisfaction and headaches. I have to admit it can drastically reduce the amount of code you write, but on the other hand, the learning curve has been very steep. I came from a traditional imperative way of programming, where I controlled almost the entire code path and could easily debug and fix errors; when it came to using RxJS operators, I really found myself a bit lost. It was again like learning a completely new and different language, with a new set of instructions. Indeed, those instructions are basically the operators, the building blocks of RxJS. What is a stream, how to visualize it, when to use it: these were questions I struggled to find practical answers to.

It was another challenge I had to deal with. I don’t know how it happened, but when I stopped struggling to understand the RxJS concepts, I had a moment when I visualized observable streams as conveyor belts carrying different items, items that could jump to other conveyor belts, creating connections and dependencies between them. This visualization proved very useful: it helped me create observables from other ones and understand what flattening operators do.

Initially, I performed an HTTP request on a button click in the traditional imperative way: define a handler for the click event, subscribe to the HttpClient inside it, and assign the result to a component property. With my new approach, I visualized three connected streams: the click events, the HTTP requests, and the responses. The trigger of the final stream was the click event, and the output was the data returned by the requests. This way I could also make use of the async pipe in the templates, without worrying about unsubscribing.
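The three-streams idea can be sketched with a tiny hand-rolled stream type (the `Stream`, `switchMap`, and `fakeHttpGet` names are illustrative, not real RxJS APIs; in actual Angular code you would build this from `fromEvent` or a `Subject` plus the `switchMap` operator):

```javascript
// click events -> HTTP requests -> responses, as connected streams.
class Stream {
  constructor() { this.listeners = []; }
  subscribe(fn) {
    this.listeners.push(fn);
    if ('last' in this) fn(this.last); // replay the latest value, if any
  }
  emit(value) {
    this.last = value;
    this.listeners.forEach(fn => fn(value));
  }
  // switchMap-like: each emission starts an inner stream; only values from
  // the LATEST inner stream are forwarded (older ones are ignored)
  switchMap(project) {
    const out = new Stream();
    let current = 0;
    this.subscribe(value => {
      const id = ++current;
      project(value).subscribe(inner => {
        if (id === current) out.emit(inner);
      });
    });
    return out;
  }
}

// A fake HTTP call that "responds" synchronously, to keep the sketch simple
function fakeHttpGet(url) {
  const response = new Stream();
  response.emit('data from ' + url);
  return response;
}

const clicks = new Stream();                                         // stream 1
const responses = clicks.switchMap(() => fakeHttpGet('/api/items')); // streams 2 & 3
responses.subscribe(data => console.log(data));
clicks.emit('click'); // prints "data from /api/items"
```

The conveyor-belt picture maps directly onto this: each click drops an item onto the request belt, and its response jumps onto the output belt.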

Even now, as I write down these memories, I keep thinking about this mental transition from imperative to reactive programming. I really find it useful and couldn’t imagine using the imperative approach again when dealing with multiple interconnected streams. I felt pride and joy after grasping this new philosophy of programming.

Signals

I was curious when I heard the Angular community talking about signals. I have been impressed by this feature, and I consider it a huge improvement introduced by the Angular team. I could easily visualize a signal as a piece of information that automatically informs its consumers when it changes and that can depend on other signals, creating something like a dependency graph. Their usage is straightforward, though there are some rules to follow. I see them as a very powerful feature, and I’m sure they’ll have a positive impact on the Angular framework's popularity. Full credit, again, to the Angular team for such nice work!
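The dependency-graph idea can be shown with a minimal sketch (not Angular's implementation): reading a signal inside a computation registers that computation as a consumer, and setting the signal re-runs its consumers.

```javascript
let activeConsumer = null; // the computation currently executing, if any

function signal(initial) {
  let value = initial;
  const consumers = new Set();
  const read = () => {
    if (activeConsumer) consumers.add(activeConsumer); // track who reads us
    return value;
  };
  read.set = (next) => {
    value = next;
    consumers.forEach(run => run()); // notify dependents
  };
  return read;
}

function computed(fn) {
  const result = signal(undefined);
  const run = () => {
    const prev = activeConsumer;
    activeConsumer = run;  // dependencies are tracked while fn executes
    result.set(fn());
    activeConsumer = prev;
  };
  run();
  return result;
}

const price = signal(10);
const quantity = signal(2);
const total = computed(() => price() * quantity());

console.log(total()); // 20
price.set(15);
console.log(total()); // 30 — recomputed automatically
```

Real signal implementations add lazy evaluation, glitch avoidance, and cleanup, but the tracking trick is the core of the mental model.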

Project architecture

From my perspective, simply learning how to use some features is not enough to build an application. One of the first aspects I consider is the folder structure: how to organize the files inside the project. I prefer having separate main folders for: a) the core of the application, which includes the main component, routing, guards, interceptors, and other things specific to the application itself; b) the main features (or pages) of the application; c) shared or reusable blocks used by other entities. I consider the folder structure a crucial aspect of the project architecture; I visualize it like cabinets with drawers, from which I want to easily pick the tools I need to build the project.

Another practice I have adopted is loose coupling and high cohesion. It came mostly from my own experience rather than from reading about it. I developed it naturally, out of my desire to create self-contained, independent modules with few dependencies on others, able to communicate through an interface and unaware of any implementation details. When I build an entity (a component, service, directive, pipe, and so on), I ask myself: What are the consequences if I completely remove it from the app, or if I change or add something to it? How much does it affect the rest of the app? If I see that there is a significant impact, I refactor or restructure the code. My philosophy is to write code that won’t be affected by future updates, or, at least, will need only a few changes.

I want to add a few words here about how I deal with state management. Even before Angular, I was aware of the benefits of separating data handling from the components themselves. But how far do I go with this separation? Two questions help me find the answer: Does a specific piece of information belong to a certain component only? Can that information exist without that component? I have been using NgRx for state management, and I avoid seeing things as black or white, adapting instead to specific scenarios. My approach is the following: if a piece of data can live independently of a component, I consider it part of the global state; otherwise, it belongs strictly to that component.

Software design practices

Ever since I started working on my first SPA, I have focused on working in an OOP style and visualizing the UI as a hierarchy of building blocks. When I have a wireframe or prototype to implement, I first analyze what the main top-level components are. For this I take into account which parts have related functionalities, how they interact with each other, and what type of information could be passed between them. I try to abstract out these blocks by temporarily ignoring presentational details and extracting the purpose of each block. I developed this technique through personal practice, and later I realized I wasn’t the only one with such a vision. There is never only a single solution to a problem; everyone can have their own approach. I have to admit that I worked with senior developers who mostly had a procedural way of implementing the skeleton of an app (this was before Angular), and I was a bit surprised; I expected a more abstract and advanced approach from them. I prefer to see things as neither good nor bad, but to be aware of and weigh the implications and consequences of choosing or designing a specific solution.

A real turning point in my career was the discovery of the book Design Patterns by the Gang of Four. It came at the right moment, when I was searching for ways to develop abstract solutions. Even though the book describes 23 design patterns, I was less interested in learning each of them than in understanding the ideas and principles that lie behind developing a pattern. The two invaluable principles I was attracted to were: "Program to an interface, not an implementation" and "Favor object composition over class inheritance". I allow myself to compare these statements with Einstein's equation E=mc²: they all look very simple, yet they are very powerful. Of course, the literature includes other principles as well, like SOLID, DRY, KISS, or YAGNI, and they are valuable too. Without being initially aware of them, I have been applying them naturally, as a consequence of my continuous pursuit of modular software.
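The second principle can be illustrated with a short hypothetical TypeScript sketch: instead of a class hierarchy where every variation needs a new subclass, small behaviors are composed freely.

```typescript
// Hypothetical sketch of composition over inheritance.
// Instead of a hierarchy like Logger -> TimestampLogger -> UppercaseTimestampLogger,
// each behavior is a small function that can be combined at will.
type Formatter = (message: string) => string;

const withTimestamp: Formatter = (msg) => `[ts] ${msg}`; // placeholder timestamp
const uppercase: Formatter = (msg) => msg.toUpperCase();

// A logger is composed from whatever formatters it needs,
// applied left to right.
function makeLogger(...formatters: Formatter[]): Formatter {
  return (msg) => formatters.reduce((acc, f) => f(acc), msg);
}

const logger = makeLogger(uppercase, withTimestamp);
const line = logger("server started");
// line === "[ts] SERVER STARTED"
```

Adding a new behavior means writing one new function, not extending a class tree, which is where the power of that "simple" sentence shows.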

As for how I actually start writing code to solve something new, most of the time my approach is top-down: I begin by defining the main classes or functions, then how they should communicate. I strive to give entities proper semantic names, so the code is as self-explanatory as possible. For example, when I have a non-obvious condition in an if statement, I prefer to create a function or a variable for that expression, named according to its purpose. A rule of thumb is to group related code into separate functions or methods whose names are chosen the same way.
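A small hypothetical TypeScript example of this naming rule: the raw condition is extracted into named variables and a function that states its purpose.

```typescript
// Hypothetical domain model, for illustration only.
interface User {
  age: number;
  emailVerified: boolean;
  suspendedUntil: number | null;
}

// Instead of inlining:
//   if (user.age >= 18 && user.emailVerified && user.suspendedUntil === null) { ... }
// the condition gets names that state its intent.
function canPostComments(user: User): boolean {
  const isAdult = user.age >= 18;
  const isInGoodStanding = user.emailVerified && user.suspendedUntil === null;
  return isAdult && isInGoodStanding;
}

const alice: User = { age: 30, emailVerified: true, suspendedUntil: null };
const ok = canPostComments(alice);
```

The call site now reads like a sentence, so the if statement no longer needs a comment explaining what it checks.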

One of the first questions I ask myself when working on a task is: what do I have to do? Then I translate the answer into simple code, without writing any implementation details. After that I answer: how can I actually implement what I have just written? It looks pretty simple, but in practice it is not as easy as it looks on paper. I have caught myself struggling to solve something, trying to find solutions to many things at once, thinking ahead about what I might need later, and so on. I was basically failing to define the direction I had to go in, to define the WHAT. Many times I started writing code right away, then wrote more code, only to realize I had no proper direction. I saw this behavior in other experienced developers too, so it wasn't just me. My best tools for solving a problem are a pen and paper; I have filled dozens of pages with sketches and notes, and I will continue to do so because it helps me visualize the problem and structure the solution. I draw boxes representing different entities, then arrows to illustrate how those boxes communicate with each other. One of the first things we were taught in informatics class was the flowchart for describing the operations of an algorithm, and I still use it today.
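The WHAT-before-HOW idea can be sketched like this (a hypothetical TypeScript example): the top-level function first states the steps in plain terms, and only afterwards does each step receive an implementation.

```typescript
// WHAT: the top-level function reads like a list of requirements,
// with no implementation details yet.
function registerUser(email: string, password: string): string {
  validateInput(email, password);   // WHAT: input must be valid
  const id = createAccount(email);  // WHAT: an account must be created
  sendWelcomeEmail(email);          // WHAT: the user must be notified
  return id;
}

// HOW: each step is implemented afterwards, one by one.
function validateInput(email: string, password: string): void {
  if (!email.includes("@") || password.length < 8) {
    throw new Error("invalid input");
  }
}

function createAccount(email: string): string {
  return `user-${email}`; // placeholder id generation
}

function sendWelcomeEmail(email: string): void {
  console.log(`welcome sent to ${email}`);
}

const id = registerUser("a@b.com", "secret123");
```

Even before the helpers exist, the top-level function already fixes the direction, which is exactly what the pen-and-paper boxes and arrows do for me.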

Presentations

Design patterns

One day my manager asked me if I wanted to give some presentations about design patterns in front of an audience of around 10 colleagues. Even though I didn't know much about the topic, nor had I given presentations (except those for my university graduation), I accepted on the spot. I was aware that I had to learn and prepare. And it wasn't just one presentation; there were multiple sessions with different groups of people and different design patterns. Fortunately, the content was already prepared and I just had to deliver it. But it wasn't just reading from the PowerPoint slides; I also had to understand how those patterns work. The main sections of a presentation covered the problem, the solution, a UML diagram, a Java code example, use cases, and pros & cons. I found it very well structured, so I didn't have to change any content, only to familiarize myself with the concepts.

Having all eyes in the room pointed at me when I started the first presentation made me freeze for a moment and forget everything I had to deliver. I felt very uncomfortable, but I managed to take a deep breath, drink a sip of water, and start talking about the topic. At the end I was relieved that the presentation was over and all my colleagues had left the room. But it wasn't completely over; I had to continue delivering the other design patterns too. There's no better way to learn something than actually doing it; reading books is useful, but not enough. The best way to learn to swim is to jump into the water; I know of no one who became a champion only by reading instructions or watching how others did it.

I have no words to describe how valuable that experience was: it helped me reduce my anxiety when speaking in front of people; it taught me how to structure a presentation on a technical topic; and it reduced the development time for the project I worked on, increased the code quality, and reduced the number of major and critical bugs.

Workshops

Throughout my career I encountered problems that could be solved in different ways. I initiated a series of interactive workshops and encouraged the audience (usually no more than 6 people) to come up with their own solutions. We then discussed the benefits and disadvantages, followed by a presentation of my proposed solution. The idea was to involve my colleagues in the discussion too. There were situations when someone had a better solution than mine, so I find it very useful to create a debate, to have two-way communication rather than just unidirectional delivery of information. One of my favorite topics was "composition vs inheritance". In one such session the challenge was to find ways to implement a reusable filtering component that could host multiple filters. That presentation was before the Angular era, so the solutions weren't as standardized as they are nowadays.
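One possible shape for such a reusable filtering component, sketched in TypeScript with hypothetical names: each filter is an independent predicate, and the component simply combines whichever filters it is given, so new filters can be added without touching existing code.

```typescript
// A filter is just a predicate over the item type.
type Filter<T> = (item: T) => boolean;

// The reusable part: keep only items that pass every filter.
function applyFilters<T>(items: T[], filters: Filter<T>[]): T[] {
  return items.filter((item) => filters.every((f) => f(item)));
}

// Hypothetical domain and filters, composed rather than inherited.
interface Product { name: string; price: number; inStock: boolean; }

const byMaxPrice = (max: number): Filter<Product> => (p) => p.price <= max;
const inStockOnly: Filter<Product> = (p) => p.inStock;

const products: Product[] = [
  { name: "pen", price: 2, inStock: true },
  { name: "desk", price: 120, inStock: true },
  { name: "lamp", price: 30, inStock: false },
];

const visible = applyFilters(products, [byMaxPrice(50), inStockOnly]);
// visible contains only the pen
```

An inheritance-based alternative would need a subclass per filter combination; here, a new filter is one more function in the array.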

The fact that there was an open discussion during the presentations put me in situations where I had to actively listen to what my colleagues said and try to understand their ideas. I was also put in various communication contexts; for example, one time a colleague started talking over me even though I hadn't finished my sentence. What I learned from these interactions was to listen to every idea and to ask for clarifications, or how their solution would cover specific cases. Another lesson was that I could be wrong too; I make mistakes, I'm not perfect, but I'm open to admitting it when that's the case.

JavaScript training

Other presentation sessions I held were about the JavaScript language, for a group of about 20 people with other specializations. So I had to start with the basics, which, contrary to expectations, is not easy at all. The more experienced we become, the more we rely on advanced concepts; explaining something simple to beginners can be a challenging task. I realized that those "simple" things are connected to other "simple" things I also had to explain, and so on, so the initial explanation became quite complex. My approach was thus to start with the very basics and gradually add new concepts, so the audience would stay focused and be able to keep track of the information presented.

Hackathon

I smile when I recall the moments from that event. I was part of a team with a project manager (who owned the idea for the project), a backend developer, a tester, and myself. The company organized a hackathon and six teams took part in the challenge. We went to a chalet in the mountains, surrounded by woods and very fresh air. Our project was a scheduler for meeting rooms, so everyone could easily see when a room was available. We were awarded second prize, and later the project was actually deployed in the company, so each meeting room had a tablet near the door displaying its availability. I used React for the frontend; it was new to me at the time, but I learned it quickly and was able to complete the project. I spent the entire night working, and when I got to bed I could still see the code in my mind when I closed my eyes. I was tired, but it paid off; we were all satisfied with the work and the results.

Mentoring juniors

For the first 3 years of my web development career I worked either alone or with people who had similar or more experience. I was able to work independently and get tasks done without additional support, so I gained my manager's trust that I could lead a project. Around that time the company I worked for organized an internship program, and I was asked to supervise the interns. Some of them were very friendly and showed a willingness to learn, and I was happy to share my knowledge with them.

After a few weeks my manager told me that the other colleague on the team would be moved to another project and that the team would gain 4 new members who had passed the internship. I felt honored, appreciated, and happy to have this new responsibility. But it came with challenges too, as each of them had a different level of knowledge, so I had to adapt and be patient with each of them. I started by assigning them low-difficulty tasks and, depending on their productivity, later gave them more complex work. The main challenge I faced was explaining some custom mechanisms I had already implemented in the project. They weren't used to abstractions, which was normal for their experience level. I thought about what would be an easy way for them to understand something new and abstract, and I came up with making analogies to situations or objects from real life, so they could better visualize the context. I did my best to share my knowledge with them, but more important was their eagerness to learn.

After a few months I could see different levels of productivity in each of them, and the time came when I was asked to pick the best two to officially join the team. It was a hard choice, because I would have picked three of them; they had proven good results and we had also built a friendly relationship. I spent almost 2 years mentoring them, and I'm grateful that they were patient with me too; we helped each other grow.

Another experience, lasting about a year, was with a junior colleague in whom I saw real potential. He absorbed knowledge like a sponge; I could see him improving week by week. He was eager to listen and learn from me, and it was a pleasure working with him. One principle I wanted him to internalize was to first define what needs to be done, then how it should be implemented; another good practice I taught him was to group related operations into separate methods, so the code stays cleaner and better structured. During our collaboration he was promoted to mid-level, and after he moved to another project he reached senior level. Beyond a productive professional collaboration, we remained good friends, and when we occasionally meet he reminds me that my advice from that time was valuable and that over time he could see the benefits of the practices I had taught him. I enjoy hearing such kind words from him.

I also had other opportunities to work with juniors, and I realized that everyone is different. Some asked questions and told me when they didn't understand what I had explained; others just pretended to understand; and others hesitated to ask for clarification. In such a relationship it is paramount to be honest and say openly when something is unclear. Teaching is, in my opinion, two-way communication between equals; the teacher shouldn't make the learner feel inferior just because of the difference in knowledge.

Agile methodology

Writing code is just one part of the entire product development life cycle, which includes planning, defining requirements, design, development, testing, deployment, and maintenance. Apart from the development phase, I was curious about the early stages. Developers are more productive when they know clearly what needs to be done and when their focus remains in the same or related areas. The testing team also has to have a clear understanding of how the product should work. This means the first two phases of the SDLC need careful analysis. As I mentioned at the beginning of my story, there were moments when I worked on something without a clear picture of what needed to be done in order to reach a final working product more quickly.

In the early stages of my professional career I had to personally create tasks based on design prototypes. Those tasks were effectively statements about what needed to be done from the developer's perspective, like: display the links in the application header; show a dialog popup when the Cancel button is clicked; create the login endpoint; and so on. Of course, each task could be broken into smaller ones. This approach worked until the testing team came into play. They wanted to know how a feature works, not how it is implemented, and such details weren't available in the task definitions. So I had to update my way of working by also describing the order of certain operations and the expected output.

Later on, I familiarized myself with Agile methodologies and some core concepts, like functional requirements, user stories, acceptance criteria, and use cases. Functional requirements provide a high-level overview of a product feature. User stories, on the other hand, describe how a user interacts with the application. Acceptance criteria define the conditions that must be met for a feature to be considered complete. A use case details a specific workflow for achieving a desired result. I found these concepts very useful and practical: I could visualize more clearly what needed to be done and how the application should behave, and the testers could also do their job more easily. Even today these methods remain very useful and popular.
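As an illustration, here is a hypothetical user story with its acceptance criteria, written in the common Given/When/Then style:

```
User story:
  As a registered user, I want to reset my password
  so that I can regain access to my account.

Acceptance criteria:
  Given I am on the login page
  When I click "Forgot password" and submit my email address
  Then I receive an email containing a reset link
  And the link expires after a limited time
```

A definition like this tells the developer what to build and tells the tester exactly what to verify, without prescribing how it is implemented.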

Most of the projects I worked on followed Agile methodologies and the Scrum framework. Daily meetings, planning, refinement, estimation, retrospectives: all of these activities didn't require writing code, but they were necessary to define and complete the tasks. The teams I was part of started to have a Project Manager, a Product Owner, or a Scrum Master. I learned to analyze the validity of requirements, estimate effort, and clarify details with the client. I was also in the position of writing complete user stories with acceptance criteria based on design prototypes, and I really enjoyed that work. Moreover, I identified interdependencies between user stories. For many years, Jira has been my task-management solution. I make sure the logged issues are well written, to avoid bugs and misunderstandings and to deliver what the client expects.

Client interactions

A job, at its core, means offering a service that meets someone's needs. This implies that we have to make sure we understand what the client wants. That means communication, and the more directly it takes place, the easier it is to deliver what the client expects. In most of the outsourcing projects I worked on, I had the privilege of direct interaction with the clients. When I say "client" I'm referring to individuals from partner companies who have a deep understanding of the business needs and product requirements; this could be a product owner, a lead developer, or a business analyst. While I could implement something that makes technical sense, it has to be valuable for the client too.

In my first projects I had senior colleagues who told me what to do, based on their discussions with the clients. I felt a bit uncomfortable and isolated, and I wanted to be involved in the communication, even though I knew I didn't have enough experience. An opportunity arose when my manager asked me to speak with the director of the company we outsourced for, to clarify some project requirements. On the phone, alone and unprepared, I became anxious, with no one to support me. It was the first time I had a direct conversation at such a high level. My English was not very good at the time, and during the conversation I was mostly quiet. But I also felt a bit empowered to have had that opportunity, and I knew it wouldn't be the last.

The main challenge I've faced when talking with a client is extracting the relevant information about what they actually want. Some clients prefer talking, while others prefer writing. For me it is more efficient and clearer to read, because written information is usually well structured. However, I understand that writing requires effort from the client, which is why some prefer to talk and expect us to do the main work of extracting the essential information. When the client explains something, I try to rephrase it in my own words to check that I understood correctly. This has proven to be a useful technique, and the client also sees my genuine interest, which helps maintain a good professional relationship.

There were moments when things appeared clear at the beginning, but later we realized we didn't know what to do because of missing requirements. In such situations, if we had potential solutions, we presented them to the client and discussed the best approach. In my opinion, this is preferable to simply waiting for the client to propose a solution. However, it is not a universal rule; it depends on the client's expectations. Some clients prefer to keep a lot of control, while others prefer us to be proactive and suggest solutions. Sometimes the client doesn't know exactly what they want either, or they expect us to provide both the requirements and the solutions. From my experience, the key is open communication, with both sides clearly expressing their expectations in order to avoid misunderstandings.

Among the most valuable discussions are demo presentations, where we walk the client through our work. Many times I was responsible for these presentations, and my approach was to start with a summary of the features I wanted to showcase. I then went through a happy-flow scenario, sometimes followed by an unhappy flow, trying to use realistic input data to simulate what a normal user would do. Most of the time the stakeholders are non-technical or less technical, so I adapt my language so they can all understand. They are more interested in seeing their product meet the requirements than in hearing which design patterns or libraries we used.

A Journey of Learning

During my career as a software engineer I have faced challenges that shaped who I am now. Beyond technical expertise, I have also worked on improving my communication skills, and this growth has been incredibly rewarding. I have always been dedicated to creating and delivering valuable software, as if I were making it for myself. Moreover, I've discovered a passion for sharing my knowledge by writing online tutorials that describe how to build solutions starting from realistic use cases. This journey is far from over. The world of software development is full of possibilities, and I believe anyone with dedication and a passion for problem-solving can be part of it.
