From a business perspective, March has been one of the worst months since I started freelancing. Projects falling through, clients not paying, clients ghosting after not paying, proposals rejected: you name it, March had it.
And I couldn't be happier.
That's not sarcasm, and I'm not a masochist. Looking at what happened over the last month, the most obvious takeaway is that analyzing my failures offers a blueprint for how to improve in the future. Better pre-project contracts, more vetting of clients (no more Craigslist gigs), a better-defined personal brand/website, and so on.
But there's a second reason, adjacent to but distinct from my business woes, why March has been a good month for my mind and a bad month for my wallet.
Here's why I'm happy: on the technical side, I feel as if I'm doing the best work I've ever done. My APIs are tested and documented. My initial solutions to the problems I encounter are no longer wildly off the mark -- some even work on the first try. I finally feel as if I'm not running like a gerbil from framework to framework, throwing packages at my problems in hopes of stumbling upon a working codebase. And, for the first time in my career, I'm able to (occasionally) solve problems without spending hours researching and comparing potential solutions.
All of this is miles beyond where I was even a year ago, and certainly beyond where I was when I first started building websites.
You may ask what any of this has to do with writing. Simple: until about four years ago, I studied writing and writing alone.
By the time I graduated college, the only coding I had done was in an introductory computer science course in high school, where I spent much of my time playing Warcraft III. I began my journey into web development when I couldn't figure out how to expand a sidebar on a WordPress site I ran. "There are people right now who are curing cancer," I told myself, "and you can't figure out how to move a sidebar on your website." Before then, my only experience writing code in a public context involved customizing my Myspace profile in middle school.
So my first major point is that you can succeed and thrive in tech without a background in computer science. But you might have guessed that already -- others here and elsewhere have told this story, many with longer odds and less technical background than I had.
However, while it may be provably possible to work in technology without making it the focus of your studies, I haven't seen as many arguments that it is preferable to do so. After all, starting a career in development requires a wide body of domain-specific knowledge. Wouldn't it be easier to start your career with that knowledge already in hand?
I believe this assumption is incorrect, and that skills commonly taught in the humanities transfer to technology careers just as well as skills acquired in computer science. I have three primary reasons for believing so:
- The humanities teach you how to learn -- and tech careers require lots of learning.
- Good code and good writing ultimately look alike: they both strive for accessibility and concision.
- While technology may be written with code, it is ultimately written for humans.
Technology as a Learning Process
In college, I studied two main subjects: history, specifically Latin American history, and writing, specifically screenwriting. Along the way, I took classes in everything from planetary geology to Assyriology to Russian science fiction. These subjects sound entirely disparate, and in many ways they are. Starting a new class was again and again an introduction to a new world. Each had its own lexicon, its own luminaries, its own context that needed learning -- and quickly.
In immersing myself in these new worlds of thought, I had no choice but to discover and refine my personal strategies for learning. Take notes often; write outlines whenever possible; ask questions of those who know more than you; do your homework. All of these skills have directly translated to my career in software development.
At this point in the conversation, I've had others ask: wouldn't all of that learning have been unnecessary if you had studied computer science in college?
This claim does have a certain merit. As I mentioned before, there are numerous aspects of development that confound me to this day. I'm just now beginning to stumble my way through common algorithms and data structures. I reluctantly learned how to provision my own servers, and even now I default to platform-as-a-service solutions (Heroku, Zeit Now, etc.) whenever possible.
However, there are two main reasons why this tradeoff has been acceptable to me. First, from the limited anecdotal experience I've picked up from computer science grads, I've learned that what's taught in the classroom doesn't always align with what's necessary to build software in real-world contexts.
I distinctly remember that one of the requirements for a computer science degree at my school was a course called "Build an Operating System". As illuminating as it might be to build an operating system, I can promise you that no website you build will require its own operating system.
Second, even if your comp sci education prepares you well for a job out of college, technology is a constantly evolving field. Paradigms and frameworks change over time, and new challenges arise, each opening up a range of new solutions. Perhaps most interestingly, the various disciplines and subfields of technology have a fascinating way of bleeding into each other. By imparting a generalized approach to learning, the humanities will give you the ability to keep pace with new tech as it enters the mainstream.
Good Code vs. Good Writing
As I grow in my ability to read and write code, I continually find new ways in which the arts of good code and good prose overlap. Both disciplines value structure, clarity and accessibility first and foremost.
There are many good resources that cover the basics of writing clear code, and I won't go over them in depth here. Instead, I'd like to point to some less-considered aspects of development that closely mirror the writing workflow.
One major part of the dev experience that's often left out of education is understanding and editing code written by someone other than yourself. The vast majority of learning experiences I've encountered offer students an empty page to craft and hone their solutions in an isolated environment. By contrast, when writing code for real-world scenarios, it's incredibly rare to build a solution without using any external code. Whether you're downloading code from a package repository or inheriting a legacy project that dozens of developers have worked on before you, your ability to work with existing code is oftentimes just as important as, if not more important than, your ability to write new code.
Yet, because the skills required to parse and review external code are so infrequently taught, they often go underdeveloped. I've worked with developers many years my senior who have asked me to review codebases for them -- not because they didn't know how to code, but because the context switching required to parse someone else's code took too much time and effort to be worth it.
By contrast, because I had focused on the writing and editing process during my time in school, I was far more comfortable jumping into others' code from an early point in my career. And because good code resembles good writing in many ways, I've been able to make meaningful improvements to open-source code without any formalized development education.
Will I be the one to improve a codebase's performance 10x through my deft knowledge of algorithmic complexity? Likely not. But here's something I wish I had known from the start: performance isn't everything. I spent far too much time during my early days double-checking my code against every performance-squeezing resource I could find, worried that my lack of comp sci knowledge would lead me to write inefficient code and cripple my websites in the process.
This brings me to another parallel between the development and writing worlds: the tradeoff between complexity and accessibility. I've got a lot to say about this particular subject, and it's a topic I've planned to write about in a future column. But for now, here's a simple-ish metaphor to illustrate my point.
I spent a substantial portion of my education in writing workshops. In these workshops, a small group of us would present our responses to the drafts we'd received the week before. Without giving any context or explanation, we'd read our stories and receive constructive feedback from the other writers in the group. The "no context" part is important here -- without a window into the author's mind, we were forced to evaluate each story at face value.
This process was hugely helpful to my writing in several ways. The first was realizing that I had a tendency to depart from the real world as quickly as humanly possible. If the prompt was to describe a metropolitan bus ride, I'd find a way to have the bus make a wrong turn at Albuquerque and steer into an alternate dimension.
The second was noticing just how different that approach was from the methods of my fellow writers. With few exceptions, their stories were much more grounded. Their characters fell asleep and missed exits, mumbled awkward conversations into their phones to avoid eavesdropping passengers, and generally did the sorts of things one might expect people to do on buses.
The third thing I picked up on, however, is that the subject matter of a given story had very little to do with how easy it was to read and understand. While I'd often get comments on how weird my stories were, or guesses as to what sorts of drugs I must have ingested prior to writing them, other writers in my workshops were rarely in doubt as to what happened in the worlds I created. By contrast, those same writers would take greater risks than I did when choosing how to order and present their stories. Sometimes these risks would pay off magnificently, and I'd marvel at the way they could make an ordinary situation feel entirely new; sometimes they'd leave everyone scratching their heads as to what, exactly, the story was trying to convey.
Bringing things back to development, I've noticed contrasts in coding styles that remind me of the divisions I saw in my writing workshops. Some developers value terseness above all else, and will avoid writing two lines of code where the opportunity exists to write one instead. Some developers live and breathe performance, and make sure that every line of code can scale from the very start. And some developers do neither of these things, and write code to solve problems and nothing more.
As much as I'd like to say that I'm in the third camp, the reality is that I draw from all three approaches. But I always strive to keep my code as accessible as my fiction -- and considerably more grounded. Had I not already learned how important it is to write something that others can understand and build upon, that lesson in sustainable development would have been a lot harder to internalize.
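To make that contrast concrete, here's a minimal sketch -- the order-total scenario and every name in it are hypothetical, not taken from any real project -- of the same task written for terseness and then written for the next reader:

```typescript
// An illustrative, hypothetical example -- not from any real codebase.
interface OrderItem {
  name: string;
  unitPrice: number;
  quantity: number;
}

// The terse camp: correct, but the reader has to unpack it in one breath.
const total = (items: OrderItem[]): number =>
  items.reduce((sum, item) => sum + item.unitPrice * item.quantity, 0);

// The accessible camp: the same logic, with the intermediate idea
// (a per-line total) given a name the next reader can hold on to.
function orderTotal(items: OrderItem[]): number {
  let runningTotal = 0;
  for (const item of items) {
    const lineTotal = item.unitPrice * item.quantity;
    runningTotal += lineTotal;
  }
  return runningTotal;
}
```

Both versions do the same work; the difference is how much the next person has to reconstruct in their head -- which is exactly the question a workshop asks of a story read without context.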
Writing Code for Humans
This is, again, a topic I have thought about at considerable length, and one that may or may not be expanded into a column of its own at some point. This column has already run longer than anticipated, so I won't dive too far into extended and ham-fisted metaphors to make my point.
All code is meant for humans. Yes, computers are responsible for interpreting and acting on code, but that computer-interpreted code is ultimately written to advance some human end. The better you can define the relationship between code and the humans who benefit from it, the better positioned you will be to succeed in your career. And defining relationships between disparate topics is at the heart of the humanities.
If you can explain to freelance clients how the code you're writing can benefit them, you are more likely to win contracts than a developer who can code but cannot explain. If you can explain to your fellow developers why a given refactor is in the best interests of the team, you are more likely to have your changes approved and implemented than a developer who can code but cannot explain. If you can explain how the concepts you learn in bootcamps and tutorials apply to real-life scenarios, you are more likely to build audiences and gain employer interest than a developer who can code but cannot explain.
All of the tasks above are made easier if you know how to convert evidence into theses and actionable ideas. And to do that, you need to know how to write.
I haven't even begun to touch on the field of ethics in the technology sector, and how studying the humanities can equip you with a much more refined moral compass than studying the amoral world of computers. So please, if you're from a background similar to mine and you're wondering whether there's a place for you in technology, put those thoughts to rest for good. There isn't just a place for you -- there's a need for you.