Let's look into some of the major contributors to technical risk that you should look out for in your day-to-day role.
Effective engineering is about shipping software quickly while preserving your ability to make additional changes quickly in the future. The goal is to move fast without putting yourself in a situation you'll later regret. In essence, we need to build software that meets the current requirements for our customers but leaves enough flexibility to easily extend the code to handle additional requirements in the future.
Seems easy, right?
The longer you work as a professional programmer, the more you will come to realize that good code approximates the complexity of the problem at hand. Good code is not needlessly complex, but not overly simple either. The best engineers are able to design and build solutions that match the complexity of the problems they're solving.
However, a lot of software engineers early in their career don't have enough experience to know how to match their solutions to the complexity of the problem, so they end up either underengineering or overengineering their solutions. There's no simple answer as to how to avoid these situations, unfortunately, but just being aware of each one is a step in the right direction. Over time you'll naturally gain an understanding of when a solution is being under- or overengineered. In the meantime, let's look at each one a little deeper so you can better identify each situation.
When a developer underengineers a solution, they are not doing enough forward thinking during the design. Although they may be focused on solving the immediate problem at hand, they may be losing sight of a better long-term solution. This tends to be a common trait among developers who are just learning how to write code, because most of their energy is spent on getting the program to work. Once they arrive at a working solution, they move on to the next task. That can cause problems in the future.
Just because a piece of code works and compiles without errors doesn't mean it's ready to ship. There may be better ways to solve a problem that allow for more functionality in the future. While the original solution solves the problem right now, the code may need to be significantly refactored when it needs to handle additional use cases in the future.
Underengineered solutions often violate the Don't Repeat Yourself (DRY) principle, a common guideline software engineers follow when structuring their code so they are not repeating the same logic in different parts of the codebase. The principle encourages programmers to structure their programs so that logic is written once and reused in multiple places throughout the codebase.
When you follow the DRY principle, you can often add functionality with little effort, because you only need to change one part of the codebase when updating logic. Conversely, when updating logic that is repeated throughout the codebase, you risk missing a block of code. This increases the possibility of introducing bugs during refactoring and may lower the quality of the codebase over time.
A common rule of thumb: if you notice yourself copying and pasting blocks of code throughout your codebase, that could be a sign that you need to consolidate the logic so you're not repeating it. It's a simple technique that goes a long way toward reducing the risk involved in making future changes.
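To make this concrete, here's a minimal sketch of consolidating duplicated logic. The function names and the 10 percent discount rule are hypothetical:

```python
# Before: the same discount logic is copied into two places. If the
# discount rate ever changes, both blocks must be updated in sync.
def checkout_total(prices: list[float]) -> float:
    subtotal = sum(prices)
    return subtotal - subtotal * 0.10  # 10% discount

def invoice_total(prices: list[float]) -> float:
    subtotal = sum(prices)
    return subtotal - subtotal * 0.10  # same logic, easy to miss later

# After: the shared logic lives in one place, so a rate change touches
# exactly one line.
def apply_discount(subtotal: float, rate: float = 0.10) -> float:
    return subtotal - subtotal * rate

def checkout_total_dry(prices: list[float]) -> float:
    return apply_discount(sum(prices))

def invoice_total_dry(prices: list[float]) -> float:
    return apply_discount(sum(prices))
```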
Underengineered solutions also sometimes contradict the Single Responsibility principle, which states that modules, classes, and functions should have only one responsibility over a program's functionality. If you find yourself writing a class or a method that's doing multiple things, such as calculating values, transforming data, and storing it in a data store, then you may want to rethink how your solution should be designed.
Underengineered solutions tend to try to do everything in a single class or function, when they really should be broken up into multiple pieces that each handle a separate task. Solutions that contradict the Single Responsibility principle tend to be difficult to extend and often need to be refactored when new functionality needs to be added. Just like the DRY principle, following the Single Responsibility principle is a simple technique that will reduce the risk of needing to rework the code in the future.
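As an illustration, here's a contrived sketch of a class that calculates, formats, and persists all at once, followed by one way to split those responsibilities apart. All names are hypothetical, and the db object is assumed to expose a save method:

```python
# Before: one class with three reasons to change.
class ReportJob:
    def __init__(self, db):
        self.db = db

    def run(self, orders):
        total = sum(order["amount"] for order in orders)  # calculation
        line = f"daily_total={total:.2f}"                 # formatting
        self.db.save("reports", line)                     # persistence

# After: each piece has a single responsibility and can change
# (and be tested) independently.
def daily_total(orders) -> float:
    return sum(order["amount"] for order in orders)

def format_report(total: float) -> str:
    return f"daily_total={total:.2f}"

class ReportWriter:
    def __init__(self, db):
        self.db = db

    def write(self, line: str) -> None:
        self.db.save("reports", line)
```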
On the other end of the spectrum, overengineering is the act of designing an overly complex solution to a problem when a simpler solution would do the job with the same efficiency. Software engineers often fall into this trap by adding unnecessary complexity to a system just in case it's needed in the future. In essence, it's the act of solving one problem while optimizing for other requirements that don't, and may never, exist. When developers overengineer solutions, they're often thinking about theoretical scenarios that could come up in the future but are never guaranteed to happen, which leads to extra time and energy spent writing, testing, and debugging code that isn't required.
When you end up with code and logic in your system that is overengineered, it increases the difficulty of reading, understanding, and modifying the code for your teammates. Developers will need to work around the complexity in order to add enhancements or fix bugs.
Plus, overengineering a solution directly contradicts the Keep It Simple, Stupid (KISS) principle, which argues that most systems will work best if they are kept simple rather than made complicated. If you strive to write code a junior engineer will be able to understand and modify, you're probably in good shape. If you add unnecessary abstractions or try to be clever with your solutions, you're probably not thinking about the risk of later developers modifying your code without fully understanding what it's doing.
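As a contrived sketch (the greeting example is hypothetical), compare an abstraction-heavy version of a task with the simple one the KISS principle would suggest:

```python
from abc import ABC, abstractmethod

# Overengineered: a strategy hierarchy and a factory for behavior
# that never actually varies.
class GreetingStrategy(ABC):
    @abstractmethod
    def greet(self, name: str) -> str: ...

class EnglishGreeting(GreetingStrategy):
    def greet(self, name: str) -> str:
        return f"Hello, {name}!"

class GreeterFactory:
    def create(self) -> GreetingStrategy:
        return EnglishGreeting()

# KISS: the same behavior, obvious at a glance.
def greet(name: str) -> str:
    return f"Hello, {name}!"
```

If a second greeting ever becomes a real requirement, the simple function can be extended then; until that day, the extra layers are pure carrying cost.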
The production lifetime of the code you write will likely be years, and you and other developers will eventually need to revisit that code and modify it to add new functionality. Code that is less complex will always be easier for future developers to understand and refactor than code that is more complex.
From a risk perspective, overengineering a solution may hinder your team's ability to move quickly in a different direction in the future. Complexity often adds rigidity to code, because complex code is harder to refactor or modify when business priorities change. Your goal should be to write clean and concise code, but not code so clever that it constrains your ability to move and adapt in the future.
Strive for the Goldilocks Principle: just the right amount of engineering and nothing more. Unfortunately, that judgment comes with experience, and it's easier said than done.
KISS Principle (wikipedia.org)
Under-engineering, over-engineering, right-engineering (blog.startifact.com)
Stop Overthinking Your Complex Solutions and Start Building Simple Ones (betterprogramming.pub)
Overengineering: Why We Do It and 10 Ways to Tackle It (betterprogramming.pub)
As software developers, our job is never done. There is always more work to do on the codebase, whether that's adding new features, cleaning up technical debt, improving performance, or maintaining a legacy system. At some point in your career, you'll be faced with the decision to continue adding to an existing codebase or to rewrite the system from scratch in a new project.
Both paths involve significant risks that you should understand before making any major decisions. When deciding whether to refactor a legacy codebase versus rewriting it from scratch, you should take a number of factors into account, such as the type of application you're dealing with, your team's capabilities, the available resources, future hiring plans, and your organization's general appetite for risk.
Fortunately (or unfortunately), the decision is most likely not yours to make. The most senior engineers on your team will probably make it along with your manager, because they have the most experience and will understand the implications better than you will.
That shouldn't stop you from contributing to discussions and lending your opinion, however, so let's look at some of the risks involved in both paths.
If you choose to refactor a legacy system, you will be making incremental changes to the codebase to clean it up over time in order to get it to a more manageable state. The goal is to improve the internal structure of the code without altering the external behavior of the system.
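For example, here's a small behavior-preserving refactor in that spirit. The record format and names are hypothetical; note that callers of load_users see no difference:

```python
# Before: parsing and accumulation tangled together in one function.
def load_users(raw_lines):
    users = []
    for line in raw_lines:
        name, age = line.strip().split(",")
        users.append({"name": name.strip(), "age": int(age)})
    return users

# After: identical external behavior, but the parsing logic now has a
# name, can be unit tested on its own, and can be reused elsewhere.
def parse_user(line: str) -> dict:
    name, age = line.strip().split(",")
    return {"name": name.strip(), "age": int(age)}

def load_users(raw_lines):
    return [parse_user(line) for line in raw_lines]
```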
Pros
Doesn't divert resources away from legacy systems.
Improvements can be isolated to specific parts of the codebase in order to limit the risk of introducing breaking changes.
Always an option; you can refactor as much or as little as you want, as resources allow.
Any codebase or architecture can be refactored incrementally.
Cons
Limits you to working within the constraints of the legacy system.
While refactoring improves the code, it sometimes cannot fix underlying architectural issues.
Often difficult and complex to untangle the web of legacy code.
May require writing new automated tests prior to being able to refactor the business logic.
Refactoring maintains the status quo, so it's difficult to introduce new features or functionality.
Requires discipline to manage the complexity. The application will be in a transitional state as individual parts of the codebase are refactored.
The big rewrite happens when you start from scratch with a new codebase. It may sound enticing and straightforward, but the amount of work is almost always underestimated. This is often done concurrently with changing to a new platform, such as moving from on-premises servers to the cloud or moving to a new chip architecture as hardware is upgraded.
Pros
Enables foundational changes to a part of the system, often introducing new capabilities thanks to new technologies or design decisions.
Eliminates the need to retrofit old code to meet new use cases because you can build for them without any technical debt.
Engineers are able to set new coding standards with a clean codebase.
Cons
Always takes longer than anticipated, eating up resources for other projects and increasing the possibility that management will abandon the project.
Not guaranteed to solve all problems that plagued the legacy system. Sometimes those are due to systemic or cultural processes rather than the technology or codebase.
Complex migration periods as you phase out the legacy system.
Duplicates work during the transition period: one team builds the new system while another continues to maintain the legacy system.
Requirements for the new system are a moving target as the legacy system still needs to be maintained and upgraded. New functionality may need to be implemented in both codebases.
Every codebase is unique, and every business has different competing priorities, so the decision to refactor or rewrite an application is not a one-size-fits-all problem. You and your team will need to weigh the pros and cons and determine the risks involved in either choice before making a decision.
In the previous section, we discussed the importance of adding or improving processes, and how they add value to an organization. Processes give you guardrails that enable consistency and allow teams and organizations to scale and pass down business knowledge.
But not all processes are created equal, and sometimes processes can feel like they're getting in your way. A lot of developers don't want to deal with the "red tape" that processes add to the software development lifecycle, and most would rather just write more code instead of getting slowed down by seemingly unnecessary processes. Eventually, a developer may cut corners and break protocol.
Here are a few examples where developers sometimes bypass processes:
They may merge code into the main branch without a proper code review because they don't want to wait for feedback, leading to a bug that could have been easily caught.
They may elect not to use proper naming conventions because they don't want to take the time to search the docs for the correct way to name an environment variable, leading to a broken deployment because the code expected the variable to follow a certain naming convention.
They may do some work without creating a proper ticket in the bug tracking system, leading to changes that are hard to audit and track down.
They may commit an inefficient SQL query without running an EXPLAIN on it because they think it's a harmless query, leading to a slowdown in database performance (a quick version of this check is sketched after the list).
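To sketch that last example: most databases let you inspect a query plan before shipping the query. Here's roughly what the check might look like using SQLite's EXPLAIN QUERY PLAN from Python; the orders table and the query are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)

query = "SELECT total FROM orders WHERE customer_id = ?"

# Without an index on customer_id, the plan reports a full table scan.
for row in conn.execute(f"EXPLAIN QUERY PLAN {query}", (42,)):
    print(row)  # e.g. (..., 'SCAN orders'), a red flag on a large table

# After adding an index, the same query uses an index search instead.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
for row in conn.execute(f"EXPLAIN QUERY PLAN {query}", (42,)):
    print(row)  # e.g. (..., 'SEARCH orders USING INDEX idx_orders_customer ...')
```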
Yes, some processes can be frustrating, and it may feel like they're just slowing you down unnecessarily, but processes exist for a reason. When you bypass processes, whether it's on purpose or by mistake, you're actually introducing more risk that something in the system may fail.
Next time you find yourself frustrated and wondering why you have to follow a process, ask yourself why the process is there in the first place. What could go wrong if it wasn't followed? Hopefully, that'll help you understand and appreciate a little extra red tape here and there if it means saving you from a catastrophic mistake.
Almost every codebase leverages third-party libraries and external dependencies to provide some part of its functionality. Why reinvent the wheel and build a library from scratch when you can use an open-source package that solves the problem better than you ever could? Add the fact that thousands of other developers use the library and consistently file bugs and contribute fixes to it so it improves over time, and it sounds like a no-brainer, right?
Most of the time, utilizing third-party libraries saves you time and money because you won't need to implement and maintain a solution yourself. But be careful, because there is a hidden cost to any third-party library you pull into your codebase. Every time you add a new dependency, you're introducing new areas of risk, because your system now relies on someone else's code in order to function properly.
Sure, you might be able to view the source code and gain confidence that the software does what it claims it does, but that's not the only kind of dependency risk you should be worried about.
Here are some other examples of dependency risks:
Security risks. The third-party code that you add to your system may introduce new attack vectors that you are unaware of. Hackers often exploit known vulnerabilities in widely used libraries.
Upgrade risks. Third-party code changes over time as its maintainers add new features and apply bug fixes. They may introduce breaking changes that in turn cause your own code to break after upgrading to a new version, forcing you to drop everything to fix new bugs that were introduced into your system.
Dependency graph risks. You may be able to read the source code of your third-party dependencies, but those libraries rely on their own dependencies, which in turn rely on others, and so on. This creates a brittle dependency graph that can easily break your codebase. In some cases, it may be hard to remove or upgrade a dependency with known bugs, because the library in question is itself a dependency of another library you installed, so you're at the mercy of your dependencies to fix the issues for you. (A small way to inspect this graph is sketched after this list.)
Supply chain risks. Supply chain attacks, which are becoming more common in the software industry, occur when someone uses a third-party software vendor to gain access to your system. When you install third-party libraries into your codebase, you are granting that code access to your system. If an attacker is able to compromise a third-party library installed on your system, they'll be able to access your data and possibly your infrastructure. Hackers sometimes target little-known but critical libraries deep in the dependency graph, making supply chain attacks difficult to prevent and mitigate.
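As a concrete illustration of the dependency graph risk above, here's a short sketch that lists every package installed in a Python environment along with the packages it declares as dependencies, using only the standard library (the output depends entirely on your environment):

```python
from importlib.metadata import distributions

# Walk every installed distribution and print its declared dependencies.
# Each dependency may pull in its own, forming the graph described
# above, including packages you never chose directly.
for dist in sorted(distributions(), key=lambda d: (d.metadata["Name"] or "")):
    name = dist.metadata["Name"]
    requires = dist.requires or []
    # Drop optional "extras" and keep just the requirement names/versions.
    direct = [req.split(";")[0].strip() for req in requires if "extra ==" not in req]
    if direct:
        print(f"{name} -> {', '.join(direct)}")
```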
Hopefully that gives you a good understanding of how introducing third-party libraries into your codebase also introduces added risk. Next time you're searching for a third-party library, ask yourself if it's really needed. If the code is open source and relatively small, it may be better to study how it works and build your own similar solution. This is not always feasible, however, since some third-party libraries can contain tens of thousands of lines of code.
Supply Chain Attack (wikipedia.org)
Supply Chain Attacks: Examples and Countermeasures (fortinet.com)
People often compare the ability to program a computer to having superhuman powers. Sure, it may seem like that at times when you see programs do things that are seemingly impossible or futuristic, but programmers are only human. There's a limit to how much the human brain can comprehend at any given time, and we often find that limit when learning a new codebase or managing a large project at work.
Some projects are so complex that they cannot be built, or even fully understood, by a single individual. To complete these projects, a team of developers needs to work together to build individual components that fit together to form a complete system.