Working with legacy systems
Every day, an overwhelming amount of material is written about creating projects, designing architecture, and developing applications from scratch. People obsessed with rapid launches eagerly share their experiences on LinkedIn, at seminars, press conferences, and in podcasts—all while sipping coffee in their oversized hoodies.
However, few people speak with the same enthusiasm about the other side of software development: working with legacy systems or already deployed production projects.
In this article, I’d like to provide an example of restarting development on existing software and outline a general action plan. This won’t be a universal how-to guide, but it should give an idea of what to expect in such situations. I will also focus on analyzing the worst possible scenario and defining it. Hopefully, this will help those still debating whether it's worth spending money to reassemble a team of strange people to dig into something that was originally built by another group of strange people.
Identifying a worst-case scenario is quite simple
1. By Product Goals and Functionality:
  1. The product is more complex than a personal blog and was developed in less than a quarter.

Sometimes the client’s enthusiasm stems from the fact that a friend built a clickable version in just two months. If the product is more than a cooking blog, it’s important to maintain the client’s enthusiasm until the real issues come up for discussion.


2. The functionality the client needed was not planned initially but was added on the fly.

Example: The project was originally developed as a trading platform, but later the marketing team and product owner decided it was missing a social-media element. They decided that sellers and buyers should have personal pages where they could post content and comment on each other’s posts. All of this had to be dynamic, and the sorting algorithms had to handle this new type of content. The example is somewhat exaggerated, since such a change is a red flag in itself, but the point stands: if the client started the project with one goal and somewhere in the middle decided something critically important was missing and bolted on an improvement only loosely related to the main functionality, it will add unplanned load during development and maintenance.


3. Unrelated processes and requests affect each other’s results.

Example: If filling out a form and pressing a button in one window unexpectedly changes the content in another window—despite these actions seemingly being unrelated—it could indicate that the database was either poorly designed or that data updates were implemented with minimal understanding of the underlying processes.
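A minimal, hypothetical sketch of how this kind of coupling usually arises: two features that look unrelated share a piece of mutable state, so an action in one "window" silently changes what the other one renders. All names here are illustrative.

```python
# Hypothetical sketch: two "unrelated" features coupled through shared
# mutable state -- a common cause of one action changing another view's data.

shared_cache = {"listings": ["apple", "banana"]}  # implicit shared state

def submit_profile_form(cache, name):
    """Feature A: save a profile. As a side effect it mutates the shared cache."""
    cache["profile"] = name
    cache["listings"].clear()  # accidental coupling: wipes feature B's data

def render_listings(cache):
    """Feature B: render the listings page. Silently depends on feature A."""
    return list(cache["listings"])

before = render_listings(shared_cache)      # ['apple', 'banana']
submit_profile_form(shared_cache, "alice")
after = render_listings(shared_cache)       # [] -- the form "unexpectedly" changed it
```

The fix is usually not in the UI at all: give each feature its own clearly owned data, and make any cross-feature updates explicit.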

2. By the Quality of Interaction:
3. By Data Security and Access:
  1. There is no clear understanding of what permissions users of the service should have. Initially, the project had several levels of access planned, but up to this point, neither the client nor the developers have provided clear guidelines on how access to functionality and data should be regulated in the code.
  2. Some personal data unexpectedly becomes accessible without authentication.
  3. The client regularly or irregularly loses user data: either there is no backup system, or the backup delivery process malfunctions.
4. By Infrastructure:
  1. The client doesn’t know where or how their server is hosted, but they are paying for it (or, even worse, they aren’t paying).
  2. The client doesn’t own the repository for their project’s code.
  3. There is no continuous delivery system for pushing changes to the production environment (or there is one, but see section 5).
5. By Communication Factors:
  1. The client has an unwritten source of "sacred knowledge." In the development world, "sacred knowledge" refers to the set of non-obvious insights about a project. This information should be documented somewhere, ideally by both the development team and the client; a technical specification alone is not sufficient for this purpose.
  2. Every new request from the client took the previous team increasingly longer to implement. This is best described by the concept of software entropy: the longer a project exists and evolves, the more complex it becomes and the greater the chaos in its state. Entropy can’t be measured directly, but you can track its growth. It is inevitable, but with a structured, organized approach, streamlined workflows, proper documentation, supporting systems, and stable infrastructure, it will grow slowly.
  3. The previous team didn’t write tests. Some clients believe writing tests is unnecessary and only increases the budget and development time. However, whether they realize it or not, tests are one of the best ways to control entropy, reduce costs, and decrease debugging time.
  4. The project lacks an error collector. Logs without a processing tool do not count as an error collector. The best way to sum up points 3 and 4 is as follows:

No tests + no error collector = everything is fixed by digging through logs = lots of time = many bugs = many development hours = high costs
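To make the idea concrete, here is a toy sketch of what an error collector does, as opposed to raw logs: it aggregates exceptions by type and location so the team sees counts and hot spots instead of grepping log files. Real projects would use a dedicated service (a Sentry-like tool); this class and its usage are purely illustrative.

```python
import logging
import traceback
from collections import Counter

class ErrorCollector:
    """Toy error collector: aggregates exceptions by (type, location)."""

    def __init__(self):
        self.counts = Counter()

    def capture(self, exc: BaseException) -> None:
        tb = traceback.extract_tb(exc.__traceback__)
        where = f"{tb[-1].filename}:{tb[-1].lineno}" if tb else "unknown"
        self.counts[(type(exc).__name__, where)] += 1
        logging.error("captured %s at %s", type(exc).__name__, where)

collector = ErrorCollector()
for raw in ("1", "x", "2.5"):       # "x" and "2.5" are not valid integers
    try:
        int(raw)
    except ValueError as exc:
        collector.capture(exc)
# collector.counts now groups two ValueErrors by where they were raised
```

The payoff is exactly the chain above in reverse: errors are grouped and ranked instead of being rediscovered one log line at a time.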

Despite all of this, the client is convinced not only that the service should work better, but that its capabilities should be expanded, third-party solutions integrated, and so on.
What to do?
If a checkmark can be placed next to almost every item in the list above, the situation is best described as an MVP that suddenly transitioned into commercial use. In an ideal world, this never happens.

What Should the Client Prepare For:

1. First, critical issues, then add-ons. The most pressing issues that prevent the product from being used (to some acceptable degree) will be addressed first.
2. Developers won’t be making "live" fixes. Any major fixes or feature additions will go through several stages:

  • Testing with automated tools (e.g., unit and integration testing).
  • Developer environment testing.
  • Testing on the client’s test environment.
  • Deployment to the user environment.
This means the client will effectively have three working versions of the software, each intended for a different purpose. For critical issues, developers will first make changes in the client’s test environment and then roll them out to the user environment.
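The staged promotion described above can be sketched as a tiny pipeline: a change moves forward only while every earlier stage passes. The stage names and the stand-in check functions below are illustrative, not a real CI configuration.

```python
# Hypothetical sketch of staged promotion: run (name, check) pairs in order
# and stop at the first failing stage, reporting how far the change got.

def run_pipeline(stages):
    """Return (stages passed, name of the stage that blocked promotion)."""
    promoted = []
    for name, check in stages:
        if not check():
            return promoted, name
        promoted.append(name)
    return promoted, None

stages = [
    ("unit+integration tests",  lambda: True),
    ("developer environment",   lambda: True),
    ("client test environment", lambda: False),  # pretend this stage fails
    ("user environment",        lambda: True),
]
done, failed_at = run_pipeline(stages)
# the change never reaches the user environment
```

The point of the structure is that a failure in the client’s test environment stops the rollout before users ever see the change.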
3. Improving old functionality = creating an analogous new one.
If developers only fix existing features, they will likely end up slowly rewriting everything anyway, while users keep using the current version, and eventually this piecemeal rewrite will make the functionality work even worse. This can be compared to fully replacing the plumbing and laying new tile in a bathroom while the owner still uses it. In real life, we would ask the owner to stay away until the work is finished, but with software we can’t afford that luxury. Instead, we can do something else: “build the new plumbing” on the side, in parts, and gradually connect the new systems to the old bathroom, re-laying the tile only over the systems that have been updated. This way, developers can create new functionality, identical to or improving on the old, test it at every stage, and gradually deploy it to the client environment, where it is integrated as progress is made.
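A common way to implement this "new plumbing on the side" approach (often called the strangler-fig pattern) is a feature flag that routes each call to the new implementation only once it has been verified, keeping the old code path as the default. The functions and the flag name below are made-up examples.

```python
# Hedged sketch: old and new implementations coexist; a flag decides which
# one serves traffic, so the switch is gradual and reversible.

def legacy_price(total: float) -> float:
    return total * 1.2             # old behavior, kept untouched

def new_price(total: float) -> float:
    return round(total * 1.2, 2)   # rewritten, tested replacement

FLAGS = {"new_pricing": False}     # flipped per feature as rewrites are verified

def price(total: float) -> float:
    impl = new_price if FLAGS["new_pricing"] else legacy_price
    return impl(total)

old = price(9.999)                 # served by the legacy path
FLAGS["new_pricing"] = True
new = price(9.999)                 # served by the new path after the flip
```

If the new path misbehaves in production, flipping the flag back restores the old behavior without a redeploy.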
4. A lot of time for preparation.
Essentially, a project with many of the problems outlined above is a pig in a poke: no one knows exactly how it works, how it’s structured, or what to expect from it. Therefore, the new development team will spend the first couple of months exploring the project, understanding its logic and existing issues, performing basic refactoring on components that are not slated for radical change, and so on.
What the Development Team Should Prepare For:
1. The Basics at the Start:
  • Set up the testing stage.
  • Integrate debugging tools.
  • Begin documenting.
2. "If it works, don’t touch it."
Changing something, even small modifications, can be risky. Therefore, the team should prepare for the possibility of needing to rewrite everything, sometimes copying and borrowing from old functions and flows, while adhering to the rule of separating environments and testing components.
3. Quickly get familiar with the database and its structure.
If changes to the database are necessary, make sure to create as many backups as possible at every step.
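As an illustration of "back up before every schema change," here is a minimal sketch using SQLite's built-in backup API. A real project would snapshot its actual DBMS (pg_dump, mysqldump, and so on); the table, data, and in-memory databases here are purely illustrative.

```python
# Sketch: take a full copy of the database BEFORE running a risky migration,
# so the pre-migration state can always be restored.
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
src.execute("INSERT INTO users (name) VALUES ('alice')")
src.commit()

backup = sqlite3.connect(":memory:")  # in practice, a file like a dated .db dump
src.backup(backup)                    # full copy before touching the schema

src.execute("ALTER TABLE users ADD COLUMN email TEXT")  # the risky migration

rows = backup.execute("SELECT name FROM users").fetchall()
# the backup still holds the pre-migration data, untouched by the ALTER
```

The habit matters more than the tool: every migration step should have a restorable snapshot taken immediately before it runs.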
4. Find ways to segment logic.
If the project is built on Django, keep in mind that the framework follows its own MTV (Model-Template-View) take on MVC, so stick to that structure: keep models, views, and templates cleanly separated.
5. Write tests, write tests, and write tests.
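Even a handful of tests like the sketch below pins down current behavior before anyone refactors it. The discount function is a made-up example; the pattern (one test per documented behavior, including the error case) is the point.

```python
# Minimal unittest sketch: lock in today's behavior so regressions surface
# immediately instead of in production logs.
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after a percentage discount, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_keeps_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 120)
```

Run with `python -m unittest` (or the project's test runner); wiring this into the delivery pipeline is what turns tests into an entropy brake.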
6. Atomicity and Automation:
  • The process of deploying and integrating changes should be as automated as possible.
  • The testing process should be automated.
  • The process of starting auxiliary services should be automated.
  • Actions should be atomic.
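Atomicity in this context means a change either fully lands or doesn't land at all. A small sketch, assuming a file-based artifact: write the new version to a temporary file, then swap it into place with `os.replace`, which is an atomic rename, so readers see either the old version or the new one, never a half-written file. The file names are illustrative.

```python
# Sketch of an atomic change: stage the new content in a temp file,
# then swap it into place in a single atomic step.
import os
import tempfile

def atomic_write(path: str, data: str) -> None:
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # make sure the bytes hit disk first
        os.replace(tmp, path)      # atomic rename: old or new, never a mix
    except BaseException:
        os.unlink(tmp)             # never leave a half-staged file behind
        raise

target = os.path.join(tempfile.mkdtemp(), "release.txt")
atomic_write(target, "v1")         # old release
atomic_write(target, "v2")         # upgrade happens in one indivisible step
```

The same principle scales up: database migrations in transactions, deployments that flip a symlink or a load-balancer target, never in-place edits of a live artifact.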