Backward and Forward compatibility in software development

Key Takeaways:

  • Changes are inevitable in software development
  • Backward and forward compatibility should be considered carefully when upgrading services
  • Use the proper upgrade strategy for each kind of change to keep services available, stable, and reliable

Evolvability is an essential characteristic of software. There are many reasons a system must evolve:

  • The market requests new features
  • New architecture to handle performance issues caused by an enormous number of users/transactions
  • Bug fixes
  • ...

Beyond designing and implementing new features, software developers and operators face the challenge of maintaining the existing code base and data. This becomes complex with modern software: systems are distributed and have complicated communication flows. The upgrade process should guarantee that:

  • a change in one service impacts other services as little as possible
  • deployment can happen without downtime, keeping the user experience smooth

There are two concepts we should consider before making any change to a system:

Technically, backward compatibility is the ability of new "client" implementations to work with old data from the "services", while forward compatibility allows old "client" implementations to keep working with new changes from the "services". The term "client" refers to consumers that fetch and read data; "services" are applications that provide data through APIs (web services), a network protocol (databases), etc.

Let's discuss some use cases where we have to deal with these two concepts in software development.

Use case 1: Modify fields in a database schema

In this use case, let's take a look at three scenarios:

New code reads old data

  • Old data is still there, written under the old schema. We can run a migration script at the database level to update old data to the new schema. But this action is usually expensive (there may be terabytes of data with millions of rows) and risky, because the data is in use on production, so it requires careful handling.
  • The cheaper and safer approach is at the application level, where developers can control how the code handles new data while still supporting old data (backward compatibility). For example:
    • If the new schema adds new fields, the code can supply a default value when it reaches rows of data that don't have them. We can also implement this fallback at the database level by configuring the new schema to set default values for the new fields.
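A minimal sketch of this application-level fallback (the `status` field and its default value are hypothetical examples, not from the article):

```python
# Backward-compatible reads at the application level: old rows were
# written before the "status" field existed, so the reader fills it in.
NEW_FIELD_DEFAULTS = {"status": "active"}

def read_user(row: dict) -> dict:
    """Return a user record, filling in fields missing from old rows."""
    user = dict(row)  # don't mutate the stored row
    for field, default in NEW_FIELD_DEFAULTS.items():
        user.setdefault(field, default)
    return user

old_row = {"id": 1, "name": "Alice"}                      # old schema
new_row = {"id": 2, "name": "Bob", "status": "inactive"}  # new schema

assert read_user(old_row)["status"] == "active"     # fallback applied
assert read_user(new_row)["status"] == "inactive"   # stored value kept
```

The same effect can be obtained in the database itself (for example, a column default), but the application-level version works without touching existing rows.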

New code reads old data and writes based on the new schema

Let's take a look at an example: the old User schema has a name field, and one day the business requests separate firstName and lastName fields. The new implementation should be able to read name from old data when responding to clients; otherwise, old users won't have a display name. The write operation should write to the new fields but no longer update name.
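This read/write behavior can be sketched as follows (a plain dict stands in for a database row; only `name`, `firstName`, and `lastName` come from the example above):

```python
# New code: read the legacy "name" field when the split fields are
# absent, and write only the new fields, leaving "name" untouched.

def display_name(user: dict) -> str:
    """Prefer the new fields, fall back to the legacy "name"."""
    if "firstName" in user or "lastName" in user:
        return f'{user.get("firstName", "")} {user.get("lastName", "")}'.strip()
    return user.get("name", "")

def update_name(user: dict, first: str, last: str) -> dict:
    """Write only the new fields; do not update legacy "name"."""
    updated = dict(user)
    updated["firstName"] = first
    updated["lastName"] = last
    return updated

old_user = {"id": 7, "name": "Ada Lovelace"}
assert display_name(old_user) == "Ada Lovelace"           # read old data
new_user = update_name(old_user, "Ada", "Lovelace")       # write new fields
assert display_name(new_user) == "Ada Lovelace"
assert new_user["name"] == "Ada Lovelace"                 # legacy field untouched
```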

Old code reads and writes data to the new schema

When does this scenario happen? New code is implemented and will be deployed to production, so "where is the old code?"

In the world of competition between products and organizations, zero-downtime deployments are a key requirement for any deployment process. The idea is to avoid interrupting the user experience while upgrading system services. A popular mechanism to achieve zero downtime is the rolling upgrade.

In a rolling upgrade, some instances (or nodes) of the service keep running the old version and continue serving clients, while the new version is deployed to a few instances and starts serving requests. If everything is fine on the upgraded nodes, the remaining instances are upgraded step by step until all of them run the new version.
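A minimal sketch of this batch-by-batch process, assuming a hypothetical node structure and health check (real orchestrators like Kubernetes implement this for you):

```python
# Rolling upgrade sketch: move nodes to the new version one batch at a
# time, and stop the rollout if a batch fails its health check.

def rolling_upgrade(nodes, new_version, batch_size=1, healthy=lambda n: True):
    """Upgrade nodes in batches; abort on the first unhealthy batch."""
    for i in range(0, len(nodes), batch_size):
        batch = nodes[i:i + batch_size]
        for node in batch:
            node["version"] = new_version
        if not all(healthy(node) for node in batch):
            return False  # stop the rollout; remaining old nodes keep serving
    return True

cluster = [{"id": i, "version": "1.0"} for i in range(4)]
assert rolling_upgrade(cluster, "2.0", batch_size=2)
assert all(node["version"] == "2.0" for node in cluster)
```

If the health check fails partway through, old and new versions coexist in the cluster, which is exactly why the compatibility questions below matter.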

This is a great technique for keeping a service available. However, it introduces a problem: some instances may still run the old version of the service while reading from and writing to a database that already uses the new schema. These operations can fail; consider the following mapping table:

| Operation \ Type of change | Add attributes | Remove attributes |
| --- | --- | --- |
| Read | The old code ignores the added attributes and continues to work as-is | The old code probably fails because it expects the removed fields that were present in the old schema |
| Write | The old code can ignore them or fail, depending on the schema (see below) | The write fails |

For the write operation with added attributes:

  • If the added attributes aren't required, or the new schema sets default values for them, the old code can ignore them and save the data, but be careful about losing data
  • If the added attributes are required and the new schema has no default values for them, the write operation fails

The table above describes just two of the many kinds of database change; changing the data type of an attribute is another example. Old code reading and writing data under a new schema is a big challenge when upgrading services. Depending on the situation, we must assess the impact of the new schema on the old code and vice versa. Common technical approaches make the new schema support old code, such as setting default values for new attributes, or marking removed attributes as deprecated rather than removing them straight away.
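The read scenarios above can be sketched like this (the field names the old code expects are hypothetical):

```python
# How "old code" behaves when reading rows written under a new schema.
# EXPECTED_FIELDS stands in for the fields the old code was built against.
EXPECTED_FIELDS = ["id", "name", "email"]

def old_code_read(row: dict) -> dict:
    """Old reader: unknown (added) attributes are simply ignored,
    but a missing (removed) expected attribute raises KeyError."""
    return {field: row[field] for field in EXPECTED_FIELDS}

# Added attribute: the old code ignores "status" and works as-is.
row_with_added = {"id": 1, "name": "Alice", "email": "a@x.io", "status": "active"}
assert "status" not in old_code_read(row_with_added)

# Removed attribute: "email" is gone from the new schema, so the read fails.
row_with_removed = {"id": 2, "name": "Bob"}
try:
    old_code_read(row_with_removed)
    failed = False
except KeyError:
    failed = True
assert failed
```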

Use case 2: API change

This scenario is similar to the previous one in that it changes the contract for reading and writing data, so it can introduce the same problems. However, it can be handled better and more flexibly because both the service and the client are applications. One of the most popular techniques is API versioning. The idea is to add a version indicator to APIs, which lets clients consume the version they want and deal with upgrading asynchronously. The version indicator can be set as a part of the URL or in the request header.

  • API versioning on URL param:

    GET /api/v1/users
    GET /api/v2/users
  • API versioning in request header:

    Accept: application/json;version=1.0
    Accept: application/json;version=2.0
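A sketch of header-based version dispatch in plain Python (the handler payloads and the default version are assumptions, not part of any specific framework):

```python
# Parse a version from an Accept header like
# "application/json;version=2.0" and dispatch to the matching handler.

def parse_version(accept: str, default: str = "1.0") -> str:
    """Extract the version parameter; fall back to a default."""
    for part in accept.split(";"):
        key, _, value = part.strip().partition("=")
        if key == "version" and value:
            return value
    return default

# Each version keeps its own response shape, so old clients stay unaffected.
HANDLERS = {
    "1.0": lambda: {"users": ["alice", "bob"]},                      # flat list
    "2.0": lambda: {"users": [{"name": "alice"}, {"name": "bob"}]},  # objects
}

def get_users(accept_header: str) -> dict:
    return HANDLERS[parse_version(accept_header)]()

assert get_users("application/json;version=1.0") == {"users": ["alice", "bob"]}
assert parse_version("application/json") == "1.0"  # no version -> default
```

Keeping both handlers live lets clients upgrade on their own schedule; the old version can be retired once traffic to it drops to zero.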

That’s it for today folks, I hope you were able to find something new!

If you have other opinions or experiences, don't hesitate to leave a comment below. I'm happy to hear from you and discuss!

Enjoy programming!