by Peter M. Horbach

Migration in IT

Migration is a term that we encounter in everyday life. Whether on television or radio, we are confronted with migration issues in the news.

The word derives from the Latin migratio, or the verb migrare, which means "to emigrate, to move away".

Migration therefore denotes a permanent relocation of a person's center of life and can extend to the most diverse areas of life.

In this blog we deal with the aspects of migration within IT. Migrations in IT include, for example, the move to new technologies and data formats, or the switch to new software or hardware components.

For almost 40 years, the mainframe has been the heart of IT. This is still the case today in companies that depend on constant 24/7/365 availability of their IT.

Towards the end of the 20th century, the focus shifted more and more to the use of open systems (Windows, Unix, Linux) and new database technologies. We have already dealt with this topic several times in this blog.

The availability of these new technologies leads companies to migrate their IT to them.

There are different types of migration:

  • Data Migration
  • Application Migration
  • Platform Migration

A platform migration inevitably entails a data and an application migration, whereas a data or application migration can affect the existing platform as well as the new one.

In this blog we will deal with data migration and assume a situation that we have encountered very often among our customers.


The company operates a mainframe-based IT system that was created in the early 1970s. The applications were written by the company's own programmers according to its requirements and have been adapted to changing circumstances over the years. The company's data is stored hierarchically in IMS/DLI databases. In the course of a modernization project, the hierarchical data formats are to be converted into a relational format. The new data structures must be usable both on the mainframe platform and on an open platform.


Db2 was chosen as the new database for the migration, both on the mainframe and on Windows and Linux. In the run-up to the actual migration, the new data model was created and tested. tcVISION provided valuable help here: the existing hierarchical data structures were loaded into its repository, and the new tables with the new data models were generated and tested from them.
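The core of such a hierarchical-to-relational conversion can be illustrated with a small sketch. The segment names, fields, and mapping logic below are invented for illustration and do not reflect tcVISION's actual repository model: a root segment becomes a parent table, and each child segment becomes a table that carries the parent key as a foreign key.

```python
# Illustrative sketch (hypothetical data): flatten an IMS-style hierarchy
# (root segment with dependent child segments) into two relational tables
# linked by a foreign key.

# A root CUSTOMER segment with dependent ORDER child segments.
hierarchical_db = [
    {"cust_no": "C100", "name": "ACME Corp",
     "orders": [{"order_no": "O1", "amount": 250.0},
                {"order_no": "O2", "amount": 99.5}]},
    {"cust_no": "C200", "name": "Globex",
     "orders": [{"order_no": "O3", "amount": 410.0}]},
]

def flatten(db):
    """Map each root segment to a CUSTOMER row and each child
    segment to an ORDERS row that carries the parent key."""
    customers, orders = [], []
    for root in db:
        customers.append({"cust_no": root["cust_no"], "name": root["name"]})
        for child in root["orders"]:
            # The parent key becomes a foreign-key column in the child table.
            orders.append({"cust_no": root["cust_no"], **child})
    return customers, orders

customers, orders = flatten(hierarchical_db)
```

The parent/child relationship that IMS expresses implicitly through the segment hierarchy becomes an explicit foreign-key column in the relational model.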

The first new applications were developed against these test databases. tcVISION handles the peculiarities of linking the segments of a hierarchical database automatically and transparently, so that the application developers can concentrate on the new models.

The actual migration of the IMS/DLI databases consists of unloading the old database and loading the new table(s).

This so-called BULK load is a method of the tcVISION solution and is carried out in an efficient single-step operation. The output of a BULK load process is either a direct apply to the target database or a loader-format file for the corresponding target database. For performance reasons, BULK load processes can be parallelized.
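How such a parallelized single-step operation might look can be sketched as follows. The partition boundaries, worker function, and data are invented for illustration; tcVISION's actual BULK load implementation is proprietary.

```python
# Illustrative sketch: each worker unloads one key-range partition of the
# source in a single pass, converts the records, and applies them to the
# target (or would write a loader-format file instead).
from concurrent.futures import ThreadPoolExecutor

# Hypothetical source database, partitioned by key range.
source = {("A", "M"): ["rec1", "rec2"], ("N", "Z"): ["rec3"]}
target = []  # stands in for the target database or loader file

def bulk_load_partition(key_range):
    rows = source[key_range]                 # unload step
    converted = [r.upper() for r in rows]    # format conversion
    target.extend(converted)                 # direct apply step
    return len(converted)

# Partitions are independent, so they can be processed in parallel.
with ThreadPoolExecutor() as pool:
    counts = list(pool.map(bulk_load_partition, source))

total_loaded = sum(counts)
```

Because the partitions do not overlap, the workers never touch the same source records, which is what makes the parallelization safe.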

The input to this process on the mainframe can be either the database itself or a backup medium. In the case of a backup medium, instead of processing it on the mainframe, it can also be transferred to an open platform via file transfer and processed there by tcVISION.

Both database structures will continue to be used in parallel until the old database is finally retired.

In our example, the applications on both platforms have equal status. This type of processing is called a Master-Master scenario, as opposed to a Master-Slave scenario, in which the mainframe platform is the dominant one and all changes on that platform are replicated to the new database(s).

Both platforms are equal: users work with the respective applications on each platform and make changes to the data. These changes must be detected and replicated to the other platform in real time. This type of processing usually requires certain organizational measures (e.g. separate key and number ranges for each platform). With tcVISION, however, such measures are not a prerequisite for a functioning bidirectional replication.
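The organizational measure mentioned above, separate key ranges per platform, can be sketched as follows. The range boundaries are arbitrary example values: each platform hands out keys only from its own range, so inserts on the two sides can never collide.

```python
# Sketch of disjoint number ranges per platform (boundaries are arbitrary
# example values) so that concurrent inserts never produce the same key.
RANGES = {
    "mainframe": range(1, 1_000_000),          # keys 1 .. 999,999
    "open":      range(1_000_000, 2_000_000),  # keys 1,000,000 .. 1,999,999
}

# Next free key per platform, starting at the bottom of each range.
counters = {platform: r.start for platform, r in RANGES.items()}

def next_key(platform):
    """Hand out the next key from the platform's own range."""
    key = counters[platform]
    if key not in RANGES[platform]:
        raise RuntimeError(f"key range exhausted for {platform}")
    counters[platform] = key + 1
    return key

k_mainframe = next_key("mainframe")  # first key allocated on the mainframe
k_open = next_key("open")            # first key allocated on the open platform
```

Because the ranges are disjoint, a bidirectional replicator never has to resolve two platforms inserting the same primary key.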

We have already described in detail in another blog how bidirectional replication is carried out. Read the post here: Bidirectional Replication - An Analysis.

During the migration phase, it is extremely important that good audit reporting of the applied changes is available. tcVISION creates meaningful logs of every change and also provides statistical data at the resource level.
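What such a per-change audit trail could contain is sketched below. The field names and the statistics structure are hypothetical and do not reflect tcVISION's actual log format; the sketch only shows the principle of one record per change plus per-resource counters.

```python
# Minimal sketch of an audit trail for replicated changes: one record per
# change, plus change counts per resource (field names are invented).
from datetime import datetime, timezone
from collections import Counter

audit_log = []
stats = Counter()  # change counts per resource, e.g. per table

def record_change(resource, operation, key):
    """Append one audit record and update the per-resource statistics."""
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "resource": resource,
        "operation": operation,  # INSERT / UPDATE / DELETE
        "key": key,
    })
    stats[resource] += 1

record_change("CUSTOMER", "INSERT", "C100")
record_change("CUSTOMER", "UPDATE", "C100")
record_change("ORDERS", "INSERT", "O1")
```

With such a trail, every applied change can be traced back to a resource, an operation, and a point in time, which is exactly what auditing during a migration phase requires.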


Unidirectional or bidirectional replication is an important component of business initiatives for application or data modernization, possibly combined with a move to a new platform. This is especially true when a mainframe plays the central role in the scenario.

The tcVISION solution is distinguished by the fact that replication can be implemented without difficulty in a Master-Slave concept as well as in a Master-Master environment.

In the case of bidirectional replication, it is important that traditional mainframe resources can be used as output targets without the need for additional software components.

Since the market launch of tcVISION, many customers have successfully carried out their migration scenarios using its replication capabilities. Each of these scenarios had its own focus and characteristics; the resulting requirements were solved and ultimately found their way into the tcVISION solution.

You can find an overview of all supported input and output targets here.

Peter M. Horbach has more than 40 years of IT experience in the area of data synchronization and replication. He manages the international partner business for BOS Software and writes for our blog.
