Real-time Data Replication

tcVISION is a cross-system solution for timely, bidirectional data synchronization and replication based on changed data. It turns data exchange into a single-step operation: no middleware or message queueing is required. Data is exchanged in raw format, compressed, and reduced to changed data only. Unidirectional and bidirectional data transfers can run in real time, on a schedule, or event-driven.

Areas of Use
  • Synchronization of data in a heterogeneous system environment consisting of a mainframe and distributed systems
  • Gradual migration of data and applications in heterogeneous system environments
  • Modernization of existing IT structures by integrating new technologies such as streaming and cloud platforms
  • Real-time mainframe offload to replace ETL, or for real-time data in data hubs, data lakes, and Big Data
The tcVISION replication solution has a modular design. It supports mass data loads from one source to one or more targets as well as continuous data exchange in real time using change data capture (CDC) technology.
tcVISION Components
1. Transformation Platform with Repository
This contains all utilities for automatic data mapping to generate metadata for sources and targets, along with the rule set for extracting data from the source, transforming it for the target systems, and applying it to the targets. A cost-effective system platform such as UNIX or Linux is recommended for operating the tcVISION transformation platform.
2. Dashboard / Administration GUI – Command Line Editor
The tcVISION dashboard is provided for the administration, review, operation, control, and monitoring of all data exchange processes. The tcVISION Command Line Utilities can be used to automate data exchange processes and for unattended operation of data synchronization processes.
3. Data Sources
tcVISION Bulk Reader for the transfer of mass data (initial load or periodic mass data transfers)
Log-based Change Data Capture agents to capture change data at record level
4. Data Targets
tcVISION Bulk Loader for the efficient load of mass data into the targets
tcVISION APPLY to use DBMS-specific APIs for the efficient application of data changes in real time, in combination with CDC technology at the source
5. Efficient Data Exchange
The data is exchanged between source and target compressed and in raw format via TCP/IP, keeping the transfer volume to a minimum.
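As a generic illustration of the idea (not tcVISION's actual wire protocol or record layout, which are proprietary), a change record can be serialized to raw bytes and compressed before it is sent over TCP/IP, then decompressed and decoded on the receiving side:

```python
import json
import zlib

def pack_change(record: dict) -> bytes:
    """Serialize a change record to raw bytes and compress it for transfer."""
    raw = json.dumps(record, separators=(",", ":")).encode("utf-8")
    return zlib.compress(raw, 9)

def unpack_change(payload: bytes) -> dict:
    """Decompress and deserialize a change record on the receiving side."""
    return json.loads(zlib.decompress(payload).decode("utf-8"))

# Hypothetical change record; field names are illustrative only.
change = {"op": "UPDATE", "table": "CUSTOMER", "key": 4711,
          "after": {"NAME": "MILLER", "CITY": "BOSTON"}}
payload = pack_change(change)
assert unpack_change(payload) == change
```

Because only changed records travel, and each one is compressed, the volume on the wire stays far below that of a full table export.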
Flexibility and Timeliness
  • High integration potential: multiple Change Data Capture technologies can be used depending on change frequencies and latency requirements
  • Intuitive data mapping offers comprehensive functions for data type conversion and data transformation, up to a complete change of the data model
  • Comprehensive conversion of historically grown mainframe data structures
  • Maximum data currency through continuous real-time processing
  • Automatic or user-controlled data transformation (EBCDIC ↔ ASCII) for the target (conversion, reformatting, interpretation, etc.)
  • Support of relational and non-relational databases
  • Intuitive dashboard for administration and controlling
  • Comprehensive monitoring and logging of all data movements ensure transparency across all data exchange processes
  • Integrated database-specific "Apply" function to efficiently merge data into the target systems, e.g. direct Insert, Update, Delete, via JSON through Kafka, or via a DBMS loader
  • Integrated data repository with history management to maintain all data structures and data exchange rules
  • Key management for non-indexed data
  • Elimination of programming efforts for data transfers
  • Integrated pooling/streaming processes avoid programming efforts
  • Message queueing prevents data loss when the target system is unavailable or delayed
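The EBCDIC-to-ASCII conversion mentioned above can be illustrated with the EBCDIC code pages that ship with Python's standard library (cp037 is the US/Canada EBCDIC code page). This is a minimal sketch of the character-set conversion step, not tcVISION's own transformation engine:

```python
# cp037 is the US/Canada EBCDIC code page shipped with Python's stdlib.
ebcdic = "HELLO, WORLD".encode("cp037")   # bytes as a mainframe would store them
text = ebcdic.decode("cp037")             # convert for the ASCII-based target
assert text == "HELLO, WORLD"

# The byte values differ between the two encodings:
assert "A".encode("cp037") == b"\xc1"     # EBCDIC 'A' is 0xC1; ASCII 'A' is 0x41
```

Real mainframe records additionally contain packed-decimal and binary fields, which is why automatic, metadata-driven transformation matters beyond plain character conversion.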
Data Integrity
  • Practice-proven processes are available to restart a replication after system failures (database errors, transmission errors, etc.)
  • Master Data Management to ensure data consistency
  • Ensuring referential integrity through transaction-bound data transfer
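Transaction-bound transfer means a captured source transaction is applied to the target as one unit, so dependent rows never land without their parents. A generic sketch with SQLite (table names and records are invented for illustration; tcVISION's apply logic is DBMS-specific):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY)")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, "
            "cust INTEGER REFERENCES customer(id))")

# One captured source transaction: parent row plus dependent child row.
changes = [
    ("INSERT INTO customer (id) VALUES (?)", (1,)),
    ("INSERT INTO orders (id, cust) VALUES (?, ?)", (10, 1)),
]
with con:                          # the 'with' block commits all or nothing
    for sql, params in changes:
        con.execute(sql, params)

# A broken batch (child without parent) is rolled back as a unit:
try:
    with con:
        con.execute("INSERT INTO orders (id, cust) VALUES (?, ?)", (11, 99))
except sqlite3.IntegrityError:
    pass                           # nothing from the failed batch is applied

assert con.execute("SELECT COUNT(*) FROM orders").fetchone()[0] == 1
```

Applying each source transaction atomically is what keeps the target referentially consistent even when the replication stream is interrupted mid-batch.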
Change Data Capture Mechanisms
  • Timely capturing of all change data
  • Obtains the change data information directly from the DBMS
  • Secure data management, even across a DBMS restart
  • Minimum latency

File Processing
Event-based or time-controlled
  • Processing of DBMS log files
  • Transfer of the change data within predefined time intervals
  • Ideal for nightly batch processing
  • Processing occurs right after log commit
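The time-controlled mode above amounts to grouping captured log records into fixed transfer windows. A generic sketch of that batching step (illustrative only, not tcVISION's implementation):

```python
from collections import defaultdict

def batch_by_interval(log_records, interval_seconds):
    """Group (timestamp, record) pairs into fixed time windows for transfer."""
    batches = defaultdict(list)
    for ts, record in log_records:
        window = ts - (ts % interval_seconds)   # start of the record's window
        batches[window].append(record)
    return dict(batches)

# Hypothetical captured log records with timestamps in seconds.
records = [(0, "ins#1"), (30, "upd#1"), (70, "del#2"), (110, "ins#3")]
batches = batch_by_interval(records, 60)
assert batches == {0: ["ins#1", "upd#1"], 60: ["del#2", "ins#3"]}
```

Each completed window is then shipped as one batch; with event-based operation the batch is instead triggered as soon as the log commit is seen.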

Bulk Transfer
Mass data transfer
  • Efficient transfer of entire databases and files
  • Periodic transfer of mass data with a low frequency of changes
  • Ideal as "initial load" prior to real-time synchronization

Batch Compare
Snapshot processing
  • Comparison with data snapshots
  • Efficient transfer of the change data since the last batch compare run
  • Automatic determination, creation, and transfer of deltas by tcVISION
  • Secure restart/recovery after error incidents
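The core of snapshot comparison is computing the delta (inserts, updates, deletes) between the previous and the current snapshot, keyed by primary key. A minimal sketch of that delta computation (a generic illustration, not tcVISION's algorithm):

```python
def batch_compare(previous: dict, current: dict):
    """Compute inserts, updates, and deletes between two keyed snapshots."""
    inserts = {k: v for k, v in current.items() if k not in previous}
    deletes = [k for k in previous if k not in current]
    updates = {k: v for k, v in current.items()
               if k in previous and previous[k] != v}
    return inserts, updates, deletes

# Hypothetical snapshots keyed by primary key.
old = {1: "ALPHA", 2: "BETA", 3: "GAMMA"}
new = {1: "ALPHA", 2: "BETA2", 4: "DELTA"}
ins, upd, dels = batch_compare(old, new)
assert ins == {4: "DELTA"}
assert upd == {2: "BETA2"}
assert dels == [3]
```

Only these deltas are transferred, which is why batch compare is attractive for sources whose DBMS offers no usable change log.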

Cost Reduction
  • Reduction of the transfer volume for data synchronization
  • Less know-how required for databases and platforms (e.g. mainframe skills)
  • Relocation of processes to more cost-efficient platforms (Linux, UNIX, cloud)
  • Quick and easy implementation of data exchange processes across systems
  • No programming effort for the extraction, transformation, and implementation of data
  • Easier data conversion through integrated database-specific transformation logic
  • Real-time data as a solid base for enterprise decisions and projections
  • Unlimited potential for growth and new technologies through a modular architecture and provided APIs
  • High innovative capability and agility – overcoming data lock-in in historically grown IT environments
Freedom and Independence
  • Less dependency on database manufacturers and service providers
  • Better and more efficient use of internal resources
  • High transparency through central monitoring of all data exchange processes
  • Freedom of choice for innovations with the use of databases and platforms
Compensating for the Lack of Expert Skills
  • Compensates for declining mainframe know-how
  • Automated processing of historically grown databases
  • No database-specific know-how required thanks to a relational view
Supported Sources and Targets
IBM z Systems
  • z/OS
  • z/VSE
  • Linux on z Systems
Distributed Systems
  • Linux on IBM Power Systems
  • Microsoft Windows
  • Unix
  • Linux
Mainframe Databases
  • IBM Db2
  • IBM IMS/DB / DL1
  • VSAM
  • Software AG ADABAS
  • PDS/PS
Non-Mainframe Databases
  • IBM Db2 LUW
  • IBM BLU Acceleration
  • IBM Informix
  • Oracle
  • Sybase
  • Microsoft SQL Server
  • Software AG ADABAS LUW
  • PostgreSQL
  • Teradata
  • MongoDB
  • Flat File Integration
  • SAP Hana
  • MySQL / MariaDB
Big Data / Hadoop
  • JSON
  • Avro
  • Hadoop Data Lakes
  • HDFS
  • CSV
  • Elasticsearch
  • Snowflake
  • Aurora MySQL
  • Aurora PostgreSQL
  • AWS S3
  • Amazon Web Services
  • Amazon Kinesis
  • Microsoft Azure
  • Amazon Redshift
  • Azure SQL-Database
  • Azure Database for MySQL/MariaDB
  • Azure Database for PostgreSQL
  • Azure Event Hubs
  • Google Cloud SQL for MySQL
  • Google Cloud SQL for PostgreSQL
  • Google Cloud SQL for SQL Server
  • Google Cloud Storage
