Replication Examples

New DB row update

User changes the value of a single field in an existing row of the new DB. During the save, an AFTER UPDATE trigger fires and performs a series of processes; among other things it inserts rows into the history and updates the subject table. While running, it retrieves the user from CONTEXT_INFO, verifies that the user is authorized to update the record, translates relational values between local and remote keys when the changes touch relational columns, binds the reference value, and performs a number of other processes required for correct row processing.
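The trigger described above might be sketched roughly as follows. This is only an illustrative sketch: the table crm_Contact, the history table history_Changes, and the helper function fn_UserMayUpdate are assumed names, not part of the actual system.

```sql
-- Hypothetical sketch of the AFTER UPDATE trigger described above.
CREATE TRIGGER trg_crm_Contact_AU ON crm_Contact
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Retrieve the acting user stored into CONTEXT_INFO by the application layer.
    DECLARE @UserId int = CONVERT(int, SUBSTRING(CONTEXT_INFO(), 1, 4));

    -- Verify authorization for the record update (assumed helper function).
    IF dbo.fn_UserMayUpdate(@UserId, 'crm_Contact') = 0
        THROW 50001, 'User is not authorized to update this record.', 1;

    -- Map the changed columns into history: one record per changed column,
    -- all sharing a common TaskId.
    DECLARE @TaskId uniqueidentifier = NEWID();
    INSERT INTO history_Changes (TaskId, TableName, RowId, ColumnName, NewValue, UserId)
    SELECT @TaskId, 'crm_Contact', i.Id, 'Email', i.Email, @UserId
    FROM inserted i
    JOIN deleted d ON d.Id = i.Id
    WHERE ISNULL(i.Email, '') <> ISNULL(d.Email, '');
END;
```

The key translation and reference binding mentioned in the text would hook into the same trigger body before the history insert.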

A LogReader job runs in regular cycles on the local server and evaluates the history. In this case it finds the pending records and sends them to the queue global_Repl_CmdQue on the distribution server.

A Distribution Agent job runs in regular cycles on the distribution server. This is the main process of the asymmetric replication: it transforms and distributes data according to the records in the queue. The agent is divided into preprocessing, publication, subscription, and postprocessing phases. If the agent is handling only our update, the result is a publication.

In distributed transaction mode the agent reads a series of records belonging to one TaskId from the queue and executes them. Execution therefore depends on the parameters loaded from the Map Bridge. Based on the loaded definitions, the Distribution Agent builds an UPDATE command and executes it on the remote server through MSDTC.
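The distributed execution might look roughly like this sketch; the linked server REMOTESRV, the database RemoteDb, and the queue columns are assumed names used only for illustration:

```sql
-- Hypothetical sketch: the Distribution Agent applies the transformed UPDATE
-- on the remote server inside a distributed (MSDTC) transaction.
DECLARE @TaskId uniqueidentifier;   -- set by the agent's queue reader

BEGIN DISTRIBUTED TRANSACTION;

-- Apply the change to the remote table via the linked server.
UPDATE r
SET r.Email = q.NewValue
FROM REMOTESRV.RemoteDb.dbo.crm_Contact AS r
JOIN global_Repl_CmdQue AS q
    ON r.Id = q.RemoteRef           -- local/remote key already translated
WHERE q.TaskId = @TaskId;

-- Mark the queue records of this task as distributed.
UPDATE global_Repl_CmdQue SET Distributed = 1 WHERE TaskId = @TaskId;

COMMIT TRANSACTION;
```

Running both the remote write and the local queue update inside one MSDTC transaction is what keeps the queue consistent with the remote state.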

New DB row insert

In the new system the user inserts a new row with the assistance of an INSTEAD OF INSERT trigger. The row is inserted into a local table without the REF value, which is not yet known. An AFTER INSERT trigger is then activated: it records the requirement to complete all unresolved reference values, maps the changes, and inserts them into the history. In this case it is a group of records (one column = one record), designated by a common TaskId.
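A minimal sketch of the INSTEAD OF INSERT step, again with assumed table and column names (crm_Contact with a client-supplied uniqueidentifier Id):

```sql
-- Hypothetical sketch of the INSTEAD OF INSERT trigger: the row is stored
-- locally with REF left NULL, since the remote key is not yet known.
CREATE TRIGGER trg_crm_Contact_II ON crm_Contact
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO crm_Contact (Id, Name, Email, REF)
    SELECT Id, Name, Email, NULL    -- REF completed later by the replicator
    FROM inserted;
    -- The AFTER INSERT trigger then fires, records the reference request
    -- (global_Repl_PubDelay) and writes the per-column history records
    -- under one common TaskId.
END;
```

Note that an INSTEAD OF trigger does not fire recursively when it inserts into its own table, so the local insert inside the body is safe.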

The table of delayed publication references (global_Repl_PubDelay) contains the record Id and the Id of the reference request. When a reference request is completed, its record is deleted. A row cannot be published until all reference requests for that publication are settled; until then the queue is stopped. Simply put, as long as an entry exists in the reference request table, the LogReader cannot send the data to the distribution server, and subsequent requests cannot be executed.

The global_Repl_PubDelay table is filled by the AFTER INSERT and AFTER UPDATE triggers, evaluated by the system_RQ_LogReader procedure, and cleared by the system_RQ_DistAgent procedure. Its purpose is to identify the relation before publication (the uniqueidentifier is known, the bigint REF is unknown).
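The mechanism could be sketched with the following assumed schema and the gating check the LogReader would perform; the column names are illustrative, not the actual definition:

```sql
-- Hypothetical shape of the delayed-publication table.
CREATE TABLE global_Repl_PubDelay (
    RecordId     bigint           NOT NULL,  -- row waiting to be published
    RefRequestId uniqueidentifier NOT NULL,  -- unresolved reference request
    PRIMARY KEY (RecordId, RefRequestId)
);

-- The LogReader may publish a history record only when no reference
-- request for its row remains open:
SELECT h.*
FROM history_Changes h
WHERE h.Evaluated = 0
  AND NOT EXISTS (SELECT 1
                  FROM global_Repl_PubDelay p
                  WHERE p.RecordId = h.RowId);
```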

On the local server a LogReader job runs and evaluates the history table in cycles. In this case it finds the group of records, sends them to the distribution server, and marks those records in the history table as evaluated.

On the distribution server a Distribution Agent job (system_RQ_DistAgent) runs and receives the complete data of one Task (common TaskId) = (Acquire). The task is processed according to the global_Map_Bridge regulations with defined exceptions (Manipulate) and written to the remote target data stores (Write).

The Agent supports two ways of generating the unique index on the remote server: automatically via Identity(1,1), or by a procedure with its own counter. If identity is active, SCOPE_IDENTITY() returns the value of the row identity generated on the remote server. If the value is generated by the procedure, the procedure is executed as an RPC and the Agent obtains the output value in a different branch.
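The two branches might be sketched as follows. The linked server REMOTESRV, the procedure system_GetNextRef, and the column names are assumptions; note that for the identity branch both statements must run on the remote server so that SCOPE_IDENTITY() sees the generated value.

```sql
DECLARE @Name nvarchar(100), @Email nvarchar(100), @RemoteRef bigint;

-- Branch 1: the remote table uses Identity(1,1); insert and read
-- SCOPE_IDENTITY() in one remote batch.
DECLARE @t TABLE (RemoteRef bigint);
INSERT INTO @t
EXEC ('INSERT INTO dbo.crm_Contact (Name, Email) VALUES (?, ?);
       SELECT SCOPE_IDENTITY();', @Name, @Email) AT REMOTESRV;
SELECT @RemoteRef = RemoteRef FROM @t;

-- Branch 2: the key comes from a counter procedure executed as an RPC
-- (requires "RPC Out" enabled on the linked server).
EXEC REMOTESRV.RemoteDb.dbo.system_GetNextRef
     @TableName = 'crm_Contact',
     @NewRef    = @RemoteRef OUTPUT;
```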

This value is immediately stored into the REF field of the inserted record in the appropriate local table. When the replicator updates the REF field, the local AFTER UPDATE trigger ignores the write (nothing is written to history, the version number is not increased, etc.).

Solving of delayed publication links

This is a conceptually simple method that is advantageous for the local system. Thanks to delayed publication links, users do not have to wait for the communication between the servers to complete.

To store delayed publication links, the global_Repl_PubDelay table is designed; the trigger keeps records there that could not be identified during the first pass. When the system learns the REF reference number of the crm_Identity table, a procedure scans the global_Repl_PubDelay table and distributes the reference number into the rows where the link is missing. The write triggers further replication, and the row is completed on the remote server.
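The backfill step could look roughly like this; the procedure name, the crm_Contact table, and its IdentityRef column are hypothetical stand-ins for the real relational columns:

```sql
-- Hypothetical sketch: once the bigint REF of a crm_Identity row is known,
-- distribute it into the waiting rows and settle the reference request.
CREATE PROCEDURE system_RQ_SolvePubDelay
    @RefRequestId uniqueidentifier,
    @Ref          bigint
AS
BEGIN
    SET NOCOUNT ON;

    -- Fill the now-known reference number into rows with a missing link.
    -- This UPDATE fires the normal triggers and thus causes the further
    -- replication that completes the row on the remote server.
    UPDATE c
    SET c.IdentityRef = @Ref
    FROM crm_Contact c
    JOIN global_Repl_PubDelay p ON p.RecordId = c.Id
    WHERE p.RefRequestId = @RefRequestId;

    -- The reference request is settled; remove it so the queue can continue.
    DELETE FROM global_Repl_PubDelay WHERE RefRequestId = @RefRequestId;
END;
```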

A very similar principle is used for handling delayed subscription requests.

Original DB row update

In the original database the user changes two fields in one row. The fields are physically written to the table (the row is updated). Because we implemented replication triggers in the original DB as well, an AFTER UPDATE trigger is activated. The trigger maps the changes and writes them into the history. In this case two records, identified by a common TaskId, are inserted into the history.

On the original server (with the original DB) a LogReader runs and evaluates the history in periodic cycles. In this case it finds the two records, sends them to the distribution server, and marks those records in the history table as evaluated.

On the distribution server a Distribution Agent runs. When it receives the data of a Task (common TaskId) = (Acquire), it processes them according to the regulations with defined exceptions (Manipulate) and writes them to the target storage (Write). This completes the entire data subscription (Subscription).

During the write, local triggers are activated. These triggers recognize that it is the replicator writing to the database and, as the only exception, set the local flag ReplEvaluated=1. In the next LogReader cycle those records are no longer evaluated.
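One way the replicator could be recognized is a session marker in CONTEXT_INFO; the marker byte and the table names below are assumptions for illustration only:

```sql
-- Hypothetical sketch: the local trigger detects the replicator's session
-- and only flags the rows instead of writing history.
CREATE TRIGGER trg_crm_Contact_AU_repl ON crm_Contact
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    IF SUBSTRING(CONTEXT_INFO(), 1, 1) = 0xFF   -- replicator marks its session
    BEGIN
        -- Flag the rows so the next LogReader cycle skips them.
        UPDATE c SET c.ReplEvaluated = 1
        FROM crm_Contact c
        JOIN inserted i ON i.Id = c.Id;
        RETURN;
    END;

    -- ...normal path: write history records, bump the version number, etc.
END;
```

(With the default RECURSIVE_TRIGGERS OFF setting, the flag update inside the trigger does not re-fire it.)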

At the same time, potential data sharing conflicts are evaluated. The source DB sends not only the new values to be inserted into the target storage but also the previous values, so before the data are saved, the current content of the target DB is compared with the original content of the source DB. If these values are equal, the new values coming from the source DB are written to the target storage.
If the values are not equal, the update timestamps of both rows (in the original and the new DB) are compared. If the incoming datetime is higher than the local one, the update is executed; otherwise a data sharing conflict exception is thrown and the update is not executed. Such a conflict need not be reported: it is resolved by the last-writer-wins rule.
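The conflict check described above amounts to a guarded update; in this sketch @OldValue and @NewValue come from the source DB, @IncomingStamp is the source row's update timestamp, and the table and column names are assumptions:

```sql
DECLARE @RowId bigint, @OldValue nvarchar(100),
        @NewValue nvarchar(100), @IncomingStamp datetime;
-- ...the Distribution Agent fills these from the received task...

-- Write the new value only if the target still holds the source's old
-- value (no conflict) or the incoming change is newer (last writer wins).
UPDATE c
SET c.Email = @NewValue, c.UpdatedAt = @IncomingStamp
FROM crm_Contact c
WHERE c.Id = @RowId
  AND (c.Email = @OldValue
       OR c.UpdatedAt < @IncomingStamp);

-- Zero affected rows means the local row is newer: a data sharing conflict.
IF @@ROWCOUNT = 0
    THROW 50002, 'Data sharing conflict: local row is newer.', 1;
```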