Tuesday 28 September 2021

Replication Service to easily set up replication from SAP HANA on-premise to SAP HANA Cloud

A new Replication Service is now available in SAP HANA Cockpit as a Service. You can find this application in the monitoring view, which allows you to create and monitor replication jobs using Smart Data Integration (SDI). You can easily set up a new remote source and database object replication, as the UI is a step-by-step wizard.

 


Database Overview of Cockpit

 

The details of the Replication Service can be found in the official documentation, but let me walk through how you can set up a simple replication from on-premise SAP HANA to SAP HANA Cloud.

 

I have installed an SAP HANA, express edition VM with the DP Agent inside the VM, but the steps apply to any on-premise SAP HANA system, including SAP HANA running on other hypervisors.

SAP HANA on-premise setup

The Replication Service needs to connect to the remote system and requires a technical user with the following privileges, as explained in the official documentation.

  • CATALOG READ, so the wizard UI can list the objects available for replication.
  • SELECT, DROP, and CREATE ANY on the schemas of users to be replicated, for replicating users other than the technical user.
  • SELECT, DROP, and CREATE ANY on schemas containing objects to be replicated, assuming these differ from the user schemas above.
  • SELECT on any other objects to be replicated, and TRIGGER if these objects need to be replicated in real time. TRIGGER is needed because triggers are the mechanism for real-time replication.
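As a sketch, the technical user itself can be created and given the catalog privilege like this; the user name TECH_USER matches the one used below, and the password is just a placeholder:

```sql
create user tech_user password "InitPassword1" no force_first_password_change;

-- lets the wizard UI list the catalog objects available for replication
grant catalog read to tech_user;
```

The schema-level privileges from the list above are granted separately per schema.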

 

I have created TECH_USER as the technical user and WORKSHOP as the user and schema for replication. I will grant the following privileges to TECH_USER:

grant select, drop, create any on schema workshop to tech_user;

One thing to keep in mind: the owner of objects to be replicated must be a user other than SYSTEM or SYS, as objects owned by these users cannot be set up for replication.

Now, to test the replication, I have created the WORKSHOP user and schema and a table named TEST_TABLE, and inserted a first record as the initial state prior to replication.

create table test_table
(col_pk int, col_1 nvarchar(50), primary key(col_pk));

insert into test_table values (1, 'First Row');

 

SAP HANA Cloud setup

In addition to the source side, we need to create a technical user in SAP HANA Cloud that the DP Agent will use to connect, with the following privileges.


Technical user privilege for SAP HANA Cloud
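As a sketch, the grants on the SAP HANA Cloud side would look like the following; AGENT ADMIN and ADAPTER ADMIN are the standard SDI privileges for registering a DP Agent and its adapters, and the user name DP_TECH_USER is just an example:

```sql
create user dp_tech_user password "InitPassword1" no force_first_password_change;

-- allow this user to register the DP Agent and its adapters with SAP HANA Cloud
grant agent admin to dp_tech_user;
grant adapter admin to dp_tech_user;
```

Check the screenshot above and the official documentation for the exact set of privileges your scenario requires.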

Replication Service in Cockpit

Now, let’s set up the replication from the Cockpit. In the Replication Service card, click the ‘Create Replication’ link at the bottom right to start the replication setup.


Replication Service application card

 

This starts the wizard for setting up the replication. The first step is to either select an existing remote source, if one is already configured, or create a new connection. As this is a new trial instance, I don’t have any existing remote source connection and will have to create a new one.

Since there is no existing connection, we select ‘Create New Connection’, which displays the parameters for setting up a remote source, and advance to step 2.


Step 1 – Create new replication

In step 2, set a remote source name and select the DP Agent that is registered with SAP HANA Cloud.


Step 2 – create SDI remote source

In step 3, enter the host and port information along with the technical user and password. For this example, I will skip SSL encryption and server certificate validation, but for production use both should be enabled. (Please refer to the official SDI documentation for how to configure these.)


Step 3 – remote source parameters

 

In step 4, the new remote source is created and the users granted to tech_user are listed for replication. I will select the WORKSHOP user and also replicate any database privileges granted to this user, as the user will be created in SAP HANA Cloud with the same username and privileges. Advance to step 5.


Step 4 – select user to be replicated

In step 5, we need to enter the password for the WORKSHOP user, as the password cannot be extracted from the source system and must be entered manually. It needs to be entered twice to guard against typos. Enter the password twice and advance to step 6.


Step 5 – set password for replicated user

In step 6, select the table to replicate and then choose whether the replication is an initial load only or real-time replication. Click Confirm to finish the setup.


Step 6 – Select DB objects to be replicated

 

Now the UI transitions to the monitoring UI, showing the status of the replication. The status shows ‘Initializing’ for a short time and then switches to ‘Loading’. ‘Loading’ is the state in which replication is happening in real time. The data is replicated to a column-loadable table, which stores the data in memory.


Replication Service Monitoring

To pause the replication, click the button at the top right to pause all replication jobs running against the remote source LOCAL. This can be very useful during maintenance work: pause the entire replication and resume it once the work is finished.

The default storage option is ‘In-Memory’ but can be changed to ‘Disk’, which makes the table a page-loadable column table (using the Native Storage Extension, NSE). The replication schedule can be changed to None, which removes the subscription and stops real-time replication. The opposite is also possible, changing from None to Real-time, but in that case the data will be fully loaded again to sync with the remote source.
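For reference, switching an already replicated table between the two storage options corresponds to changing its load unit, which can also be done in plain SQL; the table name here is the example from this post:

```sql
-- move the table to NSE (page-loadable, disk-based)
alter table workshop.test_table page loadable cascade;

-- move it back to the in-memory default
alter table workshop.test_table column loadable cascade;
```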

The Replication Service also supports selecting SQL views for replication, which replicates all base tables defined in the SQL view. If the view is nested with other views, all objects within the view's hierarchy are configured.

If another table needs to be replicated, this can be done from the monitoring UI via the ‘Add’ menu; likewise, ‘Delete’ removes the selected object from replication.

Now, let’s go back to the top level of the Cockpit with the Monitoring View layout and check the Replication Service application card.

You can now see the remote source LOCAL listed with one object in real-time replication mode. Clicking on it moves directly to the replication monitoring mentioned previously.

Test real-time replication

Finally, let’s insert more data into the source table to check the real-time replication.

insert into test_table values (2, 'Second Row');
insert into test_table values (3, 'Third Row');
insert into test_table values (4, 'Fourth Row');

 

Check the table in the target to validate that the data was replicated in real time.
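A simple way to validate, assuming the replicated table keeps the same schema and name on the SAP HANA Cloud side:

```sql
select * from workshop.test_table order by col_pk;
```

All four rows should appear on the target once real-time replication has caught up.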

Monday 27 September 2021

High Availability and DR for SAP HANA, SAP S/4HANA, and SAP Central Services

As SAP applications touch many critical parts of a company, such as its ERP, manufacturing, business processes, and customer service, they have become the lifeline of many enterprises that depend on them for their business to operate properly. As such, high availability has become one of the top concerns of company management when it comes to their SAP systems.

In this article, we will discuss at a high level what HANA system replication is, how it works, what its limitations are when it comes to high availability, and how we can overcome them. We will also discuss the options for SAP HANA, SAP S/4HANA, and SAP Central Services high availability and their key differences, so that you can choose the right tool for the right job.

Some key questions you need to ask yourselves, in order to select the right HA solution:

  • Recovery Time Objective (RTO) – how long can SAP be down before you recover?
  • Recovery Point Objective (RPO) – how old can your data be when service is restored?
  • Availability Service Level Agreement (SLA) – how much uptime do you need?

SAP HANA system replication

With SAP HANA System Replication you can have continuous synchronization of an SAP HANA database to a secondary location, either in the same data center, at a remote site, or in the cloud, which creates a reliable data protection and disaster recovery solution.

System Replication is a standard SAP HANA feature that comes with the software. Using this feature, all data is replicated to the secondary site and pre-loaded into memory there, which helps to reduce the recovery time objective (RTO) significantly. So in case of a failover, the secondary site can take over without even performing a HANA DB (re)start and will work as the primary DB instantly upon failover. However, the failover has to be triggered manually by the admin using the sr_takeover command, and separate commands also need to be issued to reverse the replication or fail back to the primary.
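The manual takeover and failback boil down to a few hdbnsutil calls; a sketch, run as the <sid>adm OS user, where the host, instance number, and site name are examples:

```shell
# on the secondary site: take over as the new primary
hdbnsutil -sr_takeover

# later, on the repaired old primary: re-register it as the new secondary
hdbnsutil -sr_register --remoteHost=hana-site2 --remoteInstance=00 \
    --replicationMode=sync --operationMode=logreplay --name=SITE1
```

Check the exact options against your HANA revision's documentation before relying on them.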

HANA System Replication failover high-availability and DR
Figure 1: SAP HANA System Replication failover high-availability and DR

Below are some key points of the SAP HANA system replication method for HA and DR:

  • Redundant Servers / Nodes
  • In-memory database replicated by HANA system replication (in “log replay” mode)
  • Multiple replication options: sync, sync-mem, async
  • Supports active-active (read-only on secondary)
  • Setup and admin through HANA cockpit, HANA studio or command line

Limitations

  • No monitoring of application processes or replication failures, and no automated failover
  • Failover, reverse replication, and failback have to be performed manually – many manual steps are needed
  • No virtual IP
  • No integrated HA failover orchestration together with SAP Central Services and other components

As you can probably deduce from the points above, SAP HANA system replication is designed to protect against data loss: when an issue happens on the primary node, an admin can manually run the “sr_takeover” command so that a problem with the primary system does not take down the entire SAP setup, which depends on the HANA database, for a prolonged period. However, a lot of this work depends on manual human intervention, which, although good enough for DR, is not ideal for HA (where downtime needs to be prevented).

There are additional options to improve the high-availability of SAP HANA, SAP S/4HANA and other components, that you may choose from, to protect their services and data.

Open-source HA for SAP

Linux vendors like SUSE and Red Hat offer high-availability extensions with their “Enterprise for SAP” subscriptions. They bundle the Pacemaker, Corosync, and DRBD open-source software, which allows you to build high-availability clusters for the HANA database, ASCS, ERS, and other SAP components.

Since there are already many online resources covering how to implement Pacemaker for HANA, I will not repeat the steps here. For instance, this blog by fellow community member Tomas Krojzl is a good starting point – https://blogs.sap.com/2017/11/19/be-prepared-for-using-pacemaker-cluster-for-sap-hana-part-1-basics/

SIOS High Availability Clustering

SIOS high-availability software for SAP can be used to protect SAP S/4HANA, SAP HANA, and SAP Central Services in any configuration (or combination) of virtual, physical, and cloud (public, private, and hybrid) environments. It provides easy and flexible configuration, high-performance replication, and comprehensive monitoring and protection of the entire SAP environment.

Specifically for SAP S/4HANA and the SAP HANA database, SIOS can be used to complement what SAP is already doing with HANA system replication: adding onto it to provide complete automated high availability – automated monitoring of key SAP HANA application processes, plus automated failover and failback, including virtual IP(s), even in a multi-instance setup.

It can also integrate the entire stack so that in a DR scenario where, for instance, only the SAP HANA database failed rather than the entire primary datacenter, you can orchestrate the entire stack, including application servers and central services, to fail over to the other node to preserve performance.

SIOS HANA System Replication failover high-availability and DR
Figure 2: SIOS HANA System Replication failover high-availability and DR

Below are some key points of the SIOS Protection Suite for SAP HANA HA and DR:

  • Works in the cloud across availability zones (AZs) and regions
  • Provides automated failure detection and failover for key SAP HANA DB components:
    — SAP HANA Host Agent
    — SAP HANA sapstartsrv
    — SAP HANA Replication
  • Enables automated SAP HANA replication takeover, switchback
  • Automatically reverse replication
  • Verifies and monitors that the HANA DB is running
  • Provides Virtual IP
  • “Full stack” failover orchestration with ASCS etc. SAP components

Four steps to install and configure HA for HANA database

We will not discuss the specific steps of how to configure SAP HANA, since there are already many online resources that cover them. But at a high level, what you need to do comes down to 4 basic steps:

  1. Install SAP HANA
  2. Configure HANA system Replication
    See – https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.02/en-US/676844172c2442f0bf6c8b080db05ae7.html
  3. Install SIOS protection suite
    See – http://docs.us.sios.com/spslinux/9.4.1/en/topic/sios-protection-suite-for-linux-installation-guide
  4. Use HANA recovery kit (wizard) in GUI to protect HANA
    See – http://docs.us.sios.com/spslinux/9.4.1/en/topic/sap-hana-recovery-kit

The installation process flow is similar for the other SAP components (ASCS, ERS, PAS, Web Dispatcher, etc.) as well.

With the HANA recovery kit included in the SIOS Protection Suite software, you can basically use a wizard in the SIOS LifeKeeper management GUI to quickly protect a HANA database instance, assign the virtual IP address for clients to connect to, and manage the entire stack from there. You can have a multi-instance environment and the solution will manage all the instances, virtual IPs, etc. within a fully integrated GUI, which makes it very easy to configure and manage the entire SAP landscape running on SIOS HA.

SIOS Lifekeeper Management GUI for SAP HANA ASCS and ERS
Figure 3: SIOS LifeKeeper Management GUI for SAP HANA and SAP Central Services

 

Additional HA/DR stack that SIOS can protect for SAP –

Other than the HANA database, SIOS Protection Suite also provides protection for key SAP services and supporting applications, all of which can be managed from the same GUI:

  • Primary Application Server (PAS)
  • ABAP SAP Central Service (ASCS)
  • SAP Central Services (SCS)
  • Enqueue and message servers
  • Enqueue Replication Server (ERS)
  • Database (Oracle, Sybase, MaxDB, HANA, etc)
  • Shared and/or Replicated File Systems
  • Logical Volumes (LVM)
  • NFS Mounts and Exports
  • Virtual IPs

The deployment steps are similar to SAP HANA. The key difference, however, is how you protect the shared filesystem, which varies depending on whether you are using NFS, which can be replicated and protected by SIOS, or a cloud provider's fileshare service, bearing in mind that it has to be replicated and redundant across AZs or regions as well.

Clustering in the cloud

When moving SAP to the cloud, one of the key challenges is how to protect the SAP database, as well as the SAP application stack, in an SAP-supported architecture. SIOS has been at the forefront of this move, and its solutions are designed, certified, and supported by SAP as well as all the major cloud providers.

The diagram below is a high-level design of how a pair of S/4HANA systems can be deployed across different availability zones, or even regions. In cloud environments, as the providers have very low latencies between AZs, it is entirely possible to use synchronous replication across AZs, thereby creating a pair of highly available S/4HANA systems, not just for HA but also for DR at the same time. This is because AZs are geographically separate datacenters, much like on-premise DR datacenters, with highly redundant high-speed network connectivity between them.

SIOS Protection Suite for SAP S/4HANA cloud architecture
Figure 4: SIOS Protection Suite for SAP S/4HANA cloud architecture

 

Should you use a commercial HA solution like SIOS over open-source HA for SAP?

This question will invariably come up: since some Linux vendors already provide their HA extensions (HAE) or clustering, why would anyone want to use a commercial 3rd-party HA solution like SIOS?

If you recall, at the top of this article I mentioned the key questions to ask yourselves when selecting an HA/DR solution: what RPO, RTO, and SLA do you need to meet? Use those as the basis when reviewing the points below, checking whether the solution being selected fits the objectives you need to meet.

  1. Open-source HA is offered as part of certain OS flavors' “Enterprise for SAP” subscriptions – it comes at a cost, it's definitely not free, and not all Linux flavors are supported. SIOS supports all the major Linux flavors, including Red Hat, SUSE, CentOS, and Oracle Linux. For customers who want to run Windows for their ASCS or Content Server, SIOS also has a Windows-based solution with Windows clustering support, making it a one-stop shop for the entire SAP landscape regardless of platform.
  2. Commercial HA support – OS vendors depend on the open-source community for bug fixes, which can be a problem if a bug takes a long time to get solved by a less active contributor. SIOS provides commercial support with a dedicated support and development team just for its high-availability solution, with 24×7 support and fast resolution, which gives customers much more confidence when issues develop.
  3. Open-source tools require complex setup and administration via the command line. They are made up of different components like Pacemaker and Corosync, maintained by different open-source initiatives. SIOS provides an all-in-one GUI for wizard-based setup and administration, allowing you to deploy SAP HA in a matter of hours instead of weeks or months.
  4. SIOS provides pre-built application monitoring and failover orchestration for all SAP and cloud components requiring HA through a wizard in the GUI, as opposed to HA extensions that still require a lot of manual configuration.
  5. Automatically ensures SAP ERS is always running on the opposite node of ASCS – SIOS provides this intelligence even in a multi-node ASCS setup: if a failover occurred and ASCS fails over to the node with the running ERS, then when the original ASCS node recovers, ERS is automatically switched across so that the locks always get the redundancy needed. Open-source solutions require this to be done manually, which impacts reliability and availability, especially in times of multiple failures and recoveries.
  6. SIOS reduces implementation and management time and costs – the less time you spend implementing and maintaining HA, the more time you will have for other, more important tasks.
  7. Open-source solutions use the STONITH mechanism, which has been hardly reliable, especially in cloud environments. SIOS provides a multi-pronged approach to prevent false failover and split-brain – a quorum witness plus multiple communication paths (heartbeats) – which has been proven over 20 years to be highly reliable in many scenarios.

 

Summary

The SAP HANA system replication feature comes as part of the software and works well to protect the database from data loss in case problems arise from hardware or system failures. However, if high availability is the requirement, a 3rd-party solution is still needed to get automated monitoring, failover orchestration, virtual IPs, and so on. While there are open-source options in the form of enterprise Linux OS subscriptions for SAP, they certainly do not come free, and technical support is still limited, as they rely purely on the open-source community to maintain the Pacemaker, Corosync, etc. projects and to get support from contributors. There are also limitations in native System Replication and the open-source HAE which can be overcome by a commercial software solution from a vendor like SIOS.

Hence SIOS, as a reliable 3rd-party high-availability solution provider certified by SAP, could be a viable option to help ensure enterprise customers get the reliability and high availability they need for their mission-critical SAP system operations, for peace of mind.

Parts of this blog post were adapted from my posting at https://www.sios-apac.com/2020/04/high-availability-dr-for-s-4hana-and-other-sap-platforms

Sunday 26 September 2021

Sybase Replication Step by Step

 

Replication Concept & Methods

Replication methods

  • Regular replication: table-level replication from a primary to a replicate database
  • Multi-site availability (MSA): database-level replication from a primary to a replicate database
  • Warm standby: database-level replication to maintain a standby database
  • Hot standby: transactions are run simultaneously in the primary and replicate databases using distributed transaction management
  • Warm standby: transactions are captured in the primary database and distributed to the replicate database
  • Cold standby: the primary data is copied to the replicate database on a scheduled basis

Concept in replication

Primary and replicate data (primary db to replicate db)

Peer-to-peer replication (multiple primary dbs; each db is both primary and replicate; update anywhere)

Heterogeneous replication (any db to any db, e.g. Oracle, MSSQL, DB2)

Replicated operations: minimally logged operations and manual DML on system tables cannot be replicated

DML – Data Manipulation Language (insert, update, delete, slow bcp, truncate)

DDL – Data Definition Language (create, alter, drop, sp_*)

DCL – grant & revoke, sp_adduser, sp_addalias, sp_changegroup

Minimally logged operations: select into, fast bcp, reorg rebuild, writetext, update statistics
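For regular (table-level) replication, each table must be explicitly marked for replication in the primary database with sp_setreptable; a sketch, where the table name is just an example:

```sql
use DBNAME
go

-- mark a single table for replication
sp_setreptable test_table, 'true'
go

-- run without the second argument to check the table's replication status
sp_setreptable test_table
go
```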


Internal of sybase replication server

  1. The RepAgent reads records from the transaction log of the PDB for the tables that are marked for replication.
  2. It logs into the PRS, which writes the transactions into the inbound queue of the PDB on the stable device.
  3. The PRS holds the data in the inbound queue until it receives a commit.
  4. After the commit, the PRS uses subscription information in its RSSD to decide what to do with each transaction:

i Discards the transaction if there is no subscription.

ii Writes the transaction to the outbound queue if there are subscriptions.

  5. Only committed transactions are written to the outbound queue, according to the subscriptions.
  6. The PRS sends transactions to their destination, which depends on two cases:

i) If the replicate database is managed by the PRS, it applies the changes to the RDB using the DSI thread over the connection.

ii) If there are two servers and an RRS manages the RDB, the PRS sends committed transactions to the RRS over a route, and the RRS applies the changes to the RDB.

  7. If appropriate, the PRS uses function-string information in the RSSD to compose the commands submitted to the replicate database.

As you can see, there is a lot of movement of transactions and records, and for these movements Replication Server uses many threads (DSI, RSI, SQT, SQM, etc.).

If any one of these threads stops, replication ceases, and it can even hamper PDB performance.

That is why it is considered somewhat difficult to manage – not really difficult, but it does need good monitoring.
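A few Replication Server commands help with that monitoring; all are run from isql connected to the Replication Server:

```sql
-- show any threads that are down
admin who_is_down
go

-- overall health of the Replication Server
admin health
go

-- stable device space usage
admin disk_space
go
```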

Architecture

SD:- Stable Device

  • The nonvolatile storage area on which the replication server stores the transactions it receives from RepAgents or from other replication servers
  • Each replication server requires its own stable device
  • Managed directly by a replication server, but is not a database
  • Contains multiple stable queues
  • The disk device on which the replication server stores data during processing

SQ:- Stable queues

  • Data structures within each stable device that hold data rows during the replication process
  • A stable device contains many stable queues, including one queue for each:
    — RepAgent connected to the replication server
    — Replication server to which the replication server has a route
    — Data server to which the replication server has a connection
  • For warm standby, a single inbound queue exists for the logical connection
  • For other replication relationships, there are four types of queues:
  • Inbound Queue (IBQ): one inbound queue per primary database, containing data rows and transactions that have begun at the primary database but have not yet been committed
  • Outbound Queue (OBQ): one outbound queue for each RDB to which the replication server has a connection, and for each replication server to which it has a route; contains data for transactions that have been committed but are still in the process of being applied at the RDB, or written to the stable device of another replication server
  • Materialization Queue (MatQ): holds data and commands during automatic materialization (selects the entire subscription set from the PDB, writes the subscription set to the materialization queue, and inserts the data into the RDB in one large transaction)
  • Dematerialization Queue (DeMatQ): used during dematerialization, when subscription data is removed from the replicate database

rs_lastcommit:-

  • Replication server adds a table called rs_lastcommit to each replicate (and warm standby active) database in the replication system
  • rs_lastcommit stores the last committed transaction that was applied to the database
  • Replication server uses the origin_qid to ensure that no duplicate transactions are processed

Route:-

  • A route is a uni-directional connection between two replication servers
  • A replication server thread called the Replication Server Interface (RSI) uses routes to send data and messages to other replication servers
  • Setting up a route automatically creates subscriptions in the destination replication server’s RSSD for the tables in the source replication server’s RSSD
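Creating a route is a single command issued on the source Replication Server; a sketch, where the destination server name and credentials are examples:

```sql
create route to RRS
set username rrs_rsi_user
set password rrs_rsi_pwd
go
```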

ID Server:-

  • Is the one replication server in the replication system that registers all the replication servers and databases within the system
  • Provides each RS, DS, and DB with a unique identifier
  • Does not have to participate in replication
  • Does not need to be running unless you are adding or removing an RS, DB, or route in the replication system

Threads

REP AGENT: – Replication agent Thread

  • Reads the primary database transaction log to find transactions (SQL statements or stored procedure executions) that have occurred against tables marked for replication
  • Forwards transactions to the replication server using a proprietary language called Log Transfer Language (LTL)
  • On the replication server side, an executor thread functions as a connection manager for the RepAgent and passes the changes to the SQM
  • Maintains a secondary truncation point in the transaction log, which prevents transactions from being truncated until they are safely stored on the replication server's stable device
  • Coordinates recovery between the transaction log and the replication server
  • Each database may only have one RepAgent thread
  • The RepAgent is enabled on the standby, but is not turned on

DSI EXEC: – data server interface execution thread

  • Translates the replicated transaction functions into the destination command language (T-SQL) and applies the transactions to the replicate database.

DSI: – data server interface thread

Connection:-

  • Connections exist between the replication server and the databases it manages
  • A replication server has a connection to each replicate database it manages
  • The replication server's DSI thread uses these connections to send updates to the replicate database
  • The DSI logs into the RDS as a regular client connection using the maintenance user login
  • A maintenance user login is a special user ID used by replication servers to make changes in the replicate database and the RSSD

SQM: – Stable Queue Manager thread

  • Is the only thread that interacts with the stable queue; it performs all logical I/O to the stable queue (the physical I/O is actually performed by the dAIO daemon)
  • Writes the logged changes to disk via OS I/O routines and notifies the async I/O daemon (dAIO)
  • The SQM is responsible for the following:
    — “Queue I/O”: all reads, writes, deletes, and queue dumps from the stable queue
    — “Duplicate detection”: compares OQIDs from the LTL to determine whether an LTL log row is a duplicate of one already received

dAIO: – async I/O daemon thread

  • Polls the OS for I/O completion and notifies the SQM that the I/O has completed

SQT: – The Stable Queue Transaction thread

  • Responsible for sorting transactions into commit order
  • Requests the next disk block from the SQM and sorts the transactions into commit order (the read request is again done via SQM → dAIO); once the commit record for a transaction has been seen, the SQT alerts the distribution thread (DIST) that the transaction is available

DIST: – Distribution thread

  • Reads a transaction and determines who is subscribing to it and whether subscription migration is necessary; once all of the subscribers are identified, the DIST thread forwards the transaction to the SQM for the outbound queue of the destination connection

RSI :- Replication server interface

dSUB

dCM

dREC

dSTATS

dALARM

USER


MSA Replication Configuration

PDS : Primary Data Server

PDB : Primary Data Base

RDS : Replicated Data Server

RDB : Replicated Data Base

PRS : Primary Replication Server

RRS : Replicated Replication Server

RSSD : Replication Server System Database

isql -Usa -SPDS -Psapasswd << EOF

use master

go

EXEC sp_dboption DBNAME,'abort tran on log full',true

go

EXEC sp_dboption DBNAME,'select into/bulkcopy/pllsort',true

go

EXEC sp_dboption DBNAME,'trunc log on chkpt',true

go

use DBNAME

go

sp_dropuser mnt_usr

go

sp_addalias mnt_usr,'dbo'

go

use master

go

sp_adduser mnt_usr,mnt_usr,'public'

go

use DBNAME

go

sp_reptostandby DBNAME,'all'

( ***** sp_reptostandby will pass transactions that have been performed by the maintenance user )

go

use master

go

grant set session authorization to mnt_usr

( ***** With the proxy authorization capability of Adaptive Server, System Security Officers can grant selected logins the ability to assume the security context of another user, and an application can perform tasks in a controlled manner on behalf of different users. If a login has permission to use proxy authorization, the login can impersonate any other login in Adaptive Server.

***** )

go

use DBNAME

go

( *****

scan batch size 'no_of_qualifying_records'

Specifies the maximum number of log records to send to Replication Server in each batch. When the maximum number of records is met, RepAgent asks Replication Server for a new secondary truncation point. The default is 1000 records.

***** )

sp_config_rep_agent DBNAME , "send warm standby xacts", "true"

( ***** Replication Server automatically puts a begin tran and commit around all transactions, and this behavior carries over to the replicate database. There is a parameter in the Replication Agent, "send warm standby xacts", that needs to be set to true when setting up MSA replication. This parameter allows DDL to be replicated properly without receiving the server message below.

Error :-

MSA replication with DDL – some stored procedures don't replicate, and creation fails with: "Message from server: Message: 2762, State 3, Severity 16 —

'The 'CREATE TABLE' command is not allowed within a multi-statement transaction in the 'tempdb' database.'

***** )

go

sp_config_rep_agent DBNAME , "send buffer size","8k"

( ***** Controls the size of the send buffer that RepAgent uses to communicate with Replication Server. Increasing the size of the send buffer reduces the number of times RepAgent communicates with Replication Server, but increases the amount of memory used.

The default value is 2K.

***** )

go

sp_config_rep_agent DBNAME , "short ltl keywords","TRUE"

( ***** Specifies whether RepAgent sends an abbreviated form of LTL to Replication Server, requiring less space and reducing the amount of data sent. The default value is “false.”

***** )

go

sp_config_rep_agent DBNAME , "priority","4"

( *****    Sets relative priority values for individual RepAgents.

4 – high priority

5 – medium priority

6 – low priority

The default value of priority is 5.

***** )

go

exec sp_config_rep_agent DBNAME , "send structured oqids","TRUE"

( ***** Specifies whether RepAgent sends origin queue IDs (OQIDs) as structured tokens, which saves space in the LTL and thus improves throughput, or as binary strings. The default value is “false.”

***** )

go

sp_stop_rep_agent DBNAME

go

sp_start_rep_agent DBNAME

go

EOF
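Before restarting the RepAgent, it can be worth verifying the settings just applied. A quick check using the same servers and placeholders as above (sp_help_rep_agent reports RepAgent configuration and status):

isql -Usa -SPDS -Psapasswd << EOF

use DBNAME

go

sp_help_rep_agent DBNAME, 'config'

go

EOF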

isql -Usa -SPRS -Psapasswd_rs << EOF

alter connection to PDS.DBNAME set dsi_replication_ddl 'on'

go

( ***** In a bi-directional MSA setup, this prevents applied DDL from replicating back to the PDB.

***** )

alter connection to PDS.DBNAME set dsi_keep_triggers 'off'

go

alter connection to PDS.DBNAME set dynamic_sql 'on'

go

suspend connection to PDS.DBNAME

go

resume connection to PDS.DBNAME

go

EOF
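After resuming the connection, a quick sanity check on the Replication Server side confirms that no threads are down (a sketch with the same placeholders; admin health reports HEALTHY or SUSPECT):

isql -Usa -SPRS -Psapasswd_rs << EOF

admin who_is_down

go

admin health

go

EOF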

isql -Usa -SPDS -Psapasswd << EOF

use DBNAME

go

sp_start_rep_agent DBNAME

go

EOF

isql -Usa -SRDS -Psapasswd << EOF

use master

go

EXEC sp_dboption DBNAME,'abort tran on log full',true

go

EXEC sp_dboption DBNAME,'select into/bulkcopy/pllsort',true

go

EXEC sp_dboption DBNAME,'trunc log on chkpt',true

go

use DBNAME

go

sp_dropuser mnt_usr

go

sp_addalias mnt_usr,'dbo'

go

use master

go

sp_adduser mnt_usr,mnt_usr,'public'

go

use DBNAME

go

sp_reptostandby DBNAME,'all'

go

use master

go

grant set session authorization to mnt_usr

go

use DBNAME

go

sp_config_rep_agent DBNAME , "send warm standby xacts", "true"

go

sp_config_rep_agent DBNAME , "send buffer size","8k"

go

sp_config_rep_agent DBNAME , "short ltl keywords","TRUE"

go

sp_config_rep_agent DBNAME , "priority","4"

go

exec sp_config_rep_agent DBNAME , "send structured oqids","TRUE"

go

sp_stop_rep_agent DBNAME

go

sp_start_rep_agent DBNAME

go

EOF

isql -Usa -SRRS -Psapasswd_rs << EOF

alter connection to RDS.DBNAME set dsi_replication_ddl 'on'

go

alter connection to RDS.DBNAME set dsi_keep_triggers 'off'

go

alter connection to RDS.DBNAME set dynamic_sql 'on'

go

suspend connection to RDS.DBNAME

go

resume connection to RDS.DBNAME

go

EOF

isql -Usa -SRDS -Psapasswd << EOF

use DBNAME

go

sp_start_rep_agent DBNAME

go

EOF

Tips  :-

Drop the subscription manually from rs_subscriptions; if there is no other subscription for the database, then delete it from rs_repdbs, and if the subscription has a filter on the database, also from rs_dbsubsets. Then recycle the Replication Server.

***********************************************************************

To reset the locator, update the rs_locater table in the RSSD for the Replication Server controlling the database, using the stored procedure rs_zeroltm:

rs_zeroltm data_server, database
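Note that rs_zeroltm is executed in the RSSD database itself, not via RCL. A sketch for the primary connection used above (the RSSD server and database names are placeholders for your landscape):

isql -Usa -S<rssd_server> -P<rssd_password> << EOF

use <rssd_dbname>

go

rs_zeroltm PDS, DBNAME

go

EOF

Restart the RepAgent afterwards so it requests a new secondary truncation point.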

***********************************************************************

Use admin who, sqm – check if First.Seg.Block and Last.Seg.Block are the same, indicating the queue is empty

***********************************************************************

Use admin who, sqt – verify that there are no transactions in the 'Closed' column.

'Closed' shows the number of committed transactions in the SQT cache.

These transactions have been read from the stable queue and await processing.

***********************************************************************

Once a stored procedure is marked as replicated, it is no longer possible to replicate data updated by this procedure.

***********************************************************************

Create proxy table and drop proxy table are not supported in warm standby and MSA replication.

CR 324773 has been filed. There is no workaround other than resume connection and skip transaction.

***********************************************************************

Dump marker can only activate one db repdef at a time. The correct way to do this is:

Create db sub for r1. Dump p1, wait and load r1.

Create db sub for r2. Dump p1, wait and load r2.
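In RCL the two steps could look like this (subscription and repdef names are placeholders; use dump marker defers materialization until the corresponding dump of p1 arrives at the replicate):

isql -Usa -SPRS -Psapasswd_rs << EOF

create subscription sub_r1 for database replication definition db_repdef_r1 with primary at PDS.p1 with replicate at RDS1.r1 use dump marker

go

EOF

Then dump p1, load the dump into r1, and repeat with a second subscription for r2.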

***********************************************************************

*MSA When there are two database (db) replication definitions (repdefs) –

One primary db with two db repdefs, each with filters, and each replicating to a separate replicate database.

If the 2nd db repdef’s filter is modified by RM or isql, the 1st db repdef’s filter stops working.

***********************************************************************

Error :- RS 15.0.1 ESD #3, bi-directional MSA with table repdefs. Cannot drop the connection after dropping all table replication definitions. The error is: Database 'DBCI2.ci' is the primary for some replication definitions – but all repdefs have been dropped.

Solution :- When a repdef has ever been used by an MSA or standby connection (with or without the send standby option) and is then dropped, the repdef is renamed to rs_drp0x0s. It is a bug that users have to manually delete rs_drp0x0s from rs_objects before they can drop the connection. Workaround: update rs_objects set dbid = 0 where objname like 'rs_drp%'

before dropping the connection.

Replication Server bug 500356 will be fixed in 15.0.1 ESD #4 and 15.1 ESD #1.

***********************************************************************

sp_setrepproc stored_procedure_name

go

exec stored_procedure_name

go

This forces the stored procedure itself to execute on the target, instead of replicating the individual row changes it makes.

***********************************************************************

To set traces, add the following lines to the .cfg file of the Replication Server you want to trace.

1 This line forwards the output to standard output or to the errorlog:

trace=GEN,TRC_STDERR or trace=GEN,TRC_ERRLOG

2 This line dumps the commands that are sent to the replicate site:

trace=DSI,DSI_CMD_DUMP
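For example, the end of the configuration file might look like this (the file sits in the Replication Server install directory and is named after the server; restart the server for the entries to take effect):

trace=GEN,TRC_ERRLOG

trace=DSI,DSI_CMD_DUMP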

***********************************************************************

Dumping stable queues

sysadmin dump_queue, q_number, q_type, seg, blk, cnt [, RSSD | client ]

q_number, q_type :- check with admin who or admin who,sqm

seg :- -1 = first active segment, -2 = first segment (active or inactive)

blk :- starting block, from 1 to 64

if seg = -1 and blk = -1 :- start with the first undeleted block of the queue

if seg = -1 and blk = -2 :- start with the first unread block of the queue

cnt :- number of blocks to dump

-1 = up to the end of the current segment

-2 = up to the end of the queue

RSSD :- output to RSSD tables (rs_queuemsg, rs_queuemsgtxt)

client :- output to the isql client

If no option is given, the output is written to the Replication Server log file or to the file set by sysadmin dump_file

eg :- sysadmin dump_queue,103,1,0,15,65

sysadmin dump_file, qdump

sysadmin dump_queue,103,1,-1,1,-2

sysadmin dump_file

*********************************************************************

