🟢 ChatGPT output (could be wrong; verify carefully)

To export SAP HANA data in real time continuously, you can use several methods depending on the target system and the purpose. Here are some of the most common approaches:

1. Smart Data Integration (SDI)

  • Use Case: Real-time data replication and transformation.
  • How: SDI allows you to create real-time data replication tasks between SAP HANA and other systems. You can define data flows that continuously export data from HANA and send it to another system, such as another HANA instance or a non-HANA database.
  • Steps:
    1. Set up a Data Provisioning Agent.
    2. Configure the SDI connection to the target system.
    3. Define a replication task for continuous replication.
    4. Start the replication task to begin exporting data in real time.

2. SAP HANA Smart Data Access (SDA)

  • Use Case: Virtual real-time access to external data.
  • How: SDA allows SAP HANA to read from remote sources in real time. While not explicitly exporting, it enables you to expose HANA data for real-time consumption by external applications without moving the data.
  • Steps:
    1. Create a virtual table that maps to the remote target system.
    2. Use the virtual table to access SAP HANA data from external applications in real time.

3. SAP Landscape Transformation (SLT)

  • Use Case: Real-time data replication and transformation for continuous data export.
  • How: SLT (SAP Landscape Transformation Replication Server) continuously replicates data to other target systems in real time. It captures changes in the source database (inserts, updates, deletes) via database triggers and applies them to the target.
  • Steps:
    1. Set up the SLT configuration to define the source (SAP HANA) and target system.
    2. Specify the tables to be replicated.
    3. Start the real-time replication process.

4. Change Data Capture (CDC) with ETL Tools

  • Use Case: Real-time data export to other databases, data lakes, or cloud systems.
  • How: Many ETL tools like Apache NiFi, Talend, Informatica, or cloud-based tools (e.g., AWS Glue, Google Dataflow) offer CDC capabilities to continuously export data from SAP HANA in real time by capturing changes.
  • Steps:
    1. Configure the ETL tool to connect to SAP HANA.
    2. Enable CDC to capture incremental changes (like inserts, updates, deletes).
    3. Define the target system where the data will be exported.
    4. Set up the pipeline for continuous data flow.
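The incremental capture these tools perform can be reduced to a simple watermark pattern: remember the highest change timestamp seen so far, and on each poll fetch only rows changed after it. A minimal Python sketch of that pattern (the table data and column names are invented for illustration):

```python
from datetime import datetime

# Hypothetical in-memory "source table"; each row carries a last-changed
# timestamp, standing in for a change-tracking column on the HANA side.
SOURCE_ROWS = [
    {"id": 1, "value": "a", "changed_at": datetime(2025, 3, 14, 10, 0)},
    {"id": 2, "value": "b", "changed_at": datetime(2025, 3, 14, 10, 5)},
    {"id": 3, "value": "c", "changed_at": datetime(2025, 3, 14, 10, 9)},
]

def extract_delta(rows, watermark):
    """Return rows changed strictly after the watermark, plus the new watermark."""
    delta = [r for r in rows if r["changed_at"] > watermark]
    new_watermark = max((r["changed_at"] for r in delta), default=watermark)
    return delta, new_watermark

delta, wm = extract_delta(SOURCE_ROWS, datetime(2025, 3, 14, 10, 2))
print([r["id"] for r in delta])  # [2, 3] -- only rows changed after the watermark
```

A second poll with the updated watermark returns nothing until the source changes again, which is what makes the pipeline continuous rather than a repeated full extract.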

5. Custom HANA SQL Script with Web Services

  • Use Case: When you need to export HANA data to a specific endpoint.
  • How: You can create custom stored procedures in SAP HANA that export data continuously to an external API or web service.
  • Steps:
    1. Create a SQLScript procedure that extracts the required data.
    2. Use HANA’s HTTP Client or OData to export the data to a target system.
    3. Set the procedure to run on a scheduled basis using HANA’s task scheduling.
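The payload-preparation half of this approach can be sketched in Python: rows extracted by the procedure are split into JSON batches that would then be POSTed to the target endpoint. The table name, row shape, and batch size below are hypothetical; inside HANA the extraction itself would be SQLScript, with the HTTP call made by the XS HTTP client or an external scheduler.

```python
import json

def build_export_payload(table_name, rows, batch_size=2):
    """Split extracted rows into JSON batches ready for posting to an HTTP endpoint."""
    return [
        json.dumps({"table": table_name, "rows": rows[i:i + batch_size]})
        for i in range(0, len(rows), batch_size)
    ]

rows = [{"id": 1, "qty": 10}, {"id": 2, "qty": 5}, {"id": 3, "qty": 7}]
payloads = build_export_payload("SALES", rows)  # "SALES" is a made-up table name
# Each payload string would then be POSTed to the target web service.
print(len(payloads))  # 2 batches for 3 rows at batch_size=2
```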

6. SAP HANA Streaming Analytics

  • Use Case: Continuous processing of streaming data and exporting real-time results.
  • How: You can use SAP HANA Streaming Analytics to capture and process real-time streams of data, and export processed data to external systems continuously.
  • Steps:
    1. Configure a stream in HANA to capture data.
    2. Create a continuous query or stream job to process the data.
    3. Set the output to export the processed data in real time to an external system.
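Conceptually, a continuous query performs windowed aggregation over an unbounded stream. A toy Python version of a tumbling-window count, assuming events arrive as (timestamp-in-seconds, key) pairs, shows the idea (the real Streaming Analytics engine expresses this in CCL, not Python):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Count events per key within fixed, non-overlapping time windows."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = ts // window_seconds * window_seconds
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

events = [(5, "sensor_a"), (30, "sensor_a"), (61, "sensor_b"), (65, "sensor_a")]
print(tumbling_window_counts(events))
```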

7. SAP HANA Data Services

  • Use Case: Real-time data integration and continuous export to other systems.
  • How: SAP HANA Data Services can extract, transform, and load data from HANA continuously into other systems. You can set up real-time jobs that capture changes from the HANA database and export them to the desired target.
  • Steps:
    1. Configure a real-time data integration job in SAP Data Services.
    2. Set up the connection to HANA and the target system.
    3. Define the data extraction, transformation, and export processes.
    4. Run the job in continuous mode for real-time export.

8. Third-Party Data Replication Tools

  • Use Case: Continuous replication of HANA data to external systems (e.g., Big Data, Cloud).
  • How: Tools like Qlik Replicate (formerly Attunity), HVR, and Oracle GoldenGate offer CDC capabilities for SAP HANA, allowing continuous real-time data export to various databases and platforms.
  • Steps:
    1. Set up the replication tool to connect to SAP HANA.
    2. Configure the target system for data export.
    3. Start continuous data replication using CDC.

Considerations:

  • Latency: Make sure the replication tool or method you use can handle low-latency requirements if near real-time exports are critical.
  • Volume: Consider the amount of data being exported, as high volumes may require partitioning or batching strategies.
  • Security: Ensure that data is secured during transfer, especially if exporting to cloud or external systems.

Depending on your exact scenario and the target system, one of these methods should suit your needs for real-time continuous export of SAP HANA data.


The most commonly used methods for real-time continuous data export from SAP HANA largely depend on the specific use case and the target environment. However, the most frequently used approaches across different industries are:

1. SAP Landscape Transformation (SLT)

  • Usage: Widely adopted within SAP environments for real-time replication between SAP systems (e.g., SAP ERP, SAP BW) and SAP HANA or non-SAP databases.
  • Why: SLT is native to SAP, highly integrated, and well-supported for SAP-to-SAP and SAP-to-non-SAP continuous replication scenarios.
  • Common Use Case: Real-time data synchronization between SAP ERP (ECC or S/4HANA) and HANA for analytics or reporting.

2. Smart Data Integration (SDI)

  • Usage: Frequently used for real-time replication and integration when dealing with a broader range of systems, both SAP and non-SAP.
  • Why: SDI supports a wide range of connectors to different databases, data lakes, and cloud environments. It’s highly versatile and used in many real-time integration projects.
  • Common Use Case: Continuous integration of HANA data with external cloud or on-premise systems (e.g., databases like Oracle, SQL Server, or data lakes like AWS S3).

3. Smart Data Access (SDA)

  • Usage: Popular for real-time virtual access to external systems.
  • Why: SDA is efficient when you need real-time access to data without physically moving it, useful for federated data access scenarios.
  • Common Use Case: Reporting or analysis using real-time data from multiple sources without actually replicating the data.

4. Change Data Capture (CDC) with ETL Tools

  • Usage: Many organizations use third-party ETL tools that offer Change Data Capture (CDC) for real-time continuous export from HANA to other systems (data warehouses, big data platforms, etc.).
  • Why: ETL tools like Informatica, Qlik Replicate, Talend, and Apache NiFi are popular for their flexibility and ability to integrate with a variety of target systems.
  • Common Use Case: Real-time streaming of data from SAP HANA to a data lake (e.g., AWS S3, Google BigQuery) or non-SAP databases.

Summary of Most Popular Methods by Context:

  • SAP-to-SAP environments: SLT is most widely used.
  • SAP to non-SAP environments (e.g., data lakes, cloud): SDI and CDC tools are popular.
  • Federated access without data movement: SDA is favored for real-time virtual access.

For large-scale enterprise environments, SLT and SDI are the most commonly used, particularly in SAP-centric ecosystems. For broader, cross-platform data replication and integration, CDC with ETL tools and third-party solutions like Qlik Replicate or Informatica are often preferred due to their flexibility and multi-environment capabilities.


Implementing SAP SLT (SAP Landscape Transformation) to replicate data from SAP HANA on-premise to SQL Server involves several steps. Here's how you can achieve this:

Prerequisites

  1. SAP SLT System: You need access to the SAP SLT system, either as a standalone instance or integrated within your SAP landscape.
  2. Source System (SAP HANA): Make sure your SAP HANA on-premise system is available and has SLT support enabled.
  3. Target System (SQL Server): Ensure SQL Server is set up and accessible from the SLT system. You will need a valid connection string and credentials.
  4. SAP SLT Authorizations: The SLT user should have the necessary authorizations to create and manage replication configurations.
  5. Network Access: Ensure that your SAP SLT system can communicate with both the HANA and SQL Server databases over the network.

High-Level Steps for SAP SLT Setup

1. Install and Configure SLT (if not already installed):

  • You can deploy SLT as part of an SAP system or as a standalone system (DMIS add-on required).
  • Ensure the SLT system can connect to the source SAP HANA system and the target SQL Server system.

2. Define the Source System (SAP HANA) in SLT:

  • In the SLT system, define your HANA source system that will act as the data provider.
  • Steps:
    • Log in to the SLT system.
    • Go to Transaction LTR (SAP LT Replication Server Management).
    • Click Create to define a new configuration.
    • Provide a configuration name, description, and define the source system:
      • Choose SAP HANA as the source type.
      • Enter the connection details for your SAP HANA system (hostname, instance number, and DB credentials).

3. Define the Target System (SQL Server) in SLT:

  • Define the target system, which in this case is SQL Server.

  • Steps:

    • In the same configuration screen in Transaction LTR, choose to add a new target system.
    • Define the target as SQL Server.
    • Enter the SQL Server connection details:
      • Connection String (Hostname, Port, etc.)
      • Database Name
      • Credentials (User ID, Password)
  • Ensure that the database schema in SQL Server matches the structure expected by SLT or create the necessary tables manually before replication starts.

4. Configure Replication Settings:

  • Once the source and target systems are configured, you will need to define which tables should be replicated and how.
  • Steps:
    • Open Transaction LTRC to manage the replication settings.
    • Select the replication configuration you created in LTR.
    • Go to the Data Provisioning tab.
    • Choose the tables you want to replicate from HANA to SQL Server.
    • You can choose to start replication for the selected tables immediately or schedule it based on your requirements.

5. Define Transformation and Filters (Optional):

  • If you need to transform the data before replicating it (e.g., renaming columns or filtering data), you can configure transformation rules.
  • Steps:
    • In Transaction LTRC, go to the Advanced Replication Settings tab.
    • Set up field mappings, filters, or any transformation logic if needed (e.g., only replicate data with a specific status or condition).

6. Start Initial Load and Replication:

  • Once the configurations are complete, start the initial load to replicate the existing data from SAP HANA to SQL Server.

  • Steps:

    • In Transaction LTRC, select the table(s) to replicate.
    • Click on Start Load/Replication. The initial load will copy all existing data from the HANA tables to SQL Server.
    • After the initial load, SLT will switch to delta mode to capture any ongoing changes (insert, update, delete) in real-time.
  • The real-time replication captures changes via Change Data Capture (CDC) mechanisms and updates the target SQL Server tables as changes occur in SAP HANA.

7. Monitor Replication Status:

  • You can monitor the status of the replication via Transaction LTRC. This includes tracking the initial load, delta replication, and any errors.
  • Steps:
    • In LTRC, go to the Application Logs or Data Transfer Monitor to check the status of data transfer, replication delays, or errors.

8. Handle Data Conflicts or Performance Tuning:

  • In case of conflicts (e.g., duplicate keys or foreign key violations) or performance issues (slow replication), you can fine-tune the replication process:
    • Modify transformation rules.
    • Increase parallelization for large datasets.
    • Apply filters to reduce the data volume replicated.
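For the parallelization point, the usual technique is to split the table's key space into ranges that independent load jobs can process concurrently. A small helper, assuming a numeric primary key (the range-splitting logic is generic; SLT's own chunking is configured in LTRS rather than coded):

```python
def chunk_key_ranges(min_key, max_key, chunks):
    """Split a numeric key range into roughly equal [low, high] chunks for parallel load."""
    span = max_key - min_key + 1
    size = -(-span // chunks)  # ceiling division
    ranges = []
    low = min_key
    while low <= max_key:
        high = min(low + size - 1, max_key)
        ranges.append((low, high))
        low = high + 1
    return ranges

print(chunk_key_ranges(1, 10, 3))  # [(1, 4), (5, 8), (9, 10)]
```

Each range then becomes a WHERE clause for one parallel loader, e.g. `WHERE id BETWEEN 1 AND 4`.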

Example Workflow in SLT:

  1. Transaction LTR:

    • Create a new configuration.
    • Define HANA as the source system.
    • Define SQL Server as the target system.
  2. Transaction LTRC:

    • Select tables for replication.
    • Start initial load.
    • Enable continuous replication (delta mode).
  3. Monitor:

    • Use LTRC to monitor and manage ongoing replication.

Key Considerations:

  • Latency: SLT provides near real-time replication but network latency, system load, and table size may affect performance.
  • Data Volume: Large tables may need to be handled with filters or split into chunks for better performance during the initial load.
  • Transformation: You can apply transformations or filters if the target SQL Server schema differs or only certain rows need replication.
  • Maintenance: Monitor for any replication failures and apply fixes as needed.

Yes, in this case, you need to implement the BAdI (Business Add-In) mentioned in the error message, specifically:

  • BAdI Name: BADI_ODQ_QUEUE_MODEL
  • Enhancement Implementation: ODQ_ENH_SLT_REPLICATION

The error indicates that the BAdI is not yet active in your system and is required to run the Operational Data Provisioning (ODP) replication scenario for SAP LT Replication Server.

Steps to Implement the Required BAdI:

  1. Go to Transaction SE18:

    • This is the BAdI definition transaction where you can find the definition of the BAdI.
  2. Enter BAdI Name:

    • Enter BADI_ODQ_QUEUE_MODEL in the BAdI Name field and press Display.
  3. Check for Existing Implementations:

    • In the BAdI display screen, check if there are any existing implementations of this BAdI. If none exist, you need to create one.
  4. Go to Transaction SE19:

    • This is the transaction for BAdI implementations.
    • In SE19, choose "New BAdI" and enter the BAdI name BADI_ODQ_QUEUE_MODEL.
  5. Create a New Enhancement Implementation:

    • Create a new implementation for the enhancement spot by clicking Create.
    • Provide a name for your implementation and description.
  6. Assign the Enhancement Implementation:

    • Use ODQ_ENH_SLT_REPLICATION as the enhancement implementation as mentioned in the error message.
  7. Activate the Implementation:

    • Once created, ensure that the implementation is activated.
  8. Check in Transaction LTRC:

    • After activation, return to Transaction LTRC to verify that the BAdI is active and that replication can proceed without errors.

This should resolve the issue, and you can continue with the replication configuration.


In this case, the BAdI BADI_ODQ_QUEUE_MODEL is required for the Operational Data Provisioning (ODP) replication scenario in SAP SLT. The error message suggests that a specific enhancement implementation (ODQ_ENH_SLT_REPLICATION) is needed to support SLT replication.

Steps to Write the Logic in the BAdI:

  1. Understand the Purpose:
    The purpose of this BAdI is to enable queue-based data transfer using the ODP framework in SAP SLT. This means the BAdI will help in managing how SLT queues data and how the queue is processed in the replication process.

  2. Transaction SE19 for Implementation:

    • You will use Transaction SE19 to implement the enhancement for the BAdI.
  3. Implement the Enhancement (ODQ_ENH_SLT_REPLICATION):

    • Once inside SE19, you will define the logic for handling the queue. Depending on the requirements, this logic might involve:
      • Queue management: Defining how data is queued for replication.
      • Queue assignment: Defining how queues are assigned for data provisioning.
      • Data transfer mechanism: Handling how data is fetched from SAP HANA, queued, and transferred to the target system (SQL Server).

What to Write Inside the BAdI:

In most cases, the BAdI enhancement logic for BADI_ODQ_QUEUE_MODEL in SLT would focus on the following points:

  1. Queue Initialization:

    • You might need to initialize and create a queue in SLT for each table or dataset being replicated.
    • Sample code could involve setting up the queue based on the source (HANA) system and target system (SQL Server).
  2. Queue Management:

    • Define how records are inserted into the queue, especially for large datasets that require real-time or near-real-time replication.
    • Handle the replication logic for changes (delta replication), ensuring that the correct records (insert, update, delete) are queued for replication.
  3. Queue Consumption:

    • Define how the records from the queue will be consumed and replicated to SQL Server.
    • This might involve setting up specific conditions for consuming the queue (e.g., batch size, time intervals).
  4. Error Handling:

    • The BAdI implementation should also handle error scenarios, such as how to manage failed records or retries in case of issues during replication.

Example pseudocode (the ODQ class and method names below are illustrative only, not the actual ODP API):

METHOD IF_ODQ_QUEUE_MODEL~CREATE_QUEUE.
  " Create queue logic
  DATA: queue_name TYPE STRING,
        queue TYPE REF TO CL_ODQ_QUEUE,
        result TYPE REF TO DATA.

  queue_name = 'SLT_REP_QUEUE_' && SY-DATUM && SY-UZEIT.
  
  " Initialize a new queue for replication
  CREATE OBJECT queue EXPORTING queue_name = queue_name.
  
  " Set queue properties (e.g., source system, table)
  queue->set_source_system( 'HANA_ONPREMISE' ).
  queue->set_target_system( 'SQL_SERVER' ).

  " Handle delta replication based on table changes
  queue->enable_delta_mode( ).

  " Assign the queue for data replication
  result = queue->start_replication( source_table = 'HANA_TABLE' target_table = 'SQL_SERVER_TABLE' ).

  " Error handling
  IF result IS INITIAL.
    RAISE EXCEPTION TYPE cx_odq_queue_error.
  ENDIF.

ENDMETHOD.

Key Points in the Code:

  • Queue Creation: The CREATE_QUEUE method creates a new queue for replication.
  • Delta Mode: Enables delta mode so that only changes (insert, update, delete) are replicated after the initial load.
  • Queue Properties: Sets properties such as source and target systems.
  • Error Handling: Handles any issues during queue creation or replication.

Final Steps:

Once you have written the logic and activated the BAdI, you can test the implementation by starting the replication process in Transaction LTRC and ensuring that data is replicated to SQL Server as expected.


nov05 commented Mar 14, 2025

🟢 ChatGPT output (could be wrong, verify carefully)

To replicate data from SAP HANA to SQL Server in real-time using an SAP HANA ODBC Driver, you'll need to create a replication setup that streams data efficiently from SAP HANA to SQL Server. Real-time data replication involves capturing changes made in SAP HANA and transferring them to SQL Server as soon as they occur. Here's how you can achieve this:

Approach: Using SAP HANA ODBC Driver with ETL or CDC Tools

While an ODBC driver facilitates connection between SAP HANA and SQL Server, it’s typically used for querying data, not for real-time replication. However, you can combine the ODBC driver with ETL (Extract, Transform, Load) tools or CDC (Change Data Capture) technology to implement real-time replication.

Here’s a step-by-step approach:

1. Install the SAP HANA ODBC Driver on SQL Server

Ensure that you have the SAP HANA ODBC driver installed on the SQL Server machine to enable a connection to your SAP HANA database.

  • You can configure an ODBC Data Source using the SAP HANA ODBC driver to test connectivity with SAP HANA.
  • Use this driver to query SAP HANA tables and perform extracts.

2. Set up Change Data Capture (CDC) or Real-Time ETL

To achieve real-time replication, you can use one of the following methods:

Option 1: Using Microsoft SQL Server Integration Services (SSIS)

SSIS is a powerful ETL tool provided by SQL Server that can be used for real-time data loading from SAP HANA. To implement real-time replication, you can schedule frequent or event-driven jobs in SSIS, using the SAP HANA ODBC driver to extract data.

Steps:

  • Create a connection to SAP HANA using the ODBC driver in SSIS.
  • Extract data: Use a Data Flow Task to read data from SAP HANA tables via ODBC.
  • CDC Implementation: To replicate data in real-time, you need to capture incremental changes (inserts, updates, and deletes). This can be achieved by implementing Change Data Capture (CDC) in SAP HANA (or using timestamps) and extracting those changes in each ETL run.
  • Load data: Use another Data Flow Task in SSIS to load the extracted data into SQL Server tables.

You can schedule the SSIS job to run frequently or in near real-time.

Option 2: Using SQL Server Replication Services (CDC)

You can set up Change Data Capture (CDC) in SQL Server to track and replicate changes from SAP HANA. Although SQL Server’s built-in CDC works primarily on SQL Server databases, you can still replicate changes from SAP HANA using custom CDC logic:

Steps:

  • Identify changes in SAP HANA: Set up a mechanism to track changes in SAP HANA. This could involve using HANA-specific triggers, delta queues, or audit tables to capture inserts, updates, and deletes.
  • Extract changes via ODBC: Periodically query the changes (captured through triggers or audit tables) from SAP HANA using the ODBC driver.
  • Insert into SQL Server: Use an SSIS package or a custom script to replicate these changes in SQL Server. SQL Server’s CDC will then keep track of further changes on the SQL Server side.
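The trigger/audit-table mechanism described in the first step can be demonstrated end to end with SQLite: the standard-library sqlite3 module stands in for SAP HANA here, and HANA's own CREATE TRIGGER syntax differs in detail, but the capture pattern is the same.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, qty INTEGER);
CREATE TABLE orders_audit (id INTEGER, qty INTEGER, op TEXT);

-- Every change lands in the audit table, which the replication
-- job later drains over ODBC and applies to SQL Server.
CREATE TRIGGER trg_ins AFTER INSERT ON orders
  BEGIN INSERT INTO orders_audit VALUES (NEW.id, NEW.qty, 'I'); END;
CREATE TRIGGER trg_upd AFTER UPDATE ON orders
  BEGIN INSERT INTO orders_audit VALUES (NEW.id, NEW.qty, 'U'); END;
CREATE TRIGGER trg_del AFTER DELETE ON orders
  BEGIN INSERT INTO orders_audit VALUES (OLD.id, OLD.qty, 'D'); END;
""")

con.execute("INSERT INTO orders VALUES (1, 10)")
con.execute("UPDATE orders SET qty = 12 WHERE id = 1")
con.execute("DELETE FROM orders WHERE id = 1")

changes = con.execute("SELECT id, qty, op FROM orders_audit").fetchall()
print(changes)  # [(1, 10, 'I'), (1, 12, 'U'), (1, 12, 'D')]
```

The extraction job then reads and clears `orders_audit` on each polling cycle, giving exactly the insert/update/delete stream the SQL Server side needs.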

Option 3: Third-Party Real-Time Replication Tools

There are several third-party replication tools designed to replicate data in real-time between SAP HANA and SQL Server. These tools often provide seamless integration and Change Data Capture (CDC) capabilities:

  • SAP SLT (SAP Landscape Transformation Replication Server): SAP SLT is a real-time replication solution that replicates data between SAP HANA and external systems like SQL Server.

    • SLT captures real-time changes in SAP HANA and transfers them to target systems (such as SQL Server) using a mapping.
  • DBSync or Qlik Replicate (formerly Attunity): These tools allow real-time, low-latency replication of data from SAP HANA to SQL Server. They have built-in support for CDC, making it easier to replicate data with minimal configuration.

    • DBSync: Provides replication services that connect to SAP HANA through ODBC or other protocols, tracking changes and pushing them to SQL Server.
    • Qlik Replicate: A robust real-time data replication tool supporting various databases, including SAP HANA and SQL Server.

Option 4: Custom Python or Java Solution

If you prefer a custom approach, you can develop a Python or Java application that uses the SAP HANA ODBC driver to connect to the HANA database, capture real-time changes using HANA’s CDC mechanisms (like triggers or logs), and then push the changes into SQL Server.

Steps:

  • Use ODBC libraries in Python or Java to query and extract data from SAP HANA.
  • Monitor for real-time changes (using polling, timestamps, or HANA CDC) and extract the delta changes.
  • Insert or update records into SQL Server tables using a corresponding SQL Server ODBC connection.
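The apply side of such a custom replicator boils down to turning change records into DELETEs and upserts on the target. A self-contained sketch, with sqlite3 standing in for the SQL Server ODBC connection (via pyodbc the statements keep the same shape, except SQL Server typically upserts with MERGE rather than ON CONFLICT):

```python
import sqlite3

# sqlite3 stands in for the SQL Server target; table and column names are invented.
tgt = sqlite3.connect(":memory:")
tgt.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

def apply_delta(changes, target):
    """Apply (op, id, val) change records: 'D' deletes, anything else upserts."""
    for op, row_id, val in changes:
        if op == "D":
            target.execute("DELETE FROM t WHERE id = ?", (row_id,))
        else:
            target.execute(
                "INSERT INTO t (id, val) VALUES (?, ?) "
                "ON CONFLICT(id) DO UPDATE SET val = excluded.val",
                (row_id, val),
            )

# A real polling loop would read these records from the HANA-side change log.
apply_delta([("I", 1, "a"), ("I", 2, "b"), ("U", 2, "bb"), ("D", 1, None)], tgt)
print(tgt.execute("SELECT id, val FROM t ORDER BY id").fetchall())  # [(2, 'bb')]
```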

3. Testing and Optimization

Once you set up your ETL or CDC pipeline, it’s important to:

  • Test the replication process: Ensure that all the data is being replicated accurately.
  • Optimize performance: Use bulk inserts for large volumes of data and ensure the jobs are optimized for real-time performance.
  • Handle errors and data consistency: Set up mechanisms to log errors and ensure consistency between SAP HANA and SQL Server during replication.
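On the bulk-insert point: batching rows through executemany instead of issuing row-by-row INSERTs is usually the biggest single optimization. Illustrated with sqlite3 below; with pyodbc against SQL Server, additionally setting `cursor.fast_executemany = True` sends the batch in one round trip.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (id INTEGER, amount REAL)")

rows = [(i, i * 1.5) for i in range(1000)]

# One executemany call instead of 1000 single-row INSERT statements.
con.executemany("INSERT INTO sales VALUES (?, ?)", rows)
print(con.execute("SELECT COUNT(*) FROM sales").fetchone()[0])  # 1000
```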

Summary

To replicate data from SAP HANA to SQL Server in real time, you can use the SAP HANA ODBC driver combined with tools like SSIS, third-party replication tools (e.g., Qlik Replicate, SAP SLT), or custom solutions (Python/Java). You’ll need to implement Change Data Capture (CDC) or real-time ETL pipelines to achieve true real-time replication, depending on your specific use case.


nov05 commented Mar 14, 2025

AWS for SAP
SAP Data Services and SAP LT Server for near real-time replication to AWS data lakes
by Howard Lee, Ganesh Suryanarayanan, and Milind Pathak on 09 SEP 2020 in SAP on AWS
https://aws.amazon.com/blogs/awsforsap/sap-data-services-and-sap-lt-server-for-near-real-time-replication-to-aws-data-lakes/


nov05 commented Mar 14, 2025

🟢 ChatGPT output (could be wrong, verify carefully)

When deciding between SAP Landscape Transformation (SLT) and SAP Replication Server for real-time table extraction from SAP HANA to SQL Server, it is important to consider the differences in functionality, ease of use, and supported use cases for each tool:

1. SAP Landscape Transformation Replication Server (SLT)

SLT is a real-time data replication tool designed specifically for replicating data between SAP systems (and between SAP and non-SAP systems, including SQL Server). It supports table-level replication from SAP HANA to SQL Server in real time.

Key Features of SAP SLT:

  • Real-time replication: SLT supports near real-time data replication with minimal latency.
  • Trigger-based replication: SLT uses database triggers to capture changes in source tables. It extracts both initial full loads and delta changes (inserts, updates, and deletes) in real-time.
  • Integration with SAP systems: SLT is highly integrated with SAP environments (including SAP ECC, SAP S/4HANA, and SAP HANA) and has built-in support for extracting data from SAP tables.
  • Non-SAP target support: SLT allows replication to both SAP and non-SAP databases, including SQL Server, Oracle, and others.
  • Flexible transformation capabilities: You can apply data transformations (e.g., filters, mappings) to the data during replication.
  • Data consistency: SLT ensures data consistency during replication, especially in distributed system landscapes.

Pros of Using SLT:

  • Designed for SAP HANA: SLT is optimized for SAP environments and works seamlessly with SAP HANA, providing easy extraction from HANA tables.
  • Real-time replication: It supports real-time, low-latency replication with automatic change detection using triggers.
  • Easy configuration: SLT is relatively easy to set up and configure for table-level replication. No need for extensive custom coding.
  • Transformation and filtering: You can filter and transform data during replication, making it suitable for selective replication use cases.
  • Error handling: SLT includes error-handling mechanisms and logs to track replication issues.

Considerations:

  • SAP-centric tool: SLT is tightly integrated with the SAP ecosystem, which may be overkill if your environment has limited SAP components.
  • Cost: Licensing and resource costs can be higher compared to other replication tools.

2. SAP Replication Server

SAP Replication Server is a tool that provides real-time data replication and synchronization between different database systems, including SAP and non-SAP environments. It is traditionally more suited for replicating data between heterogeneous database systems, such as replicating data from SAP ASE (Sybase) to other databases.

Key Features of SAP Replication Server:

  • Database-centric replication: Replication Server focuses on database replication across various platforms, supporting heterogeneous systems like Oracle, SAP HANA, SAP ASE, SQL Server, etc.
  • Log-based replication: Unlike SLT’s trigger-based approach, Replication Server uses log-based replication (reading from database transaction logs). This method minimizes the performance impact on the source system.
  • Highly scalable: It supports large-scale, complex database replication scenarios, with high throughput and low latency.
  • Advanced data movement: It enables near real-time replication of large volumes of data, including table replication and entire database synchronization.
  • Disaster recovery: Replication Server is also used for disaster recovery by keeping a replica of your database in sync with the primary system.

Pros of Using Replication Server:

  • Log-based replication: It offers efficient log-based data replication, which can reduce performance impacts on the source database (compared to SLT’s trigger-based method).
  • Cross-database support: Suitable if you need to replicate data across different types of databases (e.g., SQL Server, Oracle, SAP ASE) or if you require advanced database replication.
  • Large-scale replication: Designed for complex, large-scale database replication scenarios and environments where high availability or disaster recovery is a key requirement.

Considerations:

  • Complexity: Replication Server can be more complex to configure and manage than SLT, especially if you are focused only on table replication between SAP HANA and SQL Server.
  • Not SAP-specific: While Replication Server works with SAP HANA, it is not specialized for the SAP ecosystem in the same way that SLT is. It’s more of a general-purpose database replication tool.
  • Overhead: Log-based replication requires access to the transaction logs of the source system, which could be more resource-intensive.

Which Should You Choose?

  • If your primary goal is to replicate tables from SAP HANA to SQL Server in real time and you are looking for an SAP-native, easy-to-configure solution, SAP SLT is likely the better choice. It is specifically designed for SAP-to-non-SAP replication, supports real-time table replication, and is easier to set up with minimal impact on the source system.

  • If you require log-based replication, disaster recovery capabilities, or need to replicate data between multiple heterogeneous databases (e.g., SQL Server, Oracle, SAP ASE), SAP Replication Server might be more suitable. It is better for scenarios where you need extensive cross-database synchronization beyond SAP HANA.

Summary

  • SAP SLT is optimal for table-level real-time replication from SAP HANA to SQL Server, with built-in support for SAP-specific features, transformations, and low configuration complexity.
  • SAP Replication Server is better for large-scale, log-based, cross-database replication and disaster recovery scenarios, though it may be overkill for simple HANA-to-SQL Server table replication.


nov05 commented Mar 14, 2025

we have DMIS 2011 SP09 setup on a standalone replication server including all current correction notes. We have a situation where we replicate from SAP ERP towards a non-HANA target, being SQL Server. With the active configuration we can perform transformations via transaction LTRS and it works fine.

What we are still searching for however, is the mass data type change for an entire configuration. Per individual table we are capable of changing the mapping of for instance data type DATS into DATE for the target system in transaction LTRS. When having a HANA system as a target, there is however an extra option called 'Global Mapping Values' which allows to set this for an entire configuration. This way, any table you are replicating as of then that contains a field with data type DATS, will receive the mapped data type in the target. When using SQL Server as a target this option is not present.

We were facing similar issues. We have SLT configured for the "ODP replication scenario" and we were looking for an option wherein we can mass convert all "NUMC" to "CHAR"; however, the response we received from SAP is that at this stage only SLT configured to HANA has the functionality to mass convert a field, and no other scenario is applicable.

2015 Nov 10
https://community.sap.com/t5/technology-q-a/slt-to-sql-server-as-target-change-data-type-mapping/qaq-p/11382910


nov05 commented Mar 14, 2025

🟢 This is likely the solution.

https://community.sap.com/t5/technology-q-a/slt-with-mssql-as-target/qaq-p/11377232

bhupinder_sasan
Product and Topic Expert
2016 Feb 22

This is not a bug.
You need to change the following entry in the table DMC_RT_PARAMS:
PARAM_NAME = REPL_ENABLE_OTHER_TARGET
PARAM_VALUE = X
Though keep in mind we do not officially support non-ABAP source to non-ABAP target if the target is not HANA.
So you will do this at your own risk.
