@ikromnurrohim
Created August 11, 2023 14:06
webMethods Integration Server

Integration Capabilities

  • Robust Runtime : bedrock for reliable execution, available as both a server and a microservices runtime
  • Services : everything is a service; services are independent of connectivity and protocol, i.e., the same service can be called via HTTP/JSON or even SFTP/XML
  • Connectivity : connect to a wide range of on-premises applications and SaaS platforms quickly via fully supported connectors, plus an ADK (adapter development kit) for creating custom connectors
  • Monitoring : end-to-end traceability of service execution, including filtering, resubmission and tracking across asynchronous calls, with both modern UI and API access
  • Developer Tooling : tooling to design, develop and test services

Be aware: if Integration Server shuts down improperly, it is possible that Tanuki wrapper files will be left behind and prevent the IS server from starting again ... therefore delete ".lock" and "wrapper.anchor" within the folder SAG_Dir/profiles/IS_instance_name/bin

What is a pipeline?

  • a pipeline is an "in-memory data structure" in which "input" and "output" values are maintained for Integration Server.
  • a pipeline is managed per service (context)
  • it is instantiated by Integration Server when the service runs and destroyed when the service completes
  • the pipeline starts with the input to the service and collects the inputs and outputs of subsequent services in the flow
  • when a service executes, it has access to all data in the pipeline at that point
  • fields in the pipeline can be removed by dropping them

Keep in mind that everything in the pipeline consumes Integration Server memory; that is why you must consider dropping unused variables.

Service pipeline variable substitution: local variables and global variables can be referenced inside the pipeline using the syntax %variable_name%; then, in the Properties view, check "Perform pipeline variable substitution" or "Perform global variable substitution", depending on your need.

pub.flow:clearPipeline - clears the current contents of the pipeline - it takes as an argument a string list of pipeline variables that should not be cleared. > Note: While clearPipeline is convenient, using DROP within a MAP step to clear pipeline variables has been shown to have performance advantages.
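
As an illustration only (not from the original notes), here is a minimal Java-service sketch that invokes pub.flow:clearPipeline on the caller's pipeline; the "preserve" string-list input is the commonly documented parameter, but verify the exact signature on your Integration Server version.

    // Minimal sketch: clear the calling pipeline but keep two variables.
    // Assumes the standard "preserve" string-list input of pub.flow:clearPipeline.
    import com.wm.data.*;
    import com.wm.app.b2b.server.Service;
    import com.wm.app.b2b.server.ServiceException;

    public static final void clearExample(IData pipeline) throws ServiceException {
        try {
            IDataCursor pc = pipeline.getCursor();
            // Everything except these two fields is removed from the pipeline.
            IDataUtil.put(pc, "preserve", new String[] { "orderId", "customerId" });
            pc.destroy();
            Service.doInvoke("pub.flow", "clearPipeline", pipeline);
        } catch (Exception e) {
            throw new ServiceException(e);
        }
    }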

Lock for Edit can be found by right-clicking an editable object; it is hierarchical (all sub-elements become locked too)

Designer Compare Tool: the Designer compare tool can compare and merge:

  • packages and elements on the same server or on different servers
  • To compare packages and folders, the Integration Server on which they are located must have the pub.assets:getChecksums service in the WmPublic package.

for example:

  • compare packages, folders, flow services and Integration Server document types
  • merge differences from one element to another
  • identify the differences between the development, staging and production versions of a package or element

change list panel annotations:

  • [Changed] an item is present in both packages or elements being compared but has changed
  • [Added] an item is present only in the first package or element being compared and is not present in the second package or element
  • [Removed] an item is present only in the second package or element being compared and is not present in the first package or element.
  • [Repositioned (x to y)] an item has position x in the second element being compared and position y in the other element

Merge differences: use the > and < icons in the compare editor to merge differences from one flow or document type to another

  • icon < : copy to left
  • icon > : remove from right

How to validate documents:

  • Using the built-in validation service pub.schema:validate (see the sketch after this list)
  • custom-coded validation services
  • the validate input/output option on the Input/Output tab (this is not recommended because it is harder to catch the error and process it)
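
A rough Java-service sketch of calling pub.schema:validate, added here for illustration; the input names (object, conformsTo) and outputs (isValid, errors) follow the usual WmPublic signature, and the document-type namespace below is hypothetical, so verify both in Designer.

    // Sketch: validate a document against an IS document type with pub.schema:validate.
    // Input names (object, conformsTo) and output (isValid) assumed from the standard
    // WmPublic signature; the acme.docs:OrderRequest namespace is made up.
    import com.wm.data.*;
    import com.wm.app.b2b.server.Service;
    import com.wm.app.b2b.server.ServiceException;

    public static final void validateOrder(IData pipeline) throws ServiceException {
        try {
            IDataCursor pc = pipeline.getCursor();
            IData order = IDataUtil.getIData(pc, "OrderRequest");   // document to validate
            pc.destroy();

            IData input = IDataFactory.create();
            IDataCursor c = input.getCursor();
            IDataUtil.put(c, "object", order);
            IDataUtil.put(c, "conformsTo", "acme.docs:OrderRequest"); // hypothetical doc type
            c.destroy();

            IData out = Service.doInvoke("pub.schema", "validate", input);
            IDataCursor oc = out.getCursor();
            String isValid = IDataUtil.getString(oc, "isValid");      // "true" / "false"
            oc.destroy();

            if (!"true".equals(isValid)) {
                throw new ServiceException("OrderRequest failed validation");
            }
        } catch (ServiceException se) {
            throw se;
        } catch (Exception e) {
            throw new ServiceException(e);
        }
    }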

MAPPING in flow services: conditional mapping or linking. For example, if we want to map count to sum, but only when the variable Number > 2: such a condition can be set in the Properties view while mapping - click on the mapping line and define the condition in the "Copy condition" property. Conditional links are shown with a blue line.

Link Indices: when there is a list (document list or string list) in the input/output and we only want to map, say, the value at input index 0 to output index 0, this can be configured on the mapping - click on the mapping line, and a table icon appears at the top right where the mapping indices can be set.

MAP STEP

  • set value
  • drop variable from the pipeline
  • link data (copy) from one variable to another
  • transform data during copying using Transformers

TRANSFORMERS

  • execute independently
  • do not cascade (meaning the output of one transformer cannot be used as input to the next transformer within one MAP)
  • do not implicitly link pipeline variables
  • outputs are not automatically added to the pipeline
  • can be the most effective way to MAP - they do not copy and pass the whole pipeline (meaning performance can improve when using transformers in a MAP step instead of invoking services as separate flow steps; you can use a single MAP STEP with multiple transformers inside it, which increases the performance of the code)

FOR EACH MAPPING

  • allows you to loop over source and target lists in a single MAP STEP, without the need for a LOOP operator
  • the input list and output list must be of the "same type" (stringList to stringList, but not to stringTable)
  • supports all styles of mapping, transformers and nested ForEach mapping.
  • the ForEach button appears when a pipeline input list and output list are marked
  • ForEach properties:
    • Filter Input: matching condition (e.g. if the variable number equals 1, then we want to map that element)
    • Copy Mode: Append, Merge (default), Overwrite
    • Element Selection Index (if you enter index "0", then the element is only mapped when the index of the input array is "0")

MAP SERVICES

  • can be used to map document types of different formats
  • can be reused in different flow services
  • perform complex mapping of service signatures (input and output variables)
  • you can invoke map services when the same transformation is required in other flow or Java services
  • map services can also be developed by less experienced developers (aka citizen developers)

Use case for a MAP SERVICE: a map service exists as an independent element in the Integration Server namespace, allowing the map service logic to be "reused" by other services. Examples:

  • if you regularly need to map from one format to another, services that need the "same data transformation" can invoke this mapping service (MAP SERVICE)
  • if you have "a set of complex mappings" that needs to be "performed repeatedly", you can invoke this map service whenever other services require the same complex mapping.

Creating a MAP SERVICE: right-click on a folder, click New, then select "Map Service". Do: link Input -> Output like in a MAP STEP. Can't do: use flow steps; this needs to be done in dedicated flow services

inside a MAP SERVICE:

  • you can use one or more transformers
  • use ForEach for the mapping of lists

BEST PRACTICE:

  • always link or set something to your declared outputs

  • always drop unused fields as soon as they are no longer necessary for the following steps

  • declared inputs appear in all steps

  • declared outputs appear in the "Pipeline Out" of the last step in the service "only" (if you need them before that, copy/paste the variables and set their needed value before saving your work)

    Mapping of large documents: in Designer Service Development, "independent scrolling" is supported on a MAP STEP

    • allows developers to see different parts of the input and output documents
    • supports complex mapping scenarios that involve large input and/or output document structures; you can enable/disable independent scrolling on a MAP STEP: click the pipeline view (tab) and then toggle the scroll bars on and off

    Using Find to locate fields in the pipeline: to search through a complex document, right-click on the top of the document hierarchy and select Find, then enter the variable you are trying to find in the dialog

RENAME & REFACTORING: when renaming a field or other element in a "document type or service input/output", maps and flow steps "are not automatically updated" to use the new field name

Use "Refactor" option to update all references to a field name change

  • flow and Java service signatures
  • webMethods messaging trigger filters
  • flow steps: - MAP: source and target fields, variable substitution, copy condition, index, drop - STEP properties: Label, Scope, Switch, Input Array, Output Array, Timeout, Repeat Count, Repeat Interval, Exit, Failure Message. How to use:
  • right-click on the field name in a document type or in a service input/output
  • select "Refactor" -> "Rename"
  • review all objects and flow steps that will be updated; optionally unselect any changes you want to skip
  • click "Finish"
  • review the changes in the "Refactor Log" (view in Designer) "NOTE: REFACTOR CHANGES CANNOT BE UNDONE!"

Integration Monitoring:

  • built-in auditing ensures that critical services are audited across the platform via a centralized database
  • the monitoring dashboard shows end-to-end traceability of services and transactions
  • the webMethods API-first approach ensures you have a rich monitoring API for integration with your dashboards and reporting tools
  • monitoring also includes the ability to manually restart failed transactions, retrieve transaction data for offline testing, etc.
  • we also provide cloud-based monitoring to simplify on-premise infrastructure and to allow cross-product auditing, including hybrid integration

Auditing Services: auditing will be logged in the Integration Server

  • click on the service name, then go to the Properties tab, section "Audit"
  • "Enable auditing" - you can choose "Never | When top-level service only | Always"
  • "Log On" - you can choose "Error only | Error and success | Error, success and start"
  • "Include pipeline" - you can choose "Never | On errors only | Always"

Prerequisites For Service Monitoring:

  • Does not require My WebMethods Server
  • Integration Server or Microservices Runtime 10.7 with the "WmMonitor" and "WmAdmin" packages installed and enabled
  • Logging is enabled for services within Integration Server
  • Service executions are being logged to an external database and not to a file
  • External "ISCoreAudit" database components are available
  • Remote Server alias has been created

Monitoring Services:

  • edit service properties and set "Audit"
  • "Enable auditing" to "Always"
  • "Log On" to "Error, success and start"
  • "Include pipeline" to "Always"
  • Open Admin console UI "try new Administration"
  • click on tab "Monitoring" and you will see the services monitor

Secure Connectivity

  • Port Settings: when we configure a port, we can choose which "Package Name" is assigned to that port. If we choose "WmRoot", all resources ("all packages") of Integration Server will be accessible through that port; if we specify another "Package Name", e.g. the package "TelkomNBTest", then this port can only be used to access resources from the package "TelkomNBTest". Either way, you need to set the access mode to "Allow" before accessing resources from the package "TelkomNBTest"; otherwise access will be denied.

The above statement turned out not to be true: when I set the package name to "TelkomNBTest", I could still access resources from other packages. If we want to deny specific resources, we can configure that under "Security > Ports > Edit Access Mode". In "Add Folders and Services to Deny List" we can specify which folders or services are denied on this port. For example, if you want to deny access to the service "security.connection.services:inspectLineItems", add that service path to the deny list. You can also deny resources by adding a folder name; if we specify the folder name "security", then all sub-resources inside the folder "security" will be denied. Using a folder name acts hierarchically, meaning all sub-resources below that folder are denied.

SECURITY FLOW:

Port -> HTTP -> JWT -> simpleOrderRequest -> Basic -> simpleOrderRequest Port -> FTP -> Basic

Basic XML Processing:

  • Inbound Converting XML Data OrderRequest (XML) -> pub.xml:xmlStringToXMLNode -> Node (Object) -> pub.xml:xmlNodeToDocument -> OrderRequest (IData)
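
For illustration (not part of the original notes), a Java-service sketch of that inbound chain; the xmldata/node/document parameter names follow the usual pub.xml signatures and the OrderRequest document-type namespace is hypothetical, so verify both against your installation.

    // Sketch of the inbound XML chain: xmlStringToXMLNode -> xmlNodeToDocument.
    import com.wm.data.*;
    import com.wm.app.b2b.server.Service;
    import com.wm.app.b2b.server.ServiceException;

    public static final void xmlToDocument(IData pipeline) throws ServiceException {
        try {
            IDataCursor pc = pipeline.getCursor();
            String xml = IDataUtil.getString(pc, "orderXml");          // raw XML string

            // Step 1: parse the XML string into a node object.
            IData in1 = IDataFactory.create();
            IDataCursor c1 = in1.getCursor();
            IDataUtil.put(c1, "xmldata", xml);
            c1.destroy();
            IData out1 = Service.doInvoke("pub.xml", "xmlStringToXMLNode", in1);
            IDataCursor o1 = out1.getCursor();
            Object node = IDataUtil.get(o1, "node");
            o1.destroy();

            // Step 2: convert the node into an IData document.
            IData in2 = IDataFactory.create();
            IDataCursor c2 = in2.getCursor();
            IDataUtil.put(c2, "node", node);
            IDataUtil.put(c2, "documentTypeName", "acme.docs:OrderRequest"); // hypothetical
            c2.destroy();
            IData out2 = Service.doInvoke("pub.xml", "xmlNodeToDocument", in2);
            IDataCursor o2 = out2.getCursor();
            IData orderRequest = IDataUtil.getIData(o2, "document");
            o2.destroy();

            IDataUtil.put(pc, "OrderRequest", orderRequest);
            pc.destroy();
        } catch (Exception e) {
            throw new ServiceException(e);
        }
    }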

Debugging Built-in Services in the WmPublic Package:

  • pub.flow:debugLog : writes a message to the server log (sketch below)
  • pub.flow:tracePipeline : writes the names and values of all fields in the pipeline to the server log
  • both services have a "level" input parameter:
    • the debug level at which to display the message(s); the default is "fatal"
    • if the Integration Server logging level for the "0090 pub Flow service facility" is set to the same level as or higher than this parameter, then:
      • the message(s) appear in the server log
      • otherwise, the message(s) do not appear in the server log
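
A minimal Java-service sketch of calling pub.flow:debugLog, added for illustration; the message/function/level inputs are the standard ones, but the values used here are made up.

    // Sketch: write a message to the server log with pub.flow:debugLog.
    // The level must be at or below the configured level of the "0090 pub Flow"
    // logging facility for the message to show up.
    import com.wm.data.*;
    import com.wm.app.b2b.server.Service;
    import com.wm.app.b2b.server.ServiceException;

    public static final void logOrderReceived(IData pipeline) throws ServiceException {
        try {
            IData input = IDataFactory.create();
            IDataCursor c = input.getCursor();
            IDataUtil.put(c, "message", "Order received and accepted");
            IDataUtil.put(c, "function", "OrderFlow");  // free-text prefix shown in the log
            IDataUtil.put(c, "level", "Info");
            c.destroy();
            Service.doInvoke("pub.flow", "debugLog", input);
        } catch (Exception e) {
            throw new ServiceException(e);
        }
    }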

Debugging on Integration Server:

  • pub.flow:debugLog
  • pub.flow:savePipeline
  • pub.flow:restorePipeline
  • pub.flow:savePipelineToFile
  • pub.flow:restorePipelineFromFile
  • pub.flow:tracePipeline

Debugging using Service properties:

  • click on the specific service and look at its Properties
  • in the Run time section
  • set "Pipeline debug" to Save
  • then invoke the service with the selected input
  • after that, change "Pipeline debug" to Restore (Override)
  • then run Debug on this service; the input will appear automatically
  • after debugging, set "Pipeline debug" back to "None"

Debugging Using Java Object or XML File Input:

  • right click on the service and select debug as "Run Configuration"
  • then go to tab input, browse the file

FLAT FILE:

  • a flat file contains hierarchical data in a record-based format and is often used in EDI (Electronic Data Interchange)
  • flat file metadata is separate from its data, unlike XML
  • a flat file contains:
    • Records: contain fields and composites
    • Composites: multiple related fields (address)
    • Fields: atomic data (zip code)
  • flat file types in Integration Server
    • record identifier present in the flat file structure
    • no record identifier present; a default record must be set in Properties.

More on Flat File Format:

  • flat files without a record ID (e.g. CSV files); example flat file:

    Acme Hammer Company,123 Wilson St.,Sacramento+CA+95833
    Johnson Supply Co.,456 Nadia Ave.,Seattle+WA+98188

  • delimited: the example data above has
    • record delimiter = newline
    • field delimiter = ,
    • subfield delimiter = +
  • fixed length
    • suppose you create a flat file field for the zip code with a fixed length from position 0 to 19

Flat File Architecture:

  • A Flat File Dictionary is a reusable repository of record definitions. A flat file dictionary is simply a repository for elements that you reference from flat file schemas. This allows you to create record definitions in a dictionary that can be used across multiple flat file schemas.
  • A Flat File Schema contains the structural information of a flat file. A flat file schema can contain either record definitions or references to record definitions that are stored in the namespace in a flat file dictionary. It acts as a blueprint for parsing and creating flat files and includes the record format, delimiters and the model against which inbound flat files are optionally validated.
  • pub.flatFile:convertToValues converts a flat file structure to an IS document type. This service uses a flat file schema to parse a flat file inbound to Integration Server.
  • pub.flatFile:convertToString converts the IS document structure back to the record-definition flat file structure. This service uses a flat file schema to create a flat file outbound from Integration Server.

How to apply Dictionaries:

  • Record named ADDRESS containing three fields:
    • RecordID
    • Company Name
    • Street
  • Composite named CityStateZip with three subfields:
    • City
    • State
    • Zip
  • You can add a Field or reference a Field (meaning you can reuse another field definition for this field)

Best Practice: always use a dictionary, so that if there are any changes to your flat file you only change the dictionary and this is picked up by your schema; the schema will be updated, and you can then re-create the document type from the schema.

Flat File Schema - How to apply Extractors: Extractors Type:

  • Nth Field (like an index: if the record is "Jakarta, 102, 1218", then position "0" means "Jakarta" and position "1" means "102")
    • Nth Field: used for delimited records
    • the record identifier is position 0
    • if there is no record identifier, then the first field is position 0
  • Fixed Position (it's like slicing a string in Python [0:10])
    • so we need to set the "Start" and "End" index

Flat File Schema - How to apply a Schema Definition:

  • Record parser type can be:
    • Delimiter
    • Fixed length
    • Variable length
    • EDI document type
  • suppose we create a schema using the parser type "delimiter"
  • Record/character: "newline"
  • Field or composite: ","
  • Subfield: "+"
  • Record identifier/start at position: "0"

Best practice: reference record definitions from a Flat File Dictionary

Create IS Document Type for Flat File Schema:

  • once you have created the schema, open that schema in the editor and select the "Flat File Structure" tab.
  • click the Create Document Type icon in the main menu, "top right"
  • this creates an IS document type to store records read in from flat files that use this schema:
    • Name: DT
    • Located in Flat File Schema's folder

Summary of FLAT FILE:

  • Creating Flat File Dictionary
  • Creating Flat File Schema
  • Creating IS Document Type from that Flat File Schema

Note: if there are any changes to the Flat File Dictionary, you need to re-create the IS Document Type to pick up the dictionary changes

Test Flat File Schemas and Set a Default Record:

  • To test the flat file schema in Designer, right-click on the schema and select Run As > Flat File Schema. If the schema validates, it is a valid schema; if there are any errors, they will appear in the document list - you can expand the document in your Results tab and check what went wrong.
  • if the flat file does NOT have a record identifier, then set the default record. You must specify
    • the dictionary
    • the record (every record (row) in the file should follow this record format)

Inbound Converting of Flat File Data:

Address Data (CSV file example) -> pub.flatFile:convertToValues -> AddressData (IData (IS Document Type))

  • once you convert inbound data using pub.flatFile:convertToValues, you can also validate against the constraints defined in the flat file schema by setting validate to true, and also pass the namespace name of the flat file schema (see the sketch below)
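
A hedged Java-service sketch of that inbound conversion, for illustration; the ffData/ffSchema/validate inputs and ffValues/isValid outputs reflect the commonly documented signature of pub.flatFile:convertToValues, and the schema namespace is made up.

    // Sketch: parse inbound flat file data (e.g. the CSV-style address records above)
    // with pub.flatFile:convertToValues and validate it against the flat file schema.
    import com.wm.data.*;
    import com.wm.app.b2b.server.Service;
    import com.wm.app.b2b.server.ServiceException;

    public static final void parseAddressFile(IData pipeline) throws ServiceException {
        try {
            IDataCursor pc = pipeline.getCursor();
            String ffData = IDataUtil.getString(pc, "addressFile");   // flat file content

            IData input = IDataFactory.create();
            IDataCursor c = input.getCursor();
            IDataUtil.put(c, "ffData", ffData);
            IDataUtil.put(c, "ffSchema", "acme.flatfiles:AddressSchema"); // hypothetical schema
            IDataUtil.put(c, "validate", "true");   // enforce the constraints in the schema
            c.destroy();

            IData out = Service.doInvoke("pub.flatFile", "convertToValues", input);
            IDataCursor oc = out.getCursor();
            IData addressData = IDataUtil.getIData(oc, "ffValues");   // parsed IS document
            String isValid = IDataUtil.getString(oc, "isValid");
            oc.destroy();

            IDataUtil.put(pc, "AddressData", addressData);
            IDataUtil.put(pc, "isValid", isValid);
            pc.destroy();
        } catch (Exception e) {
            throw new ServiceException(e);
        }
    }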

REST API Service in Integration Server

REST Operation sample:

  • when we call a REST service on Integration Server with Content-Type: application/xml, the REST response will also be returned with Content-Type: application/xml
  • when we call a REST service on Integration Server with Content-Type: application/json, the REST response will also be returned with Content-Type: application/json

Integration Server features to support REST. REST Providers:

  • support for HTTP Methods:
    • GET: retrieve a resource representation/information
    • PUT: update all fields in one resource
    • POST: create one or more fields in one resource
    • PATCH: partially update one or more fields in one resource
    • DELETE: delete one or more resource(s)
    • HEAD: get the HTTP information about the specific resource
  • support for REST-style URLs
    • routing of URL and HTTP method to an IS service
    • service inputs include values from the URL path and HTTP method
  • support for Content-Type negotiation

REST Consumers: using pub.client:http already provides all necessary functionality
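
For illustration, a hedged Java-service sketch of a REST consumer call with pub.client:http; the input/output field names reflect the usual WmPublic signature, and the URL and response handling below are assumptions to verify on your IS version.

    // Sketch: consume a REST endpoint with pub.client:http and read the response body.
    import com.wm.data.*;
    import com.wm.app.b2b.server.Service;
    import com.wm.app.b2b.server.ServiceException;
    import java.nio.charset.StandardCharsets;

    public static final void getBook(IData pipeline) throws ServiceException {
        try {
            IData input = IDataFactory.create();
            IDataCursor c = input.getCursor();
            IDataUtil.put(c, "url", "https://example.com/api/book/42");  // hypothetical URL
            IDataUtil.put(c, "method", "get");

            IData headers = IDataFactory.create();
            IDataCursor hc = headers.getCursor();
            IDataUtil.put(hc, "Accept", "application/json");
            hc.destroy();
            IDataUtil.put(c, "headers", headers);
            c.destroy();

            IData out = Service.doInvoke("pub.client", "http", input);
            IDataCursor oc = out.getCursor();
            IData body = IDataUtil.getIData(oc, "body");
            oc.destroy();

            IDataCursor bc = body.getCursor();
            byte[] bytes = (byte[]) IDataUtil.get(bc, "bytes");   // response payload
            bc.destroy();

            IDataCursor pc = pipeline.getCursor();
            IDataUtil.put(pc, "responseJson",
                    bytes == null ? "" : new String(bytes, StandardCharsets.UTF_8));
            pc.destroy();
        } catch (Exception e) {
            throw new ServiceException(e);
        }
    }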

REST Architecture (Code First):

  • a programmed flow service in Integration Server
  • an API description (Swagger file) can be generated from an API resource
  • the Swagger file can then be used by an external API endpoint to access our flow service

REST Architecture (Contract First):

  • the external API endpoint has a Swagger file
  • the Swagger file can be used in Designer to generate flow services on Integration Server
  • the integration can now access the external API endpoint

How to work with a REST resource: when we have a REST resource with "URL Template" -> "book/{id}" and "Supported Methods" -> "GET", where {id} is a dynamic variable, Integration Server will automatically grab the "id" from the request and provide it as input to the string "id" defined on the specific "Mapped Service"

for the methods POST, PUT, and PATCH you normally need a "Body" for the data.

define the body as a Document Reference in the Input of the Mapped Service.

if the input is of type Document or Document Reference, Designer will automatically assign the input as the HTTP "BODY"

Code First REST Services : You do not have a swagger file that describes your REST Resource and Methods.

  1. Create one or more services whose inputs contain zero or more strings representing values you wish to pass dynamically to the REST method in its URL. For PUT, POST and PATCH, create a document reference for the body.
  2. Create a REST V2 Resource which specifies the resource name and maps:
    • the REST URL to the supported methods and the appropriate service you just created
    • the REST URL's dynamic parameters must correspond to the input strings of your mapped service.
  3. Optionally create a REST API Descriptor that points at your REST V2 Resource; the descriptor will display your REST directive, supported methods, supported URLs, input/output parameters, REST definition and Swagger file.

Contract First REST Services: First, create a new REST API Descriptor specifying:

  • REST API Descriptor Name
  • Swagger file. You will see a generated folder _ containing:
    • Your REST V2 Resource
    • A generated flow service for each unique combination of Resource URL and method
    • you need to code each generated service

REST V2 Implementation Service: REST requires the body of a POST, PATCH or PUT method to be defined as either "JSON" or "XML".

For the HTTP Content-Type XML you need to assign a "node" Object for the body content in the implementation service, which means that if you have XML input, the service input must have a variable of type "Object" named "node", for example.

For the HTTP Content-Type JSON you need to assign a document for the body content in the implementation service.

webMethods Messaging Features:

  • Publish/Subscribe pattern
  • Durable and Exactly-Once messaging support
  • Designer tools for rapid development
  • message monitoring
  • full IData (native Documents) support
  • uses the Universal Messaging provider
  • active clustering support

Architecture:

  • Integration Server 1 uses the built-in service pub.publish:publish to publish the payload OrderRequest to the Universal Messaging cluster
  • Universal Messaging accepts that payload
  • Integration Server 2 has a webMethods messaging trigger that subscribes to the payload OrderRequest, and after consuming the payload the trigger invokes a specific service

Publish and Subscribe Messaging Pattern:

  • pub.publish:publish sends a Document to all connected clients
  • pub.publish:deliver forwards a Document to one specific client only
  • messages are stored in Persistent/Guaranteed mode until the trigger acknowledges them
  • message copies (encoded as IData (JMS) or protocol buffer binary)
  • the trigger sends an acknowledgement back to indicate that the message has been read

Publish and Wait Messaging Pattern:

  • pub.publish:publishAndWait waits asynchronously or synchronously for a reply
  • pub.publish:deliverAndWait forwards/receives a Document to/from one specific client only
  • the first client sends back a reply message

Publishable Document Types: If Publishable is set to True, then the following will occur upon saving:

  • the connection alias name, connection alias type and provider definition will be populated.
  • provider definition = name of the UM channel
  • encoding type = (Google) protocol buffer or IData
  • storage type (default = Guaranteed)
  • the _env document type will be added
  • an envelope will be added to the document's icon (visible in the Package Navigator)

Synchronisation of Publishable Document Types:

  • when you create the document type, it automatically sticks to Universal Messaging; however, when you change the document type you need to synchronize it. Suppose you make a change to a publishable document type while Integration Server was stopped, or you change its properties - then you need to sync the document type to Universal Messaging.
  • to sync, right-click on the document type in Designer and select Sync Document Type
  • synchronizing will create/update a channel definition in the UM Realm

How to create a Publish Service:

  • invoke pub.publish:publish in our service
  • set the parameter documentTypeName to the full namespace name of our document type, for example acmeSupport.simplePubSub:OrderCanonical (namespace of the document type)
  • map the appropriate document to the document parameter (map the document input to "document" on the invoked pub.publish:publish service)
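
A minimal Java-service sketch of the same publish call, for illustration; documentTypeName and document are the standard pub.publish:publish inputs, the OrderCanonical namespace matches the example above, and the payload field names are made up.

    // Sketch: publish a document to Universal Messaging via pub.publish:publish.
    import com.wm.data.*;
    import com.wm.app.b2b.server.Service;
    import com.wm.app.b2b.server.ServiceException;

    public static final void publishOrder(IData pipeline) throws ServiceException {
        try {
            // Build an instance of the publishable document type.
            IData order = IDataFactory.create();
            IDataCursor dc = order.getCursor();
            IDataUtil.put(dc, "orderId", "ORD-1001");   // hypothetical fields
            IDataUtil.put(dc, "quantity", "3");
            dc.destroy();

            IData input = IDataFactory.create();
            IDataCursor c = input.getCursor();
            IDataUtil.put(c, "documentTypeName", "acmeSupport.simplePubSub:OrderCanonical");
            IDataUtil.put(c, "document", order);
            c.destroy();

            Service.doInvoke("pub.publish", "publish", input);
        } catch (Exception e) {
            throw new ServiceException(e);
        }
    }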

How to Subscribe to Messages:

  • Subscriptions are created via a webMethods messaging trigger
  • webMethods Messaging Trigger define:
    • what document type to listen for
    • what handler service to invoke when the Document arrives

Create and use webMethods Messaging Trigger:

  • the handler service is invoked when the conditions are true: document(s) arrive and the filters evaluate to true
  • filtering can be used for individual message routing
  • the connection alias needs to be the same as the document type's

Create and Use a Handler Service:

  • a handler service is a normal flow service
  • a handler service can be assigned to a webMethods messaging trigger and will be invoked by the trigger when matching document(s) arrive.
  • the webMethods messaging trigger will place the document in the input pipeline; use the fully-qualified namespace name for the variable!

Create webMethods Messaging Trigger - Joins: a single webMethods messaging trigger may subscribe to multiple document types. The webMethods messaging trigger will only invoke the service when the join condition is met:

  • All (AND): all of the subscribed document types have been received
  • Any (OR): any of the subscribed document types has been received
  • Only One (XOR): only one of the subscribed document types has been received

Join - Watch Out for Activation IDs:

  • joins work only if the activation IDs are the same
  • set the activation ID in the _env document
  • the publishing service must set the activation IDs

What are activation IDs? Suppose there are three documents and your trigger subscribes to all three, and you have set the join condition to "All"; how will it be determined that all three documents belong together? That is where the activation ID comes in. Suppose the documents are a, b and c: because all three documents are published within a service called test service, Integration Server assigns all three documents the same activation ID. An activation ID is a unique identifier assigned to a published document. Subscribing triggers use the activation ID to determine whether a document satisfies a join condition. Integration Server stores the activation ID in the "activation" field of the document envelope. By default, Integration Server assigns the same activation ID to each document published within a single top-level service. You can override the default behaviour by assigning an activation ID to a document manually (see the sketch below).
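
A rough sketch (for illustration only) of manually assigning an activation ID before publishing, so a join condition can correlate documents; the _env/activation field names follow the envelope description above, while the document type and payload fields are hypothetical.

    // Sketch: override the default activation ID by setting _env/activation on the
    // document before publishing it with pub.publish:publish.
    import com.wm.data.*;
    import com.wm.app.b2b.server.Service;
    import com.wm.app.b2b.server.ServiceException;
    import java.util.UUID;

    public static final void publishWithActivation(IData pipeline) throws ServiceException {
        try {
            String activationId = UUID.randomUUID().toString();   // shared by all joined docs

            IData doc = IDataFactory.create();
            IDataCursor dc = doc.getCursor();
            IDataUtil.put(dc, "orderId", "ORD-1001");              // hypothetical payload

            IData env = IDataFactory.create();                     // the _env envelope
            IDataCursor ec = env.getCursor();
            IDataUtil.put(ec, "activation", activationId);
            ec.destroy();
            IDataUtil.put(dc, "_env", env);
            dc.destroy();

            IData input = IDataFactory.create();
            IDataCursor c = input.getCursor();
            IDataUtil.put(c, "documentTypeName", "acmeSupport.simplePubSub:OrderCanonical");
            IDataUtil.put(c, "document", doc);
            c.destroy();

            Service.doInvoke("pub.publish", "publish", input);
            // Publish documents b and c with the same activationId so an "All (AND)" join
            // on the subscribing trigger treats them as belonging together.
        } catch (Exception e) {
            throw new ServiceException(e);
        }
    }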

Testing Without Publisher Client:

You can also easily write a simple test Flow service to publish a Document (IData) using pub.publish:publish

  • Right-click on the publishable document type, then Run As > Publishable Document - a testing capability built in to Designer

Workflow for Publish and Subscribe:

  • set the document type to Publishable; it will automatically create a channel on Universal Messaging
  • create the handler service
  • create a trigger and set the document type and handler service
  • the trigger will listen for the document type defined on the trigger; if a message is received and meets all conditions in the trigger, the trigger will invoke the handler service defined on it

webMethods Platform Error Handling:

What Failover Means: fail-over is about guaranteeing a certain quality of service and eliminating single points of failure

quality of service:

  • Uninterrupted system up-time and/or
  • Transactional integrity (guaranteed, once-only execution of all transactions)
  • Very application specific

single points of failure: any component in the integration whose failure will prevent transactions from executing, e.g. firewall, 3rd-party application, LAN, etc.

Quality of Service Expectations & Objectives:

  • "First Things First": Establish acceptance criteria
    • set quality of service expectations criteria upfront
    • include in your calculation the impact on performance
  • set a realistic target to compare against the final mean-time-between-failures number. Example: 99.9999% uptime means that you are down for only 30 unscheduled seconds per year; if it takes 40 seconds to switch from a failed node in a cluster, you can't achieve 99.9999%
  • identify the different kinds of "Quality of Service"
    • transaction ordering
    • once only execution
    • dropped transactions per day
    • down-time

webMethods Platform Error Handling:

    • Service auditing/retry
    • publishing retry
    • exception handling
    • active clustering
    • storage
    • guaranteed delivery
    • exactly once (duplicate detection)
    • handler service retry
    • transactions
    • adapter error handling

Integration Server Failure: two approaches for handling service failures

  • develop stateless ("idempotent") services (can be rerun without problems)
  • record and check a transaction status during execution. Example: for an insert service, we must check whether the record is new or already exists in the database; if the status is isExist there is no need to insert it, if the status is isNew we insert it.

Exactly-Once Processing Overview:

  • pub.publish:publish: Publishing (automatically queues guaranteed documents to outbound document store - no code necessary)
  • Exactly-Once (duplicate detection) (status: NEW, DUPLICATE, IN_DOUBT)
    • Redelivery count
      • check redelivery count
        • 0 = New
        • <> 0 = DUPLICATE or IN_DOUBT
        • check document history and use document resolver service to solve status precisely
    • document history
      • document history: WM_IDR_MSG_HOST
        • Trigger ID
        • UUID: unique ID for Doc
        • Processing State: Either "P" for Processing or "C" for Completed
        • Time: the time the trigger service began
    • document resolver service: a document resolver service is a service created by a user to determine the document status.
      • it uses pub.publish:publishDocumentResolverSpec as its service signature; Integration Server passes the document resolver service a value for each of the variables declared in the input signature. The service should catch and handle any exception that occurs, including IS runtime exceptions, and return a status of NEW, IN_DOUBT or DUPLICATE.
      • Integration Server uses the status to determine whether or not to process the document.
      • if the status returned by the document resolver is DUPLICATE or IN_DOUBT, the document is acknowledged; if it is DUPLICATE, Integration Server discards the document; if it is IN_DOUBT, it logs the document
      • if the status is NEW and the document was not processed due to some network issue or server failure, it will be acknowledged and, once the server is up again, it is processed

Exactly-Once Parameter:

  • on JMS Trigger Properties
  • navigate to Exactly Once section
    • Detect duplicates to True
    • Use history to True
    • History time to live to 5 minutes
    • Document resolver service (this is the final method to determine the status of a document; the status can be NEW, DUPLICATE or IN_DOUBT)

Build a Document Resolver Service:

  • the resolver service must use the WmPublic package's pub.publish:publishDocumentResolverSpec (a rough skeleton follows this list)
  • the resolver service must set status to: NEW, DUPLICATE or IN_DOUBT
  • the resolver service may set message = "reason why the status is DUPLICATE or IN_DOUBT"; this message will be displayed in the logs
  • use the log message to find Duplicate or In Doubt documents (in the Administrator UI server log)
  • to see the statistics on duplicate or in-doubt documents, navigate to Settings > Resources > Exactly Once Statistics
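
A rough Java skeleton of a document resolver service, for illustration; the real input fields come from the resolver spec mentioned above (the "uuid" input used here and the alreadyProcessed() helper are purely hypothetical stand-ins), while the status/message outputs match the description in these notes.

    // Skeleton of a document resolver service: decide NEW / DUPLICATE / IN_DOUBT.
    import com.wm.data.*;
    import com.wm.app.b2b.server.ServiceException;

    public static final void resolveOrderDocument(IData pipeline) throws ServiceException {
        IDataCursor pc = pipeline.getCursor();
        String status;
        String message = null;
        try {
            // Assumed input name; the actual field is defined by the resolver spec.
            String uuid = IDataUtil.getString(pc, "uuid");

            if (alreadyProcessed(uuid)) {          // hypothetical lookup, e.g. a DB table
                status = "DUPLICATE";
                message = "UUID " + uuid + " was already completed";
            } else {
                status = "NEW";
            }
        } catch (Exception e) {
            // Per the notes above: catch everything and fall back to IN_DOUBT.
            status = "IN_DOUBT";
            message = "Resolver could not determine the status: " + e.getMessage();
        }
        IDataUtil.put(pc, "status", status);
        if (message != null) IDataUtil.put(pc, "message", message);
        pc.destroy();
    }

    // Hypothetical helper: consult your own processing-state store.
    private static boolean alreadyProcessed(String uuid) {
        return false;
    }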

Handler Service for webMethods Messaging Trigger - Error Handling:

  • in the trigger properties you can configure settings like auditing and suspend-on-error
  • normally, log the error using auditing (set it in the trigger properties)

Handler Service for webMethods Messaging Trigger - Transient Error Handling: the trigger service must throw an ISRuntimeException

  • if the trigger service is a flow service, use the WmPublic package's pub.flow:throwExceptionForRetry
  • if the trigger service is a Java service, use com.wm.app.b2b.server.ISRuntimeException()
  • transient error handling in the properties:
    • Retry until:
      • max attempts reached
      • successful (WARNING: possible infinite loop if the error is not transient!)
    • Max retry attempts property: enter the maximum number of times Integration Server should attempt to re-execute the trigger service.
    • Retry interval: in ms, seconds, minutes, hours, days, weeks, months or years
    • On retry failure:
      • Throw exception (causes a fatal error out of the IS runtime exception)
      • Suspend and retry later: this adds one more property, Resource monitoring service, whose value is, for example, the namespace name myPack.utils:isDatabaseIsUp. When the trigger is suspended, a scheduled task calls the resource monitoring service every minute (the interval can be controlled by the IS property watt.server.trigger.monitoringInterval). The monitoring service (e.g. myPack.utils:isDatabaseIsUp) must implement the WmPublic pub.trigger:resourceMonitoringSpec. If the resource monitoring service returns true, the trigger is re-enabled; if it returns false, the monitoring service is re-invoked after the set interval (by default 60 seconds, or you can change that value with the IS property watt.server.trigger.monitoringInterval). A Java sketch of the transient-error side follows this list.
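
For illustration, a minimal Java handler-service sketch of transient-error handling: throwing ISRuntimeException tells Integration Server the failure is retryable, so the trigger retry settings above apply. The checkDatabaseUp() helper is a hypothetical stand-in for a real availability check.

    // Sketch: signal a transient failure from a Java trigger handler service.
    import com.wm.data.IData;
    import com.wm.app.b2b.server.ISRuntimeException;
    import com.wm.app.b2b.server.ServiceException;

    public static final void handleOrder(IData pipeline)
            throws ServiceException, ISRuntimeException {
        if (!checkDatabaseUp()) {                 // hypothetical resource check
            // Transient condition: ask IS to retry according to the trigger properties.
            throw new ISRuntimeException();
        }
        // ... normal processing of the subscribed document goes here ...
    }

    // Hypothetical helper standing in for a real availability check.
    private static boolean checkDatabaseUp() {
        return true;
    }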

Handler Service - Service Failure:

  • the handler service fails if the service throws:
    • an exception OTHER than ISRuntimeException
    • an ISRuntimeException, but the trigger's "Max attempts" has run out of retries
  • if the handler service fails, the IS will reject the document:
    • if the document is guaranteed, the IS will return an ACK to the messaging provider
    • if audit logging is enabled for the trigger service, the error will be logged and available for view/resubmit (enable audit logging in Properties > "Audit": Enable auditing set to Always, Log on set to Error only, and Include pipeline set to On errors only)

WmMonitor Package: used to get data from logged documents and services. Document services include:

  • getDocument: retrieves the document with the given documentId
  • resubmit: publishes the document again. Service services include:
  • getPipeline: retrieves the input pipeline from the audited service
  • resubmit: re-executes the service

Summarize Processing Rules: Exactly-Once processing will determine that each document is:

  • NEW: will call the processing service
  • DUPLICATE: will discard the document
  • IN_DOUBT: will log document
  • all IN_DOUBT documents need to be processed by your custom "cleanUpDocs" service
  • you need to set the trigger's processing service to log on error
  • all failed services need to be processed by your cleanUpService service

DON'T just RESUBMIT the document! WHY NOT??

Universal Messaging Failover Handling:

  • Universal Messaging fails, potentially interrupting:
    • publication & subscription
    • volatile documents are lost
  • when Universal Messaging is restored:
    • clients will automatically reconnect
    • guaranteed documents will be sent
    • once again, clients may have already processed the documents
  • fail-over via load balancing
    • active-active configuration
    • does not affect development

Universal Messaging Clustering:

  • Nirvana Cluster (Active/Active): the cluster has three nodes - realm 1 in location 1 as the master - realm 2 in location 2 as a slave - realm 3 in location 3 as a slave

    Recommended solution for high availability and redundancy. State is replicated across all active realms

  • Cluster with Sites (Active/Active): the cluster has two nodes - realm 1 in site 1 as the master - realm 2 in site 2 as a slave

    Provides most of the benefits of the UM cluster but with less hardware and occasional manual intervention

  • Shared Storage (Active/Passive): the cluster has two nodes accessing the same shared HA disk - realm 1 in location 1 is active - realm 2 in location 2 is on standby

    As an alternative to a native UM cluster, shared storage configurations can be deployed to provide another failover option. This allows storage to be shared between multiple realms, of which one is active at any one time

Adapter Failure Handling:

  • Adapter fails, potentially interrupting:
    • Target resource interaction
  • will generate an exception in the adapter service
    • Service should be coded to handle two exception situations:
      • Adapter fails before target resource processed data
      • Adapter fails after target resource processed data but before resource reported success
  • when adapter is restored:
    • nothing will automatically execute
    • service can be resubmitted via Monitor (if audited)
  • fail-over via service exception handling and auditing

note: very API dependent (not all resources are XA-compliant)

Service Cancel/Kill: if you suspect that a flow service or Java service has become unresponsive because it is waiting for an external resource or is in an infinite loop, you can stop the execution of one or more threads. You have two options; Integration Server requires you to cancel a thread before it allows you to kill it, and once you successfully cancel a thread you no longer have the option of killing it. You cannot cancel or kill a thread when a JDBC connection is waiting on a response from the database. Threads that can be canceled are marked with a yellow check mark icon in the Cancel column; threads that can be killed are marked with a delete icon in the Kill column. Canceling or killing service threads is controlled by the watt.server.threadKill.enabled parameter, which must be set to true

  • Statistics > System Threads
  • Kill thread:
    • forced interrupt
    • used if Cancel does not cause the thread to exit

Enabled globally by the extended setting watt.server.threadKill.enabled; this parameter must be set to true

  • Cancel thread: for a graceful interrupt

Demo How to Kill Thread:

  • create flow service with REPEAT step
  • configure with Count set to 5
  • and Repeat Interval to 20
  • and Repeat on set to SUCCESS

webMethods Adapter Runtime Architecture:

  • the WmART package provides logging, transaction management and error handling for adapter connections, services, notifications and listeners. It is automatically installed when you install Integration Server. You should not need to manually reload the WmART package.
  • each webMethods adapter is provided as a separate package that has a dependency on the WmART package.
  • WmJDBCAdapter has a dependency on WmART

webmethods Adapter for JDBC:

  • is one of the many webMethods adapter types
  • enables your Integration Server solution to exchange data with relational databases through the use of a JDBC driver.
  • the adapter provides seamless and real-time communication with the database without requiring changes to your existing application infrastructure
  • it can be used to retrieve data from, and insert and update data in, relational databases.
  • for details on supported databases, refer to the webMethods Adapter System Requirements document.
  • we will focus on the webMethods Adapter for JDBC in this training, since it is very popular and used in most integration solutions

webMethods Adapter for JDBC - Invocation Model (Outbound Connectivity):

  • invoke an adapter service using an adapter template (e.g. SelectSQL)
  • the JDBC adapter connection executes the SQL
  • results are returned into the service pipeline

webMethods Adapter for JDBC - Notification Model (Inbound Connectivity):

  • an adapter notification monitors a specified database table for changes such as an insert, update or delete operation.
  • insert notifications and update notifications can use the exactly-once notification feature; this feature ensures that notification data is not duplicated.
  • when you enable a notification, a database trigger monitors the table and inserts data into a buffer table.
  • when Integration Server invokes the notification, it retrieves rows of data from the buffer table, publishes each row in the notification's publishable document, and then removes the row from the buffer table.
  • suppose the DB is updated by a client and the data is copied to the buffer table by the DB trigger. You can specify a polling interval to retrieve the data from the buffer table, and the adapter will automatically map it into the publishable document, publishing locally or, via a JMS provider, to topics and queues; you can set the adapter notification properties to publish locally or to a JMS provider while creating the notification. Once there are new publishable documents, the handler service is invoked to perform further processing. This is how you create a notification and monitor a database: if there are any changes in the database, a service is invoked and the data is delivered.

Configure JDBC Adapter:

  • you can configure connection pooling, so that one connection pool can be used by multiple adapter services
  • connection pools improve performance by enabling adapter services to reuse open connections instead of opening new connections.
  • Integration Server maintains connection pools in memory; you can configure the minimum pool size, maximum pool size and increment size
  • whenever an adapter service needs a connection, Integration Server provides a connection from the pool

Adapter Service Templates: once an adapter connection is configured and enabled, service templates are available in Designer to perform operations:

  • included with each Adapter
  • perform very specific operations with the target resource of the adapter
  • communicate with the resource
  • are aware of the data format that the adapter resource expects for input and output
  • there are invoke and notification templates

Creating a JDBC Adapter Service in Designer:

  • right-click | New, select Adapter Service, Confirm Adapter location and enter Element name, click Next
  • Select Adapter Type, click Next
  • Select Adapter Connection Alias
  • Select a Template to use, click Finish
  • Adapter Service: configure the template appropriately; you can provide the table name, select fields, give a condition in the WHERE clause, and more.

Adapter Notification Templates: Adapter publishes/sends a document in response to a specific database activity

  • a row insert, update or delete operation, which
  • will be noticed by a database trigger on the table
  • the adapter performs notification by periodically polling the buffer table for entries
  • if entries are found, the adapter maps the data to a publishable document, publishes/sends it through Universal Messaging and removes the entries from the buffer table
  • a subscribing webMethods messaging or JMS trigger and a handler service have to be created to subscribe to the published/sent document and process it

webMethods JDBC Adapter Connection Example:

  • suppose there is a customer table and there is a new entry into the table.
  • once the data is inserted into the table, a database trigger will copy the data from the table to the buffer table.
  • the adapter periodically pulls messages matching the criteria at the specified interval and turns them into a publishable document.
  • it publishes it through Universal Messaging and removes the entries from the buffer table
  • the publishable document can be subscribed to via a webMethods trigger, which will invoke the service that has the logic to process the publishable document.
  • you can choose the destination to which an asynchronous notification should publish the message
  • for the JMS protocol with asynchronous notifications, you must configure a JMS connection alias on Integration Server. So this is how you can use notifications on your server: if there is any database operation on the table to which you are connected, Integration Server gets notified via the notification, and you can then perform the logic in your service.

Creating a JDBC Adapter Notification in Designer:

  • right-click | New, select Adapter Notification, Confirm Adapter location and enter Element name, click Next
  • Select Adapter Type, click Next
  • Select a Template to use, click Finish
  • Select Adapter Connection Alias
  • View the new publishable IS Document Type, and click Finish.

Note: You need Trigger/DML grants on the Database !

NOTICE the adapter notification details:

  • to ensure uniqueness, the resulting resource name combines the following elements:
  • a resource prefix (WMB (buffer table), WMT (trigger) or WMS (sequence))
  • the name you typed in the base name field and a suffix based on a system timestamp

Note: you cannot edit this name!

once you create an adapter notification, it will create a notification publishable document; it will have the same name as your notification, but with a publish-document suffix at the end

Adapter Notification Table/Field Configuration:

  • Select the table
  • Select Field(s) to be published
  • once you save the notification, the publishable document will be created and an envelope will be added to the document

Adapter Notification Messaging Settings:

  • can be sent to any configured JMS provider
  • use the default webMethods connection alias or a JMS connection alias

Scheduling a JDBC Adapter Notification: Prerequisite: a JDBC adapter notification has been created in Designer. In the Administrator UI:

  • Navigate to JDBC Adapter page
  • Select Polling Notifications
  • Disable Notification (if enabled)
  • schedule the notification
  • set polling interval (seconds)
  • set other options
  • enable notification

Adapter Notification Messaging Settings:

  • once you have created a notification, in the Administrator UI you can enable and disable it, add a schedule and provide the interval at which to poll the database.
  • you can also provide settings like the maximum process time, which is specified in seconds
  • you can specify the node as well; it can be standby
  • for all notifications except basic notifications, the state of the notification can be suspended, disabled or enabled.

for all notification except "Basic", state can be:

  • Suspend: stop polling, do not delete DB Trigger & Buffer Table
  • Enabled: start polling, create DB Trigger & Buffer Table
  • Disabled: stop polling, delete DB Trigger & Buffer Table

for "Basic" Notification:

  • Suspend: stop polling
  • Enabled: start polling
  • Disabled: stop polling it will be stopping if it is suspend and disabled but it start polling and only if it is enabled
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment