My Red Hat PAM/DM (jBPM/Drools) random notes

Generating new Projects using maven archetypes

jBPM project

Manually creating a business application: in case you can't use the jBPM online service to generate the application, you can manually create the individual projects. jBPM provides Maven archetypes that can be easily used to generate the application. In fact, the jBPM online service uses these archetypes behind the scenes to generate business applications.

  • Business assets project archetype
org.kie:kie-kjar-archetype:7.46.0.Final
  • Service project archetype
org.kie:kie-service-spring-boot-archetype:7.46.0.Final
  • Data model archetype
org.apache.maven.archetypes:maven-archetype-quickstart:1.3

Example commands that generate all three types of projects:

mvn archetype:generate -B -DarchetypeGroupId=org.kie -DarchetypeArtifactId=kie-model-archetype -DarchetypeVersion={PRODUCT_VERSION_FINAL} -DgroupId=com.company -DartifactId=test-model -Dversion=1.0-SNAPSHOT -Dpackage=com.company.model

mvn archetype:generate -B -DarchetypeGroupId=org.kie -DarchetypeArtifactId=kie-kjar-archetype -DarchetypeVersion={PRODUCT_VERSION_FINAL} -DgroupId=com.company -DartifactId=test-kjar -Dversion=1.0-SNAPSHOT -Dpackage=com.company

mvn archetype:generate \
   -DarchetypeGroupId=org.kie \
   -DarchetypeArtifactId=kie-service-spring-boot-archetype \
   -DarchetypeVersion={PRODUCT_VERSION_FINAL} \
   -DappType=bpm

When generating projects from the archetypes in the same directory, you should end up with exactly the same structure as generated by the jBPM online service.

drools-project

mvn archetype:generate -B \
-DarchetypeGroupId=org.kie \
-DarchetypeArtifactId=kie-drools-archetype \
-DarchetypeVersion=7.30.0.Final \
-DgroupId=com.redhat.demos \
-DartifactId=drools-demo-project \
-Dpackage=drools.demo.project \
-Dversion=1.0-SNAPSHOT

Excellent blog post on how to create Drools projects: http://www.mastertheboss.com/jboss-jbpm/drools/drools-and-maven-example-project

Cloning from Business-Central git through http

git clone http://localhost:8080/business-central/git/<space name>/<project name>

Case Management references

KIE Service Spring Boot Archetype

Documentation here: https://github.com/kiegroup/droolsjbpm-knowledge/blob/master/kie-archetypes/kie-service-spring-boot-archetype/README.md

Business Central Project Templates

Documentation Here: https://github.com/kiegroup/droolsjbpm-knowledge/blob/master/kie-archetypes/kie-kjar-archetype/README.md

Adding/importing external Data Model (Pojo classes) into your Project in Business Central

  1. On project settings, add your external JAR (it has to be a Maven JAR with a GAV) as a dependency
  2. Check the Package white list All radio button
  3. Save and build your project
  4. Now your external classes should be available to import into Business Assets (e.g. Rules)

Note that if your external Pojo classes have the same package as your rules, you don't need to add them to the Package white list. They should automatically be available for use in your rule definitions.
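For reference, step 1 above amounts to a plain Maven dependency entry in the project's pom.xml (the GAV below is a made-up example):

```xml
<dependency>
  <groupId>com.company</groupId>
  <artifactId>company-data-model</artifactId>
  <version>1.0.0</version>
</dependency>
```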

Maven BOM

Instead of specifying a Red Hat Decision Manager version for individual dependencies, consider adding the Red Hat Business Automation bill of materials (BOM) dependency to your project pom.xml file. The Red Hat Business Automation BOM applies to both Red Hat Decision Manager and Red Hat Process Automation Manager. When you add the BOM files, the correct versions of transitive dependencies from the provided Maven repositories are included in the project.

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.redhat.ba</groupId>
      <artifactId>ba-platform-bom</artifactId>
      <version>7.6.0.redhat-00004</version>
      <scope>import</scope>
      <type>pom</type>
    </dependency>
  </dependencies>
</dependencyManagement>

What is the mapping between Red Hat Decision Manager and the Maven library version? https://access.redhat.com/solutions/3363991
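Once the BOM is imported in the <dependencyManagement> section, individual dependencies it manages can be declared without an explicit version. A sketch (the artifact shown is just an example):

```xml
<dependencies>
  <dependency>
    <groupId>org.kie</groupId>
    <artifactId>kie-api</artifactId>
    <!-- version is managed by the ba-platform-bom import -->
  </dependency>
</dependencies>
```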

Working with Nested Objects in Drools (working memory)

Java kie client api

Using the Java Client API you must insert each object of the graph individually into the session's working memory, as in this snippet:

   Customer customer = new Customer(); //(1)
   customer.setCategory(Customer.Category.GOLD);

   Order order = new Order(); //(2)
   order.setCustomer(customer);

   kieSession.insert(customer); //(1) child object
   kieSession.insert(order); //(2) parent object

   int fired = kieSession.fireAllRules();

REST kie client api

suggestion from the drools-usage mailing list

The standard REST API only inserts the object given in the insert command, not its nested objects. If you want to insert the nested objects, there are two possibilities:

  1. Write your own KIE Server extension
  2. Write rules that insert the nested objects (easier); such a rule may look like this:
rule "insert object"
when
  Enrollment(m : member)
then
  insert (m)
end

Enable/disable useful feature switches through JVM system properties

Reference: https://access.redhat.com/documentation/en-us/red_hat_process_automation_manager/7.7/html-single/installing_and_configuring_red_hat_process_automation_manager_on_red_hat_jboss_eap_7.2/index#business-central-system-properties-ref

  • -Djbpm.enable.multi.con allows the Web Designer to use multiple incoming or outgoing connections for tasks. If not enabled, such tasks are marked as invalid.
  • -Dorg.kie.prometheus.server.ext.disabled=false enables metrics exposure through the Prometheus exporter
  • -Dorg.uberfire.nio.git.dir=$HOME/dev/gitlocal/rhpam changes the NIO Git root directory
  • -Dorg.guvnor.m2repo.dir=$HOME/dev/gitlocal/rhpam/repository/kie changes the Maven repo root dir used by KIE Server
  • -Dorg.uberfire.metadata.index.dir=$HOME/dev/gitlocal/rhpam changes the Lucene index root dir
  • appformer.experimental.features: Enables the experimental features framework. Default value: false.
  • org.kie.demo: Enables an external clone of a demo application from GitHub.
  • org.kie.workbench.controller: The URL used to connect to the Process Automation Manager controller, for example, ws://localhost:8080/kie-server-controller/websocket/controller.
  • org.kie.workbench.controller.user: The Process Automation Manager controller user. Default value: kieserver.
  • org.kie.workbench.controller.pwd: The Process Automation Manager controller password. Default value: kieserver1!.
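For example (a sketch, values are illustrative), the `-D` switches above can be passed on EAP startup, or added to the <system-properties> section of standalone-full.xml:

```shell
# start EAP with a few of the switches above (illustrative paths/values)
./standalone.sh -c standalone-full.xml \
  -Dorg.kie.prometheus.server.ext.disabled=false \
  -Dorg.uberfire.nio.git.dir=$HOME/dev/gitlocal/rhpam \
  -Dorg.guvnor.m2repo.dir=$HOME/dev/gitlocal/rhpam/repository/kie
```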

.gitignore when cloning a project from RHPAM Git Server

target/
.classpath
.project
.settings/

Reaction to SLA violations in Case Management

jBPM offers the following ProcessEventListeners out of the box to react to SLA violations in Case Management:

  • org.jbpm.casemgmt.impl.wih.EscalateToAdminSLAViolationListener
  • org.jbpm.casemgmt.impl.wih.NotifyOwnerSLAViolationListener
  • org.jbpm.casemgmt.impl.wih.StartProcessSLAViolationListener

src code: https://github.com/kiegroup/jbpm/tree/master/jbpm-case-mgmt/jbpm-case-mgmt-impl/src/main/java/org/jbpm/casemgmt/impl/wih

The same approach can be used to implement custom SLA violation reactions for Processes.

BPMN2 subprocesses in process designer

A subprocess is an activity that contains nodes. You can embed part of the main process within a subprocess. You can also include variable definitions within the subprocess. These variables are accessible to all nodes inside the subprocess. (I infer the parent process's variables remain accessible inside the subprocess as well.)

A subprocess must have one incoming connection and one outgoing connection. If you use a terminate end event inside a subprocess, the entire process instance that contains the subprocess is terminated, not just the subprocess. A subprocess ends when there are no more active elements in it.

The following subprocess types are supported in Red Hat Process Automation Manager:

Embedded subprocess, which is a part of the parent process execution and shares its data
Ad hoc subprocess, which has no strict element execution order
Reusable subprocess, which is independent from its parent process
Event subprocess, which is only triggered on a start event or a timer
Multi-instance subprocess 

In the following example, the Place Order subprocess checks whether sufficient stock is available to place the order and updates the stock information if the order can be placed. The customer is then notified through the main process based on whether or not the order was placed.

Process Signaling

Ref: http://mswiderski.blogspot.com/2015/09/improved-signaling-in-jbpm-63.html Examples created by Maciek here: https://github.com/mswiderski/bpm-projects

  • single-project - contains all process definitions that work within the same project
  • external-project - contains a process definition that uses an external scope signal (includes a form to enter the target deployment id)

So what are the results with these sample processes?

  • When using the process that signals only with process instance scope (process id: single-project.throw-pi-signal), it will only signal event-based subprocesses included in the same process definition, nothing else
  • When using the process that signals with default scope (process id: single-project.throw-default-signal), it will start a process (process id: single-project.start-with-signal) as it has a signal start event (regardless of what strategy is used), but it will not trigger the process that waits in an intermediate catch event for strategies other than singleton
  • When using the process that signals with project scope (process id: single-project.throw-project-signal), it will start a process (process id: single-project.start-with-signal) as it has a signal start event, and it will trigger the process that waits in an intermediate catch event (regardless of what strategy is used)
  • When using the process that signals with external scope (process id: external-project.throw-external-signal), it will start a process (process id: single-project.start-with-signal) as it has a signal start event, and it will trigger the process that waits in an intermediate catch event (regardless of what strategy is used), assuming SignalDeploymentId was set to org.jbpm.test:single-project:1.0.0-SNAPSHOT on start of the process

Manipulating Variables in a Process Context

From RHPAM Business Central Docs: https://access.redhat.com/documentation/en-us/red_hat_process_automation_manager/7.7/html-single/designing_business_processes_in_business_central/index

  • global variables

Global variables are visible to all process instances and assets in a particular session. They are intended to be used primarily by business rules and by constraints and are created dynamically by rules or constraints.

Global variables exist in a knowledge session and can be accessed and are shared by all assets in that session. They belong to the particular session of the Knowledge Base and they are used to pass information to the engine. Every global variable defines its ID and item subject reference. The ID serves as the variable name and must be unique within the process definition. The item subject reference defines the data type the variable stores.

Global variables are initialized either when the process with the variable definition is added to the session or when the session is initialized with globals as its parameters.

Values of global variables can typically be changed during the assignment, which is a mapping between a process variable and an activity variable. The global variable is then associated with the local activity context, local activity variable, or by a direct call to the variable from a child context.
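As a minimal sketch of globals in rules (the Customer fact from the earlier snippet is reused; the auditLog global is a made-up name for illustration), the global is declared in the DRL and its value is supplied from outside the session:

```
global java.util.List auditLog;

rule "log gold customers"
when
    $c : Customer(category == Customer.Category.GOLD)
then
    auditLog.add("gold customer: " + $c);
end
```

On the Java side the value is provided with ksession.setGlobal("auditLog", new java.util.ArrayList<Object>()); before the rules fire.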

  • process variables

Process variables are defined as properties in the BPMN2 definition file and are visible within the process instance (not within subprocess instances). They are initialized at process creation and destroyed on process completion.

A process variable is a variable that exists in a process context and can be accessed by its process or its child elements. Process variables belong to a particular process instance and cannot be accessed by other process instances.

  • local variables

Local variables are available within their process element, such as an activity. They are initialized when the element context is initialized, that is, when the execution workflow enters the node and execution of the onEntry action has finished, if applicable. They are destroyed when the element context is destroyed, that is, when the execution workflow leaves the element.

Values of local variables can be mapped to global or process variables. This enables you to maintain relative independence of the parent element that accommodates the local variable. Such isolation might help prevent technical exceptions.

A local variable is a variable that exists in a child element context of a process and can be accessed only from within this context. Local variables belong to the particular element of a process.

For tasks, with the exception of the Script task, you can define Data Input Assignments and Data Output Assignments in the Assignments property. Data Input Assignment defines variables that enter the Task and therefore provide the entry data needed for the task execution. The Data Output Assignments can refer to the context of the Task after execution to acquire output data.

From the official jBPM docs: https://docs.jboss.org/jbpm/release/7.30.0.Final/jbpm-docs/html_single/#_variables

  • Variables

While the flow chart focuses on specifying the control flow of the process, it is usually also necessary to look at the process from a data perspective. Throughout the execution of a process, data can be retrieved, stored, passed on and used.

For storing runtime data, during the execution of the process, process variables can be used. A variable is defined by a name and a data type. This could be a basic data type, such as boolean, int, or String, or any kind of Object subclass (it must implement Serializable interface). Variables can be defined inside a variable scope. The top-level scope is the variable scope of the process itself. Subscopes can be defined using a Sub-Process. Variables that are defined in a subscope are only accessible for nodes within that scope.

Whenever a variable is accessed, the process will search for the appropriate variable scope that defines the variable. Nesting of variable scopes is allowed. A node will always search for a variable in its parent container. If the variable cannot be found, it will look in that one’s parent container, and so on, until the process instance itself is reached. If the variable cannot be found, a read access yields null, and a write access produces an error message, with the process continuing its execution.
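The scope-lookup behavior described above can be sketched in plain Java (no jBPM dependency; the class names here are my own, for illustration only):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal model of nested variable scopes: a lookup walks up the
// parent chain; a read miss yields null, matching the text above.
class VariableScope {
    private final VariableScope parent; // null for the process-level scope
    private final Map<String, Object> vars = new HashMap<>();

    VariableScope(VariableScope parent) { this.parent = parent; }

    void set(String name, Object value) { vars.put(name, value); }

    Object resolve(String name) {
        if (vars.containsKey(name)) return vars.get(name);
        return parent == null ? null : parent.resolve(name); // search the parent container
    }
}

class ScopeDemo {
    public static void main(String[] args) {
        VariableScope process = new VariableScope(null);
        process.set("requester", "Alice");
        VariableScope subProcess = new VariableScope(process);
        subProcess.set("counter", 1);

        System.out.println(subProcess.resolve("requester")); // found in the parent scope: Alice
        System.out.println(subProcess.resolve("missing"));   // read miss: null
        System.out.println(process.resolve("counter"));      // subscope variable not visible upward: null
    }
}
```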

Variables can be used in various ways:

Process-level variables can be set when starting a process by providing a map of parameters to the invocation of the startProcess method. These parameters will be set as variables on the process scope.
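A sketch of that call (the process id com.sample.hello and the employee variable are made-up names; ksession is an existing KieSession):

```
// hypothetical process id and variable name, for illustration only
Map<String, Object> params = new HashMap<String, Object>();
params.put("employee", "krisv");
ProcessInstance processInstance = ksession.startProcess("com.sample.hello", params);
```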

Script actions can access variables directly, simply by using the name of the variable as a local parameter in their script. For example, if the process defines a variable of type "org.jbpm.Person" in the process, a script in the process could access this directly:

// call method on the process variable "person"
person.setAge(10);

Changing the value of a variable in a script can be done through the knowledge context:

kcontext.setVariable(variableName, value);

Service tasks (and reusable sub-processes) can pass the value of process variables to the outside world (or another process instance) by mapping the variable to an outgoing parameter. For example, the parameter mapping of a service task could define that the value of the process variable x should be mapped to a task parameter y right before the service is being invoked. You can also inject the value of process variable into a hard-coded parameter String using #{expression}. For example, the description of a human task could be defined as You need to contact person #{person.getName()} (where person is a process variable), which will replace this expression by the actual name of the person when the service needs to be invoked. Similarly results of a service (or reusable sub-process) can also be copied back to a variable using a result mapping.

Various other nodes can also access data. Event nodes for example can store the data associated to the event in a variable, etc. Check the properties of the different node types for more information.

Process variables can be accessed also from the Java code of your application. It is done by casting of ProcessInstance to WorkflowProcessInstance. See the following example:

variable = ((WorkflowProcessInstance) processInstance).getVariable("variableName");

To list all the process variables see the following code snippet:

org.jbpm.process.instance.ProcessInstance processInstance = ...;
VariableScopeInstance variableScope = (VariableScopeInstance) processInstance.getContextInstance(VariableScope.VARIABLE_SCOPE);
Map<String, Object> variables = variableScope.getVariables();
Note that when you use persistence, you have to use a command-based approach to get all process variables:

Map<String, Object> variables = ksession.execute(new GenericCommand<Map<String, Object>>() {
    public Map<String, Object> execute(Context context) {
        KieSession ksession = ((KnowledgeCommandContext) context).getStatefulKnowledgesession();
        org.jbpm.process.instance.ProcessInstance processInstance = (org.jbpm.process.instance.ProcessInstance) ksession.getProcessInstance(piId);
        VariableScopeInstance variableScope = (VariableScopeInstance) processInstance.getContextInstance(VariableScope.VARIABLE_SCOPE);
        Map<String, Object> variables = variableScope.getVariables();
        return variables;
    }
});

Finally, processes (and rules) all have access to globals, i.e. globally defined variables and data in the KIE session. Globals are directly accessible in actions just like variables. Globals need to be defined as part of the process before they can be used. You can for example define globals by clicking the globals button when specifying an action script in the Eclipse action property editor. You can also set the value of a global from the outside using ksession.setGlobal(name, value) or from inside process scripts using kcontext.getKieRuntime().setGlobal(name,value);.

8.6.2. Scripts Action scripts can be used in different ways:

Within a Script Task,

As entry or exit actions, with a number of nodes.

Actions have access to globals and the variables that are defined for the process and the predefined variable kcontext. This variable is of type ProcessContext and can be used for several tasks:

Getting the current node instance (if applicable). The node instance could be queried for data, such as its name and type. You can also cancel the current node instance.

NodeInstance node = kcontext.getNodeInstance();
String name = node.getNodeName();

Getting the current process instance. A process instance can be queried for data (name, id, processId, etc.), aborted, or signaled with an internal event.

ProcessInstance proc = kcontext.getProcessInstance();
proc.signalEvent( type, eventObject );

Getting or setting the value of variables.

Accessing the Knowledge Runtime allows you do things like starting a process, signaling (external) events, inserting data, etc.

jBPM supports multiple dialects, like Java, JavaScript and MVEL. Java actions should be valid Java code, same for JavaScript. MVEL actions can use the business scripting language MVEL to express the action. MVEL accepts any valid Java code but additionally provides support for nested accesses of parameters (e.g., person.name instead of person.getName()), and many other scripting improvements. Thus, MVEL expressions are more convenient for the business user. For example, an action that prints out the name of the person in the "requester" variable of the process would look like this:

//Java dialect
System.out.println( person.getName() );
// JS dialect
print(person.name + '\n');
//  MVEL dialect
System.out.println( person.name );

To get the TaskInstance Id from the onExit action

((org.jbpm.workflow.instance.node.WorkItemNodeInstance)kcontext.getNodeInstance()).getWorkItemId()

Set Human Task priority using Rules

Is it possible to set the task priority based on the task name? (The priority needs to be set on task start.)

It can be done through a BusinessRulesStrategy. See the KCS article https://access.redhat.com/solutions/3714251 for more details on this approach.

A rule to set the priority based on the task name looks like this:

rule "Assign using name"

    when
        $task :  Task(name=="IMP tasks")
    then
        InternalTaskData taskData = (InternalTaskData) $task.getTaskData();
        taskData.setPriority(1);
end

Access Service Registry in jBPM

RuntimeManager runtimeManager = RuntimeManagerRegistry.get().getManager(deploymentId);
RuntimeEngine engine = runtimeManager.getRuntimeEngine(ProcessInstanceIdContext.get()); 
WorkItem wi = ((DefaultWorkItemManager)engine.getKieSession().getWorkItemManager()).getWorkItem(wid);

or

ServiceRegistry serviceRegistry = ServiceRegistry.get();
ProcessService processService = (ProcessService) serviceRegistry.service(ServiceRegistry.PROCESS_SERVICE);
processService.getWorkItem(0L);

Process Event Listener used to insert facts into Drools Working Memory

You can use the same approach to insert new facts when some event occurs during process instance execution. For example, implement the TaskLifeCycleEventListener interface to let the knowledge session know about the completion of a human task, so you can trigger rules with that information.

Case Management on RHPAM references:

Using Rules to drive a Case Management process

DRL example of that approach:

!!!ATTENTION!!!: when creating a Case Management project, remember to register the following in your project deployment configuration

Marshalling Strategies:
   org.jbpm.casemgmt.impl.marshalling.CaseMarshallerFactory.builder().withDoc().get();
   new org.jbpm.document.marshalling.DocumentMarshallingStrategy();


Work Item Handlers
   StartCaseInstance: new org.jbpm.casemgmt.impl.wih.StartCaseWorkItemHandler(ksession);
   Service Task: new org.jbpm.process.workitem.bpmn2.ServiceTaskHandler(ksession, classLoader);
package com.redhat.demos;

import org.jbpm.casemgmt.api.CaseService;
import org.jbpm.casemgmt.api.model.instance.CaseFileInstance;
import org.jbpm.services.api.service.ServiceRegistry;

rule "watch task completion"

when 
    $caseData : CaseFileInstance()
    Boolean(true) from $caseData.getData("cDocumentationIsNeeded")
          
then 
    $caseData.remove("cDocumentationIsNeeded");
    CaseService caseService = (CaseService) ServiceRegistry.get().service(ServiceRegistry.CASE_SERVICE);
    java.util.Map<String, Object> parameters = new java.util.HashMap<>();
    parameters.put("reason", "How did it happen?");
    caseService.addDynamicTask($caseData.getCaseId(), caseService.newHumanTaskSpec("Please provide additional details", "Action", "manager", null, parameters));    
end

Decoupling Process from Rules using either BusinessRuleTaskHandler (rules in a different project) or Remote BusinessRuleTaskHandler (rules in a remote kie-server)

!!!ATTENTION, this can save some precious time!!!: Since the projects are decoupled, they need a common data model that both will operate on. Currently that model must be on the parent class loader instead of at kjar level, because the classes are loaded by different class loaders, which makes rules not match them properly. In case of the execution server (KIE Server), the model jar must be added to WEB-INF/lib of the KIE Server app and added to both projects as a dependency in provided scope.

new org.kie.server.client.integration.RemoteBusinessRuleTaskHandler("http://kieserver-host/services/rest/server", "rhpam", "secretkey", classLoader)

Access process instance and process variables from a Rule

Configure email service

If you would like to have a global configuration of email integration (meaning it will be shared by all kjars), then create a file named email-service.properties in kie-server.war/WEB-INF/classes with the following content:

host=imap.googlemail.com
port=993
smtp.host=smtp.gmail.com
smtp.port=587
smtp.from=
smtp.replyto=
username=
password=
inbox.folder=INBOX
domain=jbpm.org
mailSession=mail/jbpmMailSession

This is a sample config for GMail, so make sure you set the values properly for the mail server you use. All of the properties are mandatory.

If you prefer (which is actually recommended) to use an email service per kjar, then create kjar-email-service.properties in the root folder (src/main/resources) of your kjar with the exact same content. That way you can have different kjars listening and sending with different email accounts.

A kjar can also include *-email.ftl or *-email.html files that will be used as email templates for your user task notifications. The template name (the part of the template file name without -email.ftl/html) is given on the user task via the TaskName property (the same one used by forms).

Environment entries/variables in the deployment descriptor

If you want to set the entry in the XML deployment descriptor at the project level, add the following to the kie-deployment-descriptor.xml file:

<environment-entries>
  ..
    <environment-entry>
        <resolver>mvel</resolver>
        <identifier>new String ("false")</identifier>
        <parameters/>
        <name>Autoclaim</name>
    </environment-entry>
  ..
</environment-entries>

UserInfo config to send email notifications in Human Tasks

config format:

entityId=email:locale:displayname:[member,member]
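A couple of sample entries following that format (all values below are made up):

```
john=john@company.com:en-UK:John Doe
Administrators=admin@company.com:en-UK:Administrators:[john,mary]
```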

ref: https://github.com/kiegroup/jbpm/blob/c4eda77138c976016b301383db1369416b76029b/jbpm-human-task/jbpm-human-task-core/src/main/java/org/jbpm/services/task/identity/DefaultUserInfo.java#L54

Config file example: https://github.com/kiegroup/jbpm/blob/master/jbpm-human-task/jbpm-human-task-core/src/test/resources/userinfo.properties

Configuring Postgres Data Source

Ref: https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.2/html/configuration_guide/datasource_management#example_datasource_configurations

https://access.redhat.com/documentation/en-us/red_hat_process_automation_manager/7.6/html-single/installing_and_configuring_red_hat_process_automation_manager_on_red_hat_jboss_eap_7.2/index#eap-data-source-add-proc

EAP_HOME/bin/jboss-cli.sh --connect

module add --name=com.postgresql --resources=/path/to/postgresql-9.3-1102.jdbc4.jar --dependencies=javax.api,javax.transaction.api
  • register the jdbc driver
/subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql,driver-module-name=com.postgresql,driver-xa-datasource-class-name=org.postgresql.xa.PGXADataSource)
  • create the PostgresDS Data Source
data-source add --name=PostgresDS --jndi-name=java:jboss/datasources/jbpmDS --driver-name=postgresql --connection-url=jdbc:postgresql://localhost:5432/jbpm --user-name=jbpm --password=jbpm --validate-on-match=true --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter
  • create a user and a database named jbpm

IMPORTANT!!! On Postgres you have to add SuperUser role to the jbpm user otherwise you will get permission errors as described in https://access.redhat.com/solutions/4867921

ALTER ROLE jbpm
	SUPERUSER
	CREATEROLE;
  • run the script: /ddl-scripts/postgresql/postgresql-jbpm-schema.sql (available in the rhpam-7.6.0-add-ons)
  • add these two system properties in the standalone-full.xml
...
<!-- Data source properties. -->
    <property name="org.kie.server.persistence.ds" value="java:jboss/datasources/KieServerDS"/>
    <property name="org.kie.server.persistence.dialect" value="org.hibernate.dialect.PostgreSQLDialect"/>
</system-properties>

Configuring BC to reach an external Maven Repository

Note: reference: https://access.redhat.com/documentation/en-us/red_hat_process_automation_manager/7.7/html-single/installing_and_configuring_red_hat_process_automation_manager_on_red_hat_jboss_eap_7.2/index#maven-repo-using-con

  • Given you have a central Maven repo like Sonatype Nexus
  • Create a standard Maven settings.xml pointing to your Maven repo and save it somewhere on your file system
  • Add this system property to your standalone(-full).xml:
<property name="kie.maven.settings.custom" value="/opt/custom-config/settings.xml"/>
  • If you want BC to upload your KJAR to an external repo, add the <distributionManagement> section to the project's pom.xml inside your Business Central.
  <distributionManagement>
        <repository>
          <id>nexus-container</id>
          <url>http://dockerhost:8081/repository/maven-releases/</url>
          <layout>default</layout>
        </repository>
        <snapshotRepository>
          <id>nexus-container</id>
          <url>http://dockerhost:8081/repository/maven-snapshots/</url>
          <layout>default</layout>
        </snapshotRepository>
  </distributionManagement>  
</project>

Accessing Business Central internal maven repo

NOTE: when you use the Build and Install button on Business Central, it installs the project's artifacts both in the internal repo and in the local Maven repo ~/.m2/repository of the host BC is running on.

  • configure your ~/.m2/settings.xml with:
    <!-- Configuring pre-emptive authentication for the repository server -->
   <server>
     <id>rhpam-local</id>
     <username>pamAdmin</username>
     <password>secret</password>
   </server>

   <!-- Configure the JBoss EAP Maven local repository -->
   <profile>
     <id>nexus-container</id>
     <repositories>
       <repository>
         <id>maven-releases</id>
         <url>http://dockerhost:8081/repository/maven-releases/</url>
         <releases>
           <enabled>true</enabled>
         </releases>
         <snapshots>
           <enabled>false</enabled>
         </snapshots>
       </repository>
       <repository>
         <id>maven-snapshots</id>
         <url>http://dockerhost:8081/repository/maven-snapshots/</url>
         <releases>
           <enabled>false</enabled>
         </releases>
         <snapshots>
           <enabled>true</enabled>
         </snapshots>
       </repository>
     </repositories>

     <pluginRepositories>
       <pluginRepository>
         <id>maven-releases</id>
         <url>http://dockerhost:8081/repository/maven-releases/</url>
         <releases>
           <enabled>true</enabled>
         </releases>
         <snapshots>
           <enabled>false</enabled>
         </snapshots>
       </pluginRepository>
     </pluginRepositories>
   </profile>    

 </profiles>

 <activeProfiles>
   <activeProfile>nexus-container</activeProfile>
 </activeProfiles>

...

jBPM bpmn2 test scenarios:

https://github.com/kiegroup/jbpm/tree/master/jbpm-bpmn2/src/main/java/org/jbpm/bpmn2/handler

jBPM bpmn2 Handlers/Decorators

https://github.com/kiegroup/jbpm/tree/master/jbpm-bpmn2/src/main/java/org/jbpm/bpmn2/handler

Signaling and error handler/decorator

  • Register Work Item Handler in Project Settings:
 new org.jbpm.bpmn2.handler.SignallingTaskHandlerDecorator(org.jbpm.bpmn2.handler.ServiceTaskHandler.class, "even-signal-name-to-be-catch"); MVEL
  • the Signal emitted by the decorator contains an instance of org.kie.api.runtime.process.WorkItem
  • Add a ScriptTask to handle/log the exception
System.out.println( "Handling exception caused by work item '" + workItem.getName() + "' (id: " + workItem.getId() + ")");

Other error handling approaches:

Embedded vs. Remote Execution Server deployment option

execution server pros:
- better separation of concerns
- it can be scaled independently of the application
- cheaper: the subscription covers just the rules workload (whereas when you run the app and rules together you pay for the whole workload)
- it's easier to share the decision logic among multiple consumers (applications) and have a single point of control / deployment

the main drawback of the kieserver is the communication overhead:
- network bandwidth / latency
- serialization overhead

The previous drawback can be mitigated using a custom serialization protocol (you should find references in the list archive). 
When you have a heavy batch workload, in general is preferable an embedded execution.

Drools BatchExecutionCommands for Kie Rest API

Note: pay attention to the order of the commands in the request body payload. KIE Server executes the commands in the order in which they are defined in the request body.

Although not fully documented, it is worth checking the source code to see all the commands available when issuing remote commands against the Drools REST API...
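As an illustration of the command format (the fact type com.company.model.MedicalEvent and its field are made-up placeholders, not values from these notes), a batch request body could look like the sketch below. Because the commands run in order, the insert must precede fire-all-rules:

```json
{
  "lookup": null,
  "commands": [
    { "insert": { "object": { "com.company.model.MedicalEvent": { "code": "E11" } }, "out-identifier": "event" } },
    { "fire-all-rules": {} },
    { "get-objects": { "out-identifier": "facts" } }
  ]
}
```

The out-identifier values name the results in the response, so the inserted fact and the remaining session objects can be picked out of the reply.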

Approach to start a rule-flow process via Rest API

Have a standalone rule to start the rule-flow process

rule "fire rule-flow process"
when
    me : MedicalEvent()
then
    java.util.Map<String, Object> params = new java.util.HashMap<String, Object>();
    params.put("medicalEvent", me);
    kcontext.getKieRuntime().startProcess("ruleflow-process", params);
end
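A hedged sketch of triggering such a rule remotely: insert the fact and fire the rules through KIE Server's standard execution endpoint. Host, credentials, container id, and the fact's fully qualified name (com.company.model.MedicalEvent) are placeholders, not values from these notes:

```shell
# Insert a MedicalEvent fact and fire all rules; the rule's RHS then starts the rule-flow process.
curl -X POST "http://<kie-server-host>:<port>/kie-server/services/rest/server/containers/instances/<container-id>" \
  -u <user>:<password> \
  -H "accept: application/json" -H "content-type: application/json" \
  -d '{"lookup": null, "commands": [
        {"insert": {"object": {"com.company.model.MedicalEvent": {}}, "out-identifier": "event"}},
        {"fire-all-rules": {}}
      ]}'
```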

REST WIH with SignallingTaskHandlerDecorator

new org.jbpm.bpmn2.handler.SignallingTaskHandlerDecorator(new org.jbpm.process.workitem.rest.RESTWorkItemHandler(classLoader), "Error-serviceErrorSignal")

<work-item-handlers>
    <work-item-handler>
        <resolver>mvel</resolver>
        <identifier>new org.jbpm.bpmn2.handler.SignallingTaskHandlerDecorator(new org.jbpm.process.workitem.rest.RESTWorkItemHandler(classLoader), "Error-serviceErrorSignal")</identifier>
        <parameters/>
        <name>Rest</name>
    </work-item-handler>
</work-item-handlers>

Cleaning your Demo PAM Database tables

1) Stop the server

2) **IMPORTANT:** back up the database

3) Run the SQL statements below, assuming <process instance id> is the id of the process instance you want to abort.
~~~~~~~~~~
delete from EventTypes where InstanceId = <process instance id>;
delete from WorkItemInfo where processInstanceId = <process instance id>;
delete from ProcessInstanceInfo where InstanceId = <process instance id>;
delete from ContextMappingInfo where CONTEXT_ID = <process instance id>;
update Task set status = 'Exited' where processInstanceId = <process instance id> and status in ('Created', 'Ready', 'Reserved', 'InProgress', 'Suspended');
update ProcessInstanceLog set end_date = current_timestamp(), status = 3 where processInstanceId = <process instance id>;
~~~~~~~~~~
current_timestamp() is a MySQL function; if you use a different database, look up the equivalent function.

If you use correlation keys, you can find the related CorrelationKeyInfo row with the following query:
~~~~~~~~
select * from CorrelationKeyInfo where processInstanceId = <process instance id>;
~~~~~~~~

Then run these two statements as well, assuming <correlation key id> is the CorrelationKeyInfo.keyId you found:
~~~~~~~~
delete from CorrelationPropertyInfo where correlationKey_keyId = <correlation key id>;
delete from CorrelationKeyInfo where keyId = <correlation key id>;
~~~~~~~~

4) Start the server again.

Creating new kie container through the Rest API

# You can create containers using the controller's REST API:
curl -X PUT "http://<business-central-IP>:<business-central-port>/business-central/rest/controller/management/servers/default-kieserver/containers/hello" -H "accept: application/json" -H "content-type: application/xml" -d @create-container.xml

# To start the container, execute a POST request as in the example below:
curl -X POST "http://<business-central-IP>:<business-central-port>/business-central/rest/controller/management/servers/default-kieserver/containers/[container ID]/status/started" -H "accept: application/json"
  • create-container.xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<container-spec-details>
    <container-id>hello</container-id>
    <container-name>test</container-name>
    <release-id>
        <artifact-id>test</artifact-id>
        <group-id>com.myspace</group-id>
        <version>1.0.0-SNAPSHOT</version>
    </release-id>
    <configs>
        <entry>
            <key>RULE</key>
            <value xsi:type="ruleConfig" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
                <scannerStatus>STOPPED</scannerStatus>
            </value>
        </entry>
        <entry>
            <key>PROCESS</key>
            <value xsi:type="processConfig" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
                <runtimeStrategy>SINGLETON</runtimeStrategy>
                <kbase></kbase>
                <ksession></ksession>
                <mergeMode>MERGE_COLLECTIONS</mergeMode>
            </value>
        </entry>
    </configs>
    <status>STARTED</status>
</container-spec-details>

Generic Async Work Item Handler (backed by Executor Service)

Impl code: https://github.com/kiegroup/jbpm/blob/master/jbpm-services/jbpm-executor/src/main/java/org/jbpm/executor/impl/wih/AsyncWorkItemHandler.java
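A minimal registration sketch (assuming an active KieSession named ksession and an already started org.kie.api.executor.ExecutorService; PrintOutCommand is a sample command shipped with jbpm-executor, and "async" is just the conventional handler name):

```java
// Sketch only: requires jbpm-executor on the classpath, an active ksession,
// and an initialized executorService. The handler schedules the given command
// on the Executor Service instead of running the work item synchronously.
ksession.getWorkItemManager().registerWorkItemHandler("async",
        new org.jbpm.executor.impl.wih.AsyncWorkItemHandler(
                executorService, "org.jbpm.executor.commands.PrintOutCommand"));
```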

Kie Server Openshift Image customization:

Kie Server Date Marshalling tweaks

  • Using JSON @JsonFormat:
    @com.fasterxml.jackson.annotation.JsonFormat(pattern = "yyyy-MM-dd", shape = com.fasterxml.jackson.annotation.JsonFormat.Shape.STRING)
    private java.util.Date effectiveDate;
    @com.fasterxml.jackson.annotation.JsonFormat(pattern = "yyyy-MM-dd", shape = com.fasterxml.jackson.annotation.JsonFormat.Shape.STRING)
    private java.util.Date applicationDate;
  • Using XML @XmlJavaTypeAdapter
package com.redhat.pocs;

import java.time.LocalDate;
import javax.xml.bind.annotation.adapters.XmlAdapter;

public class LocalDateJaxbAdapter extends XmlAdapter<String, LocalDate> {
    public LocalDate unmarshal(String v) throws Exception {
        return LocalDate.parse(v);
    }

    public String marshal(LocalDate v) throws Exception {
        return v.toString();
    }
}
	@javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter(com.redhat.pocs.LocalDateJaxbAdapter.class)
	private java.time.LocalDate patientDateOfBirth;
  • Kie Server system property (seems to be the simplest way)
-Dorg.kie.server.json.format.date=false
  • To set the date format used in DRLs and Drools decision tables
-Ddrools.dateformat="dd-mmm-yyyy"
-Ddrools.defaultlanguage=us
-Ddrools.defaultcountry=US

Springboot Kie-Server

  • connection to a Controller using keystore-based auth
mvn spring-boot:run \
-Dkie.keystore.keyStoreURL=file:///myjks.jceks \
-Dkie.keystore.keyStorePwd=<mypass> \
-Dkie.keystore.key.ctrl.alias=myuser \
-Dkie.keystore.key.ctrl.pwd=<mypass>
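The JCEKS keystore referenced above can be created with the JDK's keytool; a sketch with placeholder paths and passwords (keytool prompts interactively for the password to be stored):

```shell
# Store the controller user's password as a secret-key entry in a JCEKS keystore.
keytool -importpassword \
  -keystore /path/to/myjks.jceks \
  -storetype JCEKS \
  -storepass <keystore-pass> \
  -alias myuser \
  -keypass <key-pass>
```

The alias here must match kie.keystore.key.ctrl.alias, and the key password must match kie.keystore.key.ctrl.pwd.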

CI/CD Strategies

It is possible to create custom kie-server images as part of a CI/CD pipeline. For instance, the pipeline could invoke an image build (as in the samples below), either from the RHPAM project's git source or from an already built KJAR.


### Binary build from the kjar of a RHPAM project
oc new-build --binary --name=custom-kie-server --image-stream="rhpam-templates/rhpam-kieserver-rhel8:7.10.0"
oc start-build bc/custom-kie-server --from-file=<<absolute path to kjar in the file system>> --follow --wait

OR

### S2I build from RHPAM project source
oc new-build <<GIT_URL of the RHPAM project>> --name=custom-kie-server --image-stream="rhpam-templates/rhpam-kieserver-rhel8:7.10.0"
oc start-build bc/custom-kie-server --follow --wait