Monday, May 21, 2018

Contract Net Protocol with jBPM

jBPM provides lots of capabilities that can be used out of the box to build rather sophisticated solutions. In this article I'd like to show one of them - the Contract Net Protocol.

Contract Net Protocol (CNP) is a task-sharing protocol in multi-agent systems, consisting of a collection of nodes or software agents that form the 'contract net'. Each node on the network can, at different times or for different tasks, be a manager or a contractor. [1]

This concept fits nicely into the case management capabilities of jBPM 7, which make it easy to model the interaction between the Initiator and the Participant(s).

(diagram source: http://www.fipa.org/specs/fipa00029/SC00029H.html)
A contract can be announced to many participants (aka bidders), who can either be interested in the contract and then bid, or simply reject it and leave the contract net completely.

With jBPM 7, the contract net can be modelled as a case definition where individual phases of the protocol can be externalised to processes that carry out additional work. This improves readability and at the same time promotes reusability of the implementation.


Announce contract and Offer contract are separate processes that can be implemented independently according to needs. For this basic showcase they are based on human decisions and look as follows.

Each participant in the contract net gets a dedicated instance of the announce contract process and decides whether or not to place a bid. In case a participant does not respond at all, the main contract net case definition keeps a timer event on them to remove that bidder once the deadline is reached.

As soon as all bidders have replied (or the time for replies has elapsed), a set of business rules evaluates all provided bids and selects exactly one. Once it is selected, an Offer contract subprocess is initiated - after the milestone of selecting a bid is completed.


So the bidder who placed the selected bid gets the "Work on contract" task assigned to actually perform the work. Once done, the worker indicates whether the work succeeded or failed. In case of successful completion, additional business rules are invoked to verify the work.

Completion of the work (assuming it was done) presents the results to the initiator for final verification. Once the results are reviewed, the contract ends and the case instance is ready to be closed.

All this in action can be seen in the following screencast




Again, this is just a basic implementation, but it shows the potential that can be unleashed to build advanced Contract Net Protocol solutions.

The complete project, which can be easily imported into workbench and executed on KIE Server, can be found here.

To start the case instance you can use the following payload, which includes data (both the contract and the bidders) and case role assignments:

{
    "case-data": 
    {
        "contract": 
        {
            "Contract": 
            {
                "name" : "jBPM contract",
                "description" : "provide development expertise for jBPM project",
                "price" : 1234.40
            }
        },
        "bidders": [
        {
            "Bidder": 
            {
                "id" : "maciek",
                "name" : "Maciej Swiderski",
                "email" : "maciek@email.com"   
            }
        },
        {
            "Bidder": 
            {
                "id" : "john",
                "name" : "John doe",
                "email" : "john@email.com"   
            }
        }
        ]
    },

    "case-user-assignments": 
    {
        "Initiator": "mary",
        "Participant": "john"
    },

    "case-group-assignments": 
    {
        
    }
}

If you need more bidders, just copy an existing bidder entry in the payload.
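
For reference, a case instance can be started by POSTing that payload to the KIE Server case management REST endpoint. A minimal sketch, assuming KIE Server at localhost:8080 with the project deployed as container contract-net and case definition id ContractNetProtocol (both names are illustrative - adjust to your deployment):

curl -u wbadmin:wbadmin -X POST \
  -H "Content-Type: application/json" \
  -d @start-case-payload.json \
  "http://localhost:8080/kie-server/services/rest/server/containers/contract-net/cases/ContractNetProtocol/instances"

The response contains the id of the newly created case instance.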

[1] Source - https://en.wikipedia.org/wiki/Contract_Net_Protocol

Tuesday, April 24, 2018

jBPM Work Items repository tips + tricks

In his previous post, Maciej showcased the updates to the jBPM Work Items module and how easy it is now to create new workitems and start using them in your business processes right away.

In this post I want to build on what Maciej showed and add a couple of cool features that the jBPM Work Item repository gives you out of the box, specifically related to the repository content generation.

1. Skinning
By default the work item repository generates static content including all workitem info, documentation and download links. Here is what they look like out of the box:

1. Default repository index page

2. Default workitem documentation page

Using the default look/feel is fine, but in some cases you might want to change it to fit your company/business better - by changing the colors, adding your logo, or even completely changing the layout of these pages. Here is how you can do it.

The jBPM work items module includes a sub-module called template-resources. In this module you can find all the templates that are used to build your final repository content. Let's take a look at these files to find out what each does:

a) repoheader.html and repofooter.html - responsible for the top and bottom parts of your repository index page. You can change these to, for example, define different page colors, add your logo, etc. Whatever you feel like.
b) repoindex.part - defines how each workitem's information is displayed (each table row on the repository index page). You can change this to alter the display of each of your workitems, add/remove download links, etc.
c) indextemplate.st - a StringTemplate file that is used by each workitem module to generate its documentation page. Again, you have free rein to change the look/feel of your workitem documentation as you wish.

With a little knowledge of HTML (and the power of jQuery and Bootstrap that are built in) you can customize your workitem repository, for example (I'm not a designer, btw :):

3) "Skinned" workitem repository index page

2. Excluding workitems from generated repository content
By default, all workitems in the jBPM work items module will be displayed in the generated repository. You may not want to expose all of the available ones to your clients; you can control which ones to expose via the repository Maven assembly descriptor.
Here, in the dependencySet section, you can define excludes for the workitems you do not wish to display. Let's say you do not want to show the Archive and Dropbox workitems; you would do:

<excludes>
  <exclude>${project.groupId}:repository-springboot</exclude>
  <exclude>${project.groupId}:archive-workitem</exclude>
  <exclude>${project.groupId}:dropbox-workitem</exclude>
</excludes>

and those will no longer show up in the generated repository content.
3. Generating additional download content using workitem handler information
By default, each workitem in the repository may have one or more handler implementations. Each handler describes itself via the @Wid annotation; here is an example handler for sending Twitter messages. During the compilation step of your handlers, the repository gathers the info from these annotations and uses it to generate the workitem definition configuration, the JSON config, the deployment descriptor XML, and so on. You may also want to generate additional configuration files that currently do not exist.
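
To give a feel for it, here is a trimmed-down, illustrative sketch of such an annotated handler (the attribute names follow the repository's @Wid annotation; the handler, parameter and result names are made up for this example):

import org.jbpm.process.workitem.core.AbstractLogOrThrowWorkItemHandler;
import org.jbpm.process.workitem.core.util.Wid;
import org.jbpm.process.workitem.core.util.WidMavenDepends;
import org.jbpm.process.workitem.core.util.WidParameter;
import org.jbpm.process.workitem.core.util.WidResult;
import org.kie.api.runtime.process.WorkItem;
import org.kie.api.runtime.process.WorkItemManager;

@Wid(widfile = "CustomDefinitions.wid",        // generated work item definition file
        name = "Custom",                       // unique work item name
        displayName = "Custom",                // label shown on the palette
        category = "custom-workitem",          // palette category
        icon = "Custom.png",                   // 16x16 icon
        defaultHandler = "mvel: new org.jbpm.contrib.CustomWorkItemHandler()",
        parameters = {@WidParameter(name = "Message")},   // data inputs
        results = {@WidResult(name = "Result")},          // data outputs
        mavenDepends = {@WidMavenDepends(group = "org.jbpm.contrib",
                artifact = "custom-workitem", version = "7.8.0-SNAPSHOT")})
public class CustomWorkItemHandler extends AbstractLogOrThrowWorkItemHandler {

    public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
        // the actual work goes here; completing the work item resumes the process
        manager.completeWorkItem(workItem.getId(), null);
    }

    public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
        // nothing to clean up in this sketch
    }
}

The repository build picks the annotation up and derives the generated configuration files from it.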
This can be configured in the main project's pom.xml file. There you can add more config files generated from the annotation information in your workitem handlers, or remove existing ones.
I hope this info is of some help to you. As always, if there are any questions, or if you have ideas on how to further enhance the workitem repository, feel free to ask.

Monday, April 23, 2018

jBPM Work Items are really simple!

Work items are the way to build custom (domain-specific) services that can be used from within a process. They behave like any other activity in the process, with the difference that they are usually focused on a given domain or area.


Work Items are by default placed under the Service Tasks category on the palette, so they can easily be dragged and dropped onto the canvas when designing processes and cases. The location on the palette is also configurable via the category property of the work item definition. So let's take a guided tour of how to create a very simple but functional work item.

The complete process consists of the following steps:

  • generating a maven project for the work item (both definition and handler)
  • implementing the handler and configuring the work item
  • optionally providing a custom icon (16x16)
  • adding the work item project to the service repository
  • importing the work item into a project in workbench

Let's get our hands dirty and implement a simple work item and then use it in a process.

Generate work item maven project

The first step is to generate a maven project that will be our base for:
  • work item definition
  • work item handler
First things first: what is a work item definition and what is a work item handler?
A Work Item definition is a description of the work to be done. It is usually described by a unique name, a description, an icon (to make it more visible on the diagram), and then what is expected at the entry (data inputs - parameters) and what is expected at the exit (data outputs - results).

A Work Item handler is the logic that is actually executed when the given activity (representing the work item) is triggered as part of process instance execution.

A Work Item is then a runtime representation of the Work Item definition, backed by a Work Item handler that is registered in the process engine via the Work Item name. This registration gives users additional flexibility, allowing different logic to be executed depending on where the process runs - e.g. a test vs a production environment.
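
For example (an illustrative sketch for an embedded engine; "Custom" stands for whatever name your work item definition uses), the registration boils down to:

import org.kie.api.runtime.KieSession;

public class HandlerRegistration {

    // register the handler under the work item name used in the process;
    // a test environment could register a mock implementation under the
    // same name, which is exactly the flexibility described above
    public static void register(KieSession ksession) {
        ksession.getWorkItemManager()
                .registerWorkItemHandler("Custom", new CustomWorkItemHandler());
    }
}

In KIE Server deployments the same registration is expressed declaratively in the deployment descriptor instead of code.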

So, to generate the maven project, use the maven archetype:

mvn archetype:generate \
-DarchetypeGroupId=org.jbpm \
-DarchetypeArtifactId=jbpm-workitems-archetype \
-DarchetypeVersion=7.8.0-SNAPSHOT \
-DgroupId=org.jbpm.contrib \
-DartifactId=custom-workitem \
-DclassPrefix=Custom \
-Dversion=7.8.0-SNAPSHOT \
-DarchetypeCatalog=local

This command will generate a new project with:

  • groupId - org.jbpm.contrib
  • artifactId - custom-workitem
  • version - 7.8.0-SNAPSHOT
  • with work item configuration and handler class custom-workitem/src/main/java/org/jbpm/contrib/CustomWorkItemHandler.java

I'd recommend generating this project as part of the official jbpm-work-items repository, to benefit from the service repository included there. The rest of the article assumes this has been done.
If you haven't done that yet, follow these steps:

  • clone github project: https://github.com/kiegroup/jbpm-work-items
  • go into jbpm-work-items
  • check the version number of the cloned project and adjust the version arguments in the maven archetype:generate command accordingly

Once the project is generated, import it into your IDE and implement the handler - CustomWorkItemHandler.java. You might need to add additional dependencies to your project, depending on your implementation - when doing so, please keep the following in mind (see the example after this list):
  • dependencies that are already included in KIE Server - mark them as provided
  • check for any conflicts between application server, KIE Server and your own dependencies and resolve them - either by adjusting your handler project dependencies or the runtime environment
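
As a minimal illustration of the first point (the artifact below is just an example of a library that a target KIE Server may already ship):

<dependency>
  <groupId>org.apache.httpcomponents</groupId>
  <artifactId>httpclient</artifactId>
  <version>4.5.5</version>
  <!-- already available at runtime, so do not bundle it with the handler -->
  <scope>provided</scope>
</dependency>
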
The CustomWorkItemHandler.java class carries the @Wid annotation, which is actually responsible for configuring your work item definition. It allows you to define (to name just a few):
  • name
  • description
  • category
  • icon
  • input parameters
  • results
  • handler
Most of the important parts are already generated for you, so examine them and check for correctness. Most likely, parameters and results will be the ones changed most often when implementing handlers.

Once that is done, proceed with the implementation of the executeWorkItem method, which is the heart of your custom work item.
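
A minimal sketch of what that method might look like (the parameter and result names are illustrative and must match what your @Wid annotation declares):

import java.util.HashMap;
import java.util.Map;

import org.kie.api.runtime.process.WorkItem;
import org.kie.api.runtime.process.WorkItemManager;

public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
    try {
        // read a declared data input
        String message = (String) workItem.getParameter("Message");

        // ... perform the domain-specific work here ...

        // map the outcome to a declared data output
        Map<String, Object> results = new HashMap<>();
        results.put("Result", "processed: " + message);

        // completing the work item lets the process instance continue
        manager.completeWorkItem(workItem.getId(), results);
    } catch (Exception e) {
        handleException(e); // inherited from AbstractLogOrThrowWorkItemHandler
    }
}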

Expose your work item in Service Repository

Now, to take advantage of the repository generation of the jbpm-work-items project, you need to add your newly generated project to two pom files:

  • main pom.xml file of the jbpm-work-items project - one regular and one zip dependency:

<dependency>
  <groupId>org.jbpm.contrib</groupId>
  <artifactId>custom-workitem</artifactId>
  <version>${project.version}</version>
</dependency>
<dependency>
  <groupId>org.jbpm.contrib</groupId>
  <artifactId>custom-workitem</artifactId>
  <version>${project.version}</version>
  <type>zip</type>
</dependency>

  • repository/pom.xml - only the zip dependency (but this time without the version tag):

<dependency>
  <groupId>org.jbpm.contrib</groupId>
  <artifactId>custom-workitem</artifactId>
  <type>zip</type>
</dependency>


When you're finished, build the project (assuming you're in the jbpm-work-items repository) using the following:

mvn clean install -DskipTests -rf :custom-workitem

This will build your project (custom-workitem - adjust if your artifactId is different) and the repositories. Then, if you start the Spring Boot based Service Repository

java -jar repository-springboot/target/repository-springboot-7.8.0-SNAPSHOT.jar

you'll have your work item available there - just go to http://localhost:8090/repository

And that's it, you have your work item implemented, built and exposed via Service Repository!!!


Use work item in workbench

To make use of your newly created work item, login to workbench and:

  • create project
  • create asset - Business Process
  • use the yellow repository icon in the designer menu to open service repository browser
  • select work item and install it
  • reopen the process editor and you'll find your installed work item under the Service Tasks category (unless you changed the category when implementing the work item)

That's all you need. In the background, when the work item was installed, your project was modified to add:
  • a dependency on your work item jar (as a maven dependency of your workbench project)
  • a deployment descriptor entry to register the work item handler
So now you're ready to launch it - just build and deploy your project in workbench and enjoy your work item being executed.

All this in a single screencast can be found below.



Data set editor for KIE Server custom queries

The custom queries feature in KIE Server has been around for quite a while and has proved to be very useful. However, there was no integration with workbench to take advantage of it when:

  • working with subset of data from various tables that are not exposed via runtime views (processes or tasks)
  • building data set entries for reporting purposes
  • building dashboards

With version 7.8, jBPM is now equipped with a data set editor for KIE Server custom queries. It allows users to:
  • define (as data set) and test queries on remote KIE Servers
  • save and edit existing data sets 
  • use defined data sets when building dashboards via Pages feature of workbench

Moreover, the data set editor for KIE Server queries is built in a way that ensures queries are always sent to all known KIE Servers when using managed mode. New KIE Servers connecting to the controller (workbench) will also receive custom queries defined via the data set editor.
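
For context, a KIE Server custom query definition is a small JSON document; a sketch (the query name, datasource JNDI name and SQL are illustrative) could look like:

{
  "query-name": "tasksByStatus",
  "query-source": "java:jboss/datasources/ExampleDS",
  "query-expression": "select t.taskId, t.name, t.status from AuditTaskImpl t",
  "query-target": "CUSTOM"
}

This is what the data set editor manages for you behind the scenes on each connected KIE Server.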

See all this in action in a short screencast



As usual, comments are more than welcome

Wednesday, February 28, 2018

React to SLA violations in cases

As a follow-up to Track your processes and activities with SLA, here is a short addendum for case management. You can already track your SLA violations for cases, processes and activities, but there is more to it.
jBPM 7.7 comes with out of the box support for automatic handling of SLA violations:

  • notification to case instance owner
  • escalation to administrator of case instance
  • starting another process instance as part of the case instance
With this you can easily enhance your cases with additional capabilities that make sure your personnel is aware of SLA violations. That can be crucial to keeping your customers satisfied and making sure you won't miss any targets.

Let's quickly dig into the details of each of these mechanisms.
As described in the previous post, SLA violations are delivered via an event listener (ProcessEventListener).

Notification to case instance owner

The notification is of email type. It essentially creates a dynamic Email task, so to make this work you need to register the EmailWorkItemHandler via the deployment descriptor.

It's implemented by the org.jbpm.casemgmt.impl.wih.NotifyOwnerSLAViolationListener class and supports the following constructor parameters:
  • subject - email subject that will be used for emails
  • body - email body that will be used for emails
  • template - email body template that should be used when preparing body
Note that the template parameter overrides body when given. See this article for more information about email templates and this one for using the Email task.

You can also use the default values by simply using the default constructor when registering the listener in the deployment descriptor.
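
For illustration, registering it with the default constructor in the project's deployment descriptor could look roughly like this:

<event-listeners>
  <event-listener>
    <resolver>reflection</resolver>
    <identifier>org.jbpm.casemgmt.impl.wih.NotifyOwnerSLAViolationListener</identifier>
    <parameters/>
  </event-listener>
</event-listeners>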

Email addresses are retrieved via UserInfo for the users assigned to the "owner" role of the case. If there is no such role or no assigned users, this event listener silently skips the notifications.

Escalation to administrator of case instance

Escalation to admin means that whoever is assigned to the admin role in the given case instance will get a new user task assigned (regardless of whether the admin case role has users or groups assigned).
Similar to the notification, this is done via a dynamic user task that is "injected" into the case instance. Depending on whether the escalation is for a case instance SLA violation or for a particular activity, the administrator will see a slightly different task name and description to help identify the failing element.

It's implemented by org.jbpm.casemgmt.impl.wih.EscalateToAdminSLAViolationListener class.

Starting another process instance as part of the case instance

Another type of automatic reaction to an SLA violation is starting another process instance (of a given process id) to handle the violation. This usually applies to more complex scenarios where the handling can be multi-step or requires many actors to be involved.

It's implemented by the org.jbpm.casemgmt.impl.wih.StartProcessSLAViolationListener class. This class requires a single parameter when registering it: the process id of the process that should be started upon SLA violation.
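
For illustration, using the mvel resolver in the deployment descriptor makes it easy to pass that process id through the constructor ("sla-escalation" is an example process id):

<event-listeners>
  <event-listener>
    <resolver>mvel</resolver>
    <identifier>new org.jbpm.casemgmt.impl.wih.StartProcessSLAViolationListener("sla-escalation")</identifier>
    <parameters/>
  </event-listener>
</event-listeners>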


These are basic ways of handling SLA violations, and their main purpose is to illustrate how users can plug in their own mechanisms to deal with these kinds of situations. Users can:
  • create their own listeners
  • extend the existing listeners (e.g. with the notification one, you can just override the method responsible for building the map of parameters for the Email task)
  • combine both
  • compose listeners and decide which one to use based on the content of the SLAViolationEvent
  • or anything else you find useful

Last but not least, have a look at how easy it is to extend our Order IT hardware case with SLA tracking and automatic escalation to the administrator.



Stay tuned and let us know what you think!

Tuesday, February 27, 2018

Track your processes and activities with SLA

One of the important parts of business automation is being able to track whether execution happens on time. This is usually expressed as SLA (Service Level Agreement) fulfilment. jBPM, as part of the 7 series, provides an SLA tracking mechanism that applies to:

  • activities in your process (those that are state nodes)
  • processes
  • cases (next article will be dedicated to cases)
Users could already achieve this with various constructs in the process (boundary timer events, event subprocesses with a timer start event, etc.), but that requires additional design work within the process and for some (basic) cases might make the diagram (process) less readable.
On the other hand, these constructs provide more control over what needs to be done, so they remain a viable approach, especially when custom and complex logic needs to be carried out.

jBPM 7.7 introduces SLA tracking based on a due date that can be set either for the entire process instance or for selected activities.



What this means is that the process engine keeps track of whether the process instance or activity is completed before its SLA due date. Whenever an SLA due date is set, the process/node instance is annotated with additional information:
  • calculated due date (from the expression given at design time)
  • SLA compliance level
    • N/A - when there is no SLA due date (integer value 0)
    • Pending - when instance is active with due date set (integer value 1)
    • Met - when instance was completed before SLA due date (integer value 2)
    • Violated - when instance was not completed/aborted before SLA due date (integer value 3)
    • Aborted - when instance was aborted before SLA due date (integer value 4)
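
These integer values are exposed as constants on org.kie.api.runtime.process.ProcessInstance (assuming jBPM 7.7+), so code that inspects audit data can avoid magic numbers. A small sketch, where slaCompliance stands in for a value read from the audit log:

import org.kie.api.runtime.process.ProcessInstance;

public class SLAComplianceCheck {

    public static String describe(int slaCompliance) {
        if (slaCompliance == ProcessInstance.SLA_VIOLATED) {        // 3
            return "violated - needs attention";
        } else if (slaCompliance == ProcessInstance.SLA_MET) {      // 2
            return "completed on time";
        } else if (slaCompliance == ProcessInstance.SLA_PENDING) {  // 1
            return "still running within its SLA";
        }
        return "no SLA set, or instance aborted";
    }
}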

As soon as a process instance is started, it is labeled with the proper information directly in the workbench UI. That allows you to spot SLA violations immediately and react accordingly.


Moreover, to improve visibility, a custom dashboard can be created to nicely aggregate information about SLA fulfilment, making it easy to share and monitor that area. Workbench is now equipped with so-called Pages (part of the design section, next to projects) where you can easily build custom dashboards and include them in the workbench application.



But SLA tracking in jBPM is not only about showing that information or building charts on top of it. That is what comes out of the box, but it is not limited to it.

SLA tracking is backed by the ProcessEventListener, which exposes two additional methods:
  • public void beforeSLAViolated(SLAViolatedEvent event)
  • public void afterSLAViolated(SLAViolatedEvent event)
These methods are invoked directly when an SLA violation is found. With this, users can build custom logic to deal with SLA violations, to name a few options:
  • notify an administrator
  • spin up another process to deal with the violation
  • signal another part of the process instance
  • retrigger the given node instance that is having issues completing
There are almost endless ways of dealing with SLA violations, and that's why jBPM lets you deal with them the way you like rather than enforcing a particular approach. Even notifications might not be so generic that everyone would apply them the same way.
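
As a starting point, a minimal custom listener could look like this sketch (it merely logs; the class name is made up):

import org.kie.api.event.process.DefaultProcessEventListener;
import org.kie.api.event.process.SLAViolatedEvent;

public class LoggingSLAViolationListener extends DefaultProcessEventListener {

    @Override
    public void afterSLAViolated(SLAViolatedEvent event) {
        if (event.getNodeInstance() != null) {
            // activity-level SLA violation
            System.out.println("SLA violated on node '" + event.getNodeInstance().getNodeName()
                    + "' of process instance " + event.getProcessInstance().getId());
        } else {
            // process-level SLA violation
            System.out.println("SLA violated on process instance "
                    + event.getProcessInstance().getId());
        }
        // the custom reaction goes here: notify an administrator,
        // start another process, signal the process instance, etc.
    }
}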

By default, each SLA due date is tracked by a dedicated timer instance that fires when the due date is reached. That in turn signals the process instance to run the SLA violation logic and call the event listeners. The default operations are:
  • update the SLA compliance level - to Violated
  • ensure that the *Log tables are updated with the SLA compliance level
In some cases, especially with a huge volume of process instances with SLA tracking, individual timers might become a bottleneck in processing (though this would really only matter for very heavy loads firing at pretty much the same time). To overcome this, users can turn off timer-based tracking and rely on external monitoring. As an alternative, jBPM provides an executor command that can be scheduled to keep track of SLA violations. What it does is:
  • periodically check ProcessInstanceLog and NodeInstanceLog to see if there are any instances with the SLA violated (not completed in time)
  • for anything found, signal the given process instance that an SLA violation was found
  • the process instance then runs exactly the same logic as when triggered by a timer
External tracking of SLAs most likely won't be as accurate (when it comes to the time at which it signals SLA violations) but might reduce the load on the overall environment. Timer-based SLA violation tracking is real time - the second the SLA is violated it is handled directly - while the jBPM executor based approach waits until the next execution time (which is configurable and defaults to 1 hour).

Here is a short screencast showing all this in action



So all that makes it really simple to ensure your work is done on time and, what might be even more important, you will be informed directly about it.

Friday, February 16, 2018

Redesigned jBPM executor

The jBPM executor is the backbone of asynchronous execution in jBPM. This applies to so-called async continuation (when a given activity is marked as isAsync), the async work item handler, and standalone jobs.

Currently the jBPM executor has two mechanisms to trigger execution:

  • a polling mechanism that is available in all environments
  • a JMS-based one that is only available in a JEE environment with a configured queue

The JMS part was left as is because it proved to be extremely efficient and performs far better than the polling one. Worth mentioning: the JMS-based mechanism is only applicable to immediate jobs, meaning retries will always be processed by the polling mechanism.

On the other hand, the polling-based mechanism is not really efficient, and in some cases (like cloud deployments with a pay-as-you-go charge model) can cost more - due to the periodic queries that check for jobs to execute even when there are none. In addition, with a high volume of jobs the polling mechanism actually suffers from race conditions between jBPM executor threads that constantly try to find a job to execute and may go after the same one. To solve that, the jBPM executor uses pessimistic locks in its queries to make sure that only one instance (or thread) can fetch a given job. This in turn caused a bunch of issues with some of the databases.

All this led to a redesign of the jBPM executor internals to make it more robust - not only in JEE environments and not only for immediate jobs.

What has changed?

The most important change (from the user's point of view) is the meaning of one of the system properties used to control the jBPM executor:
  • org.kie.executor.interval
This property used to refer to how often the polling thread was invoked to check for available jobs, and defaulted to 3 (seconds).
After the redesign, it defaults to 0 and refers to how often the executor should sync with the underlying database. It should only be used in a cluster setup where failover (execution of jobs from another instance) should be enabled.
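
For example, a clustered node that should re-sync with the database every 10 seconds could be started with JVM system properties along these lines (the pool size property is shown for completeness):

-Dorg.kie.executor.interval=10
-Dorg.kie.executor.timeunit=SECONDS
-Dorg.kie.executor.pool.size=4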

There is no longer an initial delay (previously used to let other parts of the environment finish bootstrapping before the executor starts working). Instead, the executor is started (initialised) only when all components have finished - in the context of KIE Server, only when KIE Server is actually ready to serve requests.

There are no more polling threads responsible for executing jobs (except the optional sync with the database). With that, all EJBs with asynchronous methods are gone too.


Implementation

So how does it actually work now? The diagram below shows the components (classes) involved, and the following explains how they interact.



ExecutorService is the entry point and the only class that user/client code interacts with. Whatever a client needs to do with the executor must go via the executor service.

ExecutorService delegates all scheduling-related operations to the executor (impl), such as:
  • scheduling jobs
  • cancelling jobs
  • requeuing jobs
Additionally, ExecutorService uses other services to deal with the persistent stores, though this part has not changed.
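
A quick sketch of that client-facing API (the command shown is the built-in PrintOutCommand; the business key is illustrative):

import java.util.Date;

import org.kie.api.executor.CommandContext;
import org.kie.api.executor.ExecutorService;

public class ScheduleJobExample {

    public static void schedule(ExecutorService executorService) {
        CommandContext ctx = new CommandContext();
        ctx.setData("businessKey", "order-123"); // illustrative job data

        // immediate job - runs as soon as possible
        Long jobId = executorService.scheduleRequest(
                "org.jbpm.executor.commands.PrintOutCommand", ctx);

        // deferred job - fires roughly 10 seconds from now
        Long laterJobId = executorService.scheduleRequest(
                "org.jbpm.executor.commands.PrintOutCommand",
                new Date(System.currentTimeMillis() + 10_000), ctx);

        // jobs that have not run yet can be cancelled
        executorService.cancelRequest(laterJobId);
    }
}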

ExecutorImpl is the main component of the jBPM executor and takes full responsibility for maintaining consistent scheduling of jobs. It embeds a special extension of ScheduledThreadPoolExecutor called PrioritisedScheduledThreadPoolExecutor. The main extension point is the overridden delegateTask method, used to enforce prioritisation of jobs that should fire at the same time.

ExecutorImpl schedules the sync with the database (depending on the settings of the interval and time unit properties). As soon as it starts (is initialised), it loads all eligible jobs (those with queued or retrying status) and schedules them on the thread pool executor. At the same time it handles duplicates to avoid multiple schedules of the same job.
What is actually scheduled in the thread pool executor is a thin object (PrioritisedJobRunnable) that holds only three values:
  • the id of the job
  • the priority of the job
  • the execution time (when it should fire)
Each job also has a reference to the AvailableJobProcessor that is actually used to execute the given job.
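
To illustrate the idea (a simplified sketch, not the actual class), such a thin holder only needs enough state to order jobs by fire time and then by priority, while the full job data stays in the database until execution:

public class ThinJob implements Comparable<ThinJob> {

    final long id;        // database id used to fetch the full job later
    final int priority;   // higher value wins for jobs firing at the same time
    final long fireTime;  // epoch millis when the job should fire

    ThinJob(long id, int priority, long fireTime) {
        this.id = id;
        this.priority = priority;
        this.fireTime = fireTime;
    }

    public int compareTo(ThinJob other) {
        // earlier fire time first; for equal fire times, higher priority first
        int byTime = Long.compare(this.fireTime, other.fireTime);
        return byTime != 0 ? byTime : Integer.compare(other.priority, this.priority);
    }
}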

AvailableJobProcessor is pretty much the same as it was; its responsibility is to fetch the given job by id (this time the complete job with all its data) and execute it. It also handles exceptions and completion of the job, interacting with ExecutorImpl whenever needed. It uses a pessimistic lock when fetching a job, but it avoids the problematic constructs since it gets the job by id.

LoadJobsRunnable is a job, either one-time or periodic, that syncs the thread pool executor with the underlying database. In non-clustered environments it should run only once - on startup - and this is always the case regardless of the org.kie.executor.interval property. In a clustered environment, where multiple executor instances use the same database, the interval can be set to a positive integer to enable periodic sync with the database. This provides failover between executor instances.


How jobs are managed and executed

On executor start, all jobs are always loaded, regardless of whether there are one or more instances in the environment. That makes sure all jobs will be executed, even when their fire time has already passed or they were scheduled by another executor instance.
Jobs are always stored in the database, no matter which trigger mechanism is used to execute them (JMS or thread pool).

With JMS

Whenever JMS is available, it will be used for immediate jobs, meaning they won't be scheduled in the thread pool executor; they are executed directly via the JMS queue, as in the current implementation.

Without JMS

Jobs are stored in the database and scheduled in the thread pool executor. Scheduling takes place only after the transaction has committed successfully, to make sure the job is actually stored before execution is attempted.
Scheduling always takes place in the same JVM that handles the request. That means there is no load balancing: regardless of when the job should fire, it will fire in the same server (JVM). Failover only applies when the periodic sync is enabled.

The thread pool executor has a configurable ThreadFactory, and in a JEE environment it relies on a ManagedThreadFactory to allow access to application server components such as the transaction manager. The ManagedThreadFactory is configurable as well, so users can define their own thread factory in the application server instead of using the default one.


Performance

The thread pool executor is extremely fast at executing jobs, but it is less efficient when scheduling them, as it must reorganise its internal queue of jobs. Obviously this depends on the size of the queue, but it's worth keeping in mind.
The overall performance of this approach is far better than the polling mechanism, and it does not cause additional load on the database to periodically check for jobs.
Its performance is close to what JMS provides (at least with an embedded broker), so it is a really good alternative for non-JEE environments like servlet containers or Spring Boot.


Conclusion

The main conclusion is that this brings efficient background processing to all possible runtime environments that jBPM can be deployed to. Moreover, it reduces the load on the database and thereby, in some cases, reduces costs.
When it comes to which approach to use, I'd say:
  • use JMS when possible for immediate jobs; in many cases it will be more efficient (especially with a big volume of jobs) and provides load balancing with a clustered JMS setup
  • use the thread pool executor only when JMS is not applicable - servlet containers, Spring Boot, etc.
The defaults for KIE Server are as described above: on JEE servers it relies on JMS, and on Tomcat or Spring Boot it uses only the thread pool executor.

Either way, the performance of asynchronous job execution is now comparable.