Wednesday, 28 February 2018

React to SLA violations in cases

As a follow-up to the Track your processes and activities with SLA article, here is a short add-on for case management. You can already track SLA violations for cases, processes and activities, but there is more to it.
jBPM 7.7 comes with out of the box support for automatic handling of SLA violations:

  • notification to case instance owner
  • escalation to administrator of case instance
  • starting another process instance as part of the case instance
With this you can easily enhance your cases with additional capabilities that make sure your personnel is aware of SLA violations. This can be crucial for keeping your customers satisfied and making sure you won't miss any targets.

Let's quickly dig into the details of each mechanism.
As described in the previous post, SLA violations are delivered via an event listener (ProcessEventListener).

Notification to case instance owner

Notification is of email type. It essentially creates a dynamic Email task, so to make this work you need to register the EmailWorkItemHandler via the deployment descriptor.

It is implemented by the org.jbpm.casemgmt.impl.wih.NotifyOwnerSLAViolationListener class and supports the following parameters (given via its constructor):
  • subject - the subject used for the notification emails
  • body - the body used for the notification emails
  • template - the email body template used when preparing the body
Note that the template parameter overrides body when given. See this article for more information about email templates and this one for using the Email task.

You can also use the default values by simply using the default constructor when registering the listener in the deployment descriptor.

Email addresses are retrieved via UserInfo for users assigned to the "owner" role of the case. If there is no such role, or no assigned users, this event listener silently skips the notification.

Escalation to administrator of case instance

Escalation to admin means that whoever is assigned to the admin role in a given case instance will be assigned a new user task (regardless of whether the admin case role has users or groups assigned).
Similar to the notification, this is done via a dynamic user task that is "injected" into the case instance. Depending on whether the escalation is for a case instance SLA violation or for a particular activity, the administrator will see a slightly different task name and description to help identify the failing element.

It is implemented by the org.jbpm.casemgmt.impl.wih.EscalateToAdminSLAViolationListener class.

Starting another process instance as part of the case instance

Another type of automatic reaction to an SLA violation is starting another process instance (of a given process id) to handle the violation. This usually applies to more complex scenarios where handling is multi-step or requires many actors.

It is implemented by the org.jbpm.casemgmt.impl.wih.StartProcessSLAViolationListener class. This class requires a single parameter when registered: the process id of the process that should be started upon SLA violations.

These are basic ways of handling SLA violations, and their main purpose is to illustrate how users can plug in their own mechanisms to deal with such situations. Users can:
  • create their own listeners
  • extend existing listeners (e.g. with the notification one, you can just override the method responsible for building the map of parameters for the Email task)
  • combine both
  • compose listeners and decide which one to use based on the content of the SLAViolatedEvent
  • or anything else you find useful
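To illustrate the composition idea, here is a hedged sketch in plain Java: each reaction is guarded by a predicate over the violation event, and only matching reactions run. The event class below is a simplified stand-in, not the engine's actual type, and all names are assumptions of this sketch.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Hypothetical stand-in for the engine's SLA violation event; the real type
// carries the full process/node instance context.
class SLAViolatedEvent {
    final long processInstanceId;
    final String nodeName; // null when the whole case/process instance violated its SLA

    SLAViolatedEvent(long processInstanceId, String nodeName) {
        this.processInstanceId = processInstanceId;
        this.nodeName = nodeName;
    }
}

// A sketch of the "compose listeners" idea: each reaction is guarded by a
// predicate over the event, and only matching reactions are invoked.
class CompositeSLAHandler {
    private final List<Predicate<SLAViolatedEvent>> guards = new ArrayList<>();
    private final List<Consumer<SLAViolatedEvent>> reactions = new ArrayList<>();

    void register(Predicate<SLAViolatedEvent> guard, Consumer<SLAViolatedEvent> reaction) {
        guards.add(guard);
        reactions.add(reaction);
    }

    void handle(SLAViolatedEvent event) {
        for (int i = 0; i < guards.size(); i++) {
            if (guards.get(i).test(event)) {
                reactions.get(i).accept(event);
            }
        }
    }
}
```

With this you could, for example, escalate process-level violations to an admin while retriggering node-level ones, all driven by the event content.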

Last but not least, have a look at how easy it is to extend our Order IT hardware case with SLA tracking and automatic escalation to administrator.

Stay tuned and let us know what you think!

Tuesday, 27 February 2018

Track your processes and activities with SLA

One of the important parts of business automation is being able to track whether execution is done on time. This is usually expressed as SLA (Service Level Agreement) fulfilment. jBPM, as part of the 7 series, provides an SLA tracking mechanism that applies to:

  • activities in your process (those that are state nodes)
  • processes
  • cases (next article will be dedicated to cases)
Users could already achieve this by using various constructs in the process (boundary timer events, event subprocesses with a timer start event, etc.), but that requires additional design work within the process and for some (basic) cases might make the diagram less readable.
On the other hand, these constructs provide more control over what needs to be done, so they remain a viable approach, especially when custom and complex logic needs to be carried out.

jBPM 7.7 introduces SLA tracking based on due date that can be set either for entire process instance or selected activities. 

What this means is that the process engine keeps track of whether the process instance or activity completes before its SLA due date. Whenever an SLA due date is set, the process/node instance is annotated with additional information:
  • calculated due date (from the expression given at design time)
  • SLA compliance level
    • N/A - when there is no SLA due date (integer value 0)
    • Pending - when the instance is active with a due date set (integer value 1)
    • Met - when the instance was completed before the SLA due date (integer value 2)
    • Violated - when the instance was not completed/aborted before the SLA due date (integer value 3)
    • Aborted - when the instance was aborted before the SLA due date (integer value 4)
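For reference, the mapping above can be captured in a small enum. This is an illustrative stand-in, not jBPM's actual type - the engine exposes these levels as plain integer values.

```java
// Hypothetical enum mirroring the SLA compliance levels and their integer
// values as described above; jBPM itself works with the raw integers.
public enum SlaCompliance {
    NA(0), PENDING(1), MET(2), VIOLATED(3), ABORTED(4);

    private final int value;

    SlaCompliance(int value) { this.value = value; }

    public int getValue() { return value; }

    public static SlaCompliance fromValue(int value) {
        for (SlaCompliance c : values()) {
            if (c.value == value) return c;
        }
        throw new IllegalArgumentException("Unknown SLA compliance level: " + value);
    }
}
```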

As soon as a process instance is started, it is labeled with the proper information directly in the workbench UI. That allows you to spot SLA violations directly and react accordingly.

Moreover, to improve visibility, a custom dashboard can be created to nicely aggregate information about SLA fulfilment, making it easy to share and monitor. The workbench is now equipped with so-called Pages (part of the design section, next to projects) where you can easily build custom dashboards and include them in the workbench application.

But SLA tracking in jBPM is not only about showing that information or building charts on top of it. This is what comes out of the box, but it is not limited to that.

SLA tracking is backed by ProcessEventListener, which exposes two additional methods:
• public void beforeSLAViolated(SLAViolatedEvent event)
• public void afterSLAViolated(SLAViolatedEvent event)
These methods are invoked directly when an SLA violation is found. With this, users can build custom logic to deal with SLA violations, to name a few:
• notify an administrator
• spin another process to deal with violations
• signal another part of the process instance
• retrigger a given node instance that is having issues completing
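As a sketch of the first option, here is a minimal listener that only implements the two SLA callbacks. The event interface below is a simplified stand-in for the real org.kie.api type, so its method names are assumptions of this sketch.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for the jBPM event type; the real interface lives in
// org.kie.api.event.process and carries much more context.
interface SLAViolatedEvent {
    long getProcessInstanceId();
    String getNodeName(); // null when the process instance itself violated its SLA (assumption)
}

// A hedged sketch of a custom listener reacting to SLA violations by
// recording who should be notified (a real one might send an email instead).
class NotifyAdminSLAListener {
    final List<String> notifications = new ArrayList<>();

    public void beforeSLAViolated(SLAViolatedEvent event) {
        // invoked just before the engine marks the instance as Violated
    }

    public void afterSLAViolated(SLAViolatedEvent event) {
        String target = event.getNodeName() == null
                ? "process instance " + event.getProcessInstanceId()
                : "node '" + event.getNodeName() + "' of instance " + event.getProcessInstanceId();
        notifications.add("Notify administrator: SLA violated for " + target);
    }
}
```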
There are almost endless ways of dealing with SLA violations, and that's why jBPM gives you the option to deal with them however you like rather than enforcing a particular approach. Even notifications might not be so generic that everyone would apply them the same way.

By default, each SLA due date is tracked by a dedicated timer instance that fires when the due date is reached. That in turn signals the process instance to run the SLA violation logic and call the event listeners. The default operations are:
• update the SLA compliance level - to Violated
• ensure that the *Log tables are updated with the SLA compliance level
In some cases, especially when there is a huge volume of process instances with SLA tracking, individual timers might become a bottleneck (though only under very heavy loads firing at pretty much the same time). To overcome this, users can turn off timer-based tracking and rely on external monitoring. As an alternative, jBPM provides an executor command that can be scheduled to keep track of SLA violations. What it does is:
• periodically check ProcessInstanceLog and NodeInstanceLog to see whether there are any instances with a violated SLA (not completed in time)
• for anything found, signal the given process instance that an SLA violation was found
• the process instance then runs exactly the same logic as when it is triggered by a timer
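The scan itself can be sketched in a few lines of plain Java. This is a simplified model of what the executor command does - the real command queries the ProcessInstanceLog/NodeInstanceLog tables, and the class and field names here are assumptions.

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

// Toy model of a log row: which instance, its SLA due date, and whether it
// already completed. The real data lives in the *Log tables.
class SlaLogEntry {
    final long instanceId;
    final Instant slaDueDate;
    final boolean completed;

    SlaLogEntry(long instanceId, Instant slaDueDate, boolean completed) {
        this.instanceId = instanceId;
        this.slaDueDate = slaDueDate;
        this.completed = completed;
    }
}

class SlaViolationScanner {
    // Returns the ids of instances that should be signalled about an SLA
    // violation: still running, due date set, and due date already passed.
    static List<Long> findViolations(List<SlaLogEntry> log, Instant now) {
        List<Long> violated = new ArrayList<>();
        for (SlaLogEntry e : log) {
            if (!e.completed && e.slaDueDate != null && e.slaDueDate.isBefore(now)) {
                violated.add(e.instanceId);
            }
        }
        return violated;
    }
}
```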
External tracking of SLAs most likely won't be as accurate (in terms of when it signals SLA violations) but might reduce load on the overall environment. Timer-based SLA violation tracking is real time - the second the SLA is violated it is handled directly - while the jBPM executor-based approach waits until the next execution time (which is configurable and defaults to 1 hour).

Here is a short screencast showing all this in action.

So all that makes it really simple to make sure your work is done on time and - what might be even more important - you will be informed directly when it isn't.

Friday, 16 February 2018

    Redesigned jBPM executor

The jBPM executor is the backbone of asynchronous execution in jBPM. This applies to so-called async continuation (when a given activity is marked as isAsync), async work item handlers, and standalone jobs.

Currently the jBPM executor has two mechanisms to trigger execution:

• a polling mechanism, available in all environments
• a JMS-based mechanism, only available in a JEE environment with a configured queue

The JMS part was left as is because it proved extremely efficient and performs far better than polling. Worth mentioning: the JMS-based mechanism only applies to immediate jobs - retries are always processed by the polling mechanism.

On the other hand, the polling-based mechanism is not really efficient and in some cases (like cloud deployments with a pay-as-you-go charge model) can cost more, due to the periodic queries that check for jobs to execute even when there are none. In addition, with a high volume of jobs the polling mechanism suffers from race conditions between jBPM executor threads that constantly try to find a job to execute and might grab the same one. To solve that, the jBPM executor uses pessimistic locks on its queries to make sure that only one instance (or thread) can fetch a given job. This in turn caused a bunch of issues with some databases.

All this led to a redesign of the jBPM executor internals to make it more robust - not only in JEE environments and not only for immediate jobs.

    What has changed?

The most important change (from a user's point of view) is the meaning of one of the system properties used to control the jBPM executor:
• org.kie.executor.interval
This property used to refer to how often the polling thread checked for available jobs, and defaulted to 3 (seconds).
After the redesign it defaults to 0 and refers to how often the executor should sync with the underlying database. It should only be set in a cluster, where failover (execution of jobs from another instance) should be enabled.

There is no longer an initial delay (which used to let the executor postpone execution while other parts of the environment finished bootstrapping). Instead, the executor is started (initialised) only when all components have finished - in the context of KIE Server, only when KIE Server is actually ready to serve requests.

There are no more polling threads (except the optional sync with the database) responsible for executing jobs. With that, all the EJBs with asynchronous methods are gone too.


So how does it actually work now? The diagram below shows the components (classes) involved, and the following explains how they interact.

ExecutorService is the entry point and the only class that user/client code interacts with. Whatever a client needs to do with the executor must go via the executor service.

ExecutorService delegates all scheduling-related operations to the executor (impl), such as:
    • schedule jobs
    • cancel jobs
    • requeue jobs
Additionally, ExecutorService uses other services to deal with persistent stores, though this part has not changed.

ExecutorImpl is the main component of the jBPM executor and takes full responsibility for maintaining consistent scheduling of jobs. It embeds a special extension of ScheduledThreadPoolExecutor called PrioritisedScheduledThreadPoolExecutor. The main extension point is the overridden decorateTask method, used to enforce prioritisation of jobs that should fire at the same time.

ExecutorImpl schedules (depending on the interval and time unit properties) a sync with the database. As soon as it starts (is initialised), it loads all eligible jobs (with queued or retrying status) and schedules them on the thread pool executor. At the same time it handles duplicates to avoid multiple schedules of the same job.
What is actually scheduled in the thread pool executor is a thin object (PrioritisedJobRunnable) that holds only three values:
    • id of the job
    • priority of the job
    • execution time (when it should fire)
Each job also has a reference to AvailableJobProcessor, which is actually used to execute the given job.
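The prioritisation trick can be reproduced with plain java.util.concurrent types: decorateTask wraps every task in a future whose ordering compares due times first (coarsely, at second granularity, purely for illustration) and then an explicit priority. This is a self-contained sketch under those assumptions, not jBPM's actual implementation.

```java
import java.util.concurrent.*;

// A job carrying an explicit priority; in this sketch a lower value means
// more important (an assumption, not jBPM's convention).
class PrioritisedJob implements Runnable {
    final String name;
    final int priority;
    final Runnable work;

    PrioritisedJob(String name, int priority, Runnable work) {
        this.name = name; this.priority = priority; this.work = work;
    }
    public void run() { work.run(); }
}

class PrioritisedScheduler extends ScheduledThreadPoolExecutor {
    PrioritisedScheduler() { super(1); }

    // decorateTask is the documented extension hook of ScheduledThreadPoolExecutor
    @Override
    protected <V> RunnableScheduledFuture<V> decorateTask(Runnable r, RunnableScheduledFuture<V> task) {
        int priority = (r instanceof PrioritisedJob) ? ((PrioritisedJob) r).priority : Integer.MAX_VALUE;
        return new PrioritisedFuture<>(task, priority);
    }

    static class PrioritisedFuture<V> implements RunnableScheduledFuture<V> {
        final RunnableScheduledFuture<V> delegate;
        final int priority;

        PrioritisedFuture(RunnableScheduledFuture<V> delegate, int priority) {
            this.delegate = delegate; this.priority = priority;
        }
        public long getDelay(TimeUnit unit) { return delegate.getDelay(unit); }
        public int compareTo(Delayed other) {
            // coarse due-time comparison so near-simultaneous jobs count as a tie,
            // then the explicit priority decides
            long bySecond = getDelay(TimeUnit.SECONDS) - other.getDelay(TimeUnit.SECONDS);
            if (bySecond != 0) return bySecond < 0 ? -1 : 1;
            if (other instanceof PrioritisedFuture) {
                return Integer.compare(priority, ((PrioritisedFuture<?>) other).priority);
            }
            return Long.compare(getDelay(TimeUnit.NANOSECONDS), other.getDelay(TimeUnit.NANOSECONDS));
        }
        public boolean isPeriodic() { return delegate.isPeriodic(); }
        public void run() { delegate.run(); }
        public boolean cancel(boolean mayInterrupt) { return delegate.cancel(mayInterrupt); }
        public boolean isCancelled() { return delegate.isCancelled(); }
        public boolean isDone() { return delegate.isDone(); }
        public V get() throws InterruptedException, ExecutionException { return delegate.get(); }
        public V get(long timeout, TimeUnit unit)
                throws InterruptedException, ExecutionException, TimeoutException {
            return delegate.get(timeout, unit);
        }
    }
}
```

Scheduling two jobs due at the same moment then runs the higher-priority one first, even though it was submitted last.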

AvailableJobProcessor is pretty much the same as it was; its responsibility is to fetch a given job by id (this time the complete job with all data) and execute it. It also handles exceptions and completion of the job by interacting with ExecutorImpl whenever needed. It uses a pessimistic lock when fetching a job but avoids the problematic constructs, as it gets the job by id.

LoadJobsRunnable is a job, either one-time or periodic, that syncs the thread pool executor with the underlying database. In non-clustered environments it should run only once - on startup - and this is always the case regardless of the org.kie.executor.interval setting. In a clustered environment, where multiple executor instances use the same database, the interval can be set to a positive integer to enable a periodic sync with the database. This provides failover between executor instances.

    How jobs are managed and executed

On executor start, all jobs are always loaded, regardless of whether there are one or more instances in the environment. This makes sure all jobs are executed, even when their fire time has already passed or they were scheduled by another executor instance.
Jobs are always stored in the database, no matter which trigger mechanism (JMS or thread pool) is used to execute them.

    With JMS

Whenever JMS is available, it is used for immediate jobs, meaning they won't be scheduled in the thread pool executor. They are executed directly via the JMS queue, just as in the current implementation.

    Without JMS

Jobs are stored in the database and scheduled in the thread pool executor. Scheduling takes place only after the transaction has committed successfully, to make sure the job is actually stored before any attempt to execute it.
Scheduling always takes place in the same JVM that handles the request. That means there is no load balancing: regardless of when the job should fire, it fires on the same server (JVM). Failover only applies when the periodic sync is enabled.

The thread pool executor has a configurable ThreadFactory; in a JEE environment it relies on ManagedThreadFactory to allow access to application server components such as the transaction manager. The ManagedThreadFactory is configurable as well, so users can define their own thread factory in the application server instead of using the default one.


The thread pool executor is extremely fast at executing jobs but less efficient when scheduling them, as it must reorganise its internal queue of jobs. Obviously that depends on the size of the queue, but it's worth keeping in mind.
Overall performance of this approach is far better than the polling mechanism and does not cause additional load on the database to periodically check for jobs.
Its performance is close to what JMS provides (at least with an embedded broker), so it gives a really good alternative for non-JEE environments like servlet containers or Spring Boot.


The main conclusion is that this brings efficient background processing to all possible runtime environments jBPM can be deployed to. Moreover, it reduces load on databases and thereby, in some cases, reduces costs.
As for when to use which approach, I'd say:
• use JMS when possible for immediate jobs; in many cases it will be more efficient (especially with a big volume of jobs) and provides load balancing with a clustered JMS setup
• use the thread pool executor only when JMS is not applicable - servlet containers, Spring Boot, etc.
The defaults for KIE Server are as described above: on JEE servers it relies on JMS, and on Tomcat or Spring Boot it uses only the thread pool executor.

Either way, the performance of async job execution is now comparable.

Wednesday, 14 February 2018

    Updated jBPM Service Repository

Updated jBPM Workitem Repository for 7.6.0.Final

The jBPM Service Repository has received a major update and is available as of the 7.6.0.Final release ( http://download.jboss.org/jbpm/release/7.6.0.Final/service-repository/ ). The release number follows jBPM releases, so you can choose the service repository that is compatible with the jBPM version you are using.

On top of updating the service repository, we have also added a great number of new integration services (workitems) that you can choose from. Building the repository locally, adding new workitems, and contributing them to the community have also been made a lot simpler, so let's take a look at each of the new features and how you can use them to help bridge the gap between your business processes and services.
    New integration services
    The updated service repository includes a number of new integration services (workitems) that you can use within your business processes. Here is a list of the new services that were added and the description of what each can help you achieve inside your business processes:
    • Dropbox -  Upload and download files from Dropbox
    • Google Calendar - Add and retrieve Calendars and Calendar Events from Google
• Google Mail - Send mail via Google
    • Google Sheets - Read content of sheets via Google
    • Google Tasks - Add and retrieve tasks via Google
    • Google Drive - Upload and download media to/from Google Drive
    • IBM Watson - Classify image and detect faces in image via IBM Watson
    • IFTTT - Send a trigger message via IFTTT
    • Twitter - Update status and send messages using Twitter
    • Github - Create Gist or list your repositories in Github
    • Jira - Create or update Jiras
As mentioned, this is a growing list and we are looking for contributions from the community to make it even bigger. If you have implemented or are planning to implement your own workitems and feel like contributing them for the whole jBPM community to use, please contact us on IRC or the jBPM mailing list and we will be more than happy to help get your implementation into this repository.

    Updated workitem documentation
Each workitem in this repository now has a nice documentation page which describes its input/output parameters, dependencies, etc. Here is an example:
    Sample workitem documentation page

    Easy to see download links
The download links for each of the services in the repository are much clearer now, as they are presented right on the repository's main page. The downloads available for each service are its workitem definition (.wid), its JSON workitem definition (.json), and the service jar file (.jar). Here is an example:

    Download links

    Building the jBPM service repository locally
You can easily build the jBPM service repository locally and host it yourself. The service repository is hosted on GitHub - https://github.com/kiegroup/jbpm-work-items. If you clone this repository locally and build it (using Maven), the entire repository as shown here will be built in the repository/target directory. A zip file that includes the entire repository is also available there for you to share or extract to your desired location.

    Using services in the repository
Having a service repository gives you a really nice way of sharing your service integration points with anyone that wants to use them in their business processes. It gives you the ability to share services within or outside your business. Within the realm of jBPM there are two ways of using services from the repository in business processes:
         1. Install within the KIE workbench via the jBPM Designer
jBPM Designer includes a service repository connection feature which allows you to enter the repository URL and install services from it. Once installed, these services are automatically registered with your workbench project and are available in the node palette when designing processes:
    Connect and install workitems from any jBPM Service Repository

    Installed services available in the process nodes panel

2. Pre-install on AS startup
If you are running the KIE workbench, you can tell it to pre-install a number of services from your service repository. With this option your services are installed before you start designing your business processes. As above, the services will be available in the process nodes panel of jBPM Designer and registered within your workbench project configuration. For this you need two system properties: org.jbpm.service.repository, which defines the URL of your service repository (note this can also point to the file system), and org.jbpm.service.servicetasknames, which is a comma-separated list of the workitem names from your repository that you would like to install. Here is an example:

Sample command line to pre-install workitems

    Want to contribute?
We would really like this service repository to become community-driven and have made strides to allow the community to do so easily. Feel free to fork https://github.com/kiegroup/jbpm-work-items and add your own implementations. Once you are ready to contribute, or if you have any questions or need help, feel free to contact us on IRC or the jBPM mailing list.

Monday, 5 February 2018

    Interact with your processes via email

In a world that constantly runs at a high pace, people have less and less time to interact with software via sophisticated user interfaces, especially in situations where there is a need to make a decision or provide quick feedback.
It's no different when it comes to business automation, where a number of processes, rules, and cases interact with each other to fulfil a particular business goal. But at some point there is a need for human participation one way or another.

Regardless of the information a human actor needs to provide, they need to be put in context to make a proper decision or answer a given query. Obviously a tailored UI and forms can achieve this, but that usually requires people to be notified about an awaiting task and then switch to some other application to learn more about the task and the work expected from them.
But what if they are not online or simply overlooked the notification... and how is the notification itself handled...

To provide an alternative approach (well known and constantly in use), email messaging can be used. In most cases the notification is already sent over email, so a logical choice would be to use the same mechanism to respond to that notification over email too. And this is exactly what this article is about - interacting with processes over email.

    What does it cover?
• first and foremost - notification and completion of user tasks - users can be notified directly over email that there is a task waiting for their action, and can simply reply to that email to complete the task
    • start process instance by email
    • start case instance by email
    • upload documents to your processes (as part of task completion for instance)


    Let's have a look what is included in the email integration:
• a KIE Server extension that makes it simple to use with KIE Server
• a task event listener that sends an email notification when a task is created
• a message extractor with default implementations (for plain text and text with attachments)
• handlers that react to received messages (after they are extracted)
  • two default implementations - complete task and start process

jBPM notification uses IMAP to listen on a given folder for new messages and processes them immediately once they are available. To do that, it uses the IDLE command, so it can be notified by the mail server without any poll-based mechanism. The only poll is done once at startup, to look up whether there are any (unread) messages in the folder and, if so, process them.

The mechanism is as follows:

Message extractors are responsible for parsing the source message and extracting the relevant information. This can be based on various criteria such as the "from" email address, the "to" email address, the subject, the MIME type, and more. Based on that information, only one extractor is actually used to process a message, and so MessageExtractor has a priority that controls the order in which extractors are tried. Moreover, extractors usually deal with only one MIME type, so they can be written (and tested) cleanly.
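That selection rule can be sketched as follows; the interface below is a hypothetical stand-in for the real MessageExtractor contract, and the method names are assumptions of this sketch.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Hypothetical extractor contract: tried in priority order, first one that
// accepts the message's MIME type wins.
interface MessageExtractor {
    int getPriority();               // lower value = tried first (assumption)
    boolean accepts(String mimeType);
    String extract(String rawMessage);
}

// Simple implementation used for illustration: matches MIME types by prefix
// and tags the extracted text with a label.
class SimpleExtractor implements MessageExtractor {
    final int priority; final String mimePrefix; final String label;
    SimpleExtractor(int priority, String mimePrefix, String label) {
        this.priority = priority; this.mimePrefix = mimePrefix; this.label = label;
    }
    public int getPriority() { return priority; }
    public boolean accepts(String mimeType) { return mimeType.startsWith(mimePrefix); }
    public String extract(String raw) { return label + ":" + raw; }
}

class ExtractorRegistry {
    // returns the first extractor (by priority) that accepts the MIME type
    static Optional<MessageExtractor> select(List<MessageExtractor> extractors, String mimeType) {
        return extractors.stream()
                .sorted(Comparator.comparingInt(MessageExtractor::getPriority))
                .filter(e -> e.accepts(mimeType))
                .findFirst();
    }
}
```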

On the other side, all handlers are always invoked for each message that was extracted. Since the handlers carry the logic of what to do with a message, they should also include logic to ignore messages that don't apply to them. For instance, it does not make sense to start a process instance for a reply message, as that most likely refers to an existing user task. So it is up to the handler implementation to deal with these situations.
For instance, the CompleteTaskHandler only reacts to messages that have an In-Reply-To header in the expected format (containing the container id and task id); all other messages are simply skipped. The StartProcessHandler in turn ignores all messages that do have an In-Reply-To header present.

    Default implementations

jBPM notifications comes with complete support for handling user tasks - it integrates easily with sending email notifications and handling replies. So there is not much you need to do to take advantage of it - maybe just provide your custom email templates to send emails with your brand, or extract information relevant to your domain.

You can also send an email to jBPM and it will automatically create a new process instance for it. How does it do that?
• it expects the subject of the email to be ContainerId:ProcessId
• the message content will be set as a process instance variable called "messageContent"
• the "from" email address will be used to find the user in jBPM and set it as the "sender" process instance variable
This is the basic way to integrate email with the process engine; you can provide custom handlers that do whatever else is needed - maybe include some email markup in the message that makes it simple to parse and find the relevant data. All in your hands.
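That convention can be sketched as a small parser; the class and method names here are illustrative, not part of the jBPM API.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical helper turning an incoming email into a start-process request
// following the convention above: subject "ContainerId:ProcessId", body as
// "messageContent", sender address as "sender".
class StartProcessRequest {
    final String containerId;
    final String processId;
    final Map<String, Object> variables;

    StartProcessRequest(String containerId, String processId, Map<String, Object> variables) {
        this.containerId = containerId;
        this.processId = processId;
        this.variables = variables;
    }

    static StartProcessRequest fromEmail(String subject, String from, String body) {
        String[] parts = subject.split(":", 2);
        if (parts.length != 2 || parts[0].isEmpty() || parts[1].isEmpty()) {
            throw new IllegalArgumentException("Subject must be ContainerId:ProcessId, got: " + subject);
        }
        Map<String, Object> vars = new HashMap<>();
        vars.put("messageContent", body);
        vars.put("sender", from);
        return new StartProcessRequest(parts[0].trim(), parts[1].trim(), vars);
    }
}
```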

There are two default message extractors: one capable of extracting plain text from emails that use "text/plain" as the content type, and one capable of extracting both text and attachments from emails that use "multipart/alternative" or "multipart/mixed" as the content type.

More message extractors can be added - no limitations there - and they are always tried based on their priority (defined as part of the implementation).

    Sample use case

To illustrate how it actually works and what it can do, let's take a look at a very simple personal assistant built with jBPM and Drools. It has a single process, as shown below:

It then uses business rules to match a question with the best answer, if possible; if it cannot find any answer, it delegates to a human so someone from the staff can reply. This could later be enhanced with "learning capabilities", so that whatever the human actor replied is put into the system, and the next time such a question is asked the system can reply directly.

Watch this short screencast to see it in action.

Simple, but it shows how little effort is needed to make this a useful feature - and who knows, maybe this kind of functionality is needed in some domain...

This is not yet integrated into jBPM, but if you think it makes a good candidate then reach out to us via the mailing list.

Tuesday, 30 January 2018

    Spring Boot starters for jBPM and KIE Server

jBPM has supported Spring (and Spring Boot) for quite a while, but it didn't provide it the Spring Boot way - with auto-configuration and starters.

With the upcoming release (7.6.0) this has changed. There are now fully featured starters (based on auto-configuration modules) for:

    • jBPM embedded
    • fully featured KIE Server
    • rules only KIE Server (Drools)
    • rules, processes and cases KIE Server (jBPM)
    • planning KIE Server (OptaPlanner)

You can get started very easily by using Spring Initializr (https://start.spring.io), where you can generate a complete project with everything needed to get it running.

    Have a look at this quick screencast that shows it in action.

Next, take some time to read the guides for the starters:
    • jBPM business process management - embedded engine
      •  groupId: org.kie
      •  artifactId: jbpm-spring-boot-starter-basic
      •  Guide
    • Fully featured KIE Server (Drools, jBPM, Optaplanner)
      •  groupId: org.kie
      •  artifactId: kie-server-spring-boot-starter
      •  Guide
    • Rules and Decisions KIE Server (Drools, DMN)
      •   groupId: org.kie
      •   artifactId: kie-server-spring-boot-starter-drools
      •   Guide
    • Rules and Decisions, Process and Cases KIE Server (Drools, DMN, jBPM, Case mgmt)
      •  groupId: org.kie
      •  artifactId: kie-server-spring-boot-starter-jbpm
      •  Guide
    • Planning KIE Server (Optaplanner)
      •  groupId: org.kie
      •  artifactId: kie-server-spring-boot-starter-optaplanner
      •  Guide

Last but not least, take a look at the samples in the code base - one especially worth noting is KIE Server secured with Keycloak!

    Stay tuned as more will come!

Wednesday, 13 December 2017

    Be lazy with your data

jBPM comes with a really handy feature called pluggable variable persistence strategies (or, shorter, marshalling strategies). This is the mechanism responsible for persisting process instance and task variables to their data store - whatever kind of data store you use.

An important thing to keep in mind when using marshalling strategies is that each time a variable is read, it is read from the data store, which might be:

    • file system
    • data base
    • REST service
    • document management system e.g. ECM
    • and more
Depending on the accessibility of this service and its performance, this might not be a big deal, but if that data is loaded many times it might become an issue, especially if the data is of a fair size, like documents.
Imagine a process instance that operates on a bunch of documents (Word, PDF, etc.). If each document is 2MB, that gives 20MB for 10 documents. That means every time a user interacts with this process instance, all documents are loaded from the external system. And this is not a problem (it is actually how it should behave) when these documents are needed for the work the user intends to perform.

But what if (s)he is not going to use all the documents, or even a single one? Or if it's not a user but a timer firing off to send a reminder to users?

Should all documents be loaded then? Obviously it does not make much sense, but the problem with this decision is: how would the process instance know whether given data will be used or not? Because of that, it simply loads all variables whenever the process instance (or task) is loaded.
This is the default mechanism for process instance and task variables that are stored as part of them, and it makes sense not to separate those, since their life cycles are bound to each other - they are always loaded and stored together. So this does not bring any overhead or additional communication effort.

The situation looks slightly different for variables that are actually stored externally - like physical documents stored in an ECM system. In this case, documents (including their content - 2MB each) are loaded regardless of whether the data will be used or not. Moreover, in high-volume systems this might put unnecessary load on the ECM to constantly load and store documents that didn't actually change.

jBPM 7.6 comes with support for lazy loading of variables to resolve these issues. It won't magically apply to all your externally stored variables, but it provides all the support from the engine side to take advantage of lazy loading. Since the mechanism for loading variables differs based on the back-end data store, it is up to the user to apply this principle, which is rather simple:
• your variable must implement org.kie.internal.utils.LazyLoaded
• your marshalling strategy needs to provide some kind of service responsible for loading the content of the variable - used by the load method of the variable
• your marshalling strategy should not load content by default but should set that service on the variable, so it can be used to lazy load when needed
• to avoid unnecessary store operations, implement some sort of tracking on your data to identify whether the variable has changed and store it only if it did; your marshaller should check this tracking mechanism in the marshal method and reset it after loading the variable in the unmarshal method
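Putting the lazy-load and tracking ideas together, here is a self-contained sketch; the real contract is org.kie.internal.utils.LazyLoaded, and the class and method names below are stand-ins for illustration.

```java
import java.util.function.Supplier;

// Hypothetical lazily-loaded document: content is fetched from the load
// service only on first access, and a dirty flag tracks whether it changed
// so the marshaller can skip storing unchanged content.
class LazyDocument {
    private final Supplier<byte[]> loadService; // set by the marshalling strategy
    private byte[] content;                     // not loaded by default
    private boolean dirty;                      // tracking: store only when true

    LazyDocument(Supplier<byte[]> loadService) {
        this.loadService = loadService;
    }

    byte[] getContent() {
        if (content == null) {
            content = loadService.get(); // lazy load on first access only
        }
        return content;
    }

    void setContent(byte[] newContent) {
        this.content = newContent;
        this.dirty = true;               // mark as changed so the marshaller stores it
    }

    boolean needsStore() { return dirty; }

    void markStored() { dirty = false; } // marshaller resets tracking after storing
}
```

A marshalling strategy would then call needsStore() in its marshal method and markStored() once the content was written back.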

jBPM 7.6 provides this mechanism as part of its support for documents (the jbpm-document module). DocumentImpl implements both LazyLoaded and the tracking mechanism to resolve both issues (too-frequent loads and too-frequent stores). This greatly improves overall performance, as it reduces the number of reads and writes to the document service while still providing content on demand and only when needed.

I'd like to encourage everyone who stores process or task variables externally to make them lazy loaded and tracked, to improve the performance of your system and reduce the load on your back-end data store.