Wednesday, December 7, 2016

KIE Server router - even more flexibility

Yet another article in the KIE Server series, this time tackling the next steps once you are already familiar with KIE Server and its capabilities.

KIE Server promotes an architecture where there are many KIE Server instances responsible for running individual projects (kjars). These in turn might be completely independent domains, or the other way around - related to each other but separated onto different runtimes to avoid negative impact on each other.


At this point, the client application needs to be aware of all the servers to interact with them properly. In detail, the client application needs to know:
  • location (url) of HR KIE Server
  • location (url) of IT KIE Server 1 and IT KIE Server 2
  • containers deployed to HR KIE Server
  • containers deployed to IT KIE Server (just one of them as they are considered to be homogeneous)
While knowing about available containers is not so difficult - it can be retrieved from a running server - knowing about all the locations is trickier, especially in dynamic (cloud) environments where servers can come and go based on various conditions.

To deal with these problems, KIE Server introduces a new component - the KIE Server Router. The router bridges all KIE Servers grouped under the same router to provide a unified view of all servers. The unified view consists of:
  • finding the right server to deal with requests
  • aggregating responses from different servers
  • providing efficient load balancing
  • dealing with a changing environment - e.g. added/removed server instances

Then the only thing the client needs to know is the location of the router. The router exposes most of the capabilities of KIE Server over HTTP. It has two main responsibilities:
  • proxy to the actual KIE Server instance based on contextual information - container id or alias
  • aggregator of data - collecting information from all distinct server instances in a single client request


There are two types of requests KIE Server Router supports from the client's perspective:
  • Modification requests - the POST, PUT, and DELETE HTTP methods are all considered as such. The main requirement for them to be properly proxied is that the URL includes a container id (or alias)
  • Retrieval requests - GET HTTP methods are seen as such. When they do include a container id, they are handled the same way as modification requests
There is an additional type of request - administration requests - that KIE Server Router supports; these exist strictly to allow the router to function properly within a changing environment:
  • register new servers and containers when a server or container starts on any of the KIE Server instances
  • unregister existing servers and containers when a server or container stops on any KIE Server instance
  • list the available configuration of the router - what servers and containers it is aware of
The router itself keeps very limited information; the most important part is being able to route to the correct server instance based on container id. The assumption is that there is only one set of servers hosting a given container id/alias. That does not mean a single server, though - there can be as many servers as needed, and they can be dynamically added and removed. The proxy will load balance across all known servers for a given container.

Other than the available containers and servers, KIE Server Router does not keep any information. This might not cover all possible scenarios, but it does cover quite a few of them.


KIE Server Router comes in two pieces:
  • a proxy that acts as a server
  • a client that is included in KIE Server to integrate with the proxy
The router client binds into the KIE Server life cycle and sends notifications to the KIE Server Router when the configuration changes:
  • when a container is started (successfully), it registers it with the router
  • when a container is stopped (successfully), it unregisters it from the router
  • when an entire server instance is stopped, it unregisters all containers (that are in started state) from the router

The KIE Server router client is packaged in KIE Server itself but is deactivated by default. It can be activated by setting the router URL via a system property:
org.kie.server.router
whose value can be one or more valid HTTP URLs pointing to one or more routers this server should be registered with.
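For example, a KIE Server instance could be started with the property below to register itself with a local router (the router URL/port is an assumption - adjust to your environment; multiple routers would be given as a comma-separated list):

-Dorg.kie.server.router=http://localhost:9000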

The KIE Server Router exposes an API that is completely compatible with the KIE Server Client interface, so you can use the Java client to talk to the router as you would when talking to any KIE Server instance. It does have some limitations, though:
  • The router cannot be used to deploy new containers - it does not know the given container id yet, and thus cannot decide which server the container should be deployed to
  • The router cannot deal with modification requests to KIE Server endpoints that are not based on container id:
    • Jobs
    • Documents
  • The router returns a hard-coded response when KIE Server info is requested
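Because the router speaks the same REST API, the standard KIE Server Java client can be pointed straight at it. A minimal sketch - the router URL, credentials, and the container alias/process id used here are assumptions for illustration:

import org.kie.server.api.marshalling.MarshallingFormat;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.KieServicesConfiguration;
import org.kie.server.client.KieServicesFactory;
import org.kie.server.client.ProcessServicesClient;

public class RouterClientExample {

    public static void main(String[] args) {
        // point the client at the router instead of an individual KIE Server instance
        KieServicesConfiguration config = KieServicesFactory.newRestConfiguration(
                "http://localhost:9000", "kieserver", "kieserver1!");
        config.setMarshallingFormat(MarshallingFormat.JSON);
        KieServicesClient client = KieServicesFactory.newKieServicesClient(config);

        // the router resolves the container alias to an actual server instance
        ProcessServicesClient processes = client.getServicesClient(ProcessServicesClient.class);
        Long processInstanceId = processes.startProcess("hr", "hr.hiring");
        System.out.println("Started process instance " + processInstanceId);
    }
}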
See basic KIE Server Router capabilities in action in the following screencast:


Response aggregators

Retrieval requests collect data from various servers, but they must return all the data aggregated into a single, well-structured response. This is where response aggregators come into the picture. There is a dedicated response aggregator per data format:
  • JSON
  • JAXB
  • XStream
The XML-based aggregators (both JAXB and XStream) use Java SE XML parsers with some hints on which elements are the subject of aggregation, while the JSON aggregator uses the org.json library (one of the smallest available) to aggregate JSON responses.

All aggregated responses are compatible with data model returned by KIE Server and thus can be consumed by KIE Server Client without any issues.

Aggregators support both sorting and pagination of the aggregated results. Aggregation, sorting, and paging are done on the router side, though the initial sorting is also done on the actual KIE Server instances to make sure it is properly respected on the source data.

Paging, on the other hand, is a bit trickier: the router must ask each KIE Server for everything from page 0 up to the requested page, so all KIE Servers are properly taken into consideration before the requested page is returned.
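Conceptually, the router-side paging works like the self-contained sketch below (illustrative only - not actual router source; the RemoteServer type is a placeholder):

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative sketch of the paging strategy described above - not actual router code.
public class PagingSketch {

    interface RemoteServer {                                  // placeholder for one KIE Server instance
        List<String> getResults(int fromIndex, int count);
    }

    static List<String> page(List<RemoteServer> servers, int page, int pageSize) {
        List<String> merged = new ArrayList<>();
        for (RemoteServer server : servers) {
            // ask each server for everything from item 0 up to the end of the requested page
            merged.addAll(server.getResults(0, (page + 1) * pageSize));
        }
        merged.sort(Comparator.naturalOrder());               // global sort across all servers
        int from = Math.min(page * pageSize, merged.size());
        int to = Math.min(from + pageSize, merged.size());
        return new ArrayList<>(merged.subList(from, to));     // return only the requested page
    }
}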

See paging and sorting in action in the following screencast.



That concludes the quick tour of the KIE Server Router, which should provide more flexibility in dealing with more advanced KIE Server environments.

Comments, questions, ideas as usual - welcome

Monday, November 21, 2016

Pluggable container locator and policy support in KIE Server

In a previous article, container locator support - commonly known as aliases - was introduced. At that time it defaulted to using the latest available version of the project configured with a given alias. This idea was really well received and has thus been further enhanced.

Pluggable container locator


First of all, the latest available container is not always the way to go. There might be a need for time-bound container selection for a given alias, for example:

  • there are two containers for a given alias
  • even though a new version is already deployed, it should not be used until a predefined date

In such cases users can implement their own container locator and register it by bundling the implementation into a jar file placed on the KIE Server class path. As usual, the discovery mechanism is based on ServiceLoader, so the jar must include:
  • an implementation of the ContainerLocator interface
  • a file named org.kie.server.services.api.ContainerLocator placed in the META-INF/services directory
  • the fully qualified class name of the ContainerLocator implementation inside the META-INF/services/org.kie.server.services.api.ContainerLocator file
Since there might be multiple implementations present on the class path, the container locator to be used needs to be given via a system property:
  • org.kie.server.container.locator - where the value should be the class name of the ContainerLocator implementation - the simple name, not the FQCN
That locator will then be used instead of the default latest-container locator, as in the sketch below.
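A minimal sketch of a custom locator is shown below. Note that the exact ContainerLocator method signature is an assumption here - verify it against the KIE Server version in use - and the pinning system property is purely hypothetical:

import java.util.List;

import org.kie.server.services.api.ContainerLocator;
import org.kie.server.services.api.KieContainerInstance;

// Illustrative locator that pins an alias to an explicitly configured container id,
// falling back to the first available container.
public class PinnedContainerLocator implements ContainerLocator {

    @Override
    public String locateContainer(String alias, List<? extends KieContainerInstance> containers) {
        // hypothetical property, e.g. -Dorg.example.pinned.container.my-alias=my-project_1.0
        String pinned = System.getProperty("org.example.pinned.container." + alias);
        for (KieContainerInstance container : containers) {
            if (container.getContainerId().equals(pinned)) {
                return container.getContainerId();
            }
        }
        // nothing pinned (or pin not found) - fall back to the first available container
        return containers.isEmpty() ? null : containers.get(0).getContainerId();
    }
}

The jar would also carry a META-INF/services/org.kie.server.services.api.ContainerLocator file containing the locator's fully qualified class name, and the server would be started with -Dorg.kie.server.container.locator=PinnedContainerLocator.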

So far so good, but what should happen with containers that are now idle or should no longer be used? Since the container locator will make sure that the selected (by default the latest) container is used in most cases, there may be containers that no longer need to be on the runtime. This is especially important in environments where new container versions are frequently deployed, which can lead to increased memory use. Efficient cleanup of unused containers is therefore a must.

Pluggable policy support

For this, policy support was added - but not only for this, as policies are a general-purpose tool within KIE Server. So what is a policy?

A policy is a set of rules applied by KIE Server periodically, and each policy can be scheduled to be applied at its own interval. Policies are discovered when KIE Server starts and are registered, but they are not started by default.
The reason for this is that the discovery mechanism (ServiceLoader) is based on class path scanning and is thus always performed, regardless of whether the policies should be used. So another step is required to activate a policy.

Policy activation is done via a system property when booting KIE Server:
  • org.kie.server.policy.activate - where the value is a comma-separated list of policy names to be activated
When the policy manager activates a given policy, it respects its life cycle:
  • it invokes the start method of the policy
  • it retrieves the interval from the policy (invokes the getInterval method)
  • it schedules periodic execution of that policy based on the given interval - if the interval is less than 1, the policy is ignored
NOTE: scheduling is based on the interval for both the first and subsequent executions - meaning the first execution takes place only after one interval has elapsed. The interval must be given in milliseconds.
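As an illustration, a policy skeleton might look like the sketch below. The method set (start, stop, getInterval, apply) follows the life cycle described above, but treat the exact package names and signatures as assumptions to verify against the KIE Server version in use:

import org.kie.server.services.api.*;  // exact packages of Policy/KieServerRegistry/KieServer may differ by version

// Hypothetical policy that merely logs each periodic execution.
public class AuditLogPolicy implements Policy {

    @Override
    public String getName() {
        return "AuditLog";          // the name to list in org.kie.server.policy.activate
    }

    @Override
    public long getInterval() {
        return 60 * 60 * 1000L;     // apply once per hour - the interval is in milliseconds
    }

    @Override
    public void start() { /* acquire resources if needed */ }

    @Override
    public void stop() { /* release resources on server shutdown */ }

    @Override
    public void apply(KieServerRegistry registry, KieServer kieServer) {
        // getContainers() is assumed here - verify the registry API for your version
        System.out.println("AuditLog policy applied, known containers: " + registry.getContainers());
    }
}

It would then be activated with -Dorg.kie.server.policy.activate=AuditLog; the out-of-the-box policy described below is activated the same way, by its registered name.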

Similarly, when KIE Server stops, it calls the stop method of every activated policy to shut it down properly.

Policy support can be used for various use cases; one that comes out of the box complements the container locator (with its default of latest only). KIE Server ships with a policy that will undeploy containers other than the latest. This policy is applied once a day by default, but it can be reconfigured via system properties. KeepLatestContainerOnlyPolicy will attempt to dispose of containers with lower versions, though the attempt might fail. Reasons for failure may vary, but the most common is active process instances for the container being disposed. In that case the container is left started, and the next day (or after another, reconfigured period of time) the attempt is retried.

NOTE: KeepLatestContainerOnlyPolicy is aware of the controller, so it will notify the controller that the policy was applied and stop the container on the controller side - but only stop it, not remove it. Like any other policy, it must be activated via the system property as well.

This opens the door to a tremendous number of policy implementations - starting with cleanup, through blue-green deployments, and finishing at reconfiguring runtimes - all performed periodically and automatically by KIE Server itself.

As always, comments and further ideas are more than welcome.

Tuesday, November 8, 2016

Administration interfaces in jBPM 7

In many cases, when working with business processes, users end up in situations that were not foreseen, e.g. a task was assigned to a user who left the company, a timer was scheduled with the wrong expiration time, and so on.

jBPM has had the capability to deal with these from its early days, though it required substantial knowledge of how to use jBPM's low-level APIs. Those days are now over: jBPM version 7 comes with administration APIs that cover:

  • process instance operations
  • user task operations
  • process instance migration

These administration interfaces are supported in jBPM services and in KIE Server, so users have the full power to perform quite advanced operations when using jBPM as a process engine, regardless of whether it is embedded (jBPM services API) or used as a service (KIE Server).

Let's start with a quick look at what capabilities each of the services provides.

Process instance Administration


Process instance administration service provides operations around the process engine and individual process instances. Following is the complete list of supported operations and their short descriptions:
  • get process nodes - by process instance id - returns all nodes (including embedded subprocesses) that exist in the given process instance. Even though the nodes come from the process definition, it's important to get them via the process instance to make sure a given node exists and has a valid node id, so it can be used successfully with the other admin operations
  • cancel node instance - by process instance id and node instance id - does exactly what the name suggests - cancels the given node instance within the process instance
  • retrigger node instance - by process instance id and node instance id - retriggers by first canceling the active node instance and then creating a new instance of the same type - it sort of recreates the node instance
  • update timer - by process instance id and timer id - updates the expiration of an active timer, taking into consideration the time elapsed since the timer was scheduled. For example: if a timer was initially created with a delay of 1 hour, and after 30 minutes it is updated to 2 hours, it will expire 1.5 hours from the time of the update. Allows updating:
    • delay - duration before the timer expires
    • period - interval between timer expirations - applicable only for cycle timers
    • repeat limit - limits the number of expirations - applicable only for cycle timers
  • update timer relative to current time - by process instance id and timer id - similar to the regular timer update, but the update is relative to the current time - for example: if a timer was initially created with a delay of 1 hour, and after 30 minutes it is updated to 2 hours, it will expire 2 hours from the time of the update
  • list timer instances - by process instance id - returns all active timers found for the given process instance
  • trigger node - by process instance id and node id - allows any node in the process instance to be triggered (instantiated) at any time
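With the KIE Server Java client, these operations are exposed through ProcessAdminServicesClient. A short sketch follows - the container id and the timer accessor are illustrative, and the delay unit should be double-checked for the version in use:

import java.util.List;

import org.kie.server.api.model.admin.TimerInstance;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.admin.ProcessAdminServicesClient;

public class TimerUpdateExample {

    public static void pushFirstTimerOut(KieServicesClient client, Long processInstanceId) {
        ProcessAdminServicesClient admin = client.getServicesClient(ProcessAdminServicesClient.class);

        // list all active timers for the given process instance
        List<TimerInstance> timers = admin.getTimerInstances("my-container", processInstanceId);

        // update the first timer to expire 2 hours from its original schedule
        // (getTimerId() and the delay unit - seconds vs milliseconds - are assumptions to verify)
        admin.updateTimer("my-container", processInstanceId, timers.get(0).getTimerId(), 2 * 60 * 60, 0, 0);
    }
}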

Complete ProcessInstanceAdminService can be found here.
KIE Server client version of it can be found here.


User task administration


User task administration mainly provides methods to manipulate task assignments (users and groups), task data, and automatic (time-based) notifications and reassignments. Following is the complete list of operations supported by the user task administration service:
  • add/remove potential owners - by task id - supports both users and groups, with an option to remove existing assignments
  • add/remove excluded owners - by task id - supports both users and groups, with an option to remove existing assignments
  • add/remove business administrators - by task id - supports both users and groups, with an option to remove existing assignments
  • add task inputs - by task id - modifies task input content after the task has been created
  • remove task inputs - by task id - completely removes task input variable(s)
  • remove task output - by task id - completely removes task output variable(s)
  • schedule a new reassignment to given users/groups after a given time elapses - by task id - schedules an automatic reassignment based on a time expression and the state of the task:
    • reassign if not started (the task was not moved to the InProgress state)
    • reassign if not completed (the task was not moved to the Completed state)
  • schedule a new email notification to given users/groups after a given time elapses - by task id - schedules an automatic notification based on a time expression and the state of the task:
    • notify if not started (the task was not moved to the InProgress state)
    • notify if not completed (the task was not moved to the Completed state)
  • list scheduled task notifications - by task id - returns all active task notifications
  • list scheduled task reassignments - by task id - returns all active task reassignments
  • cancel task notification - by task id and notification id - cancels (and unschedules) the task notification
  • cancel task reassignment - by task id and reassignment id - cancels (and unschedules) the task reassignment
NOTE: all user task admin operations must be performed as a business administrator of the given task - meaning every single call to the user task admin service is checked for authorization, and only business administrators of the given task are allowed to perform the operation.
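With the KIE Server Java client, these operations live on UserTaskAdminServicesClient. A hedged sketch - the container id, user/group names, and time expression format are illustrative, and the exact signatures should be verified for the version in use:

import java.util.Arrays;

import org.kie.server.api.model.admin.OrgEntities;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.admin.UserTaskAdminServicesClient;

public class TaskAdminExample {

    public static void addOwnerAndSafetyNet(KieServicesClient client, Long taskId) {
        UserTaskAdminServicesClient taskAdmin = client.getServicesClient(UserTaskAdminServicesClient.class);

        // add john as a potential owner without removing the existing assignments
        OrgEntities john = OrgEntities.builder().users(Arrays.asList("john")).build();
        taskAdmin.addPotentialOwners("my-container", taskId, false, john);

        // automatically reassign to the administrators group if the task is not started within 2 days
        OrgEntities admins = OrgEntities.builder().groups(Arrays.asList("administrators")).build();
        taskAdmin.reassignWhenNotStarted("my-container", taskId, "2d", admins);
    }
}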

Complete UserTaskAdminService can be found here.
KIE Server client version of it can be found here.


Process instance migration


ProcessInstanceMigrationService provides an administrative utility to move given process instance(s) from one deployment to another, or from one process definition to another. Its main responsibility is to allow basic upgrades of the process definition behind a given process instance. That might include mapping currently active nodes to other nodes in the new definition.

Migration does not deal with process or task variables; they are not affected by migration. Essentially, process instance migration means a change of the underlying process definition the process engine uses to move on with the process instance.

Even though process instance migration is available, it's recommended to let active process instances finish and then start new instances with the new version whenever possible. In case that approach can't be used, migration of active process instances needs to be carefully planned before execution, as it might lead to unexpected issues. The most important questions to take into account are:
  • is the new process definition backward compatible?
  • are there any data changes (variables that could affect process instance decisions after migration)?
  • is there a need for node mapping?
Answers to these questions might save a lot of headaches and production problems after migration. It is best to always stick with backward compatible processes - e.g. extending the process definition rather than removing nodes. That's not always possible, though, and in some cases there is a need to remove certain nodes from a process definition. In that situation, the migration needs to be instructed how to map the nodes that were removed in the new definition, in case an active process instance is currently in such a node; a sketch of such a call follows below.
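In the KIE Server Java client, migration surfaces on ProcessAdminServicesClient as well. A hedged sketch - container ids, the target process id, and the node ids in the mapping are placeholders, and the exact signature should be verified for the version in use:

import java.util.Collections;
import java.util.Map;

import org.kie.server.api.model.admin.MigrationReportInstance;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.admin.ProcessAdminServicesClient;

public class MigrationExample {

    public static void migrate(KieServicesClient client, Long processInstanceId) {
        ProcessAdminServicesClient admin = client.getServicesClient(ProcessAdminServicesClient.class);

        // map node "_7" (removed in the new version) to node "_9" of the new definition
        Map<String, String> nodeMapping = Collections.singletonMap("_7", "_9");

        MigrationReportInstance report = admin.migrateProcessInstance(
                "my-project_1.0", processInstanceId,
                "my-project_2.0", "my-project.process", nodeMapping);

        if (!report.isSuccessful()) {
            System.out.println("Migration failed: " + report.getLogs());
        }
    }
}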

Complete ProcessInstanceMigrationService can be found here.
KIE Server version of it can be found here.

With this, I'd like to emphasize that administrators of jBPM should now be well equipped with tools for the most common operations they might face. Obviously that won't cover all possible cases, so we are more than interested in user feedback on what else might be needed as an admin function. So share it!

Tuesday, October 25, 2016

Case management - jBPM v7 - Part 3 - dynamic activities

It's time for the next article in the "Case Management" series; this time let's look at dynamic activities that can be added to a case at runtime. Dynamic means that the process definition behind a case has no such node/activity defined, and thus it cannot simply be signaled, as was done for some of the activities in the previous articles (Part 1 and Part 2).

So what can be added to a case as dynamic activity?

  • user task
  • service task - which is pretty much any type of service task implemented as a work item
  • sub process - reusable

User and service tasks are quite simple and easy to understand: they are just added to the case instance and immediately executed. Depending on the nature of the task, it might start and wait for completion (user task) or it might finish directly after execution (service task). Although most service tasks (as defined in the BPMN2 spec - Service Task) will be invoked synchronously, they can be configured to run in the background or even wait for an external signal to complete - it all depends on the implementation of the work item handler.
A subprocess is slightly different in what the process engine expects - the process definition that is going to be started as a dynamic subprocess must exist in the kjar. That is to make sure the process engine can find that process by its id in order to execute it. There are no restrictions on what the subprocess does; it can be synchronous without wait states, or it can include user tasks or other subprocesses. Moreover, such a subprocess will have its correlation key set with the first property being the case id of the case where the dynamic task was created. So from the case management point of view it belongs to that case and thus sees all case data (from the case file - see more details about the case file in Part 2).

Create dynamic user task

To create a dynamic user task, a few things must be given:
  • task name
  • task description (optional, though recommended)
  • actors - comma-separated list of actors to assign to the task; can refer to case roles for dynamic resolution
  • groups - same as for actors but referring to groups; again, case roles can be used
  • input data - task inputs to be made available to task actors
A dynamic user task can be created via the following endpoint:

Endpoint::
http://host:port/kie-server/services/rest/server/containers/itorders/cases/instances/IT-0000000001/tasks

where 
  • itorders is container id
  • IT-0000000001 is case id
Method::
POST

Body::
{
 "name" : "RequestManagerApproval",
 "data" : {
  "reason" : "Fixed hardware spec",
  "caseFile_hwSpec" : "#{caseFile_hwSpec}"
  }, 
 "description" : "Ask for manager approval again",
 "actors" : "manager",
 "groups" : "" 
}
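For example, with curl the request above could be sent as follows (host, port, and credentials are assumptions for a local setup):

curl -u 'user:password' -X POST \
     -H 'Content-Type: application/json' -H 'Accept: application/json' \
     -d '{"name" : "RequestManagerApproval", "data" : {"reason" : "Fixed hardware spec", "caseFile_hwSpec" : "#{caseFile_hwSpec}"}, "description" : "Ask for manager approval again", "actors" : "manager", "groups" : ""}' \
     http://localhost:8230/kie-server/services/rest/server/containers/itorders/cases/instances/IT-0000000001/tasks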

This request will then create a new user task associated with case IT-0000000001, and the task will be assigned to the person who was assigned to the case role named manager. The task will have two input variables:
  • reason
  • caseFile_hwSpec - defined as an expression to allow runtime capture of process/case data
There might be a form defined to provide a user-friendly UI for the task; it will be looked up by task name - in this case RequestManagerApproval (so the form file in the kjar should be named RequestManagerApproval-taskform.form).

Create dynamic service task

Service tasks are slightly less complex from the general point of view, though they might need more data to be provided to perform the execution properly. Service tasks require the following to be given:
  • name - name of the activity
  • nodeType - type of the node, which is then used to find the work item handler
  • data - map of data needed to properly deal with the execution
A service task can be created with the same endpoint as a user task; the difference is in the body payload.
Endpoint::
http://host:port/kie-server/services/rest/server/containers/itorders/cases/instances/IT-0000000001/tasks

where 
  • itorders is container id
  • IT-0000000001 is case id
Method::
POST

Body::
{
 "name" : "InvokeService",
 "data" : {
  "Parameter" : "Fixed hardware spec",
  "Interface" : "org.jbpm.demo.itorders.services.ITOrderService",
  "Operation" : "printMessage",
  "ParameterType" : "java.lang.String"
  }, 
 "nodeType" : "Service Task"
}

In this example, a Java-based service is executed. It consists of a public class org.jbpm.demo.itorders.services.ITOrderService with a public printMessage method that takes a single String argument. Upon execution, the Parameter value is passed to the method.
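Based on that description, a minimal version of such a service class might look like this (the method body is illustrative):

package org.jbpm.demo.itorders.services;

public class ITOrderService {

    // invoked by the ServiceTaskHandler with the value of the "Parameter" input
    public void printMessage(String message) {
        System.out.println("IT Orders service received: " + message);
    }
}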

The number, names, and types of data given when creating a service task depend completely on the implementation of the service task's handler. In this example, org.jbpm.process.workitem.bpmn2.ServiceTaskHandler was used.

NOTE: For any custom service task, make sure the handler is registered in the deployment descriptor, in the Work Item Handlers section, with a name that is the same as the nodeType used when creating the dynamic service task.

Create dynamic subprocess

A dynamic subprocess expects only optional data to be provided; there are no special parameters as there are for tasks, so it's quite straightforward to create.

Endpoint::
http://host:port/kie-server/services/rest/server/containers/itorders/cases/instances/IT-0000000001/processes/itorders-data.place-order

where 
  • itorders is container id
  • IT-0000000001 is case id
  • itorders-data.place-order is the process id of the process to be created
Method::
POST

Body::
{
 "any-name" : "any-value"
}

Mapping of output data

Typically, when dealing with regular tasks or subprocesses, users map output data by defining data output associations that instruct the engine which outputs of the source (task or subprocess instance) should be mapped to which process instance variables. Since dynamic tasks have no data output definitions, there is only one way to put output from a task/subprocess into the process instance - by name. This means the name of a returned task output must match the name of a process variable to be mapped; otherwise that output is ignored. Why is that? It safeguards the case/process instance from being polluted with unrelated variables, so only expected information is propagated back to the case/process instance.

Look at this in action

As usual, there are screencasts to illustrate this in action. First comes the authoring part, which shows:
  • creation of additional form to visualize dynamic task for requesting manager approval
  • simple java service to be invoked by dynamic service task
  • declaration of service task handler in deployment descriptor


Next, see how it actually works in the runtime environment (KIE Server):




The complete project, which can be imported and executed, can be found on GitHub.

So that concludes Part 3 of case management in jBPM 7. Comments and ideas are more than welcome. And that's still not all that is coming :)

Wednesday, October 19, 2016

Case management - jBPM v7 - Part 2 - working with case data

In Part 1, the basic concepts around case management brought by jBPM 7 were introduced. It was a basic example (IT order handling), limited to just moving through case activities and basic data to satisfy milestone conditions.

In this article, the case file will be described in more detail, including how it can be used from within a case and a process. So let's start with a quick recap of the variable levels available in processes.

There are several levels where variables can be defined:

  • process level - process variable
  • subprocess level - subprocess variable
  • task level - task variable
Obviously the process level is the entry point from which all the others take their variables. That means if a process instance creates a subprocess, it will usually include a mapping from the process level to the subprocess level; similarly, tasks get their variables from the process level.
In such cases the variable is copied, to ensure isolation at each level. That is, in most cases, the desired behavior - unless you need all variables to always be up to date, regardless of the level they are in. That, in turn, is the usual situation in case management, which expects the most up-to-date variables at any time in the case instance, regardless of their level.

That's why case management in jBPM is equipped with the case file, of which there is only one for the entire case, regardless of how many process instances compose the case instance. Storing data in the case file promotes reuse instead of copying: each process instance can take a variable directly from the case file, and the same goes for updates. There is no need to copy the variable around - simply refer to it from your process instance.

Support for case file data is provided at design time by marking a given variable as a case file variable


As can be seen in the above screenshot, the variable hwSpec is marked as a case file variable, while the other one (approved) is a process variable. That means hwSpec will be available to all processes within a case; moreover, it will be accessible directly from the case file even without process instance involvement.

Next, case variables can be used in data input and output mapping


Case file variables are prefixed with caseFile_ so the engine can properly handle them. A simplified version (without the prefix) is expected to work as well, though for clarity and readability it's recommended to always use the prefix.

Extended order hardware example

In Part 1, there was a very basic case definition, with no data, for handling orders of IT hardware. In this article we extend the example to illustrate:
  • use of case file variables
  • use of documents
  • sharing of information between process instances via the case file
  • use of a business process (via call activity) to handle the place-order activity


The following screencast shows all the design-time activities needed to extend the Part 1 example, including the awesome feature to copy an entire project!




So what was done here:

  • create a new business process - place-order - that is responsible for the place-order activity, instead of the script task from the previous example
  • define case file variables:
    • hwSpec - a physical document that needs to be uploaded
    • ordered - an indication for Milestone 1 to be achieved
  • replace the script task for the Place order activity with a reusable subprocess - important to note is that there are no variable mappings in place; everything is taken directly from the case file
  • generate forms to handle the file upload and slightly adjust their look

With these few simple steps, our case definition is enhanced with quite a few new features, making it much more broadly applicable. It's quite common to include files/documents in a case, and they should remain available even after the process instance that uploaded them is gone. That's provided by the case file, which is there as long as the case instance has not been destroyed.

Let's now run the example to see it in action




The execution is similar to part one, meaning we use the REST API to start the case. A part worth noting here is that we made a new version of the project:

  • group id: org.jbpm
  • artifact id: itorders
  • version: 2.0
It was then deployed on top of the first version, to the exact same KIE Server. Even though both versions are running, the URL to start the case didn't change:


Endpoint::
http://host:port/kie-server/services/rest/server/containers/itorders/cases/itorders.orderhardware/instances

where

  • itorders is the container alias that was deployed to KIE Server
  • itorders.orderhardware is case definition id

Method: POST

As described above, when a new case is started it should be given its basic configuration - role assignments:

POST body::
{
  "case-data" : {  },
  "case-user-assignments" : {
    "owner" : "maciek",
    "manager" : "maciek"
  },
  "case-group-assignments" : {
    "supplier" : "IT"
 }
}

itorders is an alias that, when used, will always select the latest version of the project. If there is a need to explicitly pick a given version, simply replace the alias with the container id (itorders_2.0 or itorders_1.0).

Once the case is started, the supplier (based on the role assignment - the IT group) will have a task to complete: provide the hardware specification - upload a document. Then the manager can review the specification and approve (or not) the order. Then it goes to the subprocess that actually handles the ordering, which, once done, stores the status in the case file, which in turn triggers milestone 1.

Throughout all these user-oriented activities, the case file information (hwSpec) is shared without any need to copy it around. Moreover, there was no need to configure anything to handle documents either; that is all done by creating a case project, which by default sets up everything that is needed.

At any time (as long as the case has not been destroyed) you can get the case file to view the data. It can be retrieved via the following endpoint:

Endpoint::
http://host:port/kie-server/services/rest/server/containers/itorders/cases/instances/IT-0000000001/caseFile 

where:
  • itorders - is the container alias
  • IT-0000000001 - is the case ID
Method: GET



With this, Part 2 concludes, with a note that since this is a slightly enhanced order hardware case, it's certainly not all you can do with it - so stay tuned for more :)

Try it yourself

As usual, the complete source code is located on GitHub.

Monday, October 10, 2016

Case management - jBPM v7 - Part 1

This article starts a new series of blog posts about the case management feature coming in jBPM v7, illustrating its capabilities with complete examples that get more complex/advanced with each part.

One of the most frequently requested features in jBPM is so-called case management. Case management can mean different things depending on who you talk to, so I'd like to start with a small scope definition of what it means in the context of jBPM (at the moment - that might change based on feedback, supported features and use cases, and further evolution).

Case management is best described in comparison with business processes. Business processes are usually modeled as flow charts with clearly defined paths to reach a business goal. These processes usually have one starting point (they might have more) and are structurally connected to build an end-to-end flow of work and data.



Cases, on the other hand, are more dynamic: they provide room for improvement as the case evolves, without the need to foresee all possible actions in advance. So a case definition usually consists of loosely coupled process fragments that can be connected (directly or indirectly) to lead to certain milestones and, finally, the business goal.

Looking at the different notations that can be used for case management, processes and cases might be represented differently:

  • BPMN2
  • CMMN
jBPM comes with case support based on BPMN2, as most users are familiar with this notation and most if not all features can be represented with BPMN2 constructs. That's at least a starting point, which might be revisited further on. A good comparison between BPMN2 and CMMN was published by Bruce Silver.

This article series will introduce readers to case management support gradually, adding more features as we go, so as not to provide too many details at once, and letting the features described be backed by examples that can be watched (screencasts) and executed in an actual jBPM v7 environment.

Case project

The first thing to start with is to create a case project - a special type of project in KIE workbench that, on top of a regular project, configures it for case management:
  • sets the runtime strategy to Per Case
  • configures marshallers for the case file and documents
  • creates WorkDefinition.wid files in the project and its packages to ensure case-related nodes (e.g. Milestone) are available in the palette


Case definition

So let's start with a basic case definition example covering the following use case - IT hardware orders. As in any company, there is a need from time to time to order new IT equipment - computers, phones, etc. This kind of system is a good fit for case management, as it usually deals with somewhat dynamic decisions that might influence the flow.

A case definition is created in the authoring perspective in KIE workbench - it expects a name, a location, and optionally a case ID prefix. What's that? The case ID prefix is a configurable element that allows different types of cases to be easily distinguished. The default mechanism is that the prefix is followed by a generated id in the following format:

ID-XXXXXXXXXX

where X is a generated number that produces a unique id together with the prefix. If a prefix is not given, it defaults to CASE, and each subsequent instance of that case definition will be:
CASE-0000000001
CASE-0000000002
CASE-0000000003

or when prefix is set to HR
HR-0000000001
HR-0000000002
HR-0000000003

A case definition is always an ad hoc process definition, meaning it is a dynamic process and so does not require explicit start nodes.

Once the clean definition is created, it's time to define the roles involved in the usual case of ordering new IT hardware:
  • owner - the person who requests the hardware (there can be only one)
  • manager - the direct manager of the owner, who approves the requested hardware
  • supplier - a set of people who can order and deliver the physical equipment (usually more than one)
When the roles are known, case management must ensure that they are not hardcoded to a single set of people/groups as part of the case definition, and that they can differ for each case instance. This is where case role assignments come into the picture; they can be:
  • given when case starts
  • set at any given point in time while case is active
  • removed at any given point in time while case is active
The second and third options do not alter the task assignments of already active tasks.

What is important to note here is that in case management users should always use roles for task assignments instead of actual user/group names; that is to make the case as dynamic as possible, so the actual user/group assignment is done as late as possible. It's similar to process variables, though without the expression syntax (#{variable}).

Let's take a look at our case definition:


So what do we have here? The first thing immediately visible is that the process has no start nodes. Does that mean there is no way to tell what is going to be triggered when a new instance of this case definition is created?
Quite the opposite - nodes that have no incoming connections and are marked as Adhoc Autostart (a property of a node) will be automatically triggered when the instance is started.

In this case these are:
  • Prepare hardware spec
  • Milestone 1: Order placed
Both of these nodes are wait states, meaning they are triggered but not left - they wait for further action:
  • Prepare hardware spec - waits for the supplier to provide the spec and complete the task
  • Milestone 1: Order placed - waits for a condition to be met - a case file variable named "ordered" with the value true
Hmmm, but what is a case file then? A case file is like a bucket of data for the entire case instance. Since a case can span a number of process instances, instead of copying data back and forth (which, first of all, might be expensive, and second, can lead to use of out-of-date information), each process instance can write to and read from the case file, which is accessible to all process instances belonging to the same case. The case file is stored in working memory and thus is persistable the same as the ksession and process instances - meaning it can use marshaling strategies to be stored in different places, e.g. as documents, JPA entities, etc. What's more important - it is a fact in working memory and thus can be the subject of rules.

The milestone actually uses the case file as its condition, triggering only if there is an "ordered" variable in the case file with the value true. Only then will the milestone be completed and proceed to the next node.

Another part worth noting is the end signals at the end of the Milestone 1 and Milestone 2 fragments. These signals are responsible for triggering the next milestone in line - but again, only triggering it, not completing it, as it will wait on its own condition. The scope of the signal is the process instance only, so completing Milestone 1 in the first case instance will not cause any side effects on other active case instances of the same definition.

Here is a complete design of this project and case definition as screencast.





The complete source code of this project (and the entire repository) can be found here. This repository can be cloned directly into workbench for build and deploy.

... speaking of build and deploy....

The project can be built and deployed directly in workbench and (assuming you have a KIE Server connected to workbench) provisioned to an execution environment where it can be started and worked on.

At the moment workbench does not provide any case management UI, so we will use REST calls to start a case and put data into the case file, but we can use workbench for user task interaction and overall monitoring - process instance logs, process instance image, active nodes, etc.

Start new case

To start a new case, use the following endpoint:
Endpoint::
http://host:port/kie-server/services/rest/server/containers/itorders/cases/itorders.orderhardware/instances

where

  • itorders is the container alias that was deployed to KIE Server
  • itorders.orderhardware is case definition id
As described above, when a new case is started it should be given its basic configuration - role assignments:

POST body::
{
  "case-data" : {  },
  "case-user-assignments" : {
    "owner" : "maciek",
    "manager" : "maciek"
  },
  "case-group-assignments" : {
    "supplier" : "IT"
 }
}
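With curl, starting the case could look like this (host, port, and credentials are assumptions for a local setup):

curl -u 'user:password' -X POST \
     -H 'Content-Type: application/json' -H 'Accept: application/json' \
     -d '{"case-data" : {}, "case-user-assignments" : {"owner" : "maciek", "manager" : "maciek"}, "case-group-assignments" : {"supplier" : "IT"}}' \
     http://localhost:8230/kie-server/services/rest/server/containers/itorders/cases/itorders.orderhardware/instances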

At the moment case-data is empty, as we don't supply any data/information to the case. But we do configure our defined roles: two of them are user assignments (as can be seen in the above screencast, they are referenced in the Actor property of user tasks) and the third is a group assignment (referenced in the Groups property of a user task).

Once successfully started, it will return a case ID that should look like:
IT-0000000001

This case can then be seen in the process instance list in workbench, and its tasks should be available in the task perspective. The tasks can be completed, and various milestones will be achieved, until it reaches the milestone that requires the "shipped" variable to be present in the case file.

Insert case file data

Case file data can easily be inserted into an active case using the REST API.
Endpoint::
http://host:port/kie-server/services/rest/server/containers/itorders/cases/instances/IT-0000000001/caseFile/shipped

where
  • itorders is the container alias that was deployed to KIE Server
  • IT-0000000001 is the unique id of a case instance
  • shipped is the name of the case file variable to be set/replaced
POST body::
true
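Note that the body here is just the raw JSON value true. With curl (host, port, and credentials are assumptions for a local setup):

curl -u 'user:password' -X POST -H 'Content-Type: application/json' \
     -d 'true' \
     http://localhost:8230/kie-server/services/rest/server/containers/itorders/cases/instances/IT-0000000001/caseFile/shipped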

The same should later be repeated to insert the "delivered" case file variable, to achieve Milestone 3 and move to the final task - Customer Satisfaction Survey. And that's all for this basic case example.

Execution in action can be found in this screencast



Comments and ideas are more than welcome. In addition, contributions of cases that could serve as examples are wanted!

Friday, September 30, 2016

Improved container handling and updates in KIE Server

KIE Server allows deploying multiple kjars - even the same project in different versions. It is actually quite common to add a new version of a project (kjar) next to one already running. That in turn forces unique container ids for each project.
Let's take an example - at the beginning we start with the first project:

Group id: org.jbpm
Artifact id: my-project
Version: 1.0

This project has a single process inside, and once it is built, we deploy it with the container id set to my-project.

Then we realize that the process needs an update (of whatever kind), so we need to increase the project version and again build and deploy it to KIE Server. That gives us another project version:

Group id: org.jbpm
Artifact id: my-project
Version: 2.0

We cannot deploy it with the same container id (my-project), so we need to change it to something else... my-project2 (most likely ;))

So what's wrong with that? Well, first of all, the naming convention starts to be driven by the versioning scheme used, which might be good or bad, depending on how it's used. More importantly, clients interacting with these projects must be aware of the versions at all times.
That in turn binds the client application to the release cycle of the projects, in particular to their new versions (and by that to processes and other assets).

What can we do about it?
The improvement that comes in version 7 allows aliases to be defined for containers (a container being the runtime representation of a kjar). An alias can be added to as many containers as needed, and by default (when not given) the artifact id of the project is used.
Aliases are not constrained to the same group and artifact ids, so projects with different GA coordinates can use the same alias. The alias can then be used whenever interacting with KIE Server; the behavior differs depending on the operation performed, as it might require some additional logic to figure out the actual container to be used. Let's examine these situations.

Starting new process instance

As noted above, unique container ids prevent clients from using a single endpoint to start process instances of a given process id, as they always need to provide a container id, which differs between versions. When an alias is used instead, the client application can always use the same container alias (instead of a container id) to start the latest version of the process.

To start the latest version of a process, KIE Server takes the container alias, finds all containers that declare that alias, and then searches for the latest by comparing project versions - this is based on a Maven-like version comparator, though it only takes into account the version, not the group or artifact id.

So if we deploy the first project and then issue the following request:
http://localhost:8230/kie-server/services/rest/server/containers/my-project/processes/evaluation/instances

where: 
  • my-project is container alias
  • evaluation is process id
it will then start a new instance from the org.jbpm:my-project:1.0 project.

Next, if we deploy version 2.0 and then issue the exact same request (same URL), the new instance will be started from the org.jbpm:my-project:2.0 project.

That gives us the option to continuously deploy new versions and ensure that clients who rely on our processes always use the latest version available in the system. It always works on live information, so if you then remove version 2.0 from KIE Server and start another instance, it will be back on the org.jbpm:my-project:1.0 project, as that is then the latest one available.
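The same alias-based resolution applies when using the KIE Server Java client - the alias simply takes the place of the container id. A minimal sketch (client setup omitted; see the KieServicesFactory configuration pattern, with server URL and credentials specific to your environment):

import org.kie.server.client.KieServicesClient;
import org.kie.server.client.ProcessServicesClient;

public class AliasStartExample {

    public static Long startLatest(KieServicesClient client) {
        ProcessServicesClient processes = client.getServicesClient(ProcessServicesClient.class);
        // "my-project" is the container alias - KIE Server resolves it to the latest deployed version
        return processes.startProcess("my-project", "evaluation");
    }
}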

Interacting with existing process instance

Interaction with existing process instances depends on the process instance id: to perform any operation on a process instance, its id must be given. Based on that information, KIE Server will identify the correct container id to be used.
The container id is needed to be able to:
  • unmarshal incoming data (like variables)
  • find correct runtime manager 
  • marshal outgoing data
Both incoming and outgoing data might refer to project-specific types (which can change between versions), and thus it's important that the right container is used.

Interacting with tasks

Similar to process instances, interaction with tasks depends on the task id. KIE Server will locate the proper container by task id to deal with the request correctly. The container id is used for exactly the same operations as in the process instance case (unmarshalling and marshalling data, and finding the correct runtime manager).

Interacting with process definition image and forms

Interacting with the process definition image and process forms works the same as starting a process - when a container alias is used, the latest version is always returned.

You can see this in action in the following screencast, which illustrates the use of workbench integrated with KIE Server. As you can see, when you press build and deploy, all the details are already filled in:
  • the container id is artifactId _ version
  • the alias is the artifact id
You are in full control, though, and can change both defaults to whatever you need.



And again, container aliases are set either explicitly (if given when creating containers) or implicitly (based on the artifact id). That does not mean you have to use them, though. In some cases container ids are good enough, and they will still work the same way as they do now.

Hopefully this gives you another reason to move to KIE Server and start using its full potential :)