Thursday, August 17, 2017

Maven plugins for KIE Server

Since version 7 of jBPM, KIE Server is the only execution server available by default, so it's getting more and more traction. With that in mind, there is a need to align it more closely with CI/CD pipelines to allow simple integration with runtime environments.

To help with that, two Maven plugins were built:

  • KIE Server Deploy Maven Plugin
  • KIE Server Controller Deploy Maven Plugin

The main purpose of these plugins is to enable simple deployment (and not only deployment) of kjars into KIE Servers. The first one is dedicated to unmanaged KIE Servers, as it interacts directly with the KIE Server REST API, while the second one targets managed KIE Servers, as it interacts with the KIE Controller (either the one in workbench/business central or a standalone controller).

These Maven plugins can be used to deploy a kjar to an execution server directly from within a build pipeline.

Both plugins have comprehensive documentation (see links above), but just for completeness I'd like to list their capabilities in this article:

KIE Server Deploy Maven Plugin

  • deploy - deploys a kjar to the runtime environment
  • dispose - disposes a running kjar (KIE container) in the runtime environment
  • update - updates the version of a running kjar (KIE container) in the runtime environment
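Wired into a build pipeline, a deploy then becomes a single Maven invocation. A minimal sketch only - the goal name comes from the list above, but the plugin prefix and the parameter names below are placeholders; the plugin documentation defines the exact coordinates and properties:

-- placeholder plugin prefix and parameters, see the plugin docs
mvn kieserver-deploy:deploy \
    -Dkieserver.location=http://localhost:8080/kie-server/services/rest/server \
    -Dkieserver.username=kieserveruser \
    -Dkieserver.password=kieserverpwd1!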

KIE Server Controller Deploy Maven Plugin

  • get-template - retrieves existing server templates from the controller
  • create-template - creates a new server template with a set of containers
  • delete-template - removes a server template
  • get-containers - retrieves the containers in a given server template
  • get-container - retrieves a given container from a server template
  • create-container - creates a new container in a given server template
  • delete-container - deletes a container from a given server template
  • start-container - starts a container in a given server template
  • stop-container - stops a container in a given server template
  • deploy-container - creates and starts a container in a given server template
  • dispose-container - stops and removes a container from a given server template
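Similarly, a whole deploy-container cycle against the controller can be scripted. Again a sketch only - the goal comes from the list above, while the prefix and the parameter names are placeholders for whatever the plugin documentation defines:

-- placeholder plugin prefix and parameters, see the plugin docs
mvn kieserver-controller-deploy:deploy-container \
    -Dcontroller.url=http://localhost:8080/kie-wb/rest/controller \
    -Dcontroller.username=admin \
    -Dcontroller.password=adminpwd1! \
    -Dtemplate.id=production-servers \
    -Dcontainer.id=evaluation_1.0.0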

Contribution - a win-win situation!

And now the most important part - these Maven plugins were added by Fabio Massimo as contributions to the KIE projects. So I'd like to thank Fabio for his outstanding work and this excellent addition to the projects.

This clearly shows how valuable contributions are! With that, I'd like to encourage others to follow Fabio and share with other community members the great stuff you all have done or plan to do!


Wednesday, August 2, 2017

Managed KIE Server gets ready for the cloud

As described in this article, KIE Server can run in two modes:

• managed, with a controller that is responsible for providing the KIE containers to be deployed
• unmanaged, a self-contained server that allows KIE containers to be deployed manually

In this article, I'd like to focus on managed mode and show some improvements in that area that make managed KIE Server ready for the cloud.

Background

With the default configuration of a managed KIE Server, both the controller and the KIE Server need to know how to communicate with each other. By default this is REST based communication, and it thus requires credentials to be provided when sending requests:
• user and password - for BASIC authentication
• token - for BEARER authentication
These should be given as system properties on each side:

• org.kie.server.user and org.kie.server.password are to be set on the controller JVM to define what credentials to use when connecting to KIE Server(s)
• org.kie.server.controller.user and org.kie.server.controller.password are to be set on the KIE Server JVM to define what credentials to use when connecting to the controller
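For example (the user names and passwords below are just placeholders), on the controller JVM:

-Dorg.kie.server.user=kieserveruser
-Dorg.kie.server.password=kieserverpwd1!

and on the KIE Server JVM:

-Dorg.kie.server.controller.user=controlleruser
-Dorg.kie.server.controller.password=controllerpwd1!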

This configuration fits nicely in an unrestricted environment where the controller and KIE Server(s) have no limitations on talking to each other. It does require, though, that the user name and password the controller uses to connect to KIE Servers are set globally via system properties, and thus the same credentials will be used whenever talking to any KIE Server instance.

This setup can become problematic if there are any restrictions between the two. In some cases the controller might be hidden behind a firewall, which makes it hard for it to reach KIE Server(s) when needed. Similarly, this becomes an issue in an OpenShift environment where the controller and KIE Server(s) live in different namespaces - they won't see each other's internal IPs.

Here we touch upon another aspect of managed KIE Servers - their location. A KIE Server running in managed mode requires the following configuration parameters (given as system properties on the JVM that runs the KIE Server):
• org.kie.server.id - an id that points to the server template id defined in the controller
• org.kie.server.controller - the URL of the controller to connect to upon start
• org.kie.server.location - the URL of this instance of the KIE Server, where it will be accessible over HTTP/REST
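For example (host names, ports and the template id below are placeholders; the kie-wb context path matches the websocket example later in this article):

-Dorg.kie.server.id=production-servers
-Dorg.kie.server.controller=http://controller-host:8080/kie-wb/rest/controller
-Dorg.kie.server.location=http://kieserver-host:8180/kie-server/services/rest/server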
The location of the KIE Server is expected to be unique, since it is the URL where the actual instance is accessible. This becomes an issue, though, when running KIE Servers behind a load balancer or in cloud based environments.

It puts us in a situation where we either give the load balancer URL, and by that lose the ability to receive updates from the controller (as only one instance will get the updates, based on the load balancer's selection), or we bypass the load balancer, and then lose its capabilities for runtime operations. Keep in mind that the location a KIE Server provides on connection to the controller is then used by the (so called) runtime views in the workbench - process instances, tasks, etc.

In an OpenShift environment it is pretty much the same issue - either a public IP is provided, which completely hides the individual pods, or the internal IP of the pod is used. That has the same consequences as the load balancer, with one addition - the internal IP won't work at all across namespaces.

Websockets to the rescue...

To resolve all the issues mentioned above, an alternative (and soon to be the default) way of communicating between the KIE Server and the Controller was introduced. It is based on Websockets, which are now available in pretty much any JEE container (including servlet containers), and it solves pretty much all the issues identified above, both on premise and in the cloud.


As illustrated in the diagram above, the KIE Server is the one that initiates the communication and keeps it active as long as it's alive. That in turn removes any need for the KIE Controller to know how to communicate with (and by that connect to) KIE Server instances. So there is no more need to configure any user name or password on the controller JVM to talk to KIE Servers; it simply reuses the open channel to the connected KIE Servers.

The KIE Server is solely responsible for the connection. That means it needs to know where the controller is, how to authenticate when opening the connection, and how to handle a lost connection (e.g. when the controller goes down).

The first two are given exactly as before, as system properties on the JVM that the KIE Server is going to run on:
• org.kie.server.controller.user and org.kie.server.controller.password (or a token) - using either BASIC or BEARER authentication
• org.kie.server.controller - the URL of the controller to connect to upon start
Lost connections are handled by a retry mechanism - as soon as the KIE Server gets a notification that the connection has been closed, it starts a background thread that attempts to connect to the controller every 10 seconds. Once it is reconnected, that thread is terminated. It will reconnect only if the KIE Server itself was not the one that closed the connection.

Since the connection between the KIE Servers and the KIE Controller is kept open, the location given when a KIE Server connects no longer has to be unique. That solves the issue of running behind a load balancer or in OpenShift with different namespaces. The system property that provides the location (org.kie.server.location) should now be given as the load balancer URL or the public IP in OpenShift.
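For example, with several KIE Server instances behind a load balancer at a hypothetical host lb.example.com, every instance would be started with the same location:

-Dorg.kie.server.location=http://lb.example.com/kie-server/services/rest/server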

NOTE: If you don't run behind a load balancer in an on premise setup (not OpenShift), then keep the location of the KIE Server unique regardless of websockets being used. A similar rule applies - the same public IP/load balancer should be kept for a single server template only.

There is no need for any extra configuration to enable websocket based communication; it is selected based solely on the actual URL given as the controller URL - the org.kie.server.controller system property:

-Dorg.kie.server.controller=ws://localhost:8080/kie-wb/websocket/controller

Depending on where your controller is, you might need to change:
• localhost - to the actual host/IP of the server where the controller is deployed
• 8080 - to the actual port number of the server where the controller is deployed
• kie-wb - to the actual context path of the controller web app

Both protocols - HTTP/REST and Websocket - are active by default and either of them can be used, though one rule must be kept: use a single protocol for all KIE Servers of a given server template. It is recommended to keep a single protocol across all KIE Servers connected to a single controller.

The workbench, which provides the UI for process related operations (the Process Instances, Process Definitions and Tasks perspectives), will utilise the websocket channel only for administration operations, that is:
• controller based operations to manage KIE Servers
• registration of the data set queries required by the runtime views
All other operations, like getting user tasks or getting process definitions and instances, will use regular REST based communication, as they call endpoints on behalf of the logged in user to enforce security.
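As an illustration, fetching process instances is a plain REST call made with the user's own credentials (the host and credentials below are placeholders; the queries endpoint is part of the KIE Server REST API):

curl -u john:johnpwd1! http://localhost:8080/kie-server/services/rest/server/queries/processes/instances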

With this enhancement, a managed KIE Server is a much nicer option to run in the cloud and behind a load balancer than ever before :)

Stay tuned for more to come!