Wednesday, March 23, 2016

Packt is doing it again: 50% off on all eBooks and Videos

Packt Publishing has another great promotion going: 50% off on all Packt eBooks and Videos until April 30th.

It is a great opportunity to grab all those Drools books as well as any others you might be interested in.

Click on the image below to be redirected to their online store:


Monday, March 21, 2016

High Availability Drools Stateless Service in Openshift Origin

Hi everyone! In this blog post I want to cover a simple example showing how easy it is to scale our Drools Stateless services using Openshift 3 (Docker and Kubernetes). I will show how we can scale our service by provisioning new instances on demand, and how those instances are load balanced by Kubernetes using a round robin strategy.

Our Drools Stateless Service

First of all we need a stateless Kie Session to play around with. In this simple example I've created a food recommendation service to demonstrate the kind of scenarios you can build using this approach. All the source code can be found inside the Drools Workshop repository hosted on github:
In this project you will find 4 modules:
  • drools-food-model: our business model, including domain classes such as Ingredient, Sandwich, Salad, etc.
  • drools-food-kjar: our business knowledge; here we have the set of rules that describe how the food recommendations will be made.
  • drools-food-services: using wildfly swarm I'm exposing a domain specific service encapsulating the rule engine. A set of REST services is exposed here so our clients can interact with it.
  • drools-controller: using the Kubernetes Java API we can programmatically provision new instances of our Food Recommendation Service on demand into the Openshift environment.
Our unit of work will be the Drools-Food-Services project, which exposes the REST endpoints to interact with our stateless sessions.
Also notice that there is another Service that gives us very basic information about where our Service is running:
We will call this service later on to know exactly which instance of the service is answering our clients.
The rules for this example are simple and don't do much. If you are looking to learn Drools, I recommend you create more meaningful rules and share them with me so we can improve the example ;) You can take a look at the rules here:
As you might expect: Sandwiches for boys and Salads for girls :)
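Not being actual DRL, the following plain-Java sketch just approximates the decision those rules make; the class and field names are made up for illustration and are not part of the workshop's model:

```java
// Toy approximation of the example's food recommendation logic
// ("Sandwiches for boys and Salads for girls"). Illustrative only;
// the real project expresses this as Drools rules in the kjar module.
public class FoodRecommendationSketch {

    static String recommend(String gender) {
        if ("male".equals(gender)) {
            return "Sandwich";   // rule: sandwiches for boys
        }
        if ("female".equals(gender)) {
            return "Salad";      // rule: salads for girls
        }
        return "No recommendation";
    }

    public static void main(String[] args) {
        System.out.println(recommend("male"));   // Sandwich
        System.out.println(recommend("female")); // Salad
    }
}
```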
One last important thing about our service is how the rules are picked up by the service endpoint. I'm using the Drools CDI extension to @Inject a KieContainer, which is resolved using the KIE-CI module, explained in some of my previous posts.
We will bundle this project into a Docker image that can be started as many times as we want/need. If you have a Docker client installed in your local environment you can start this food recommendation service by looking at the salaboy/drools-food-services image, which is hosted on Docker Hub.
By starting the Docker image without even knowing what is running inside we immediately notice the following advantages:
  • We don't need to install Java or any other tool besides Docker
  • We don't need to configure anything to run our Rest Service
  • We don't even need to build anything locally, because the image is hosted on Docker Hub
  • We can run on top of any operating system
At the same time, we notice the following disadvantages:
  • We need to know on which IP and port our service is exposed by Docker
  • If we run more than one image we need to keep track of all the IPs and ports and notify all our clients about them
  • There is no built-in way to load balance between different instances of the same Docker image
To solve these disadvantages, Openshift, and more specifically Kubernetes, comes to our rescue!

Provisioning our Service inside Openshift

As I mentioned before, if we just start creating new Docker image instances of our service, we soon find out that our clients will need to know how many instances we have running and how to contact each of them. This is obviously not good, and for that reason we need an intermediate layer to deal with this problem. Kubernetes provides us with this layer of abstraction and provisioning, which allows us to create multiple instances of our Pods (an abstraction on top of the Docker image) and configure Replication Controllers and Services for them.
The concept of a Replication Controller provides a way to define how many instances of our service should be running at a given time. Replication Controllers are in charge of guaranteeing that if we need at least 3 instances running, those instances are running all the time. If one of these instances dies, the Replication Controller will automatically spawn a new one for us.
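That guarantee can be sketched as a tiny reconcile step in plain Java (a simulation only, with made-up pod names; the real controller loop lives inside Kubernetes):

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of what a Replication Controller guarantees: if fewer instances
// are running than desired, spawn replacements until the count matches.
public class ReplicationSketch {

    static List<String> reconcile(List<String> running, int desiredReplicas) {
        List<String> result = new ArrayList<>(running);
        while (result.size() < desiredReplicas) {
            result.add("pod-" + (result.size() + 1)); // spawn a replacement
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> running = new ArrayList<>(List.of("pod-1", "pod-2"));
        running.remove("pod-2"); // simulate an instance dying
        List<String> after = reconcile(running, 3);
        System.out.println(after.size() + " instances running"); // back to 3
    }
}
```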
Services in Kubernetes solve the problem of knowing each and every Docker instance's details. Services allow us to provide a facade that our clients can use to interact with the instances of our Pods. The Service layer also allows us to define a strategy (called session affinity) for how to load balance across the Pod instances behind the Service. There are two built-in strategies: ClientIP and Round Robin.
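The Round Robin behavior can be sketched in a few lines; this is a simulation of the dispatch pattern only, not the actual Kubernetes implementation, and the pod names are just the ones from the wget output later in this post:

```java
import java.util.List;

// Minimal sketch of the Round Robin strategy a Kubernetes Service applies:
// each incoming request goes to the next Pod in the list, wrapping around.
public class RoundRobinSketch {
    private final List<String> pods;
    private int next = 0;

    RoundRobinSketch(List<String> pods) {
        this.pods = pods;
    }

    String route() {
        String pod = pods.get(next);
        next = (next + 1) % pods.size(); // wrap around after the last pod
        return pod;
    }

    public static void main(String[] args) {
        RoundRobinSketch svc = new RoundRobinSketch(List.of(
            "drools-controller-8tmby",
            "drools-controller-k9gym",
            "drools-controller-pzqlu"));
        for (int i = 0; i < 4; i++) {
            System.out.println("request " + i + " -> " + svc.route());
        }
    }
}
```

The fourth request lands on the first pod again, which is exactly the cycling you will see when hitting the NodeStat service repeatedly.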
So we need two things now: an installation of Openshift Origin (v3), and our Drools Controller project, which will interact with the Kubernetes REST endpoints to provision our Pods, Replication Controllers and Services.
For the Openshift installation, I recommend you to follow the steps described here:
I'm running the Vagrant option (the second option described in the previous link) on my laptop.
Finally, you can find an ultra-simple example of how to use the Kubernetes API to provision, in this case, our drools-food-services into Openshift.
Notice that we are defining everything at runtime, which is really cool, because we can start from scratch or modify existing Services, Replication Controllers and Pods.
You can take a look at the drools-controller project, which shows how we can create a Replication Controller that points to our Docker image and defines 1 replica (one replica is created by default).
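As a rough illustration of what such a controller has to send, the sketch below only builds the JSON body for a Replication Controller (the field names follow the public Kubernetes v1 API schema, which exposes POST /api/v1/namespaces/{namespace}/replicationcontrollers); actually posting it, as the drools-controller project does through the Kubernetes Java API, is left out:

```java
// Builds a v1 ReplicationController manifest as a JSON string, pointing at
// our Docker image with the desired replica count. Sketch only: a real
// client would serialize typed objects and POST this to the API server.
public class RcManifest {

    static String manifest(String name, String image, int replicas) {
        return "{"
            + "\"apiVersion\":\"v1\",\"kind\":\"ReplicationController\","
            + "\"metadata\":{\"name\":\"" + name + "\"},"
            + "\"spec\":{\"replicas\":" + replicas + ","
            + "\"selector\":{\"app\":\"" + name + "\"},"
            + "\"template\":{\"metadata\":{\"labels\":{\"app\":\"" + name + "\"}},"
            + "\"spec\":{\"containers\":[{\"name\":\"" + name
            + "\",\"image\":\"" + image + "\"}]}}}}";
    }

    public static void main(String[] args) {
        System.out.println(manifest("drools-food-services",
                                    "salaboy/drools-food-services", 1));
    }
}
```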
If you log into the Openshift Console you will be able to see the newly created Service with the Replication Controller and just one replica of our Pod. Using the UI (or the APIs, changing the Main class) we can provision more replicas, as many as we need. The Kubernetes Service will make sure to load balance between the different Pod instances.
Voila! Our Services Replicas are up and running!
Now, if you access the NodeStat service by doing a GET to the mapped Kubernetes Service port, you will see which Pod is answering your request. If you execute the request multiple times you should be able to see the Round Robin strategy kicking in.
wget http://localhost:9999/api/node
{"node":"drools-controller-8tmby","version":"version 1"}

wget http://localhost:9999/api/node
{"node":"drools-controller-k9gym","version":"version 1"}

wget http://localhost:9999/api/node
{"node":"drools-controller-pzqlu","version":"version 1"}

wget http://localhost:9999/api/node
{"node":"drools-controller-8tmby","version":"version 1"}
In the same way you can interact with the stateless sessions in each of these 3 Pods. In this case, you don't really need to know which Pod is answering your request; you just need any of them to get the job done.

Summing up

By leveraging the Openshift Origin infrastructure we manage to simplify our architecture by not reinventing mechanisms that already exist in tools such as Kubernetes & Docker. In following posts I will be writing about some other nice advantages of using this infrastructure, such as rolling upgrades to update the version of our services, and adding security and API Management to the mix.
If you have questions about this approach please share your thoughts.


Saturday, March 19, 2016

Keycloak SSO Integration into jBPM and Drools Workbench


Single Sign On (SSO) and related token exchange mechanisms are becoming the most common scenario for authentication and authorization in different environments on the web, especially when moving into the cloud.

This article talks about the integration of Keycloak with jBPM or Drools applications in order to use all the features provided by Keycloak. Keycloak is an integrated SSO and IDM for browser applications and RESTful web services. Learn more about it on the Keycloak home page.

The integration with Keycloak brings lots of advantages, such as:
  • Provide an integrated SSO and IDM environment for different clients, including jBPM and Drools workbenches
  • Social logins - use your Facebook, Google, Linkedin, etc accounts
  • User session management
  • And much more...
The next sections cover the following integration points with Keycloak:

  • Workbench authentication through a Keycloak server
    It basically consists of securing both the web client and remote service clients through the Keycloak SSO, so both the web interface and remote service consumers (whether a user or a service) will authenticate through KC.
  • Execution server authentication through a Keycloak server
    This consists of securing the remote services provided by the execution server (as it does not provide a web interface). Any remote service consumer (whether a user or a service) will authenticate through KC.
  • Consuming remote services
    This section describes how third party clients can consume the remote service endpoints provided by both the Workbench and the Execution Server.

Consider the following diagram as the environment for this article's example:

Example scenario

Keycloak is a standalone process that provides remote authentication, authorization and administration services that can be potentially consumed by one or more jBPM applications over the network.

Consider these main steps for building this environment:
  • Install and setup a Keycloak server
  • Create and setup a Realm for this example - Configure realm's clients, users and roles
  • Install and setup the SSO client adapter & jBPM application


  • The resulting environment and the different configurations for this article are based on the jBPM (KIE) Workbench, but the same ones can also be applied to the KIE Drools Workbench.
  • This example uses the latest 6.4.0.CR2 community release version

Step 1 - Install and setup a Keycloak server

Keycloak provides extensive documentation and several articles about installation on different environments. This section describes the minimal setup needed to build the integrated environment for the example. Please refer to the Keycloak documentation if you need more information.

Here are the steps for a minimal Keycloak installation and setup:
  1. Download the latest version of Keycloak from the Downloads section. This example is based on Keycloak 1.9.0.Final.
  2. Unzip the downloaded distribution of Keycloak into a folder; let's refer to it as $KC_HOME in the next steps.

  3. Run the KC server - This example is based on running both Keycloak and jBPM on the same host. In order to avoid port conflicts you can use a port offset for Keycloak's server:

        $KC_HOME/bin/standalone.sh -Djboss.socket.binding.port-offset=100
  4. Create a Keycloak administration user - Execute the following command to create an admin user for this example:

        $KC_HOME/bin/add-user.sh -r master -u 'admin' -p 'admin'
The Keycloak administration console will be available at http://localhost:8180/auth/admin (use admin/admin as the login credentials)

Step 2 - Create and setup the demo Realm

Security realms are used to restrict access to the different application resources.

Once the Keycloak server is running, the next step is creating a realm. This realm will provide the different users, roles, sessions, etc. for the jBPM application/s.

Keycloak provides several examples for realm creation and management, from the official examples to different articles with further walkthroughs.

You can create the realm manually or just import the given json files.

Creating the realm step by step

Follow these steps in order to create the demo realm used later in this article:
  1. Go to the Keycloak administration console and click on Add realm button. Give it the name demo.
  2. Go to the Clients section (from the main admin console menu) and create a new client for the demo realm:
    • Client ID: kie
    • Client protocol: openid-connect
    • Access type: confidential
    • Root URL: http://localhost:8080
    • Base URL: /kie-wb-6.4.0.Final
    • Redirect URIs: /kie-wb-6.4.0.Final/*
The resulting kie client settings screen:

Settings for the kie client

Note: As you can see in the above settings, the value kie-wb-6.4.0.Final is being used for the application's context path. If your jBPM application will be deployed on a different context path, host or port, just use your concrete settings here.

The last step for being able to use the demo realm from the jBPM workbench is to create the application's users and roles:
  • Go to the Roles section and create the roles admin, kiemgmt and rest-all
  • Go to the Users section and create the admin user. Set the password to "password" in the credentials tab, and unset the temporary switch.
  • In the Users section navigate to the Role Mappings tab and assign the admin, kiemgmt and rest-all roles to the admin user
Role mappings for admin user

Importing the demo realm

Import both:

  • Demo Realm - Click on Add Realm and use the demo-realm.json file
  • Realm users - Once the demo realm is imported, click on Import in the main menu and use the demo-users-0.json file as the import source
At this point a Keycloak server is running on the host, set up with a minimal configuration. Let's move on to the jBPM workbench setup.

Step 3 - Install and setup jBPM workbench

For this tutorial let's use Wildfly as the application server for the jBPM workbench, as the jBPM installer does by default.

Let's assume, after running the jBPM installer, that $JBPM_HOME is the root path of the Wildfly server where the application has been deployed.

Step 3.1 - Install the KC adapter

In order to use Keycloak's authentication and authorization modules from the jBPM application, the Keycloak adapter for Wildfly must be installed on our server at $JBPM_HOME. Keycloak provides multiple adapters for different containers out of the box; if you are using another container or need to use another adapter, please take a look at the adapters configuration in the Keycloak docs. Here are the steps to install and set up the adapter for Wildfly 8.2.x:

  1. Download the adapter from here
  2. Execute the following commands:

    cd $JBPM_HOME/
    unzip keycloak-wf8-adapter-dist.zip  # Install the KC client adapter (use the concrete name of the adapter zip downloaded in step 1)
    cd $JBPM_HOME/bin
    ./standalone.sh -c standalone-full.xml  # Start the server
    # ** Once the server is up, open a new command line terminal and run:
    cd $JBPM_HOME/bin
    ./jboss-cli.sh -c --file=adapter-install.cli  # Set up the KC client adapter
Step 3.2 - Configure the KC adapter

Once the KC adapter is installed into Wildfly, the next step is to configure it in order to specify different settings, such as the location of the authentication server, the realm to use, and so on.

Keycloak provides two ways of configuring the adapter:
  • Per WAR configuration
  • Via Keycloak subsystem 
In this example let's use the second option, the Keycloak subsystem, so our WAR is free from this kind of settings. If you want to use the per-WAR approach, please take a look here.

Edit the configuration file $JBPM_HOME/standalone/configuration/standalone-full.xml and locate the subsystem configuration section. Add the following content:

<subsystem xmlns="urn:jboss:domain:keycloak:1.1">
  <secure-deployment name="kie-wb-6.4.0-Final.war">
    <realm>demo</realm>
    <realm-public-key>...</realm-public-key>
    <auth-server-url>http://localhost:8180/auth</auth-server-url>
    <resource>kie</resource>
    <enable-basic-auth>true</enable-basic-auth>
    <credential name="secret">925f9190-a7c1-4cfd-8a3c-004f9c73dae6</credential>
  </secure-deployment>
</subsystem>

If you have imported the example JSON files from this article in step 2, you can just use the same configuration as above, substituting your concrete deployment name. Otherwise please use your own values for these configurations:
  • Name for the secure deployment - Use your concrete application's WAR file name
  • Realm - The realm that the applications will use; in our example, the demo realm created in step 2.
  • Realm Public Key - Provide the public key for the demo realm here. It's not mandatory; if it's not specified, it will be retrieved from the server. Otherwise, you can find it in the Keycloak admin console -> Realm settings (for the demo realm) -> Keys
  • Authentication server URL - The URL for Keycloak's authentication server
  • Resource - The name of the client created in step 2. In our example, use the value kie.
  • Enable basic auth - For this example let's enable the Basic authentication mechanism as well, so clients can use both Token (Bearer) and Basic approaches to perform requests.
  • Credential - Use the password value for the kie client. You can find it in the Keycloak admin console -> Clients -> kie -> Credentials tab -> Copy the value for the secret.

For this example you have to take care to use your concrete values for the secure-deployment name, realm-public-key and credential password. You can find detailed information about the KC adapter configurations here.

Step 3.3 - Run the environment

At this point a Keycloak server is up and running on the host, and the KC adapter is installed and configured for the jBPM application server. You can run the application using:

    $JBPM_HOME/bin/standalone.sh -c standalone-full.xml

Once the server is up, you can navigate to the application at http://localhost:8080/kie-wb-6.4.0.Final (the Root URL plus the Base URL configured for the kie client in step 2).

jBPM & SSO - Login page 
Use your Keycloak's admin user credentials to login: admin/password

Securing workbench remote services via Keycloak

Both the jBPM and Drools workbenches provide different remote service endpoints that can be consumed by third party clients using the remote API.

In order to authenticate those services through Keycloak, the BasicAuthSecurityFilter must be disabled. Apply the following modifications to the WEB-INF/web.xml file (the app deployment descriptor) from jBPM's WAR file:

1.- Remove the filter:

  <filter>
    <filter-name>HTTP Basic Auth Filter</filter-name>
    <filter-class>org.uberfire.ext.security.server.BasicAuthSecurityFilter</filter-class>
    <init-param>
      <param-name>realmName</param-name>
      <param-value>KIE Workbench Realm</param-value>
    </init-param>
  </filter>

  <filter-mapping>
    <filter-name>HTTP Basic Auth Filter</filter-name>
    <url-pattern>/rest/*</url-pattern>
    <url-pattern>/maven2/*</url-pattern>
  </filter-mapping>

2.- Constrain the remote service URL patterns as:


Important note: The user that consumes the remote services must be a member of the rest-all role. As described in step 2, the admin user in this example is already a member of the rest-all role.
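As a small illustration of the Basic mechanism enabled above, a remote client needs to send an Authorization header built like this (the endpoint path in the comment is hypothetical; use your workbench's actual remote API URL):

```java
import java.util.Base64;

// Builds the HTTP Basic Authorization header a remote-service client
// (a user that is a member of the rest-all role) would send.
public class BasicAuthSketch {

    static String basicAuthHeader(String user, String password) {
        String token = Base64.getEncoder()
            .encodeToString((user + ":" + password).getBytes());
        return "Basic " + token;
    }

    public static void main(String[] args) {
        String header = basicAuthHeader("admin", "password");
        System.out.println("Authorization: " + header);
        // The client then sends this header with its request, e.g.:
        // GET http://localhost:8080/kie-wb-6.4.0.Final/rest/...  (hypothetical path)
    }
}
```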