Friday, August 14, 2015

User and Group Management UI for Drools and jBPM workbench

Roger has been working on an SPI, with a nice UI, for the management of users and groups. People no longer have to go to the command line or an external tool to add and manage users and groups. We have a pluggable SPI approach, with the main implementation currently targeting Keycloak. We also have implementations for the WildFly and Tomcat properties files. This work should be part of our 6.4 release, later this year.

You can watch a YouTube video here; don't forget to turn on HD (there is no audio).



Friday, August 07, 2015

RuleML2015 : Hybrid Reactive Relational and Graph Reasoning

Here are the slides and presentation for my RuleML submission "Building a Hybrid Reactive Rule Engine for Relational and Graph Reasoning". In it we propose a syntax extension for Drools inspired by XPath, called OOPath, along with engine extensions and domain integration for reactive POJO graphs. We hope this new syntax and approach will make rule engines much easier for Java developers to use. This work is already at an early prototype stage, and exists in master. You can follow the unit tests here:

The first screenshot shows three rules. R1 is a reactive relational rule. R2 uses a 'from', which has access to the full graph information but is not reactive. R3 uses an OOPath statement, which is both succinct and reactive. The second and third screenshots show more advanced syntax.
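As a rough illustration of the three styles, here is a hypothetical DRL sketch (the fact types and fields are invented, and the OOPath syntax shown is an early prototype that may still change):

```
// R1: relational style - reactive, but requires explicit joins
rule R1 when
    $school : School()
    $child  : Child( school == $school, age > 10 )
then /* ... */ end

// R2: 'from' - navigates the object graph, but is not reactive
rule R2 when
    $school : School()
    $child  : Child( age > 10 ) from $school.getChildren()
then /* ... */ end

// R3: OOPath - navigates the graph and stays reactive
rule R3 when
    School( $child : /children[ age > 10 ] )
then /* ... */ end
```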

The Slides and video:
slides :
video :


Thursday, August 06, 2015

New Kie Navigator for Eclipse

Eclipse Kie Navigator
The Kie Navigator is a new view created as a result of the discussion in B*MS Eclipse Tooling Enhancements. The information presented here is preliminary and reflects the current state of this view; the new feature will be delivered in its final version in Drools/jBPM Tooling 6.4.0.

The Kie Navigator View is accessed from the Eclipse Window->Show View menu:
In order to use the Kie Navigator View, the user must first define an Application Server in the WST Servers View. So, initially the Kie Navigator View will look like this:
Clicking on the link “Use the Servers View to create a new server…” will open the Servers View, where a new server definition can be created. Management of the server, including startup and shutdown, is done from there. Note that Drools/jBPM requires certain additional JVM and server startup options, which must be added to the server startup configuration. Once a new server has been defined, open the server configuration page (double click on the newly created server entry) and the server Overview page is opened:
Clicking the “Open launch configuration” link opens the following dialog:
Here the user can enter the app server and JVM arguments to properly configure startup of the Kie web service. See the Drools/jBPM documentation for more information about these arguments.

Alternatively, the app server and Kie web service application can be started from a command-line using either the provided Ant demo scripts or any other custom startup script. Note that starting from the Servers view may cause the app server to be shut down when exiting Eclipse. A server can also be configured in Eclipse for external management (see the “Server Behavior” section in the above screenshot.)

Once the server has been configured and started, the Kie Navigator View will recognize the server and attempt to communicate with the Kie web service. The view now looks something like this:
In this screenshot several nodes have been expanded to show all possible situations. At the root of this view is the app server. The Kie Navigator View is designed to support multiple servers, but each must be configured with a different hostname and/or HTTP port number. This allows, for example, management of development, test and production servers.

Below the server level are Organizational Units and Repositories. Repositories that are not currently associated with an Organizational Unit appear directly under the Server root node. Below the Organizational Unit level are the associated Repositories, and below the Repositories are Projects contained in the Repository.

A Repository can either be available or unavailable in the Workspace; a Repository is only available if it has been “imported” (see Context Menus, below) from the Kie web server.

Similarly, a Project can either be available or unavailable depending on whether it has been “imported”. When a Project has been imported, it behaves exactly the same as if it were being viewed in the Eclipse Project Explorer or Navigator; that is, all of the same menu actions available in the Project Explorer are also available in the Kie Navigator View. Also, all of the icon decorators and labels on project folders are the same as in Project Explorer.

Context Menus

This section describes the context menu actions available for each type of node in the Kie Navigator tree.

Server

  • Refresh - causes a refresh of the entire viewer by making REST calls to the server to update the tree hierarchy.
  • Create Organization… - creates a new Organizational Unit with information collected from the following dialog:
  • Properties - displays the Server Properties dialog (see the Properties section below)

Organizational Unit

  • Add Repository… - adds a Repository that is not already associated with any other Organizational Unit to this Organization. A selection dialog containing a list of all unassociated Repositories will be displayed.
  • Create Repository… - creates a new Repository with information collected from the following dialog:
  • Delete Organization… deletes the selected Organizational Unit and dissociates any Repositories that were associated with this Organization - the Repositories are not deleted.
  • Properties - displays the Organizational Unit Properties dialog (see the Properties section below)

Repository

  • Import Repository - clones the Repository and makes it available in the Git Repository View. This menu action is only available if the Repository has not already been cloned. All actions that affect the Repository (pull, commit, push, etc.) can then be performed from the Git Repository View.
  • Create Project… - creates a new Project in this Repository with information collected from the following dialog:
If the “Import the Project” checkbox is checked, the Project will be created in the local Repository and then also created and opened in the local workspace. If unchecked, the Project is only created in the local Repository; it can then be “imported” at a later time. Note that the Project will become “visible” in the Kie web console immediately, but the Project contents will only be available on the server after the Repository changes are committed and pushed upstream.
  • Remove Repository… - removes the selected Repository from its containing Organizational Unit. The user will be prompted to optionally delete the Repository from the server.
  • Show in Git Repository View - opens the Git Repositories View and highlights the selected Repository in that view.
  • Properties - displays the Repository Properties dialog (see the Properties section below)


Project

This context menu is only available if the Project has not yet been “Imported”; that is, it has not yet been created in the local workspace.
  • Import Project - creates a local workspace project that references the selected Project in the Repository. This makes the project available for use. If a project with the same name already exists in the workspace, the newly selected Project cannot be imported.
  • Delete Project… - deletes the selected Project and removes it from its containing Repository.
  • Properties - displays the Project Properties dialog (see the Properties section below)

Once a Project has been “Imported”, it becomes synchronized with the other Eclipse resource viewers as well (e.g. Project Explorer, Java Package Explorer, Eclipse Navigator, etc.) and any changes made in any of these viewers will also be reflected in the Kie Navigator View and vice-versa. The screenshot below illustrates this effect:



Properties

Server
  • Server Name: the server name as defined in the WST Servers Viewer. This can not be changed.
  • Host Name: the name of the machine on which the app server is running. This is also managed from the WST Servers Viewer.
  • Username/Password: login credentials for the Kie web app. This is used to make REST calls to the Kie web service.
  • Trust connections to this Server: if a host is not known as a trusted site, the SSH protocol will prompt the user to verify that it is a trusted site. Setting this checkbox disables the prompt. The host can also be added to the SSH configuration as a trusted site to avoid this prompt.
  • KIE Application Name: the name of the Kie web app; the Kie Navigator will try the following application names by default to determine the app name:
    • kie-wb
    • kie-drools-wb
    • kie-jbpm-wb
    • business-central
    • drools-console
    • jbpm-console
    • jboss-brms
However, since the user has the option of renaming the Kie web app during installation, Kie Navigator may not be able to discover the actual name. This field is intended for the case where the web app name has been user-defined.
  • Use default Git Repository Path: when this checkbox is set, repositories will be cloned into the directory configured by Git (see the Eclipse User Preferences for Git.) When unchecked, the directory used in the following field will be used instead.
  • Git Repository Path: the directory to use for cloning repositories from this server; this field is only enabled if the “Use default Git Repository Path” checkbox is unset. Note that since it is possible to have many servers (e.g. production, test, etc.) with a similar organizational structure, the chances of repository name collisions are high. It is therefore suggested to use a different repository directory for each server. By default, the server name is appended to the default Git repository path, to give a unique directory name for each server.

Organizational Unit

These fields correspond to the Organizational Unit definition in the Kie web app. Note that only the Owner and Default Group ID can be changed.


Repository

These fields correspond to the Repository definition in the Kie web app. The property page also shows the remote and local Git repository locations. Note that only the description and login credentials can be changed.


Project

These fields correspond to the Project definition in the Kie web app. Currently none of these fields can be updated on the web server due to REST API limitations.

If a Project has been imported, this property page is shown in the context of the Eclipse project properties, as shown here:
Work is still ongoing, and the information here is all preliminary - any suggestions for changes or improvements are welcome!


Thursday, July 23, 2015

Validation and Verification for Decision Tables Update

The decision table verification and validation work has reached a point where it is time to pause adding new features, but I hope to continue the work before the end of this year. One big missing feature is finding missing ranges and reporting on incomplete decision tables.

This blog entry will be an update to a previous entry that can be found here.

Features that made it into the next Final release

The features are already merged, so any future release will include them. Here is a simple demo video showing the V&V issue panel and how it works in real time.

The different issue levels are:
  • Error - Serious fault. It is clear that the author is doing something wrong. Conflicts are a good example of errors.
  • Warning - These are most likely serious faults. They do not prevent the dtable from working, but they need to be double-checked by the dtable author. Redundant or subsumed rules, for example - maybe the actions need to happen twice in some cases.
  • Info - The author might not want any conditions in the dtable. If the conditions are missing, each action gets executed. This can be used to insert a set of facts into the working memory. Still, it is good to inform the author that the conditions might have been deleted by accident.

The verification and validation looks for the following issues:


Redundancy

To put it simply: two rows that are equal are redundant, but redundancy can be more complicated. The longer explanation is: redundancy exists when two rows perform the same actions when given the same set of facts.

Redundancy might not be a problem if the redundant rules are setting a value on an existing fact - the value is just set twice. Problems occur when the two rules increase a counter or add more facts into the working memory. In both cases the other row is not needed.
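As a hypothetical illustration (the fact types are invented; the rules below are what two equal decision-table rows effectively compile down to), this pair is redundant in the harmful way - each matching Person produces two Approval facts instead of one:

```
rule "Row 1" when
    $p : Person( age > 18 )
then
    insert( new Approval( $p ) );
end

rule "Row 2" when
    $p : Person( age > 18 )
then
    insert( new Approval( $p ) );
end
```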




Subsumption

Subsumption exists when one row does the same thing as another, with a subset of the other row's values/facts. In the simple example below, a fact with a maximum deposit below 2000 fires both rows.

The problems with subsumption are similar to the case with redundancy.




Conflicts

Conflicts can exist either on a single row or between rows.
A single row conflict prevents the row's actions from ever being executed.

Single row conflict - second row checks that amount is greater than 10000 and below 1

A conflict between two rows exists when the conditions of two rules are met by the same set of facts, but the actions set existing fact fields to different values. The conditions might be redundant or just subsumed.

The problem here is: how do we know which action happens last? In the example below, will the rate be set to 2 or 4 in the end? Without going into the details, the end result may differ on each run and with each software version.
Two conflicting rows - both rows change the same fact to a different value
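A hypothetical DRL sketch of the conflicting pair described above (the fact type and fields are invented): both rules fire on the same facts, but each sets a different rate, so the final value depends on execution order:

```
rule "Row 1" when
    $a : Application( amount > 10000 )
then
    modify( $a ) { setRate( 2 ) }
end

rule "Row 2" when
    $a : Application( amount > 10000 )
then
    modify( $a ) { setRate( 4 ) }
end
```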


Deficiency

Deficiency causes the same kind of trouble as conflicts: the conditions are too loose and the actions conflict.

For example:
If the loan amount is less than 2000 we do not accept it.
If the person has a job we approve the loan.
The problem is, we might have people with jobs asking for loans that are under 2000. Sometimes they get them, sometimes they do not.

Missing Columns

In some cases, usually by accident, the user can delete all the condition or action columns.

When the condition columns are removed, all the actions are executed; when the action columns are missing, the rows do nothing.
The action columns are missing
The condition columns are missing


Wednesday, July 01, 2015

Extending UberFire with AngularJS for the BPMS Domain

You can now see Alex’s talk, originally created by Kris, showing how to extend UberFire (UF) with AngularJS for the BPMS Domain, using Red Hat BPMS (productized version of jBPM). Everything is built live and in real time.

We are making great progress with our UF documentation, including tutorials. UF forms the core of our extensible UI architecture; it can be used standalone or to extend the Drools or jBPM workbenches.

This work was made possible by our polyglot interoperability work for our UF framework, which we show in detail here:

The full UF docs are here; there has been a big update recently, as well as new tutorials. The empty sections will be filled in over the next few days:

A recent generated PDF can be found here:

We should be close to a formal launch in a few weeks. The remaining items are:
-GWT 2.8 upgrade
-Move to new L&F theme
-Merge in Tool Windows
-Move GWTExport to @JsType
-Merge in JS-UI


Tuesday, June 30, 2015

Uberfire, Drools and jBPM High Level Roadmap Slides and Videos from Red Hat Summit 2015

I gave a short, 20-minute, high-level presentation on some roadmap items for Uberfire, Drools and jBPM. I have provided the slides here, and they include some early POC videos.

You can see the ongoing L&F work at the link below. The design is sleeker and more minimal. It now has a compact mode that collapses the perspective switcher and sub-perspective menus - see the “simple perspective”. Click the user name to switch modes. It automatically goes into compact mode when a panel is enlarged.
user : admin
pass : admin



Friday, June 19, 2015

Improved multi-threading behaviour with Drools 6.3 SNAPSHOT

We’ve rewritten the internal parts of our code that deal with multi-threading, removing a large number of synchronisation points and improving stability and predictability. We believe the interaction of the User, Timer and Engine threads is now far more robust. Our initial benchmarking shows that this has led to mild performance improvements too. We’d really like to get this hardened before we release 6.3 final, so if you have an application that uses Timers or Time Windows, especially with fireUntilHalt, could you give it a good hammering? Especially those using the TimedRuleExecutionFilter, which allows a timer to fire reactively when the engine is in passive mode (not fireUntilHalt).

For this iteration we focused only on the engine internals; we have not yet touched the outer lock and sync points, i.e. the ksession and kbase locks that threads go through when they do an insert/update/delete action. These can create contention for lots of short-lived ksessions. We believe that with the latest work we've been doing we can soon improve this area too.

You should find all this work in the latest snapshot, for drools-core and drools-compiler.

For those interested, we have done two things. The first was to properly separate the User insert/update/delete thread actions from the Engine network-evaluation thread. The second was to remove most of the internal sync points and replace them with a state machine.

The User/Engine thread separation has been made possible by our move away from Rete to Phreak. With Rete the network evaluation is done during the User insert/update/delete action, meaning each user action locks the entire engine. With Phreak the insert/update/delete is separated, and network evaluation happens when fireAllRules or fireUntilHalt is called. We've added a queue, SynchronizedPropagationList, that stores the user actions as commands in a thread-safe queue. The engine thread then takes all the entries on each of its iterations. We found our custom queue outperformed the JDK concurrent queues, but I think that is due to our specialised implementation. Instead of the engine taking just the HEAD entry, it does a takeAll and processes the returned linked list as a batch. This reduces the number of times the Engine thread hits the queue for the elements it processes. We can also efficiently handle when to park and when to notify the engine to spin up again, which was always a bit hit and miss before. Now it simply parks when takeAll returns null, and it is notified if a Timer or User adds work to be done while the engine is known to be parked.
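The takeAll batching described above can be sketched with a minimal queue (an illustrative reduction, not the actual SynchronizedPropagationList; the class and method names here are invented):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

// Minimal sketch of a takeAll-style queue: producers (User/Timer threads) push
// entries onto the head of a lock-free linked list; the consumer (Engine
// thread) detaches the whole list with a single atomic getAndSet.
final class PropagationQueue<T> {

    private static final class Node<T> {
        final T value;
        final Node<T> next;
        Node(T value, Node<T> next) { this.value = value; this.next = next; }
    }

    private final AtomicReference<Node<T>> head = new AtomicReference<>();

    // Called by producer threads: enqueue one pending action.
    void add(T value) {
        head.updateAndGet(h -> new Node<>(value, h));
    }

    // Called by the engine thread: atomically detach every queued entry.
    // An empty result means there is no work, so the engine may park.
    List<T> takeAll() {
        List<T> out = new ArrayList<>();
        for (Node<T> n = head.getAndSet(null); n != null; n = n.next) {
            out.add(n.value);
        }
        Collections.reverse(out); // head is newest-first; return oldest-first
        return out;
    }
}

class TakeAllDemo {
    public static void main(String[] args) {
        PropagationQueue<String> q = new PropagationQueue<>();
        q.add("insert A");
        q.add("update B");
        q.add("timer job");
        System.out.println(q.takeAll()); // one batch, oldest first
        System.out.println(q.takeAll()); // [] -> engine may park
    }
}
```

The design choice mirrors the text: the engine pays one atomic operation per batch instead of one per element, and a null/empty result doubles as the "safe to park" signal.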

The second part introduces a state machine for the User, Timer and Engine thread interactions. This gives us a system whose behaviour can be documented, due to its simplicity, and it will also help explain the various thread interactions and behaviours. This was missing before, and understanding the behaviour could be a bit confusing for users. It also means we now have better-defined behaviour for the interactions of fireAllRules and fireUntilHalt when they overlap or are called twice; i.e. what happens if you call fireUntilHalt while fireAllRules is currently operating? Or you call fireAllRules twice, or call fireAllRules while fireUntilHalt is operating? Our state machine now handles this cleanly, with describable behaviour.

The bulk of the work is contained within the DefaultAgenda:

There are three threads that can interact: a User thread doing an insert/update/delete, the Timer thread for timers and time windows, and the Engine thread for network evaluations. We have now changed this so that the timer thread no longer does network evaluations, blocking other threads; instead it submits a job and notifies the Engine thread (if it's not already running) to process it. You can see this in PhreakTimerNode. When the Timer now triggers, it submits a job to the queue introduced in the previous paragraph.
public void execute(JobContext ctx) {
    TimerNodeJobContext timerJobCtx = (TimerNodeJobContext) ctx;
    InternalWorkingMemory wm = timerJobCtx.getWorkingMemory();
    wm.addPropagation( new TimerAction( timerJobCtx ) );
}

When a timer thread kicks in, it has no idea whether the engine thread is evaluating or parked. It could be parked because fireAllRules has returned and it's waiting for the next fireAllRules. Or it could be parked because fireUntilHalt currently has no work to do. If, for instance, the engine is parked in fireUntilHalt, the timer needs to notify the engine thread to unpark and process the timer work. If however the engine thread is working (be it fireAllRules or fireUntilHalt), it should just put the job into the queue for the engine thread to process and not do the notification. These interactions are subtle, but they must be solid and avoid contention or excessive syncing. The behaviour is complicated further by the TimedRuleExecutionFilter.
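The park/notify hand-off described above can be reduced to a small sketch (illustrative only, not the actual Drools code; the pending counter stands in for the propagation queue):

```java
// Illustrative sketch of the "notify only when there may be a parked consumer"
// hand-off between a producer (Timer/User) thread and the Engine thread.
final class EngineGate {
    private int pending = 0;

    // Producer side: record new work and wake the engine if it is parked.
    synchronized void workAdded() {
        pending++;
        notify(); // no-op if the engine thread is not currently waiting
    }

    // Engine side: park until a producer signals new work, then take it all.
    synchronized void parkUntilWork() throws InterruptedException {
        while (pending == 0) {
            wait();
        }
        pending = 0; // engine takes all pending work as one batch
    }
}

class ParkNotifyDemo {
    public static void main(String[] args) throws InterruptedException {
        EngineGate gate = new EngineGate();
        Thread timer = new Thread(() -> {
            try { Thread.sleep(100); } catch (InterruptedException e) { return; }
            gate.workAdded(); // simulate a timer job arriving later
        });
        timer.start();
        gate.parkUntilWork(); // engine parks here until the timer signals
        timer.join();
        System.out.println("engine woke up to process the timer job");
    }
}
```

Because checking for work and parking happen under the same monitor, a job submitted "while the engine is just returning" is never lost - the exact gap problem the state machine is designed to avoid.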

To handle this we introduced the following enum to represent the available states of the engine:
private enum ExecutionState {     // fireAllRules | fireUntilHalt | executeTask <-- action required
    INACTIVE( false ),            // fire         | fire          | exec
    FIRING_ALL_RULES( true ),     // do nothing   | wait + fire   | enqueue
    FIRING_UNTIL_HALT( true ),    // do nothing   | do nothing    | enqueue
    REST_HALTING( false ),        // wait + fire  | wait + fire   | enqueue
    FORCE_HALTING( false ),       // wait + fire  | wait + fire   | wait + exec
    EXECUTING_TASK( false );      // wait + fire  | wait + fire   | wait + exec

    private final boolean firing;

    ExecutionState( boolean firing ) {
        this.firing = firing;
    }

    public boolean isFiring() {
        return firing;
    }
}
You can now see this state machine being used by fireAllRules and fireUntilHalt. Notice the new method waitAndEnterExecutionState. This allows threads either to park or to return straight away - i.e. if you call fireAllRules while fireUntilHalt is running, it just returns straight away. If you call fireUntilHalt while fireAllRules is running, it waits until fireAllRules finishes, then starts fireUntilHalt.

public int fireAllRules(AgendaFilter agendaFilter,
                        int fireLimit) {
    synchronized (this) {
        if (currentState.isFiring()) {
            return 0;
        }
        waitAndEnterExecutionState( ExecutionState.FIRING_ALL_RULES );
    }
    // ...
}

public void fireUntilHalt(final AgendaFilter agendaFilter) {
    synchronized (this) {
        if (currentState == ExecutionState.FIRING_UNTIL_HALT) {
            return;
        }
        waitAndEnterExecutionState( ExecutionState.FIRING_UNTIL_HALT );
    }
    // ...
}

private void waitAndEnterExecutionState( ExecutionState newState ) {
    if (currentState != ExecutionState.INACTIVE) {
        try {
            wait();
        } catch (InterruptedException e) {
            throw new RuntimeException( e );
        }
    }
    currentState = newState;
}

Previously you saw that the Timer thread submits a job into a queue; this is also handled by the state machine.
public void executeTask( ExecutableEntry executable ) {
    synchronized (this) {
        if (isFiring() || currentState == ExecutionState.REST_HALTING) {
            executable.enqueue(); // engine thread is busy: queue the task instead
            return;
        }
        waitAndEnterExecutionState( ExecutionState.EXECUTING_TASK );
    }

    try {
        executable.execute();
    } finally {
        // ...
    }
}
A key aspect we had to support here was: what if a Timer thread triggers some work while the Engine thread is just returning? You can end up with gaps - work that doesn't fire, which the user was expecting. This is a problem people have seen in previous Drools releases. The combination of the task system and the halting statuses allows the engine to restart again before properly halting. You can think of it as a two-phase halting system. You can see this with the main do loop and then the second while loop, ensuring we get a clean shutdown - i.e. the engine cannot park, unless there are no timer actions, before it returns and sets the state machine to INACTIVE.
int returnedFireCount;
do {
    returnedFireCount = fireNextItem( agendaFilter, fireCount, fireLimit );
    fireCount += returnedFireCount;
} while ( isFiring() && returnedFireCount != 0 && (fireLimit == -1 || fireCount < fireLimit) );

PropagationEntry head = tryHalt();
while (head != null) {
    fireCount += fireNextItem( agendaFilter, fireCount, fireLimit );
    SynchronizedPropagationList.flush(workingMemory, head);
    head = workingMemory.takeAllPropagations();
}

private PropagationEntry tryHalt() {
    synchronized (this) {
        PropagationEntry head = workingMemory.takeAllPropagations();
        if (head == null) {
            currentState = ExecutionState.INACTIVE;
        } else if (currentState != ExecutionState.FORCE_HALTING) {
            currentState = ExecutionState.REST_HALTING;
        }
        return head;
    }
}

One of the key aspects here is the takeAll action. We can use it to atomically both check whether there is work to do and return that work within a sync point, while processing the work outside of the sync point. So you can see that the engine will only finally halt if takeAll returns null. Note the Timer thread would have to go through this sync point to add more work - ensuring there are no gaps.

There is a lot to take in here, and it's a bit of a brain dump, but I hope it proves useful to those wanting to understand how we are improving our engine, and how the prior work we did on the Phreak algorithm has enabled this.


Thursday, June 18, 2015

Drools & jBPM get Dockerized

Docker is becoming a reference for building, shipping and running container-based applications. It provides a standard, easy and automated way to deploy your applications.

Since the latest 6.2.0.Final community release you can use Docker to deploy and run your Drools & jBPM applications in an easy and friendly way. Do not worry about operating system, environment and/or application server provisioning and deployments ... just use the applications!

The images are already available at Docker Hub:

Please refer to the "Drools & jBPM community Docker images" section below for more information about what's contained in each image.

Why are these images helpful for me and my company?

To understand the advantages of using these Docker images, let's do a quick comparison with the deployment process for a manual installation of a Drools Workbench application.

If you do it by yourself:
  1. Install and prepare a Java runtime environment
  2. Download the workbench war (and other resources if necessary), from the official home page or from JBoss Nexus
  3. Download and prepare a JBoss WildFly server instance
  4. Configure the WildFly instance, including for example configuring the security subsystem etc.
  5. Deploy Drools into the WildFly instance
  6. Start the application server and run your Drools application
As you can see, a manual installation already takes quite a few steps. While this process can be automated to some extent (as the jbpm-installer does, for example), some questions arise at this point ... What if I need a more complex environment? Are other colleagues using the same software versions and configuration? Can I replicate the exact same environment? Could someone else easily run my local example during a customer demo? What if I need to deploy several identical runtime environments? What about removing my local installation from my computer? ...

Software containers & Docker are a possible solution, and help provide an answer to some of these questions.

Both Drools & jBPM community Docker images include:
  • The OpenJDK JRE 1.7 environment 
  • A JBoss WildFly 8.1.0.Final application server
  • Our web-based applications (Drools Workbench, KIE server and/or jBPM Workbench) ready to run (configurations and deployments already present)
You don't have to worry about the Java environment, the application server, the web applications or configuration ... just run the application using a single command:

  docker run -p 8080:8080 -d --name drools-wb jboss/drools-workbench-showcase:6.2.0.Final

Once finished, just remove it:

   docker stop ...

At this point, you can customize, replicate and distribute the applications! Learn more about Docker, its advantages and how to use it at the official site.

The environment you need

Do not worry about Java environments, application servers or database management systems, just install Docker:

   # For RHEL/Fedora based distributions:
   sudo yum -y install docker

More installation information at the official Docker documentation.

Are you using Windows? 

For Windows users, in order to use Docker, you have to install Boot2Docker. It provides a basic Linux environment where Docker can run. Please refer to the official documentation for the Docker installation on Windows platforms.

You are ready to run!

Drools & jBPM community Docker images

For the 6.2.0.Final community release, six Docker images have been released. They can be categorized in two main groups: Base images provide the base software with no custom configuration; they are intended to be extended and customized by Docker users. Showcase images provide applications that are ready to run out-of-the-box (including, for example, some standard configuration). Just run and use them! Ideal for demos, evaluations or getting started.
  • Base images 
    • Drools Workbench
    • KIE Execution Server
    • jBPM Workbench
  • Showcase images
    • Drools Workbench Showcase
    • KIE Execution Server Showcase
    • jBPM Workbench Showcase

Let's dive into a detailed description of each image in the following sections.

Drools Workbench

This image provides the standalone Drools web authoring and rules management application for version 6.2.0.Final. It does not include any custom configuration; it just provides a clean Drools Workbench application running in JBoss WildFly 8.1. The goal of this image is to provide the base software and allow users to extend it, apply custom configurations and build custom images.

Fetch the image into your Docker host:

   docker pull jboss/drools-workbench:6.2.0.Final

Customize the image by creating your Dockerfiles:

   FROM jboss/drools-workbench:6.2.0.Final

Please refer to Appendix C for extending this image.
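As a hypothetical example of such a customization (the property file names and the $JBOSS_HOME variable are assumptions; adapt them to your own setup), a derived Dockerfile might layer custom users and roles onto the base image:

```dockerfile
# Hypothetical customization of the base Drools Workbench image
FROM jboss/drools-workbench:6.2.0.Final

# Add our own users and roles to the WildFly security realm
# (file names and target path are illustrative)
COPY custom-users.properties $JBOSS_HOME/standalone/configuration/
COPY custom-roles.properties $JBOSS_HOME/standalone/configuration/
```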

Run a Drools Workbench container:

docker run -p 8080:8080 -d --name drools-wb jboss/drools-workbench:6.2.0.Final

Navigate to your Drools Workbench at:

   http://localhost:8080/drools-wb # Linux users
   http://<boot2docker_ip>:8080/drools-wb # Windows users

Refer to Appendix A for more information about IP address and port bindings.

Drools Workbench Showcase

See it in Docker Hub

This image provides the standalone Drools web authoring and rules management application for version 6.2.0.Final plus security configuration and some examples.
Tip: This image inherits from the Drools Workbench one and adds custom configurations for WildFly security subsystem (security realms) and system properties for enabling the use of the examples repository. 
The goal for this image is to provide a ready to run Drools Workbench application: just pull, run and use the Workbench.

1. Pull the image:

  docker pull jboss/drools-workbench-showcase:6.2.0.Final

2. Run the image:

  docker run -p 8080:8080 -d --name drools-wb-showcase jboss/drools-workbench-showcase:6.2.0.Final

3. Navigate to the workbench at:

   http://localhost:8080/drools-wb # Linux users
   http://<boot2docker_ip>:8080/drools-wb # Windows users

Refer to Appendix A for more information about IP address and port bindings.

You can use admin/admin to log in by default - refer to Appendix B for the default users and roles included

KIE Execution server

This image provides the standalone rules execution component for version 6.2.0.Final, which handles rules via remote interfaces.
More information about the KIE Execution Server can be found in the official documentation.
This image does not include any custom configuration; it just provides a clean KIE Execution Server application running on JBoss WildFly 8.1.  The goal of this image is to provide the base software and let users extend it, apply custom configurations, and build their own custom images.

Fetch the image into your Docker host:

   docker pull jboss/kie-server:6.2.0.Final

Customize the image by creating your own Dockerfile:

   FROM jboss/kie-server:6.2.0.Final

Please refer to Appendix C for extending this image.

Run a KIE Execution Server container:

   docker run -p 8080:8080 -d --name kie-server jboss/kie-server:6.2.0.Final

The KIE Execution Server is located at:

   http://localhost:8080/kie-server # Linux users
   http://<boot2docker_ip>:8080/kie-server # Windows users

Refer to Appendix A for more information about IP address and port bindings.

Example: use the remote REST API to perform server requests:

   http://localhost:8080/kie-server/services/rest/server # Linux users
   http://<boot2docker_ip>:8080/kie-server/services/rest/server # Windows users
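For example, the server info endpoint can be queried with curl. Note that, depending on your security setup, the request may require credentials; the base image ships without the security configuration of the showcase image:

```shell
# Query the KIE Execution Server info endpoint (returns an XML description)
curl http://localhost:8080/kie-server/services/rest/server
```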

KIE Execution Server Showcase

See it in Docker Hub

This image provides the standalone rules execution component for version 6.2.0.Final, which handles rules via remote interfaces, plus a basic security configuration (including a default user and role).
More information about the KIE Execution Server can be found in the official documentation.
Tip: This image inherits from the KIE Execution Server one and adds custom configuration for the WildFly security subsystem (security realms).

The goal of this image is to provide a ready to run KIE Execution Server: just pull, run and use the remote services.

1. Pull the image:

   docker pull jboss/kie-server-showcase:6.2.0.Final

2. Run the image:

   docker run -p 8080:8080 -d --name kie-server-showcase jboss/kie-server-showcase:6.2.0.Final

3. The server is located at:

   http://localhost:8080/kie-server # Linux users
   http://<boot2docker_ip>:8080/kie-server # Windows users

   The REST API service is located at:

   http://localhost:8080/kie-server/services/rest/server # Linux users
   http://<boot2docker_ip>:8080/kie-server/services/rest/server # Windows users

Refer to Appendix A for more information about IP address and port bindings.

You can log in with the default credentials kie-server/kie-server. Refer to Appendix B for the default user and role included.
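As a quick smoke test, the REST endpoint above can be queried with curl using the default credentials:

```shell
# Query the server info endpoint with HTTP Basic authentication
curl -u kie-server:kie-server http://localhost:8080/kie-server/services/rest/server
```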

jBPM Workbench

This image provides the standalone version 6.2.0.Final of the jBPM Workbench: web-based authoring and management of your processes.  It does not include any custom configuration; it just provides a clean jBPM Workbench application running on JBoss WildFly 8.1.  The goal of this image is to provide the base software and let users extend it, apply custom configurations, and build their own custom images.

Fetch the image into your Docker host:

   docker pull jboss/jbpm-workbench:6.2.0.Final

Customize the image by creating your own Dockerfile:

   FROM jboss/jbpm-workbench:6.2.0.Final

Please refer to Appendix C for extending this image.

Run a jBPM Workbench container:

   docker run -p 8080:8080 -d --name jbpm-wb jboss/jbpm-workbench:6.2.0.Final

Navigate to your jBPM Workbench at:

   http://localhost:8080/jbpm-console # Linux users
   http://<boot2docker_ip>:8080/jbpm-console # Windows users

Refer to Appendix A for more information about IP address and port bindings.

jBPM Workbench Showcase

This image provides the standalone version 6.2.0.Final of the jBPM Workbench: web-based authoring and management of your processes. It also includes security and persistence configuration and some examples.
Tip: This image inherits from the jBPM Workbench one and adds custom configuration for the WildFly security subsystem (security realms) and system properties that enable the use of the examples repository.
The goal of this image is to provide a ready-to-run jBPM Workbench application: just pull, run and use the Workbench:

1. Pull the image:

   docker pull jboss/jbpm-workbench-showcase:6.2.0.Final

2. Run the image:

   docker run -p 8080:8080 -d --name jbpm-wb-showcase jboss/jbpm-workbench-showcase:6.2.0.Final

3. Navigate to the workbench at:

   http://localhost:8080/jbpm-console # Linux users  
   http://<boot2docker_ip>:8080/jbpm-console # Windows users

Refer to Appendix A for more information about IP address and port bindings.

You can log in with the default credentials admin/admin. Refer to Appendix B for the default users and roles included.



Appendix A - IP address and ports bindings for Docker containers

Port bindings
By default, when using any of the Drools & jBPM Docker images, the container exposes port 8080 for the HTTP connector. This port is not published to the Docker host by default, so in order to publish it and be able to navigate to the applications, please read the following instructions.

The recommended way to run the containers is to pass the -p argument to the docker client:

  docker run -p 8080:8080 -d ....

This way, the Docker daemon binds the container's internal port 8080 to port 8080 on the Docker host machine, so you can navigate to the applications at:


If port 8080 on your Docker host machine is not available, run the containers with the -P command line argument instead. Docker then binds the internal port 8080 to a free port on the Docker host, so in order to access the application you first have to discover the bound port number.

To discover a running container's port mappings, type the following command:

   docker ps -a

This command will output the processes and the port mappings for each running container:

   2a55fb....   jboss/drools-w..  ...      ...     ..>8080/tcp.. drools-wb

The PORTS column shows that the container's internal port 8080 is bound to port 49159 on the Docker host, so you can navigate to the applications at:
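Alternatively, the docker port command reports the host binding for a single container, for example:

```shell
# Show the host address and port bound to the container's internal port 8080
docker port drools-wb 8080
```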


Docker hostname & IP address
The Docker hostname or IP address has to be specified in order to navigate to the container's applications.

If you are running Docker on your local machine with a Linux-based OS, it defaults to localhost:


If you are running Docker on another machine, or in Windows environments where Boot2Docker is required, you have to specify the host name (if DNS is available for it) or its IP address:
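With Boot2Docker, the VM's IP address can be obtained directly from the command line:

```shell
# Print the IP address of the Boot2Docker VM
boot2docker ip
```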

Appendix B - Default applications users & roles

The Showcase images Drools Workbench Showcase and jBPM Workbench Showcase include default users & roles:

Drools & jBPM Workbench Showcase roles
Role        Description
admin       The administrator
analyst     The analyst
developer   The developer
manager     The manager
user        The end user
kiemgmt     KIE management user
Accounting  Accounting role
PM          Project manager role
HR          Human resources role
sales       Sales role
IT          IT role

Drools & jBPM Workbench Showcase users
Username    Password    Roles
admin       admin       admin,analyst,kiemgmt
krisv       krisv       admin,analyst
john        john        analyst,Accounting,PM
mary        mary        analyst,HR
sales-rep   sales-rep   analyst,sales
katy        katy        analyst,HR
jack        jack        analyst,IT
salaboy     salaboy     admin,analyst,IT,HR,Accounting

For the KIE Execution Server Showcase there is a single user and role:

Username    Password    Roles
kie-server  kie-server  kie-server

Appendix C - Extending base images

The base images are intended to be inherited from in order to add your custom configurations or deployments.

In order to extend the images, your Dockerfile must start with one of the following lines:

    FROM jboss/drools-workbench:6.2.0.Final 
    FROM jboss/kie-server:6.2.0.Final
    FROM jboss/jbpm-workbench:6.2.0.Final

At this point, custom configurations and deployments can be added. Some notes:
  • JBoss WildFly is located at the path given by $JBOSS_HOME environment variable
  • $JBOSS_HOME points to /opt/jboss/wildfly/
  • The applications run the server in standalone mode:
    • Configuration files are located at $JBOSS_HOME/standalone/configuration/
    • The configuration files for the standalone-full profile are used
    • Deployments are located at $JBOSS_HOME/standalone/deployments/
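As a minimal sketch, a derived Dockerfile could replace the standalone-full configuration and add a custom deployment; the file names my-standalone-full.xml and my-app.war are only illustrative:

```dockerfile
# Extend the base Drools Workbench image
FROM jboss/drools-workbench:6.2.0.Final

# Replace the standalone-full profile configuration (illustrative file name)
COPY my-standalone-full.xml $JBOSS_HOME/standalone/configuration/standalone-full.xml

# Add a custom deployment to the deployments folder (illustrative file name)
COPY my-app.war $JBOSS_HOME/standalone/deployments/
```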

You can find more information on each image's official page at Docker Hub: