Monday, May 29, 2017

New KIE persistence API on 7.0

This post introduces the upcoming Drools and jBPM persistence API. The motivation for creating a persistence API that is not bound to JPA, as persistence in Drools and jBPM was until the 7.0.0 release, is to allow clean integration of alternative persistence mechanisms. While JPA is a great API, it is tightly bound to a traditional RDBMS model and inherits its drawbacks: it is hard to scale and difficult to get good performance from on ever-growing systems. With the new API we open the door to integrating various general-purpose NoSQL databases, as well as to creating tightly tailored persistence mechanisms to achieve optimal performance and scalability.
At the time of this writing, several implementations have been made: the default JPA mechanism, two generic NoSQL implementations backed by Infinispan and MapDB (which will be available as contributions), and a single tailor-made NoSQL implementation discussed briefly in this post.

The changes made to the Drools and jBPM persistence mechanisms, their new features, and the way they allow clean new persistence implementations for KIE components are the basis for a new, soon-to-be-added experimental MapDB integration module. The existing Infinispan adaptation has been changed to accommodate the new structure.
Because of this refactor, we can now have other persistence implementations for KIE without depending on JPA, unless our specific persistence implementation is JPA based. It has implied, however, a set of changes:

Creation of drools-persistence-api and jbpm-persistence-api

In version 6, most of the persistence components and interfaces were only present in the JPA projects, and other persistence implementations had to reuse them from there. We refactored these projects so that the interfaces can be reused without pulling in the JPA dependencies every time. Here's the new set of dependencies:
<dependency>
 <groupId>org.drools</groupId>
 <artifactId>drools-persistence-api</artifactId>
 <version>7.0.0-SNAPSHOT</version>
</dependency>
<dependency>
 <groupId>org.jbpm</groupId>
 <artifactId>jbpm-persistence-api</artifactId>
 <version>7.0.0-SNAPSHOT</version>
</dependency>

The first thing to mention about the classes in this refactor is that the persistence model used by KIE components for KieSessions, WorkItems, ProcessInstances and CorrelationKeys is no longer a JPA class, but an interface. These interfaces are:
  • PersistentSession: For the JPA implementation, this interface is implemented by SessionInfo. For the upcoming MapDB implementation, MapDBSession is used.
  • PersistentWorkItem: For the JPA implementation, this interface is implemented by WorkItemInfo; for MapDB, by MapDBWorkItem.
  • PersistentProcessInstance: For the JPA implementation, this interface is implemented by ProcessInstanceInfo; for MapDB, by MapDBProcessInstance.
The important part is that, if you were using the JPA implementation and wish to continue doing so, you can keep using the same classes as before; all the APIs are prepared to work with these interfaces. Which brings us to our next point.
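
For the JPA case, creating and reloading a persistent session still looks the way it did before; only the classes behind the scenes changed. Below is a minimal sketch using the standard KieServices API; the persistence unit name is the usual jBPM one, and the JTA transaction manager / datasource setup it relies on is assumed to be configured in your environment.

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

import org.kie.api.KieBase;
import org.kie.api.KieServices;
import org.kie.api.runtime.Environment;
import org.kie.api.runtime.EnvironmentName;
import org.kie.api.runtime.KieSession;

public class PersistentSessionExample {
    public static void main(String[] args) {
        KieServices ks = KieServices.Factory.get();
        KieBase kbase = ks.getKieClasspathContainer().getKieBase();

        // Standard JPA environment, exactly as in version 6.
        // A JTA transaction manager / datasource is assumed to be configured
        // in persistence.xml for the persistence unit used here.
        EntityManagerFactory emf =
                Persistence.createEntityManagerFactory("org.jbpm.persistence.jpa");
        Environment env = ks.newEnvironment();
        env.set(EnvironmentName.ENTITY_MANAGER_FACTORY, emf);

        // Behind the scenes the JPA implementation stores a PersistentSession
        // (SessionInfo); a MapDB store would persist a MapDBSession instead.
        KieSession ksession = ks.getStoreServices().newKieSession(kbase, null, env);
        long sessionId = ksession.getIdentifier();

        // Later, the session can be restored from the store by its id.
        KieSession restored = ks.getStoreServices().loadKieSession(sessionId, kbase, null, env);
    }
}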

PersistenceContext, ProcessPersistenceContext and TaskPersistenceContext refactors

The persistence context interfaces in version 6 depended on the JPA implementations of the model. In order to work with other persistence mechanisms, they were refactored to work with the runtime model (KieSession, WorkItem, and ProcessInstance, respectively), to build the implementations locally, and to be able to return the right element if requested by other components (ProcessInstanceManager, SignalManager, etc.).
Also, for components like TaskPersistenceContext, the task service code used multiple dynamic HQL queries that could not be implemented in another persistence model. To avoid this, they were changed to use mechanisms closer to a Criteria API: the different filtering objects can be interpreted by each persistence mechanism to build whatever queries it requires.
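
To illustrate the idea (with purely hypothetical classes, not the actual jBPM task query API): instead of a store-specific HQL string, the caller builds a filter object that each persistence mechanism interprets against its own storage.

import java.util.ArrayList;
import java.util.List;

// Hypothetical filter object: it carries the query intent that an HQL string such as
// "select t from TaskImpl t where t.actualOwner = :owner and t.status in (:statuses)"
// used to encode in a form only a JPA store could execute.
class TaskByOwnerFilter {
    final String owner;
    final List<String> statuses;

    TaskByOwnerFilter(String owner, List<String> statuses) {
        this.owner = owner;
        this.statuses = statuses;
    }
}

class SimpleTask {
    final long id;
    final String owner;
    final String status;

    SimpleTask(long id, String owner, String status) {
        this.id = id;
        this.owner = owner;
        this.status = status;
    }
}

// A JPA persistence context would translate the filter into a criteria query;
// a NoSQL store can simply evaluate it against its own indexes, as sketched here.
class InMemoryTaskStore {
    private final List<SimpleTask> tasks = new ArrayList<>();

    void add(SimpleTask task) {
        tasks.add(task);
    }

    List<SimpleTask> find(TaskByOwnerFilter filter) {
        List<SimpleTask> result = new ArrayList<>();
        for (SimpleTask t : tasks) {
            if (t.owner.equals(filter.owner) && filter.statuses.contains(t.status)) {
                result.add(t);
            }
        }
        return result;
    }
}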

Task model refactor

The way the current task model relates tasks to content, comment, attachment and deadline objects was also dependent on the way JPA stores that information, or, more precisely, on the way ORMs relate those types. So a refactor of the task persistence context interface was introduced to handle the relation between components for us, if desired. Most of the methods are still there, and the different tables can still be used, but if we just want to use a Task to bind everything together as one object (the way a NoSQL implementation would do it), we now can. The JPA implementation still relates objects by ID. Other persistence mechanisms like MapDB just add the sub-objects to the task object, which they can fetch from internal indexes.
Another thing that changed in the task model is that, before, we had different interfaces to represent a task (Task, InternalTask, TaskSummary, etc.) that were incompatible with each other. For JPA this was fine, because they represented different views of the same data.
In general, the motivation behind this mix of interfaces is to allow optimizations towards table-based stores, which is by no means a bad thing. For non-table-based stores, however, these optimizations might not make sense. Making the interfaces compatible allows implementations where a runtime object retrieved from the store implements several of the interfaces without breaking any runtime behavior. This can be viewed as a first step; a further refinement would be to let these interfaces extend each other to underline the model and make the implementations simpler.
(For other types of implementation like MapDB, where it is always cheaper to return the Task object directly than to create a different object, we needed to be able to return a Task and have it work as a TaskSummary when that is what the caller requested. All interfaces now use matching method names to allow for this.)
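
A toy illustration of that last point, using hypothetical stand-in interfaces rather than the real jBPM ones (which have many more methods): because the shared methods now have matching names and signatures, a single stored object can be handed out wherever either view is expected.

// Hypothetical stand-ins for TaskSummary and Task. They do not extend each other
// (the post mentions that as a possible further refinement), but the overlapping
// methods use the same names and return types.
interface TaskSummaryView {
    long getId();
    String getName();
}

interface TaskView {
    long getId();
    String getName();
    String getDescription();
}

// A store like MapDB can keep one object per task and return it directly,
// whether the caller asked for the full task or just a summary.
class StoredTask implements TaskView, TaskSummaryView {
    private final long id;
    private final String name;
    private final String description;

    StoredTask(long id, String name, String description) {
        this.id = id;
        this.name = name;
        this.description = description;
    }

    public long getId() { return id; }
    public String getName() { return name; }
    public String getDescription() { return description; }
}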

Extensible TimerJobFactoryManager / TimerService

In version 6, the only possible implementations of a TimerJobFactoryManager were those bound at construction time to the values of the TimeJobFactoryType enum. A refactor was done to extend the existing types and allow other types of timer job factories to be added dynamically.

Creating your own persistence. The MapDB case

All these interfaces can be implemented anew to create a completely different persistence model, if desired. For MapDB, this is exactly what was done. The MapDB implementation, which is still under review, consists of three new modules:
  • org.kie:drools-persistence-mapdb
  • org.kie:jbpm-persistence-mapdb
  • org.kie:jbpm-human-task-mapdb
These modules implement the session, process and task model using MapDB implementation classes. Anyone who wants another type of implementation for the KIE components can follow these steps to get one going (a minimal sketch of step 5 follows the list):
  1. Create modules that combine the persistence API projects with the dependencies of your persistence mechanism
  2. Create a model implementation based on the given interfaces, with all necessary configurations and annotations
  3. Create your own (Process|Task)PersistenceContext(Manager) classes, implementing how persistent objects are stored
  4. Create your own managers (WorkItemManager, ProcessInstanceManager, SignalManager) and factories, with all the extra steps needed to persist your model
  5. Create your own KieStoreServices implementation, which creates a session with the required configuration, and add it to the classpath
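
As an illustration of step 5, here is a minimal sketch of what a custom KieStoreServices implementation could look like. It is a sketch, not the actual MapDB module: the helper method and the wiring comments are placeholders, and the exact interface signatures should be checked against the kie-api version you build against. Registration "on the classpath" is typically done through the KIE service discovery mechanism (a META-INF/kie.conf entry in Drools 7).

import org.kie.api.KieBase;
import org.kie.api.persistence.jpa.KieStoreServices;
import org.kie.api.runtime.Environment;
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.KieSessionConfiguration;

public class MyStoreKnowledgeService implements KieStoreServices {

    @Override
    public KieSession newKieSession(KieBase kbase, KieSessionConfiguration conf, Environment env) {
        // Create a new persistent session backed by your own store: typically a
        // command-based session whose interceptors persist a PersistentSession
        // implementation (your equivalent of SessionInfo / MapDBSession) on each command.
        return createOrLoadSession(null, kbase, conf, env);
    }

    @Override
    public KieSession loadKieSession(int id, KieBase kbase, KieSessionConfiguration conf, Environment env) {
        return loadKieSession(Long.valueOf(id), kbase, conf, env);
    }

    @Override
    public KieSession loadKieSession(Long id, KieBase kbase, KieSessionConfiguration conf, Environment env) {
        // Look the PersistentSession up in your store and rebuild the KieSession from it.
        return createOrLoadSession(id, kbase, conf, env);
    }

    private KieSession createOrLoadSession(Long id, KieBase kbase, KieSessionConfiguration conf, Environment env) {
        // Placeholder: this is where the persistence context managers, work item
        // manager and signal manager factories from steps 3 and 4 are wired in.
        throw new UnsupportedOperationException("wire your persistence mechanism here");
    }
}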

You’re not alone: The MultiSupport case

MultiSupport is a Denmark-based company that has used this refactor to create its own persistence implementation. They provide an archiving product focused on O(1) archive retrieval, and had a strong interest in getting their internal processes to work using the same persistence mechanism they use for their archives.
We worked with them on an implementation aimed at keeping response times flat for very large databases. Given their internal mechanism for lookup and retrieval of data, they were able to create an implementation that handles millions of active tasks with virtually no degradation in response time.
At MultiSupport we have used the persistence API to create a tailored store based on our in-house storage engine. Our motivation has been to provide unlimited scalability, extended search capabilities, simple distribution, and a level of performance we struggled to achieve with the JPA implementation. We think this can be used as a showcase of just how far you can go with the new persistence API. With the current JPA implementation and a dedicated SQL server we achieved an initial performance of less than 10 'start process' operations per second; with the upcoming release, on a single application server, we achieve more than ten times that.


Wednesday, May 10, 2017

An Executable DMN Solution for Business Users - bpmNEXT presentation

The video recording from the bpmNEXT presentation we did a few weeks ago is up!

In this presentation, Bruce and I demo the end-to-end, full (level 3) DMN solution built in partnership with Trisotech and Method&Style.




Here it is:




Wednesday, April 26, 2017

End to end BPM (with a splash of DMN)

Red Hat Summit next week is shaping up to be one of the best ever!

And if you are a Drools or jBPM enthusiast, you will be busy: another top presentation that we have lined up for you comes from a partnership between Signavio and Red Hat. Duncan Doyle and Tom Debevoise will be driving the show on this one, with a great example of how to model processes (and a few decisions) with the BPMN and DMN standards using the awesome tools from Signavio, and then deploy those models into the solid Drools and jBPM engines for execution!

This is End to End BPM: from Process Modeling to Execution with Signavio and Red Hat!

Join us on Wednesday, May 3rd, at 3:30pm!

And here is some extra detail from Tom:

End to End BPM


For nearly a decade, designing processes in Business Process Model and Notation (BPMN) has been a best practice for aligning business and technical objectives. With BPMN, the business analyst or subject matter expert can precisely define the interactions of customers, systems and trading partners, together with the activities and events that drive them. Because the notation is a standard, the meaning of the process model is unambiguous.
Business uses BPMN to define:
·       The roles of the participants
·       Their responsibilities
·       The timing and sequence of events
·       How to handle errors and exceptions

Figure 1: Example BPMN process in Signavio
With the Signavio Process Manager, all stakeholders can collaborate on the process model, with the ability to communicate comments and concerns and a shared definition of terms. As shown in Figure 1, BPMN activities can denote where forms, services and scripts are needed. BPMN is more than a drawing convention: compliant software can export the diagram in an XML format that other systems can read. Signavio and Red Hat have leveraged this capability so that processes and more can be exchanged.

Figure 2: The same BPMN process in BPM Suite's KIE Workbench
To create an executable process, the technical team would then add the code for user forms, scripts and services. Processes in the Signavio Process Manager can therefore be exported to the BPM Suite for this purpose.
Most business analysts are not concerned with 'code', except in areas of compliance where very detailed logic, including quantities, dates and computations, is critical. Recently, BPMN has been extended to include decision modeling with the Decision Model and Notation (DMN). While separate from BPMN, DMN has been designed to work with it. With decision modeling, business analysts can control a process by determining the logic for:
·       What needs to be done next
·       Who needs to do it
·       When and where it is done
·       And, importantly, whether any important rules were broken
Figure 3: Decision logic for the process in DMN
Decision logic can be exported from the Signavio Process Manager and incorporated into the KIE Workbench. The process in Figures 1 and 2 is controlled by the decision in Figure 3.

The teamwork of Signavio and Red Hat is a perfect separation of concerns between the business and IT. Because it is designed to be easy to use and collaborative, the Signavio Process Manager is the perfect environment for developing the business view of a process or a decision. Similarly, because it can leverage the power and scalability of the entire Red Hat middleware stack, the BPM Suite is the perfect environment for turning these decisions into an executable form and hosting them.


Tuesday, April 25, 2017

Just a few... million... rules... per second!

How would you architect a solution capable of executing literally millions of business rules per second? That also integrates hybrid solutions in C++ and Java? While at the same time drives latency down? And that is consumed by several different teams/customers?

Here is your chance to ask the team from Amadeus!

They prepared a great presentation for you at the Red Hat summit next week:

Decisions at a fast pace: scaling to multi-million transactions/second at Amadeus

During the session they will talk about their journey from requirements to the solution they built to meet their huge demand for decision automation. They will also talk about how a collaboration with Red Hat helped to achieve their goals.

Join us for this great session on Thursday, May 4th, at 3:30pm!





DMN demo at Red Hat Summit

We have an event packed full of Drools, jBPM and Optaplanner content coming next week at the Red Hat Summit, but if you would like to know more about Decision Model and Notation and see a really cool demo, then we have the perfect session for you!

At the Decision Model and Notation 101 session, attendees will get a taste of what DMN brings to the table: how it allows business users to model executable decisions using a fun, high-level, graphical language that promotes interoperability and preserves their investment by preventing vendor lock-in.

But this will NOT be your typical slideware presentation. We have prepared a really nice demo of the end-to-end DMN solution announced by Trisotech a few days ago. During the session you will see a model being created with the Trisotech DMN Modeler, statically analyzed using the Method&Style DT Analysis module and executed in the cloud using Drools/Red Hat BRMS.

Come and join us on Tuesday, May 2nd at 3:30pm.

It is a full 3-course meal, if you will. And you can follow that up with drinks at the reception happening from 5pm to 7pm at the partner Pavilion, where you can also talk to us at the Red Hat booth about it and anything else you are interested in.

Happy Drooling!





Wednesday, April 12, 2017

DMN Quick Start Program announced

Trisotech, a Red Hat partner, announced today the release of the DMN Quickstart Program.

Trisotech, in collaboration with Bruce Silver Associates, Allegiance Advisory and Red Hat, is offering the definitive Decision Management Quick Start Success Program. This unique program provides the foundation for learning, modeling, analyzing, testing, executing and maintaining DMN level 3-compliant decision models, as well as best practices to incorporate in an enterprise-level Decision Management Center of Excellence.

The solution is a collaboration between the partner companies around the DMN standard. This is just one more advantage of standards: not only are users free from the costs of vendor lock-in, but standards also allow vendors to collaborate in order to offer customers complete solutions.



Tuesday, April 11, 2017

An Open Source perspective for the youngsters

Please allow me to take a break from the technical/community oriented posts and talk a bit about something that has been on my mind a lot lately. Stick with me and let me know what you think!

Twenty-one years ago, Leandro Komosinski, one of the best teachers (mentor might be more appropriate) I had, told me in one of our meetings:

"- You should never stop learning. In our industry, if you stop learning, after three years you are obsolete. Do it for 5 years and you are relegated to maintaining legacy systems or worse, you are out of the market completely. "

While this seems pretty obvious today, it was a big insight for that 18-year-old boy. I don't really have any data to back the claim or the timeframes mentioned, but that advice has stuck with me ever since.

It actually applies to everything; it doesn't need to be technology. The gist of it: it is important to never stop learning, never stop growing, personally and professionally.

That brings me to the topic I would like to talk about. Nowadays, I talk to a lot of young developers. Unfortunately, several of them, when asked "What do you like to do? What is your passion?", either don't know or just offer generic answers: "I like software development".

"But, what do you like in software development? Which books have you been reading? Which courses are you taking?" And the killer question: "which open source projects are you contributing to?"

The typical answer is: “- the company I work for does not give me time to do it.” 

Well, let me break it down for you: “this is not about the company you work for. This is about you!” :) 

What is your passion? How do you fuel it? What are you curious about? How do you learn more about it?

It doesn’t need to be software, it can be anything that interests you, but don’t waste your time. Don’t wait for others to give you time. Make your own time.

And if your passion is technology or software, then it is even easier. Open Source is a lot of things to a lot of people, but let me skip ideology. Let me give you a personal perspective for it: it is a way to learn, to grow, to feed your inner kid, to show what you care for, to innovate, to help.

If you think about Open Source as "free labour" or "work", you are doing it wrong. Open source is like starting a master's degree and writing your thesis, except you don't have teachers (you have communities), you don't have classes (you do your own exploratory research), you don't have homework (you apply what you learn) and you don't have a diploma (you have your project to proudly flaunt to the world).

It doesn’t matter if your project is used by the Fortune 500 or if it is your little pet that you feed every now and then. The important part is: did you grow by doing it? Are you better now than you were when you started?

So here is my little advice for the youngsters (please take it at face value):

- Be restless, be inquisitive, be curious, be innovative, be loud! Look for things that interest you in technology, arts, sociology, nature, and go after them. Just never stop learning, never stop growing. And if your passion is software development, then your open source dream project is probably a google search away.

Happy Drooling,
Edson



Saturday, April 01, 2017

A sneak peek into what is coming! Are you ready?

As you might have guessed already, 2017 will be a great year for Drools, jBPM and Optaplanner! We have a lot of interesting things in the works! And what better opportunity to take a look under the hood at what is coming than joining us on a session, side talk or over a happy hour in the upcoming conferences?

Here is a short list of the sessions we have at two great conferences in the next month! The team and I hope to meet you there!

Oh, and check the bottom of this post for a discount code for the Red Hat Summit registration!


Santa Barbara, California April 18-20, 2017

Tuesday, March 21, 2017

DMN 1.1 XML: from modeling to automation with Drools 7.0

I am a freelance consultant, but I am acting today as a PhD student. The global context of my thesis is Enterprise Architecture (EA), which requires modeling the enterprise. As one aspect of EA is business process modeling, I have been using BPMN for years, but this notation is not very appropriate for representing decision criteria: a cascade of nested gateways quickly becomes difficult to understand and then to modify. So, when the OMG published the first version 1.0 Beta of the DMN specification in 2014, I found DMN a very interesting notation for modeling decision-making. I succeeded in developing my own DMN modeling tool, based on the DMN metamodel, using the Sirius plugin for Eclipse. But even the subsequent "final" version 1.0 of the DMN specification was not very mature.

The latest version of DMN, 1.1, published in June 2016, is quite good. In the meantime, software vendors (at least twenty) have launched good modeling tools, such as Signavio Decision Manager (free for academics), which is used for this article. This Signavio tool was already able to generate specific DRL files for running DMN models on the current version 6 of the Drools BRMS. In addition to the graphics, some vendors recently added the capability to export DMN models (diagram and decision tables) into "DMN 1.1 XML" files, which are compliant with the DMN specification. And the good news is that a BRMS like Drools (the future version 7, already available in beta) is able to run these DMN XML files to automate decision-making; only a few lines of Java code are required to invoke these high-level DMN models.
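
For reference, invoking such a model from Java with the Drools 7 DMN runtime looks roughly like the sketch below. The namespace, model name and input variable are placeholders for whatever your exported DMN 1.1 XML defines, and the file is assumed to be packaged as a resource on the classpath (for example in a KJAR with a default KIE session declared in kmodule.xml).

import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;
import org.kie.dmn.api.core.DMNContext;
import org.kie.dmn.api.core.DMNModel;
import org.kie.dmn.api.core.DMNResult;
import org.kie.dmn.api.core.DMNRuntime;

public class DmnInvocationExample {
    public static void main(String[] args) {
        KieServices ks = KieServices.Factory.get();
        KieContainer kieContainer = ks.getKieClasspathContainer();

        // Assumes a default KIE session is defined in kmodule.xml
        KieSession kieSession = kieContainer.newKieSession();
        DMNRuntime dmnRuntime = kieSession.getKieRuntime(DMNRuntime.class);

        // Placeholders: use the namespace and name declared in your DMN 1.1 XML file
        DMNModel dmnModel = dmnRuntime.getModel(
                "http://www.example.org/my-dmn-namespace", "My Decision Model");

        DMNContext context = dmnRuntime.newContext();
        context.set("Input Data Name", 42); // placeholder input data

        DMNResult result = dmnRuntime.evaluateAll(dmnModel, context);
        result.getDecisionResults().forEach(
                dr -> System.out.println(dr.getDecisionName() + " = " + dr.getResult()));
    }
}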

This new approach of consuming the "DMN 1.1 XML" interchange model directly is better for tool independence and model portability. Here is a short comparison between the former classic but specific solution and the new generic solution, using the Signavio Decision Manager tool (latest version 10.13.0). MDA (Model Driven Architecture) and its three models, CIM, PIM and PSM, give us an appropriate reading grid for this comparison:

3 MDA models                        | Description                                                                | Classic specific DMN solution: from Signavio Decision Manager to BRMS Drools
CIM (Computation Independent Model) | Representation model for business, independent of computer considerations | DRD (Decision Requirements Diagram) + Decision Tables
PIM (Platform Independent Model)    | Design model for computing, independent of the execution platform         | — (skipped)
PSM (Platform Specific Model)       | Design model for computing, specific to the execution platform            | DRL (Drools Rule Language) + DMN Formulae Java8-1.0-SNAPSHOT.jar

The visible aspect of DMN is its emblematic Decision Requirements Diagram (DRD), which can be completed with Decision Tables representing the business logic for decision-making. A DRD and its Decision Tables compose a CIM model, independent of any computer considerations.

Then, in the classic but specific DMN solution, Signavio Decision Manager is able to export, from a business DMN model (DRD diagram and Decision Tables), a DRL file directly for the Drools rules engine. So this solution skips the intermediate PIM level, which is not very compliant with the MDA concept. Note that this DRL file needs a specific Signavio jar library with DMN formulae.

3 MDA models                        | Description                                                                | New generic DMN solution: from Signavio Decision Manager (or other tools) to BRMS Drools (or other BRMS)
CIM (Computation Independent Model) | Representation model for business, independent of computer considerations | DRD (Decision Requirements Diagram) + Decision Tables
PIM (Platform Independent Model)    | Design model for computing, independent of the execution platform         | DMN 1.1 XML (interchange model) containing FEEL expressions
PSM (Platform Specific Model)       | Design model for computing, specific to the execution platform            | — (not needed)

The invisible aspect of DMN is its DMN XML interchange model, which is very useful for exchanging a model between modeling tools. DMN XML is also very useful for going from model to automation. The DMN XML model takes computer considerations into account, but as it is defined in the DMN specification, a standard published by the OMG (Object Management Group), it is independent of any execution platform, so it is a PIM model. DMN XML complies with the DMN metamodel and can be checked with an XSD schema provided by the OMG. The latest version 1.1 of DMN has refined this DMN XML format.

As DMN is a declarative language, a DMN XML file contains essentially declarations. The business logic it includes can be expressed with FEEL (Friendly Enough Expression Language) expressions. All the entities required for a DMN model (input data, decision tables, rules, output decisions, etc.) are exported into the DMN XML file through a mechanism called serialization. This is why automation is now possible directly from DMN XML. Note that not all DMN modeling tools can export (or import) the DMN XML format.

With the new generic DMN solution, Signavio Decision Manager is now able to export the same business DMN model (DRD diagram and decision tables) as a "DMN 1.1 XML" interchange model. As the upcoming 7.0.0 version of Drools is able to interpret the "DMN 1.1 XML" format directly, the last level, PSM, which is specific to the execution platform, is not needed anymore.

The new generic DMN solution, which does not skip the PIM level, is definitely better than the specific one and is a good basis for automating decision-making. Another advantage, as Signavio points out, is that this new approach using "DMN 1.1 XML" reduces vendor lock-in.

Thierry BIARD
