Tuesday, October 09, 2018

Early video of #jBPM Case Modeller

Early video of #jBPM Case Modeller :) UX and L&F improvements to come https://youtu.be/AjSSKdbM_Ns 





Tuesday, October 02, 2018

DMN webinar on October 18, 2018


Rule engines are a powerful yet flexible tool for defining and implementing huge sets of business requirements and constraints. While the Drools Rule Language (DRL) may be appealing to technically savvy domain experts, a new visual standard has emerged in the Decision Management space to bridge the gap between technical users and business analysts: in 2015 the Object Management Group published the Decision Model & Notation (DMN), a specification for a graphical decision language expressly designed for business users.
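For readers unfamiliar with DRL, here is a minimal, purely illustrative rule (the Applicant fact and its fields are hypothetical):

rule "Approve adult applicant"
when
    // match any Applicant fact whose age is at least 18
    $a : Applicant( age >= 18 )
then
    // mark the matched applicant as eligible
    modify( $a ) { setEligible( true ) }
end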

Drools provides an open source execution engine with full DMN support at conformance level 3. If you're curious to know more, there's no better way than joining the free webinar, where the features of this new standard will be explored in detail.


Monday, July 09, 2018

Mario Fusco is the new Drools Project Lead

It is my honor to announce that Mario Fusco will be taking over as the new Drools Project Lead.

Mario is a Principal Software Engineer at Red Hat, working on the development of the Drools core. He has vast experience as a Java developer and, among other accomplishments, was named a Java Champion in 2016. Mario previously created and led the open source Lambdaj project and has been involved in (and often led) many enterprise-level projects in several industries, ranging from media companies to the financial sector.

Mario is a frequent speaker at conferences like Red Hat Summit, Devoxx, Voxxed, JavaOne, LambdaWorld and others. Mario has authored several articles for InfoQ and DZone, and co-authored the "Java 8 in Action" book published by Manning. His Twitter following is another hint at his popularity; if you would like to keep up with his latest insights, I suggest you hit that follow button.

Mario joined Red Hat in 2011 to work on the core engine of Drools and has since made invaluable contributions, including the development and improvement of the latest core algorithm. Among his interests are also functional programming and domain specific languages.

If you ever have the opportunity to interact with him in person, you will experience firsthand how nice a person he is and how pleasant it is to have a talk with him. You can even offer him a beer; he will like that. But whatever you do, make sure you follow proper Italian etiquette (or is that Mario's etiquette?): no pineapple on your pizza and no cappuccino after meals.

Please join me in congratulating Mario on his new role as the Drools Project Lead.

Edson Tirelli




Wednesday, April 18, 2018

The DMN Cookbook has been published

The Decision Model and Notation (DMN) standard offers something no previous attempt at standardizing decision modelling did: a simple, effective graphical language for documenting and modelling business decisions. It defines both the syntax and the semantics of the model, allowing IT and business teams to "speak the same language". It also ensures interoperability between vendor tools that support the standard, protecting customers' investment and IP.

It was an honour to work with accomplished author Bruce Silver on the "DMN Cookbook", a book that explains the features of the standard by example, showing solutions to real modelling problems. It discusses what DMN offers that is different from traditional rule authoring languages, as well as how to leverage its features to create robust solutions.

Topics covered include:

  • What is DMN?
  • How DMN differs from traditional rule languages
  • DMN Basics
    • DRG elements and DRDs
    • Decision tables and other boxed expressions
    • FEEL
  • Decision services
  • Practical examples
    • Uniform Residential Loan Application: validation, handling null values, handling XML input
    • GSE Mortgage Eligibility: variations using a central registry
    • Canadian Sales Tax: variations without a central registry (dynamic and static composition)
    • Timing the Stock Market: modeling a state chart with DMN
    • Land Registry: DMN-enhanced Smart Contract
    • Decision Service Deployment: automated and manual
    • Decision Service Orchestration: BPMN or Microsoft Flow

More information on the book website.

Available on Amazon.



bpmNEXT 2018 day 1 videos are already online!

The organizers of bpmNEXT 2018 are outdoing themselves! The videos from the first day of the conference are already available.

In particular, the presentations from Denis Gagné, Bruce Silver and Edson Tirelli are directly related to Drools, with content related to DMN. I also recommend the presentation from Vanessa Bridge, as it relates to BPM and the research we've been doing on blockchain.


Smarter Contracts with DMN: Edson Tirelli, Red Hat https://youtu.be/tdpZgbQbF9Q

Timing the Stock Market with DMN: Bruce Silver, methodandstyle.com https://youtu.be/vHCIC1HGbHQ

Decision as a Service (DaaS): The DMN Platform Revolution: Denis Gagné, Trisotech https://youtu.be/sYAIcBhVhIc

Secure, Private, Decentralized Business Processes for Blockchains: Vanessa Bridge, ConsenSys https://youtu.be/oww8zMzxvZA

The Future of Process in Digital Business: Jim Sinur, Aragon Research https://youtu.be/iBJBbXeVYUA

A New Architecture for Automation: Neil Ward-Dutton, MWD Advisors https://youtu.be/-AeijpL4b98

Turn IoT Technology into Operational Capability: Pieter van Schalkwyk, XMPro https://youtu.be/G7C01e8qyac

Business Milestones as Configuration: Joby O'Brien and Scott Menter, BPLogix https://youtu.be/D_heO33fyC0

Designing the Data-Driven Company: Elmar Nathe, MID GmbH https://youtu.be/zb__xVsOEA0

Using Customer Journeys to Connect Theory with Reality: Till Reiter and Enrico Teterra, Signavio https://youtu.be/ov0SqJCMmoY

Discovering the Organizational DNA: Jude Chagas Pereira, IYCON; Frank Kowalkowski, KCI https://youtu.be/NsCDgKPsTCs

Enjoy!


Monday, February 26, 2018

The Drools Executable Model is alive

Overview

The purpose of the executable model is to provide a pure Java-based representation of a rule set, together with a convenient Java DSL to programmatically create such a model. The model is low level and designed for the user to provide all the information the engine needs, such as the lambdas for index evaluation. This keeps it fast and avoids building too many assumptions into this level. It is expected that higher-level, more end-user-focused representations can be layered on top in the future. This work also strongly complements the rule units work, which provides a Java-oriented way to provide data and control orchestration.

Details

This model is generic enough to be independent from Drools, but it can be compiled into a plain Drools knowledge base. For this reason the implementation of the executable model has been split into two subprojects:
  1. drools-canonical-model is the canonical representation of a rule set model, totally independent from Drools
  2. drools-model-compiler compiles the canonical model into the Drools internal data structures, making it executable by the engine
The introduction of the executable model brings a set of benefits in different areas:
  • Compile time: in Drools 6 a kjar contained the drl files and other Drools artifacts defining the rule base, together with some pre-generated classes implementing the constraints and the consequences. Those drl files had to be parsed and compiled from scratch when the kjar was downloaded from the Maven repository and installed in a KieContainer, making this process quite slow, especially for large rule sets. Conversely, it is now possible to package inside the kjar the Java classes implementing the executable model of the project's rule base and to recreate the KieContainer and its KieBases out of it in a much faster way. The kie-maven-plugin automatically generates the executable model sources from the drl files during the compilation process.
  • Runtime: in the executable model all constraints are defined as Java lambda expressions. The same lambdas are also used for constraint evaluation, so it is possible to get rid of both mvel for interpreted evaluation and the jitting process that transformed the mvel-based constraints into bytecode, a process that caused a slow warm-up.
  • Future research: the executable model will make it possible to experiment with new rule engine features without having to encode them in the drl format and modify the drl parser to support them.

Executable Model DSLs

One goal while designing the first iteration of the DSL for the executable model was to get rid of the notion of a pattern and to consider a rule as a flow of expressions (constraints) and actions (consequences). For this reason we called it the Flow DSL. Some examples of this DSL are available here.
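To give a taste of it, here is a minimal sketch of a rule written with the Flow DSL; the Person fact is hypothetical, static imports from the executable model DSL entry points (e.g. org.drools.model.DSL) are assumed, and method names may differ slightly between versions:

// declare the variables (bindings) used by the rule
Variable<Person> markV = declarationOf( Person.class );
Variable<Person> olderV = declarationOf( Person.class );

Rule rule = rule( "Persons older than Mark" )
    .build(
        // constraint: a Person named Mark
        expr( "exprA", markV, p -> p.getName().equals( "Mark" ) ),
        // constraint: another Person older than Mark
        expr( "exprB", olderV, markV, ( p1, p2 ) -> p1.getAge() > p2.getAge() ),
        // consequence: executed for each matching pair
        on( olderV, markV ).execute( ( p1, p2 ) ->
            System.out.println( p1.getName() + " is older than " + p2.getName() ) )
    );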
However, after implementing the Flow DSL, it became clear that avoiding the explicit use of patterns obliged us to implement extra logic with both a complexity and a performance cost: in order to properly recreate the data structures expected by the Drools compiler, it is necessary to reassemble the patterns out of those apparently unrelated expressions.
For this reason we decided to reintroduce patterns in a second DSL that we called the Pattern DSL. This made it possible to bypass the expression-grouping algorithm, which had to fill an artificial semantic gap and was also time-consuming at runtime.
We believe both DSLs are valid for different use cases, so we decided to keep and support both. In particular, the Pattern DSL is safer and faster (even if more verbose), so it is the DSL that is automatically generated when creating a kjar through the kie-maven-plugin. Conversely, the Flow DSL is more succinct and closer to how a user may want to programmatically define a rule in Java, and we plan to make it even less verbose by automatically generating, through a post-processor, the parts of the model defining indexing and property reactivity. In other words, we expect the Pattern DSL to be written by machines and the Flow DSL, eventually, by humans.
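For comparison, here is the same hypothetical rule sketched with the Pattern DSL, where each constraint is explicitly attached to the pattern of the variable it applies to (static imports from e.g. org.drools.model.PatternDSL assumed; exact method names may vary between versions):

Rule rule = rule( "Persons older than Mark" )
    .build(
        // pattern on the first Person, with its own constraint
        pattern( markV ).expr( "exprA", p -> p.getName().equals( "Mark" ) ),
        // pattern on the second Person, constrained against the first
        pattern( olderV ).expr( "exprB", markV, ( p1, p2 ) -> p1.getAge() > p2.getAge() ),
        // consequence, as in the Flow DSL version
        on( olderV, markV ).execute( ( p1, p2 ) ->
            System.out.println( p1.getName() + " is older than " + p2.getName() ) )
    );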


Programmatic Build

As evidenced by the test cases linked in the former section, it is possible to programmatically define one or more rules in Java and then add them to a Model with a fluent API:

Model model = new ModelImpl().addRule( rule );

Once you have this model, which as explained is totally independent from Drools algorithms and data structures, it is possible to create a KieBase out of it as follows:

KieBase kieBase = KieBaseBuilder.createKieBaseFromModel( model );

Alternatively, it is also possible to create an executable-model-based kieproject starting from plain drl files, adding them to a KieFileSystem as usual:

KieServices ks = KieServices.Factory.get();
KieFileSystem kfs = ks.newKieFileSystem()
                      .write( "src/main/resources/r1.drl", createDrl( "R1" ) );
KieBuilder kieBuilder = ks.newKieBuilder( kfs );

and then building the project using a new overload of the buildAll() method that accepts a class specifying which kind of project you want to build:

kieBuilder.buildAll( ExecutableModelProject.class );

In this way the KieBuilder will generate the executable model (based on the Pattern DSL), and the resulting KieSession

KieSession ksession = ks.newKieContainer(ks.getRepository()
                                           .getDefaultReleaseId())
                        .newKieSession();

will work with lambda-expression-based constraints as described in the first section of this document. In the same way it is also possible to generate the executable model from the Flow DSL by passing a different project class to the KieBuilder:

kieBuilder.buildAll( ExecutableModelFlowProject.class );

but, for the reasons explained when discussing the two DSLs, it is better to use the pattern-based one for this purpose.

Kie Maven Plugin

To generate a kjar embedding the executable model using the kie-maven-plugin, it is necessary to add to the pom.xml file the dependencies for the two aforementioned subprojects implementing the model and its compiler:

<dependencies>
 <dependency>
   <groupId>org.drools</groupId>
   <artifactId>drools-model-compiler</artifactId>
 </dependency>
 <dependency>
   <groupId>org.drools</groupId>
   <artifactId>drools-canonical-model</artifactId>
 </dependency>
</dependencies>

and also add the plugin to the plugins section:

<build>
 <plugins>
   <plugin>
     <groupId>org.kie</groupId>
     <artifactId>kie-maven-plugin</artifactId>
     <version>${project.version}</version>
     <extensions>true</extensions>
   </plugin>
 </plugins>
</build>

An example of a pom.xml file already prepared to generate the executable model is available here. By default the kie-maven-plugin still generates a drl-based kjar, so it is necessary to run the plugin with the following argument:

-DgenerateModel=<VALUE>
Where <VALUE> can be one of three values:

  • YES
  • NO
  • WITHDRL

Both YES and WITHDRL will generate and add to the kjar the Java classes implementing the executable model corresponding to the drl files in the original project, with the difference that the first will exclude the drl files from the generated kjar, while the second will also include them. In this second case, however, the drl files will play only a documentation role, since the KieBase will be built from the executable model regardless.
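For example, to generate the executable model classes and leave the drl files out of the kjar, a typical invocation would be:

mvn clean install -DgenerateModel=YES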

Future developments

As anticipated, one of the next goals is to make the DSLs, especially the Flow one, more user friendly, in particular by generating with a post-processor all the parts that can be automatically inferred, such as those related to indexes and property reactivity.
Orthogonally to the executable model, we have also improved the modularity and orchestration of rules, especially through the work done on rule units. This focus on pojo-ification complements this direction of research around pure Java DSLs, and we already have a few simple examples of how the executable model and rule units can be mixed for this purpose.


Wednesday, February 07, 2018

Running multiple Workbench modules on the latest IntelliJ IDEA with live reloading (client side)


NOTE: The instructions below apply only to the old version of the gwt-maven-plugin

At some point in the past, IntelliJ released an update that made it impossible to run the Workbench using the GWT plugin. After exchanging ideas with people on the team and collecting the proposed solutions, a few workarounds emerged. This guide provides information on running any Errai-based application in the latest version of IntelliJ, along with other modules, to take advantage of IntelliJ's (unfortunately limited) live-reloading capabilities and speed up the development workflow.


Table of contents


1. Running Errai-based apps in the latest IntelliJ
2. Importing other modules and using live reload for client side code
3. Advanced configurations
3.1. Configuring your project's pom.xml to download and unpack Wildfly for you
3.2. Alternative workaround for non-patched Wildfly distros



1. Running Errai-based apps in the latest IntelliJ


As Max Barkley described on #logicabyss a while ago, IntelliJ decided to hardcode gwt-dev classes on the classpath when launching Super Dev Mode in the GWT plugin. Since we're using the EmbeddedWildFlyLauncher to deploy the Workbench apps, these dependencies are now deployed inside our Wildfly instance. Nothing too wrong with that, except that the gwt-dev jar depends on apache-jsp, which has a ServletContainerInitializer marker file that causes the deploy to fail.

To solve that issue, the code that looks for the ServletContainerInitializer file and fails the deploy was removed in custom patched versions of Wildfly, which are available in Maven Central under the org.jboss.errai group id.

The following steps provide a quick guide to running any Errai-based application on the latest version of IntelliJ.


1. Download a patched version of Wildfly and unpack it into any directory you like
- For Wildfly 11.0.0.Final go here

2. Import the module you want to work on (I tested with drools-wb)
  - Open IntelliJ, go to File -> Open.. and select the pom.xml file, hit Open then choose Open as Project

3. Configure the GWT plugin execution like you normally would on previous versions of IntelliJ

- VM Options:
  -Xmx6144m
  -Xms2048m
  -Dorg.uberfire.nio.git.dir=/tmp/drools-wb
  -Derrai.jboss.home=/Users/tiagobento/drools-wb/drools-wb-webapp/target/wildfly-11.0.0.Final


- Dev Mode parameters:
  -server org.jboss.errai.cdi.server.gwt.EmbeddedWildFlyLauncher


4. Hit the Play button and wait for the application to be deployed


2. Importing other modules and using live reload for client side code


After getting a single webapp running inside the latest version of IntelliJ, it can be very useful to import some of its dependencies as well, so that after changing client code in a dependency you don't have to wait (way) too long for GWT to compile and bundle your application's JavaScript code again.

Simply go to File > New > Module from existing sources.. and choose the pom.xml of the module you want to import.
If you have kie-wb-common or appformer imported alongside another project, you'll most certainly have to apply a patch to the beans.xml file of your webapp.

For drools-wb you can download the patch here. For other projects, such as jbpm-wb, optaplanner-wb or kie-wb-distributions, you'll essentially have to do the same thing, changing the directories inside the .diff file.

If your webapp is up, hit the Stop button and then hit Play again. Now you should be able to re-compile any code changed inside IntelliJ much faster.



3. Advanced configurations

3.1. Configuring your project's pom.xml to download and unpack Wildfly for you


If you prefer a less manual workflow, you can use the maven-dependency-plugin to download and unpack a Wildfly instance of your choice into any directory you like.

After you've added the snippet below to your pom.xml file, remember to add a "Run Maven Goal" step before the Build of your application in the "Before launch" section of your GWT configuration. Here I'm using the process-resources phase, but other phases are OK too.

  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-dependency-plugin</artifactId>
    <executions>
      <execution>
        <id>unpack</id>
        <phase>process-resources</phase>
        <goals>
          <goal>unpack</goal>
        </goals>
        <configuration>
          <artifactItems>
            <artifactItem>
              <!-- Using a patched version of Wildfly -->
              <groupId>org.jboss.errai</groupId>
              <artifactId>wildfly-dist</artifactId>
              <version>11.0.0.Final</version>
              <type>zip</type>
              <overWrite>false</overWrite>
              <!-- Unpacking it into /target/wildfly-11.0.0.Final -->
              <outputDirectory>${project.build.directory}</outputDirectory>
            </artifactItem>
          </artifactItems>
          <skip>${gwt.compiler.skip}</skip>
        </configuration>
      </execution>
    </executions>
  </plugin>



3.2. Alternative workaround for non-patched Wildfly distros


If you want to try a different version of Wildfly, or if you simply don't want to depend on any patched versions, you can still use official distros and exclude the ServletContainerInitializer file from the apache-jsp jar in your M2_REPO folder.

If you're working on a Unix system, the following commands should do the job.

1. cd ~/.m2/repository/

2. zip -d org/eclipse/jetty/apache-jsp/{version}/apache-jsp-{version}.jar META-INF/services/javax.servlet.ServletContainerInitializer

Because you excluded the file manually from the apache-jsp jar, Maven won't try to download the jar again, which makes this workaround permanent as long as you don't erase your ~/.m2/ folder. Keep in mind that if you ever need the apache-jsp jar to have this file back, the best option is to delete the apache-jsp dependency directory and let Maven download it again.
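For instance, on a Unix system the following commands (a sketch; adjust {version} to the one you use) delete the dependency directory and trigger a fresh download on the next build:

1. rm -r ~/.m2/repository/org/eclipse/jetty/apache-jsp/{version}

2. mvn clean install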


New instructions for the new version of the gwt-maven-plugin are to come. Stay tuned!






Friday, November 10, 2017

Building Business Applications with DMN and BPMN

A couple of weeks ago, our own Matteo Mortari delivered a joint presentation and live demo with Denis Gagné from Trisotech at the BPM.com virtual event.

During the presentation, Matteo gave a live demo of a BPMN process and a couple of DMN decision models created using the Trisotech tooling and exported to Red Hat BPM Suite for seamless execution.

Please note that no glue code was necessary for this demo. The BPMN process and the DMN models are natively executed in the platform, no Java knowledge needed.

Enough talking, hit play to watch the presentation... :)




Thursday, October 12, 2017

5 Pillars of a Successful Java Web Application

Last week, Alex Porcelli and I had the opportunity to present two talks at JavaOne San Francisco 2017 related to our work: "5 Pillars of a Successful Java Web Application" and "The Hidden Secret of Java Open Source Projects".

It was great to share the experience we have accumulated over the years building the workbench and the web tooling for the Drools and jBPM platform, and both talks had great attendance (250+ people in the room).


In this series of posts, we'll detail our "5 Pillars of a Successful Java Web Application", trying to give you an overview of our research and also a taste of participating in a great event like JavaOne.
There are a lot of challenges related to building and architecting a web application, especially if you want to keep your codebase updated with modern techniques without throwing away a lot of your code every two years in favor of the latest trendy JS framework.
In our team we are able to successfully keep a 7+ year old Java application up-to-date, combining modern techniques with a legacy codebase of more than 1 million LOC, with an agile, sustainable, and evolutionary web approach.
More than just choosing and applying any web framework as the foundation of our web application, we based our web application architecture on 5 architectural pillars that proved crucial for our platform’s success. Let's talk about them:

1st Pillar: Large Scale Applications

The first pillar is that every web application architecture should be concerned with the potential of becoming a long-lived, mission-critical application, in other words, a large-scale application. Even if your web application is not exactly big like ours (1M+ lines of web code, 150 sub-projects, 7+ years old), you should be concerned about the possibility that your small web app will become a big and important codebase for your business. What if your startup becomes an overnight success? What if your enterprise application needs to integrate with several external systems?
Every web application should be built as a large-scale application because it is part of a distributed system and it is hard to anticipate what will happen to your application and company in two to five years.
And for us, a critical tool for building these kinds of distributed and large-scale applications throughout the years has been static typing.

Static Typing

The debate of static vs. dynamic typing is very controversial. People who advocate in favor of dynamic typing usually argue that it makes the developer's job easier. This is true for certain problems.
However, static typing and a strong type system, among other advantages, simplify identifying errors that can generate failures in production and, especially for large-scale systems, make refactoring more effective.
Every application demands constant refactoring and cleaning. It’s a natural need. For large-scale ones, with codebases spread across multiple modules/projects, this task is even more complex. The confidence when refactoring is related to two factors: test coverage and the tooling that only a static type system is able to provide.
For instance, we need a static type system in order to find all usages of a method, in order to extract classes, and most importantly to figure out at compile time if we accidentally broke something.
But we are in web development and JavaScript is the language of the web. How can we have static typing in order to refactor effectively in the browser?

Using a transpiler

A transpiler is a type of compiler that takes the source code of a program written in one programming language as its input and produces equivalent source code in another programming language.
This is a well-known Computer Science problem and there are a lot of transpilers that output JavaScript. In a sense, JavaScript is the assembly of the web: the common ground across all the web ecosystems. We, as engineers, need to figure out what is the best approach to deal with JavaScript’s dynamic nature.
A Java transpiler, for instance, takes the Java code and transpiles it to JavaScript at compile time. So we have all the advantages of a statically-typed language, and its tooling, targeting the browser.
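As a toy illustration (not the actual compiler output, which is heavily optimized), consider a trivial Java method; the class is a made-up example:

public class Greeting {
    // plain Java: statically typed and checked at compile time
    public static String greet( String name ) {
        return "Hello, " + name;
    }
}

A transpiler would turn it into an equivalent JavaScript function, conceptually something like: function greet(name) { return "Hello, " + name; }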

Java-to-JavaScript Transpilation

The transpiler that we use in our architecture is GWT. This choice is a bit controversial, especially because the GWT framework was launched in 2006, when the web was a very different place.
But keep in mind that every piece of technology has its own good parts and bad parts. For sure there are some bad parts in GWT (like the Swing-style widgets and the multiple permutations per browser/language), but keep in mind that what our architecture is trying to achieve is static typing on the web, and for this purpose the GWT compiler is amazing.
Our group is part of the GWT steering committee, and the next generation of GWT is all about just these good parts: basically removing or decoupling the early-2000s legacy and keeping only the good parts. In our opinion the best parts of GWT are:
  • Java to JavaScript transpiler: extreme JavaScript performance due to compiler optimizations and static typing on the web;
  • java.* emulation: excellent emulation of the main Java libraries, providing runtime behavior/consistency;
  • JS Interop: almost transparent interoperability between Java <-> JavaScript. This is a key aspect of the next generation of GWT and of the Drools/jBPM platform: embrace and interoperate (two-way) with the JS ecosystem. A minimal sketch follows this list.
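Here is a minimal JS Interop sketch using the standard jsinterop annotations; the Greeter class is a made-up example:

import jsinterop.annotations.JsPackage;
import jsinterop.annotations.JsType;

// Exported so plain JavaScript code can instantiate and call it
@JsType( namespace = JsPackage.GLOBAL )
public class Greeter {
    public String greet( String name ) {
        return "Hello, " + name;
    }
}

// Maps the browser's native console object into Java
@JsType( isNative = true, namespace = JsPackage.GLOBAL )
class console {
    public static native void log( Object message );
}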

Google is currently working on a new transpiler called J2CL (short for Java-to-Closure, using the Google Closure Compiler) that will be the compiler used in GWT 3, the next major GWT release. The J2CL transpiler has a different architecture and scope, allowing it to overcome many of the disadvantages of the previous GWT 2 compiler.

Whereas the GWT 2 compiler must load the entire AST of all sources (including dependencies), J2CL is not a monolithic compiler. Much like javac, it is able to individually compile source files, using class files to resolve external dependencies, leaving greater potential for incremental compilation.
These three good parts are great and, in our opinion, you should really consider using GWT as a transpiler in your web applications. But keep in mind that the most important point here is that GWT is just our implementation of the first pillar. You can consider using other transpilers like TypeScript, Dart, Elm, ScalaJS, PureScript, or TeaVM.
The key point is that every web application should be handled as a large-scale application, and every large-scale application should be concerned about effective refactoring. The best way to achieve this is using statically-typed languages.
This is the first of three posts about our 5 pillars of successful web applications. Stay tuned for the next ones.

[I would like to thank Max Barkley and Alexandre Porcelli for kindly reviewing this article before publication, contributing to the final text, and providing great feedback.]




Wednesday, September 20, 2017

Watch all the sessions from Red Hat Drools Day LIVE from your desktop or mobile, Sept 26th

We will be streaming all the sessions of the Drools Day in NYC on Sept 26th, live!

Use the following link to watch:

https://www.facebook.com/RedHatDeveloperProgram/videos/1421743381254509/

Or watch it here:



