Tuesday, August 31, 2010

Want the latest on jBPM and Drools Flow? Drools and jBPM Boot Camp, San Jose Oct 2010

We are working on the jBPM5 codebase, incorporating the best parts of Drools Flow. For anyone currently on jBPM3, jBPM4 or Drools Flow, or looking to find out more about these technologies and BPMN2, this Boot Camp is a must.

Rules Fest (San Jose, October 2010) is a 3-day rules conference followed by a 1-day boot camp. You can attend just the boot camp or the entire event.
http://rulesfest.org/html/registration.html

Rete NT - 10x faster than Rete 2

A little birdie has just told me that Charles Forgy has finished his latest creation, Rete NT, where the NT stands for "Next Technology". The engine has been built for parallelisation and concurrency in execution and promises to be at least 10x faster than his previous Rete 2 for large systems.

I know that Charles has been researching parallelisation and concurrency for rule engines for over 20 years, and when we last spoke he was talking about set-oriented theory. So I'm guessing it has something along the lines of "Collection Oriented Match" and "Set-Oriented Constructs". Anyway, I can't wait for the official release to get more details; I suspect it'll be announced at his PST site soon: Production Systems Technology.

Thursday, August 26, 2010

Configuring Guvnor to use an external RDBMS made easier

The default Guvnor repository configuration uses embedded Derby databases, which write the workspace and version information to the local file system. This is not always optimal for a production system, where it makes sense to use an external RDBMS.

We added a new section under the "Administration" tab called "Repository Configuration", which helps generate the repository.xml configuration file for a number of databases (Microsoft SQL Server, MySQL, Oracle, PostgreSQL, Derby, H2).

Check out a video showing the new feature.

Gremlin: A Graph-Based Programming Language

Gremlin is an interesting DSL, based on XPath, for traversing graphs:
"Gremlin is a Turing-complete, graph-based programming language developed for key/value-pair multi-relational graphs called property graphs. Gremlin makes extensive use of XPath 1.0 to support complex graph traversals. Connectors exist to various graph databases and frameworks. This language has application in the areas of graph query, analysis, and manipulation."

You can read the introduction and access the full set of presentation slides at dzone:
http://www.dzone.com/links/r/gremlin_a_graph_programming_language.html

The presentation shows a nice example using the Grateful Dead:



Friday, August 20, 2010

Left and Right Unlinking - Community Project Proposal

In an effort to encourage those thinking of learning more about the internals of rule engines, I have written a document on implementing left and right unlinking. I describe the original paper in terms relevant to Drools users, then how it can be implemented in Drools, and finally a series of enhancements over the original paper. The task is actually surprisingly simple, and you only need to learn a very small part of the Drools implementation to do it, so it's a great getting-started task. For really large stateful systems of hundreds or even thousands of rules and hundreds of thousands of facts it should save significant amounts of memory.

Any takers?

Mark

Introduction
The following paper describes left and right unlinking enhancements for Rete-based networks:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.45.6246

A Rete-based rule engine's network consists of two parts: the alpha nodes and the beta nodes. When an object is first inserted into the engine it is discriminated by the object type node, which has one input and one output. From there it may be further discriminated by alpha nodes that constrain on literal values before reaching the right input of a join node in the beta part of the network. Join nodes have two inputs, left and right. The right input receives propagations consisting of a single object from the alpha part of the network. The left input receives propagations consisting of one or more objects from the parent beta node. We refer to these propagating objects as LeftTuples and RightTuples; other engines also use the terms tokens or partial matches. When a tuple propagation reaches a left or right input it is stored in that input's memory and it attempts to join with all possible tuples on the opposite side. If there are no tuples on the opposite side then no join can happen and the tuple just waits in the node's memory until a propagation from the opposite side attempts to join with it. If one side never receives a propagation, the tuples stored on the other side occupy memory without ever producing a join. It would be better if the engine could avoid populating that node's memory until both sides have tuples. Left and right unlinking are solutions to this problem.
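
To make that structure concrete, here is a minimal sketch in Java of a beta (join) node with one memory per input. This is not the actual Drools implementation; the class, field and method names are invented purely for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of a beta (join) node; not the real Drools classes.
class JoinNodeSketch {
    final List<Object[]> leftMemory  = new ArrayList<>(); // LeftTuples: 1..n objects each
    final List<Object>   rightMemory = new ArrayList<>(); // RightTuples: single objects

    // Called when the parent beta node propagates a LeftTuple.
    void assertLeftTuple(Object[] leftTuple) {
        leftMemory.add(leftTuple);              // stored even if the right side is empty
        for (Object fact : rightMemory) {
            attemptJoin(leftTuple, fact);
        }
    }

    // Called when the alpha network propagates a RightTuple (a single fact).
    void assertRightTuple(Object fact) {
        rightMemory.add(fact);                  // stored even if the left side is empty
        for (Object[] leftTuple : leftMemory) {
            attemptJoin(leftTuple, fact);
        }
    }

    void attemptJoin(Object[] leftTuple, Object fact) {
        // evaluate the join constraints and, on success, propagate a new
        // LeftTuple (leftTuple + fact) to the child node
    }
}
```

Note how each input's memory is populated unconditionally; that is exactly the waste that unlinking targets.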

The paper proposes that a node can either be left unlinked or right unlinked, but not both, as then the rule would be completely disconnected from the network. Unlinking an input means that it will not receive any propagations and that the node's memory for that input is not populated, saving memory space. When the opposite side, which is still linked, receives a propagation, the unlinked side is linked back in and receives all the tuples it missed while unlinked. As both sides cannot be unlinked, the paper describes a simple heuristic for choosing which side to unlink: whichever side becomes empty first, unlink the other. On start-up one side is arbitrarily chosen to be unlinked by default; the initial hit from choosing the wrong side is negligible, as the heuristic corrects it after the first set of propagations.

If the left input becomes empty the right input is unlinked, clearing the right input's memory too. The moment the left input receives a propagation it re-attaches the right input, fully populating its memory, and the node can then attempt joins as normal. Vice versa, if the right input becomes empty it unlinks the left input; the moment the right input receives a propagation it re-attaches the left input, fully populating its memory, so that the node can attempt joins as normal.
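
Continuing the simplified sketch above, the basic heuristic could look something like the following. Again, the names are illustrative, and only the left-side half is shown; the right-side half is symmetric.

```java
// Sketch only: one side may be unlinked at a time, per the original paper.
class UnlinkingJoinNodeSketch extends JoinNodeSketch {
    boolean leftLinked  = true;   // arbitrary start-up choice; the heuristic
    boolean rightLinked = false;  // corrects a wrong choice after the first propagations

    void retractLeftTuple(Object[] leftTuple) {
        leftMemory.remove(leftTuple);
        if (leftMemory.isEmpty()) {   // left became empty first: unlink the right
            rightLinked = false;
            rightMemory.clear();      // its memory no longer has to be held
        }
    }

    @Override
    void assertLeftTuple(Object[] leftTuple) {
        if (!rightLinked) {           // a left tuple arrived while the right was unlinked:
            rightLinked = true;       // re-attach the right input and fetch the
            repopulateRightMemory();  // propagations it missed
        }
        super.assertLeftTuple(leftTuple);
    }

    void repopulateRightMemory() {
        // re-propagate the matching facts from the alpha network into rightMemory
    }
}
```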

Implementing Left and Right Unlinking for shared Knowledge Bases

The description of unlinking in the paper won't work for Drools, or for other rule engines that share the knowledge base between multiple sessions. In Drools the session data is decoupled from the main knowledge base and multiple sessions can share the same knowledge base; the paper describes systems where the session data is tightly coupled to the knowledge base and the knowledge base has only a single session. In shared systems a node input that is empty for one session might not be empty for another. Instead of physically unlinking the nodes, as described in the paper, an integer value can be kept in the session's node memory indicating whether the node is unlinked for the left, right or both inputs. When a node attempts to propagate, instead of just creating a left or right tuple and pushing it into the child node, it first retrieves the child node's memory and only creates the tuple and propagates if that input is linked.

This is great, as it also avoids creating tuple objects that would just be discarded afterwards because there would be nothing to join with, making things lighter on the GC. However, it means the engine looks up the node memory twice: once before propagating to the node and again inside the node as it attempts to do joins. Instead the node memory should be looked up once, prior to propagating, and then passed as an argument, avoiding the double lookup.
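
As a rough sketch of how that might look, again with invented names rather than the real Drools classes, the link status can live in a per-session beta memory that the propagating side looks up once:

```java
// Sketch of per-session node memory holding the link status; illustrative only.
class BetaMemorySketch {
    static final int LEFT_LINKED  = 1;   // bit flags, stored per session and per node
    static final int RIGHT_LINKED = 2;

    int linkStatus = RIGHT_LINKED;       // e.g. the left input starts unlinked

    boolean isLeftLinked()  { return (linkStatus & LEFT_LINKED)  != 0; }
    boolean isRightLinked() { return (linkStatus & RIGHT_LINKED) != 0; }
}

interface SessionSketch {
    BetaMemorySketch getNodeMemory(JoinNodeSketch node);   // session-side lookup
}

class AlphaPropagationSketch {
    // The memory is looked up once, before propagating, and would then be passed
    // along so the join node does not have to repeat the lookup for its joins.
    void propagateFromAlpha(Object fact, SessionSketch session, JoinNodeSketch node) {
        BetaMemorySketch memory = session.getNodeMemory(node);
        if (!memory.isRightLinked()) {
            return;                      // unlinked: no RightTuple is created at all
        }
        node.assertRightTuple(fact /* , memory would also be passed here */);
    }
}
```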

Traditional Rete has a memory per alpha node, for each literal constraint, in the network. Drools does not have alpha memory; instead facts are pulled from the object type node. This means that facts may be needlessly evaluated in the alpha part of the network, only to be refused addition to the node memory afterwards. Rete supports something called "node sharing", where multiple rules with similar constructs use the same nodes in the network, and for this reason shared nodes cannot easily be unlinked. As a compromise, when the alpha node is no longer shared, the network can do a node memory lookup prior to the evaluation, check whether that section of the network is unlinked, and avoid attempting the evaluation if it is. This allows left and right unlinking to be used in an engine such as Drools.
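
A small sketch of that compromise, continuing the invented classes above: an unshared alpha node consults the session memory before bothering to evaluate its constraint.

```java
// Sketch only: an unshared alpha node can skip evaluation while downstream is unlinked.
class AlphaNodeSketch {
    boolean shared;          // true if several rules reuse this node
    JoinNodeSketch sink;     // the join node this alpha node feeds into

    void assertObject(Object fact, SessionSketch session) {
        if (!shared && !session.getNodeMemory(sink).isRightLinked()) {
            return;          // downstream input is unlinked: skip the evaluation
        }
        if (evaluate(fact)) {
            sink.assertRightTuple(fact);
        }
    }

    boolean evaluate(Object fact) {
        // the literal constraint, e.g. person.getAge() > 30
        return true;
    }
}
```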

Using Left and Right Unlinking at the Same Time

The original paper describes an implementation in which a node cannot have both its left and right inputs unlinked at the same time. Building on the extension above, which allows unlinking to work with a shared knowledge base, the initial linking status value can be set to both left and right being unlinked. In this initial state, where both sides are unlinked, the leaf node's right input isn't just waiting for a left propagation so the right can re-link itself (which it can't, as the left is unlinked too); it's also waiting to receive its first propagation, and when it does it will link the left input back in. This in turn tells its parent node's right input to do the same, i.e. wait for its first right-input propagation and link the left in when it happens; if it already has a right propagation it links the left in straight away. This trickles up until the root is finally linked in and propagations can happen as normal, and the rule's nodes return to the above heuristics for when to link and unlink.
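
Here is a sketch of that initial "both unlinked" state and the trickle-up linking, again with invented names; the real implementation would of course keep this state in the session's node memories rather than in the node itself.

```java
// Sketch only: both inputs start unlinked; linking trickles up from the leaf.
class LazyLinkNodeSketch {
    LazyLinkNodeSketch parent;     // the node feeding this node's left input; null at the top
    boolean leftLinked  = false;   // initial state: both inputs unlinked
    boolean rightLinked = false;
    boolean hasRightTuple = false; // does this node currently have right tuples?

    // Called on a right-input propagation.
    void onRightTuple() {
        hasRightTuple = true;
        rightLinked = true;
        if (!leftLinked) {
            linkLeft();            // first right tuple while the left is still unlinked
        }
    }

    void linkLeft() {
        leftLinked = true;
        if (parent != null && parent.hasRightTuple && !parent.leftLinked) {
            parent.linkLeft();     // parent already has right input: trickle up now
        }
        // otherwise the parent links its own left in when its first right tuple
        // arrives (see onRightTuple), continuing the trickle towards the root
    }
}
```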

Avoid Unnecessary Eager Propagations

A rule always eagerly propagates all joins, regardless of whether the child node can undertake joins too; for instance, if there are no propagations for the leaf node then no rules can fire, and the eager propagations are wasted work. Unlinking can be extended to prevent some of this eager propagation. Should the leaf node become right unlinked and that right input also become empty, it will unlink the left too (so both sides are unlinked) and go back to waiting for the first right propagation, at which point it re-links the left. If the parent node also has its right input unlinked at the point that its child node unlinks the left, it will do this too, and this repeats up the chain until it reaches a node that has both left and right inputs linked in. This stops any further eager matching that we know cannot result in an activation until the leaf node has at least one right input.
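
Continuing the LazyLinkNodeSketch above, a couple of extra methods give a feel for how this might work; this is my reading of the text and purely illustrative.

```java
// Sketch only: additions to LazyLinkNodeSketch. Called when the node's right
// input memory has emptied out.
void onRightMemoryEmptied() {
    hasRightTuple = false;
    unlinkLeftIfRightUnlinked();
}

void unlinkLeftIfRightUnlinked() {
    if (!rightLinked && leftLinked) {
        leftLinked = false;                       // both sides now unlinked
        if (parent != null) {
            parent.unlinkLeftIfRightUnlinked();   // parent does the same if its right
        }                                         // input is also unlinked; otherwise
    }                                             // the trickle stops there
    // normal linking resumes via onRightTuple() once a right tuple arrives
}
```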

Heuristics to Avoid Churn from Excessive and Unnecessary Unlinking

The only case where left and right unlinking would be a bad idea is in situations that would cause "churn". Churn is when a node with a large amount of right input memory is continually linked in and out, forcing that memory to be repeatedly repopulated, which causes a slow-down. However, heuristics can be used here too, to avoid unnecessary unlinking. The first time an input becomes empty, unlink the opposite input and store a time stamp (the integer counter used for fact handles in the working memory). Then have a minimum delta, say 100. The next time the node attempts to unlink, calculate the delta between the current time stamp (the fact handle counter) and the time stamp recorded when it last unlinked; if it is less than 100 then do nothing, and don't unlink until it is 100 or more. If it is 100 or more then unlink, store the new unlink time stamp, and also take that delta, apply a multiplier (2, 3, 4 etc. depending on how steeply you want it to rise; 3 is a good starting number) and store the result as the new minimum delta. For example, if the delta is 100 then store 300: the next time the node links and attempts to unlink the delta must be 300 or more, the time after that 900, and the time after that 2700.
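
The arithmetic in that heuristic is easier to see in code than in prose, so here is a compact sketch of it; the class and method names are mine, but the initial delta of 100 and the multiplier of 3 are the example numbers from the text.

```java
// Sketch of the churn-avoidance throttle, one instance per node input.
class UnlinkThrottleSketch {
    static final long INITIAL_MIN_DELTA = 100;  // minimum gap before the next unlink
    static final long MULTIPLIER = 3;           // how steeply the gap grows

    long lastUnlinkStamp = -1;                  // fact-handle counter at the last unlink
    long minDelta = INITIAL_MIN_DELTA;

    // factHandleCounter is the working memory's fact-handle counter, used as a
    // cheap time stamp. Returns true if the node may unlink now.
    boolean tryUnlink(long factHandleCounter) {
        if (lastUnlinkStamp < 0) {              // first time the input becomes empty:
            lastUnlinkStamp = factHandleCounter;
            return true;                        // always unlink and record when
        }
        long delta = factHandleCounter - lastUnlinkStamp;
        if (delta < minDelta) {
            return false;                       // too soon: stay linked, avoid churn
        }
        lastUnlinkStamp = factHandleCounter;
        minDelta = delta * MULTIPLIER;          // e.g. 100 -> 300 -> 900 -> 2700
        return true;
    }
}
```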

Friday, August 13, 2010

Don't forget RuleFest October 2010

Don't forget RuleFest 2010 this October. Edson, Kris and I will be there, and the event includes a one-day boot camp for those wanting to discuss their projects or run through examples with us on hand.
http://rulesfest.org/html/home.html



Rules Fest brings you the best and brightest speakers from industry, academia, and private research to share practical knowledge and techniques for creating, utilizing, and managing software that incorporates rule engines, inference engines, logical reasoners, or other rule-based and reasoning technologies.

Rules Fest exists to serve the:

  • Architects,
  • Engineers,
  • Developers, and
  • Programmers

who use these technologies to solve complex information processing and decision-making problems.

http://rulesfest.org/html/home.html

Wednesday, August 04, 2010

Drools Grid (version 2) – #1 Modules Introduction

Hi there, I'm right now committing/merging the new version of the Drools Grid module into the JBoss Drools trunk (5.2.0.SNAPSHOT). The idea of this module and all its submodules is to provide the ability to execute distributed knowledge sessions across a grid of machines/nodes.

To achieve this goal we can set up different components that allow us to transparently distribute our knowledge sessions based on the requirements of our applications.
In this post I will give a quick overview of each of these components, and in the next few posts I will show how we can use this project in real-life scenarios.

Remember that this is a work in progress, so community feedback is appreciated!

Inside the drools-grid directory you will find the following sub-modules:

Drools Grid API (drools-grid-api - Low level API)

This module contains all the low-level APIs to interact with nodes across the grid. Here you will find the core concepts used by the grid internals to define different types of services.
Some of the core interfaces you will find here are:

ExecutionNodeService: this interface represents nodes across the grid that are able to host and execute knowledge sessions.
DirectoryNodeService: this interface represents nodes across the grid that are in charge of hosting a directory with information about what is living inside the grid. Inside these nodes we can find all the ExecutionNodeServices and HumanTaskNodeServices currently running inside our distributed nodes, as well as the knowledge sessions running inside them.
HumanTaskNodeService: this interface represents a service in charge of hosting and executing human tasks for business processes. (Work in progress, so expect changes.)

(Note: in the future expect to see more of these interfaces, representing new types of services running inside the grid.)

These services will be distributed, running in different places, and we will use a simple API to connect to them in order to use them. For handling these connections to different services we have a class called GridConnection. GridConnection lets us add new connectors for our different services; we can add new ExecutionNode, DirectoryNode or HumanTaskNode connectors to a GridConnection. Based on these connectors, when we ask for a specific service (executionNode, directoryNode or humanTaskNode) the GridConnection will choose one of the registered connectors and give us a connection to that service. If we want to create a new knowledge session, we request an ExecutionNode from the GridConnection; it will take one of the available connectors and create an ExecutionNode (client) for you to start using. Internally the ExecutionNode contains a set of low-level services that, based on the connector type, are configured to provide an execution environment that runs locally, remotely or in a truly distributed environment.
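
As a rough illustration of that flow, here is a sketch of what client code might look like. Only GridConnection and ExecutionNode are named above; the connector class and the factory-service lookup below are assumptions made for illustration and may well differ from the actual drools-grid API.

```java
// Hypothetical sketch only: connector and factory-service names are assumed.
GridConnection connection = new GridConnection();

// Register a connector; the connector type decides whether sessions end up
// local, remote (e.g. Mina or HornetQ) or fully distributed (Rio).
connection.addExecutionNode(new LocalExecutionNodeConnector("local-node"));

// Ask for an ExecutionNode; the connection picks one of the registered
// connectors and returns a client-side ExecutionNode backed by it.
ExecutionNode node = connection.getExecutionNode();

// From the node we obtain the usual Drools services and create a knowledge
// session that may run locally or remotely, depending on the connector.
KnowledgeBase kbase = node.get(KnowledgeBaseFactoryService.class).newKnowledgeBase();
StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
```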

As you can imagine, these interfaces need to be implemented in order to provide the functionality; that's why we have different modules providing different implementations of these services.
It's also important to note that these APIs are extended for each particular type of environment. You will find two extensions right now: drools-grid-remote-api and drools-grid-distributed-api. Both contain a set of specific classes and interfaces that extend the core functionality provided by drools-grid-api.

Let's take a look at the different environment types and the sub-modules we need to use for each of them.

Local Environments

This is a pretty straightforward environment: it lets us execute Drools in the way we are already used to. The only difference from the common Drools APIs is that we use the Drools Grid APIs, which give us the power to move our application to a different type of environment in the future.

Drools Grid Local Impl (drools-grid-local):

This module provides a local implementation of the previously described services; by local I mean within the same JVM instance. This implementation behaves the same way as if we were using the common Drools APIs. The idea behind it is to provide the ability to run Drools Grid locally using the same APIs that we can use in distributed environments, which gives us the option to move our implementations from one environment (Local) to more distributed ones (Remote or Distributed).

Inside this project you will find the local implementation of the services included inside the Execution Nodes and Directory Nodes. Note that we didn't include the HumanTask node here because we don't have a local implementation of the Human Task service.

Remote Environments

Remote environments let us run our knowledge sessions in different JVM instances distributed across a network of computers. Based on the requirements of each situation we can choose the underlying implementation used to communicate between the different runtimes hosted in different JVMs/machines/nodes.

Drools Grid Remote API (drools-grid-remote-api):

This module provides the API that needs to be implemented by remote environment providers. Right now the two planned implementations of these APIs are HornetQ and Apache Mina. The idea behind these two implementations is to provide guidelines for creating new, more robust implementations that suit different situations/requirements.

Drools Grid Remote Node Mina (drools-grid-remote-mina):

This module provides the implementation of the internal services required to establish a remote connection. It can also be executed from the command line to start a new Mina remote server that can host and execute remote knowledge sessions. It also provides the specific connector required by a client that wants to create remote sessions hosted inside a Mina Execution Node Server.

Drools Grid Remote Directory Mina (drools-grid-remote-dir-mina):

This module provides the implementation of the internal services required to establish a connection with a remote directory service. It can also be executed from the console to start a new directory node that keeps track of the Execution Nodes, Knowledge Sessions, Knowledge Bases and other Directory Nodes running inside our grid.

Distributed Environments

Distributed environments provide a more robust solution and more services around the topology of machines that we have in our network. In distributed environments we have services that let us automatically deploy, fork and manage all the services across the grid. We will not need to manage or start different services on different machines; a fully distributed environment will be in charge of these tasks. One of the main characteristics of this kind of environment is that the environment itself knows when and how to create new service instances when demand gets too high.

Drools Grid Distributed API (drools-grid-distributed-api)

This module provides some of the extensions needed for distributed environments. It only adds some internal classes used by the services that run in this kind of environment.

Drools Grid Distributed Node Rio (drools-grid-distributed-rio)

This module provides the implementation of a Rio service capable of hosting knowledge sessions. When we compile and package this module we get an OAR (a Rio deployable archive) that we can distribute/deploy in a Rio environment. Take a look at this post to see how you can configure and deploy this Rio service (I will add this soon).

Drools Grid Distributed Directory Rio (drools-grid-distributed-dir-rio)

This module provides the implementation of a Rio service capable of hosting information about the grid environment. It stores information related to our knowledge sessions, kbases and other services running across the grid. It's important to note that Rio itself stores and maintains low-level information about grid usage, and this information will not be part of the directory service.

Drools Grid Tasks (drools-grid-task) (Work in progress, needs refactoring)

This module will be split into the following sub-modules: drools-grid-task-api, drools-grid-remote-task-mina, drools-grid-remote-task-hornetQ and probably drools-grid-distributed-task-rio. Right now the project only contains the interfaces to hook up the two currently supported implementations, Apache Mina and HornetQ. But to move forward with this refactoring we first need to do some core refactoring in drools-process/drools-process-task to split implementations and interfaces.

Drools Grid Service (drools-grid-services)

This module gives the user the APIs to build applications. The main idea behind this project is to provide a high-level API to abstract the low-level details required to build a grid environment.
Using this module you can describe your grid topology and then use this definition to run your application on top of it. Inside the Drools Grid Services APIs you will have the following concepts to describe and use your grid topology:

GridTopology: a GridTopology represents the topology itself. It is composed of ExecutionEnvironments, DirectoryInstances and TaskServerInstances. You, as the client user, define your topology (where your ExecutionEnvironments, DirectoryInstances and TaskServerInstances are) and then a new GridTopology instance is created using this definition. Once we have the GridTopology object we can start using it to execute our applications.

ExecutionEnvironment: represents a node/machine able to host more than one knowledge session. The ksessions run inside this node/machine and we can interact with them remotely (or locally).
DirectoryInstance: represents a node that keeps track of the other nodes in the grid and lets us register and look up those services and their contents.
TaskServerInstance: represents a human task server node, able to execute and maintain all the information about human tasks for business processes.

If you want to create an application that uses Drools Grid, this is the module that you want to use. We will be analyzing how to use this module in future posts.
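
To give a feel for how these concepts fit together, here is a rough, hypothetical sketch of defining and using a topology. GridTopology, ExecutionEnvironment, DirectoryInstance and TaskServerInstance are the concepts listed above; the constructors, provider arguments and method names are assumptions for illustration only.

```java
// Hypothetical sketch only: signatures are assumed, not the real API.
GridTopology topology = new GridTopology("my-business-app");

// Describe where the pieces of the grid live; the providers (local, remote or
// distributed) would be configured elsewhere and passed in per instance.
topology.registerExecutionEnvironment(new ExecutionEnvironment("exec-1", executionProvider));
topology.registerDirectoryInstance(new DirectoryInstance("dir-1", directoryProvider));
topology.registerTaskServerInstance(new TaskServerInstance("tasks-1", taskProvider));

// The application then asks the topology for an execution environment and
// creates its knowledge sessions through it.
ExecutionEnvironment env = topology.getExecutionEnvironment();
```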

In brief

Basically, I've introduced the modules inside Drools Grid. I will be working hard on some refactoring during the next two weeks, so feedback is really appreciated. I will publish my current TODO list in another blog post; if you want to help, I will be here trying to answer questions.

Stay tuned!

Original post: http://salaboy.wordpress.com/2010/08/04/drools-grid-version-2-1-modules-introduction/

DEBS 2009 - Excellent CEP presentation

http://www.slideshare.net/opher.etzion/debs2009-event-processing-languages-tutorial