Wednesday, November 28, 2007

Drools blog reaches over 500 subscribers :)

Today is a milestone for this blog: our RSS/Atom subscribers just broke 500, with 507 subscribers :)

Last week we saw our highest web hits, with 726 unique views - and that's on top of the subscribers and the views at syndication sites like JBoss.

I know these figures are small compared to techs like Hibernate, but the main thing is we are growing the message. Two years ago most people didn't know what a rule engine was, hopefully we are starting to change that :)

Tuesday, November 27, 2007

Drools solver @ Javapolis

I'll hold a Drools Solver BOF at Javapolis 2007 on Tuesday, December 11th at 20:00. You're all invited :)
Take a look at the schedule here.
Take a look at the contents of the BOF here.

The Drools Solver manual has also been expanded with more info about the examples:
Take a look at the updated manual here.

The manual now contains some insight into the problem size of the examples. Did you know that the traveling tournament example nl16 finds a feasible solution out of 2.45064610271441678267620602e+259 possible solutions?

And of course I've added some more eye candy:

Pluggable Dialects for Drools Processes now work :)

Many of you will have read my blog on unifying rules and processes, which was also featured at InfoQ. Unifying these technologies is not just about the modelling paradigm; it's also about the infrastructure overlaps. Today I just finished my first end-to-end test for dialectable actions in a process definition - which we call ruleflow, indicating it's a melding of the power of processes and rules. So what does this mean?

Pluggable Dialects have been a part of the Drools framework for a while now. What it means is that eval conditions and the consequence of a rule can be written in any language; we currently support Java and MVEL as dialect plugins.

One of the extra bits of plumbing that makes this worthwhile is that a Dialect, at compile time, returns the identifiers that it needs - i.e. non-local variables. This allows us to do variable injection in compiled languages like Java, which means no manual retrieval and assignment of variables from a context :)

Scripting language plugins like MVEL are very easy to integrate, although compiled languages like Java add extra levels of complexity - this is because we want to compile all our consequences, and now actions, in a single pass. The compilation happens later than when the rule/action itself was built, and thus we need an additional wiring process to hook the compiled code back up to the rule/action.

As we've already built all this for the rules framework, with a bit of tweaking the process framework gets it for free - and thus we start to see the value of a unified core.

The image below (click to enlarge) shows a screenshot from our ruleflow editor in Eclipse. It contains just two actions, but of different dialects: one Action is in MVEL, the other in Java - both populate a String value in the List. The displayed Action is of the Java dialect; notice it has variable injection, so you don't need to assign the variables manually from a context, i.e.:
List list = (List) context.getVariable("list");
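To make the difference concrete, here is a minimal sketch in plain Java (not the actual Drools compiler internals - all names here are illustrative) of what the generated wiring buys you: the manual style pulls and casts variables from a context inside the action body, while the injected style receives typed parameters and the cast is wired in once, at compile time.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class InjectionSketch {
    // Without injection: the action body must pull variables from a context.
    interface Action {
        void execute(Map<String, Object> context);
    }

    // With injection: the dialect reported that it needs 'list', so the
    // generated method signature carries it as a typed parameter.
    interface InjectedAction {
        void execute(List<String> list);
    }

    // A generated adapter bridges the two worlds, doing the lookup once.
    static Action adapt(InjectedAction inner) {
        return context -> inner.execute((List<String>) context.get("list"));
    }

    public static void main(String[] args) {
        Map<String, Object> context = new HashMap<>();
        context.put("list", new ArrayList<String>());

        // Manual style: cast and assign by hand inside the action body.
        Action manual = ctx -> {
            List<String> list = (List<String>) ctx.get("list");
            list.add("manual");
        };

        // Injected style: the body only sees typed parameters, no casts.
        Action injected = adapt(list -> list.add("injected"));

        manual.execute(context);
        injected.execute(context);
        System.out.println(context.get("list")); // [manual, injected]
    }
}
```

Either way the context is the single source of truth; injection just moves the boilerplate out of every action body and into one generated adapter.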


A Java Dialect Action in Ruleflow

Notice as well how the rules and process APIs are complementary to each other. The image below is from the unit test for the process definition in the above screenshot.



Dialect Unit Test for Ruleflow

Monday, November 26, 2007

Pigeons, Complex Event Processing and how to make millions with JBoss Drools

I'm still waiting for the JBoss bouncers to hand me my coat and ask me to leave this blog. Mark gets to talk about Unifying Rules and Processes. Fernando and Michael are very proud of the 2nd version of the Business Rules WebApp. And I get to talk about pigeons. Yep, pigeons; birds that fly, sometimes useful for carrying messages, and with one hidden talent.

During the cold war, the Soviets (allegedly) trained pigeons to inspect ball bearings on the production line. The pigeons would sit in comfortable little boxes while the shiny silver ball bearings streamed past on a conveyor belt. If a pigeon spotted any that were defective, it would peck a button and the broken bearing was gone. Since the fall of the Berlin wall, all the pigeons have been gainfully re-employed over at Google.

Thankfully the pigeons didn't go to work at a bank in the City (have you ever seen anything with feathers drive a Ferrari?). While the pigeons would be very good at responding to simple market events (market up, sell; market down, buy), more complex analysis escapes them. For example: if the market is down for the last 30 minutes, and shares in Acme Corp are down more than 10% from the average, and I have seen 3 buy orders for that share in the last 60 seconds, then I think the market is about to turn, so buy shares in Acme Corp.

Never mind pigeons; most humans would find that difficult - think about trying to read the stock ticker prices (the ones you see rolling across the screen on MSNBC) for all stocks, while trying to hold the buy and sell information for the last 30 minutes in your head. And do that not only for one, but for the couple of hundred different types of shares in the market. And all while keeping an eye on your own trading position so that you're not exposed to one sector of the market (e.g. keeping enough cash, not too many property or technology shares). No wonder most traders make their millions and burn out before they're 30 - that sort of Complex Event Processing (CEP) will wear you out.
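The "3 buy orders in the last 60 seconds" part of that rule is the essence of CEP: matching over a sliding time window of events. As a rough plain-Java sketch (illustrative only - this is not the Drools CEP API, which at the time of writing is still on the roadmap), a window can be kept as a queue of timestamps, expiring events as they age out:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class BuySignalSketch {
    static final long WINDOW_MS = 60_000; // "in the last 60 seconds"
    private final Deque<Long> buyTimestamps = new ArrayDeque<>();

    // Record a buy order and report whether 3 or more arrived
    // within the sliding 60-second window ending now.
    boolean onBuyOrder(long nowMs) {
        buyTimestamps.addLast(nowMs);
        // Expire events that have fallen out of the window.
        while (!buyTimestamps.isEmpty()
                && nowMs - buyTimestamps.peekFirst() > WINDOW_MS) {
            buyTimestamps.removeFirst();
        }
        return buyTimestamps.size() >= 3;
    }

    public static void main(String[] args) {
        BuySignalSketch signal = new BuySignalSketch();
        System.out.println(signal.onBuyOrder(0));       // false - only 1 order
        System.out.println(signal.onBuyOrder(20_000));  // false - only 2
        System.out.println(signal.onBuyOrder(45_000));  // true  - 3 within 60s
        System.out.println(signal.onBuyOrder(120_000)); // false - old ones expired
    }
}
```

A real CEP engine does this declaratively, for hundreds of such windows and joins at once - which is exactly why it belongs in the engine and not in the trader's (or pigeon's) head.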

Most IT applications are like pigeons; they can only handle simple events. Press Button. Do something.

The way to make millions is to design applications that can handle these complex events, and apply sophisticated business rules to the (evolving) situation. And do it quickly enough (milliseconds) to seize the opportunity before somebody else does. And keep on doing it as long as the market is open.

Funnily enough, Complex Event Processing is part of the vision for Drools. With enough support, I'm sure we could convince Mark to stand up at JavaPolis and use a set of Pigeons on his slides. I suppose it's better than using pictures of lego people to explain how to do projects using Agile.

Monday, November 19, 2007

A Vision for Unified Rules and Processes

Since Drools 4.0 I've been demonstrating our ruleflow stuff, which includes a graphical designer and basic rules and process integration for stateful rule orchestration. What is ruleflow? Ruleflow is the integration of rules and processes, which can predominantly be used to orchestrate the execution of rules. Additionally, what 4.0 also provides is a prototype to prove that we don't need to have a process-oriented or a rule-oriented view of the world.

I believe that any company that isn't able to truly unify rules and processes into a single modelling system, as PegaSystems have done, will not be in a suitable position for the future - actually, Microsoft get this too with their Workflow Foundation offering, although their rule offering is still very weak. Typically the market has a strong process company with weak rules, or strong rules with weak processes - rules and processes must exist as first class citizens within the modelling and execution environment.

The current "modus operandi" for the rule industry is a focus on stateless decision services, where the separate workflow engine at some point calls out to the separate stateless rule engine to assist in some decision making. After 30 years of research and development, is that the best we have to offer - multi-million pound licence deals that effectively boil down to glorified spreadsheets called via stateless web services from some workflow engine? Not to mock this model, as it is actually quite useful, but this level of integration is superficial at most, and we need to make sure that we unify these models and allow the integration to go to much greater depths.

Once rules and processes are fully integrated, that modelling environment will benefit from any other declarative systems added, such as our plans to add Complex Event Processing (CEP) to the engine and language - this means we can have rules monitoring streams of data and triggering process milestones.


Drools 4.0 ruleflow diagram

Part of the reason for the rule industry flocking to this model of stateless decision services is that their existing tech is hard to understand and implement, and thus more difficult to sell; decision services are simple to understand and thus easier to implement and sell. It's the rule engine industry's stab at trying to grow its measly market share, compared to the workflow industry.

The jBPM team have put out their vision of the "Process Virtual Machine" (PVM), but it is a process-centric vision. The PVM was a term used by Tom Baeyens, in the linked paper, to present the idea of a generic engine for executing different process models. The "virtual machine" term may not be totally appropriate, and already irks some purists, but we have continued with this terminology for the meanwhile - so we can get apples to oranges comparisons, instead of apples to giraffes :) Mike Brock suggests Virtual Process Engine, or Generic Process Engine - I prefer something away from the terms process and rules, something that focuses on the unified modelling concepts, so hopefully someone out there can put in an argument for something more appropriate :)

What we lay out here is what we have started to put in place with Drools: our vision of a PVM+ with rules and processes as first class citizens, tightly integrated modelling GUIs, and a single unified engine and APIs for compilation/building, deployment and runtime execution.

So with the base PVM in place, what are we working on now? Well, we still have a lot to do. I've found our compilation framework is too coupled to rules, so I'm busy refactoring it so it can compile and build both rules and actions. The engine variables are currently only scoped at two levels, globals and rules; we need to make sure that we can scope variables by both process and sub-process, and have the rules executed in those processes also scoped to that level. I need to extend our concept of a rule "duration", which is basically a simple timer, to allow for cron-type definitions and to allow rules to execute each time, if the rule is still true - this will allow for rich conditional timers. I have plans for stateful high availability, via JBoss Cache, and I also need to put in an optimal framework for persistence and restoring - ideally I want all this done, and more, by Q1 :) We do not plan to do the BPEL, BPM etc layers, and instead hope the jBPM team will become consumers of our tech, and also core developers (a joining of the two teams), and work on these parts of the domain.

The rest of this blog is a small paper put together by our ruleflow lead, Kris Verlaenen, but it exemplifies the whole Drools team's vision and commitment to declarative programming via multiple modelling paradigms - no one tool fits all solutions.

The Process Virtual Machine (PVM)
This is an attempt to clarify our vision on an integrated approach for modelling business logic using rules and processes on top of the Drools Platform. It is intended to serve as a glossary, to create a common set of terms that might help in simplifying future discussions and creating a combined vision regarding this matter.

Figure 1 shows an overview of our approach to unify rules and processes by integrating a powerful process virtual machine (PVM+) into the Drools Platform. This allows us to support the execution of rules as well as the execution of processes based on this PVM+ within the Drools Platform. We believe that creating a unified approach for handling rules and processes (for the end user) will result in a much more powerful business logic system than what can be achieved by simply linking separate rules and workflow products. It will also allow us to create a lot of additional services on top of this unified platform (IDE, web-based management system, etc.), which can then be applied easily for both rules and processes, giving a much more unified experience for the end users of the platform. Each of the terms used in the figure will be explained in more detail in the subsequent sections.

Figure 1
PVM
The Process Virtual Machine defines a common model that supports multiple process models. It is the basis for implementing different workflow process languages. It represents a state machine that can be embedded into any software application. Therefore it defines:
  • A process (definition) model: Defines concepts like a process, variables, nodes, connections, work definitions, etc.
  • A runtime model: Runtime instances corresponding to each of the elements in the process model, like process instance, variable instance, node instance, work item, etc.
  • API: Process instances can be started, aborted, suspended, the value of variable instances can be retrieved, work items can be completed or aborted, etc.
  • Services: The PVM also implements (non-functional) services which are useful for most process language implementations, like persistence, transaction management, asynchronous continuations, etc. These services should all be pluggable (do not have to be used, minimal overhead if not used) and configurable (different strategies could be used for each of these services, this should be configurable and extensible so people can plug in their own implementation).
On top of this process model, the PVM also defines/shows how to use the concepts of process (instance), node (instance), connection, etc. to implement common workflow patterns (in control flow, data, resource, exceptions) like a sequence of nodes, parallelism, choice, synchronization, state, subprocess, scoped variables, etc. These node implementations can be used as a basis for implementing different process languages.
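The definition/runtime split described above can be sketched in a few lines of plain Java. This is a toy model, not the actual PVM code - the class and node names are purely illustrative - but it shows the key idea: the process definition (nodes and connections) is static and shared, while a process instance carries its own variable instances and execution state.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PvmSketch {
    // Part of the (static) process definition model.
    interface Node {
        void execute(ProcessInstance instance);
    }

    static class ProcessDefinition {
        final Map<String, Node> nodes = new HashMap<>();
        final Map<String, String> connections = new HashMap<>(); // from -> to
        String startNode;
    }

    // The runtime model: one instance per execution, with its own variables.
    static class ProcessInstance {
        final ProcessDefinition definition;
        final Map<String, Object> variables = new HashMap<>();
        final List<String> trace = new ArrayList<>();

        ProcessInstance(ProcessDefinition definition) { this.definition = definition; }

        void start() { trigger(definition.startNode); }

        void trigger(String nodeName) {
            trace.add(nodeName);
            definition.nodes.get(nodeName).execute(this);
            String next = definition.connections.get(nodeName);
            if (next != null) trigger(next); // follow the outgoing connection
        }
    }

    public static void main(String[] args) {
        ProcessDefinition def = new ProcessDefinition();
        def.nodes.put("start", p -> p.variables.put("greeting", "hello"));
        def.nodes.put("action", p -> p.variables.put("greeting",
                p.variables.get("greeting") + " world"));
        def.nodes.put("end", p -> { });
        def.connections.put("start", "action");
        def.connections.put("action", "end");
        def.startNode = "start";

        ProcessInstance instance = new ProcessInstance(def);
        instance.start();
        System.out.println(instance.trace);                     // [start, action, end]
        System.out.println(instance.variables.get("greeting")); // hello world
    }
}
```

The common workflow patterns (parallelism, choice, synchronization, etc.) are then just richer Node implementations layered on this same definition/instance split.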

PVM+
Extends the PVM and integrates it into the Drools Platform. This allows:
  • Integration of rules and processes: Processes can include (the power of) rules in their process model whenever appropriate, e.g. split decisions, assignment of actors to work items, rules as expression language, etc. Vice versa, rules can start processes during their execution.
  • Processes and rules share one common data context: no need to integrate two (or more) different systems, continuously pass information between those two systems, synchronize data, etc.
  • Processes (and rules) can use other functionality that is offered by the Drools Platform: a unified audit system, unified API to start processes / rules, single build and deployment infrastructure etc.
  • One engine session can execute multiple different process instances in parallel, where each process can interact with the other processes and rules via changes to the shared variable context.
This PVM+ also defines additional node implementations that show the power of integrating rules and processes and how that power can be used inside a process model, e.g. choice using rules to evaluate conditions, milestones (a state where rules decide when to progress to the next state), timers with built in conditionals, actions supporting pluggable dialects, etc.

Specific workflow languages
On top of the PVM+, different (domain-)specific workflow languages can be implemented:
  • jPDL: the general purpose, expressive workflow language for the Java developer
  • PageFlow: workflow language for specifying the control flow in web pages
  • RuleFlow: a workflow language for specifying the order in which large rule sets should be evaluated
  • WS-BPEL: an implementation of the WS-BPEL standard for web service orchestration
  • ...
These languages each define a process model and an implementation for each of their nodes. In a lot of cases these implementations will be based on (a combination of) the common node implementations of the PVM(+).

Pluggability: New node implementations can be added to existing process languages, existing process languages can be extended with new functionality (e.g. time constraints), or entirely new process languages can be plugged into the PVM+.

Work Definitions
All communication with the external world is handled by using work items, which are an abstract representation of a unit of work that should be executed. Work item handlers are then responsible for executing these work items whenever necessary during the execution of a process instance. This approach has the following advantages:
  • A much more declarative way of programming, where you only define what should be executed (using an abstract work item), not how (no code)
  • Hides implementation details
  • Different handlers can be used in different contexts:
    • A workflow can be reused without modifications in different runtime execution contexts (e.g. different companies or different hospitals in the context of clinical workflow) by creating custom handlers for each of these settings
    • A workflow can behave differently depending on its stage in the life cycle. For example, for testing, handlers that do not actually do anything but simply test the execution of the workflow could be registered. For simulation, handlers could provide a visualization of the work items that should be executed, and give the person doing the simulation the possibility to complete/abort these work items.
  • Work item definitions and handler implementations can be reused across nodes, across
    processes, and even across process models.
The different work items that are available in a specific workflow language should be defined (by defining a unique id for that type of work item, and the parameters for that work item). Different sets of work definitions can be defined:
  • Generic work definitions (and their handler implementations) can be defined for common tasks that might be useful in different workflow languages, e.g. related to communication (sending a mail, SMS, etc.), invoking a web service, logging a message, etc.
  • People can define their own domain-specific work items (and their handler implementation), which can then be used for modeling processes in that domain. For example, a clinical workflow language could define work items like “nursing order”, “medication order”, “contact general practitioner”, etc.
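The work item/handler split can be sketched in plain Java. Again this is illustrative, not the actual Drools/PVM API: a work item is just an id plus parameters, and the manager resolves it to whichever handler is registered for that id - so the same process definition runs against a production handler or a do-nothing test handler without modification.

```java
import java.util.HashMap;
import java.util.Map;

public class WorkItemSketch {
    // A handler knows *how* to execute an abstract unit of work.
    interface WorkItemHandler {
        String execute(Map<String, Object> parameters);
    }

    // The manager maps work item ids to their registered handlers.
    static class WorkItemManager {
        private final Map<String, WorkItemHandler> handlers = new HashMap<>();

        void register(String workItemId, WorkItemHandler handler) {
            handlers.put(workItemId, handler);
        }

        String execute(String workItemId, Map<String, Object> parameters) {
            return handlers.get(workItemId).execute(parameters);
        }
    }

    public static void main(String[] args) {
        WorkItemManager manager = new WorkItemManager();
        Map<String, Object> params = new HashMap<>();
        params.put("to", "ward7@example.org"); // hypothetical address

        // Production-style handler: would really send the mail.
        manager.register("Email", p -> "sent mail to " + p.get("to"));
        System.out.println(manager.execute("Email", params));

        // Test-stage handler: same work item id, no side effects - the
        // process definition is reused unchanged in a different context.
        manager.register("Email", p -> "pretended to mail " + p.get("to"));
        System.out.println(manager.execute("Email", params));
    }
}
```

The process model only ever says "execute the Email work item with these parameters"; which handler answers is a deployment decision.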
Extensions
When a unified approach to processes and rules is used, as part of the Drools Platform, extensions on top of these concepts and APIs can easily be reused for all rules and process
languages:
  • Eclipse-based IDE supports developing applications on top of the Drools Platform
    supporting the use of rules and processes. This IDE includes
    • a graphical workflow editor
    • unified error handling
    • integrated debugging
    • unified simulation
    • pluggability of process languages, custom property panels, etc.
    • ...
  • B(R)MS: Business (Rules) Management System, a web-based application that serves as the repository for all business knowledge. Supports unified packaging, versioning, management, quality assurance, etc.
  • Security management: who is allowed to perform which operations on the Drools Platform.
  • (Human) task list management component that can be shared across rules and process
    languages, for integrating human tasks.
  • Reasoning on business logic, which is a combination of all rules and processes of a business.



Pluggable work items (currently in svn trunk)

I'll be at Javapolis this year presenting a BOF on the concepts of Declarative Programming with Rules, Processes and CEP, which will cover most of this blog and more. The BOF is on Monday the 10th from 21:00 to 22:00. So please do come along if you want to talk about this in more detail.

Friday, November 16, 2007

How Big is too Big?

The JBoss Drools boys have something cool brewing. They already have the most useful GWT (Google Web Toolkit) app that I've seen outside of the Googleplex. That's a fully-fledged app ready and waiting to be used in anger, not some example widget, or a test case thrown together by somebody playing with the technology. The BRMS app itself, targeted at business / professional users, allows them to capture the knowledge that they have in their heads and share it with the team.

Here's the problem: if you're into rules, you've probably already downloaded the BRMS (Business Rules Management System). If you're just a casual browser, as cool a GWT app as it may be, you're not going to download it. Why? Unless you have Tomcat or JBoss 4 ready and waiting on your PC, you're not going to bother (note: it will work on other application servers like WebLogic with another couple of minutes' work, but that just proves my point).

In order to make things easy for us lazy people, the nice people at Drools are preparing a standalone BRMS: download, unzip, start and play. That's presuming you have Java installed. You do have Java installed, don't you? But being lazy, how big a download is too big?

I'm not the best person to answer the question - I've already downloaded the (largish, 530MB) Red Hat Developer Studio (now at release candidate 1). So what's your download speed, and how big (small) does the BRMS have to be before you'd consider trying it out?

Answers on a postcard below.



Drools User Mailing List Growth Problems



As the above graph shows, ignoring spikes, there has been a steady uptrend in the growth of usage of the Drools user mailing list. One of the things the core developers are proud of is our community support; we try our best to answer all emails and to date have mostly succeeded in this endeavour. However, there is now a constant stream of emails and regrettably we are no longer able to reply to all of them. The community itself is starting to answer more of its own questions, which is helping a little, but many questions are now going unanswered.

I know that I myself tend to work in bursts, scanning the unanswered messages and replying where possible. Obviously the less vague and easier-to-answer questions tend to get replies. So now more than ever, if people need a response they need to be more diligent in how they phrase their questions and in the supporting information they supply.

A little while ago we put a few pointers on the mailing list page on how to have your emails ignored; it's worth repeating those here:
  1. Start the email with "URGENT"
  2. Tell everyone how desperate you are and how you need an immediate response for your really important project.
  3. Don't wait a minimum of 3 days; resend your email within minutes or hours.
  4. Send emails directly to mailing list members, especially the developers.
  5. Paste pages of code and then say "it doesn't work, please help".
  6. Paste a long stack trace and say "it doesn't work, please help".
  7. Start your email with "please sirs" or include "do the needful".
  8. Ask dumb questions that are in the manual.
  9. Ask basic java questions.
  10. Ask questions about JRules
  11. Reply to an existing thread but start a new topic
  12. Start your email with "I'm a member of the drools mailinglist, so please answer me"
  13. General begging and pleading
  14. Say some thing to the effect of "Please tell me how I can call a function on the LHS. (hurry my assignment is due today!)"
The positive side is that it shows the Drools community is growing, so we must be doing something right, and as we become unable to answer all questions more people will have to turn to support subscriptions from JBoss with guaranteed response times - so hopefully this will at least make my managers happy :)

Thoughts for the Business Rules Forum and RuleML Conference

This year I got invited to attend two expert panels, one for the BRF and one for the RuleML conference, and I also gave two presentations: one demonstrating Drools' ruleflow extensions to rules, and the other on an interoperability exercise I did with ILOG and MISMO.

One of the things I like most about going to the Business Rules Forum is the people I meet, especially the old-school guys who helped found the technology and the industry. Last year I had dinner with Charles Forgy and this year Paul Haley - both occasions were the highlight of my trip. Paul Haley was involved in the development of ART, which was a hugely powerful, state-of-the-art expert system for its time (with features we have still yet to do in Drools), and is the author of the Eclipse rule engine (not the IDE), which formed the tech foundations for Haley Systems Inc - "The Evolution of Rules Languages". Which reminds me, if anyone has any ART or Eclipse manuals hanging around, please do email/post them my way :)

What's great about meeting people like Charles and Paul is that they've been thinking about rule engines for 20+ years, while I've only been doing it for 5, so when I tell them about my latest R&D they already know the various possible solutions and pitfalls - so drawing out nuggets of gold from these guys is immensely beneficial :) For instance I mentioned to Paul that our RuleBases are fully stateless, allowing multiple sessions to share the same RuleBase without concurrency issues; the sessions themselves are lightweight, negating the need for any form of pooling. But if you have a large number of sessions executing on the same RuleBase, how do you manage a RuleBase update? We currently iterate and lock all sessions, apply the change, and then iterate and unlock all. For most situations this works fine, but if you have a very large number of sessions this can create quite a pause. Paul suggested that instead of locking all the working memories we allow the RuleBase to exist in two states, so sessions don't get locked and only see the updated RuleBase when they are ready; when all sessions are viewing the new state of the RuleBase, we can stop maintaining two states in favour of the most recent. As soon as he said it I realised what he meant, which gave me a doh! moment :) We don't have time to implement this now, but it's certainly gonna simmer away until we do. Thanks Paul, looking forward to extracting more nuggets from you - now where are those thumb screws ;)
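Paul's suggestion amounts to double-buffering the RuleBase reference. A minimal plain-Java sketch (illustrative names, not Drools code): updates publish a new version atomically, and each session adopts it at its own safe point, so an update never has to pause anyone.

```java
import java.util.concurrent.atomic.AtomicReference;

public class RuleBaseSwapSketch {
    // Stand-in for a compiled rule base; only carries a version number here.
    static class RuleBase {
        final int version;
        RuleBase(int version) { this.version = version; }
    }

    // The 'current' state - an update is just an atomic reference swap.
    static final AtomicReference<RuleBase> current =
            new AtomicReference<>(new RuleBase(1));

    static class Session {
        private RuleBase view = current.get(); // the state this session sees

        int fire() {
            return view.version; // evaluates against its pinned rule base
        }

        void safePoint() {
            view = current.get(); // adopt the newest rule base when ready
        }
    }

    public static void main(String[] args) {
        Session session = new Session();
        System.out.println(session.fire()); // 1

        current.set(new RuleBase(2));       // update published, nothing locked
        System.out.println(session.fire()); // still 1 - session not yet ready

        session.safePoint();                // session switches at its own pace
        System.out.println(session.fire()); // 2
    }
}
```

The old RuleBase can be dropped once every session has passed a safe point - which is the "stop maintaining two states" step above.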

While on the subject of Paul the Drools user mailing list today received an email from him, which I take as a huge compliment and I hope he doesn't mind if I share that here:
"Haley Systems (www.haley.com), the company that I founded many years ago (hopefully, some of you have heard of it!), is in the process of being acquired by Ruleburst. I am not going with the acquisition but have started up my own "vendor neutral" business practice in which I anticipate helping improve and support Drools and the emerging standards in rules and web semantics. "


While there I met the infamous "Smart Enough Systems" author James Taylor, who was, as expected, as large and entertaining in life as he is in his blogs :) I also met Paul Vincent for the first time; having spoken to him once before and followed his emails on the RIF mailing list, it was good to see that Paul's dry humour extended from his online communication to his offline :) Although it seems the thumbscrews didn't work on him and I didn't learn all of Tibco's secrets - heh, mental note: "more drink needed next time". He did give me an interesting insight into a new R&D area where Tibco use Gantt charts to represent processes and use CEP to effect re-organisation of those processes, which sounded kinda cool.

PegaSystems were there as usual. I really like what they do, and have been very much inspired by their "modelling" approach; like them I don't subscribe to the rules or process view of the world that has dogged the current industry mainstreamers, but a unified "modelling" view where rules and processes are first class citizens. I again met Jon Pellan, who, while being very busy, took a few moments to show me over their rule engine (which I really appreciated). It was only a quick glimpse and I didn't gain enough information to draw any conclusions, but their approach looked interesting, and it certainly was different to what anyone else in the industry is doing (I think) - it was more like the data flow approach that Stanislav Shor once told me about. At compile time you determine, for each change on a fact, which are the possible rules to evaluate that fact against, and pull in just those rules for evaluation - PegaRules creates a hash of those rules against the field changes, and then uses subgoaling to match the data against the chained conditions (hope I got that right). I'd really like to learn more about their system, maybe next year, as I was left not understanding how it would handle joins, and wanting a better understanding of their subgoaling.

I also spent quite a lot of time with Benjamin Grosof and Said Tabet. Benjamin is a Senior Research Program Manager at Vulcan and the author of systems like SweetRules; Benjamin was also previously an MIT Sloan professor - as you can expect, he's wickedly smart in real life :) Said Tabet is best known for his RuleML work but also has a huge wealth of experience in the commercial application of production rule systems. We talked a lot about the role that Drools could play in helping with RuleML, and Benjamin took me over the SweetRules system and explained his innovative way of dealing with the inconsistent way that PR systems handle 'not', compared to Prolog, when porting rule syntaxes to different engines. It was a delight spending so much time with these two, and I hope I get to do it again some time.

One of the other events going on, that I really enjoyed, was the RuleML conference - which is where I met up with Benjamin and Said. This was a much more academic conference of like-minded individuals sharing information. I also finally got to meet the Mandarax author, Jens Dietrich, and the Prova author, Adrian Paschke. Jens is now working on a backward-chaining derivation rules system called Take - which I'm going to research for possible integration into Drools this xmas.

I met plenty of other people, but this blog is already getting way too long and it's now late, so time for me to draw it to a close and go to bed :)

Friday, November 09, 2007

IJTC and new Production Rule (Rete) explanation in slides

Just got back from the Irish Java Technology Conference. For this presentation I had another go at trying to explain how a production rule engine works - the behaviour, not the algorithm. As I've mentioned in the past, I'm finding it easier to talk about SQL to begin with, to frame people's minds. If you jump into an example straight away, they are thinking, or asking, "how is this different from an 'if' statement in Java?". By taking the SQL approach you hopefully break that problem.

The presentation takes 3 tables with data and shows the resulting rows for 2 different views, and how data might change if we had triggers on those views. So it gets people thinking in terms of cross products and rows of data (tuples). I then show the same data against rules and the resulting rows, which are identical to the views' - showing that basically a rule is a view on data, resulting in rows (tuples) of matched facts. The consequence is executed for each resulting row. This concept is taken further to say that if each rule is a view, then the agenda is just an aggregate view of all the rule views. As you insert, retract and update data, rows are added to and removed from the "agenda view".

I then introduce the idea of conflict resolution and salience, along with two-phase execution, as a way to determine which of the rows in the agenda view have their consequences fired first; a new simple rule with a salience is added, and the resulting agenda view tables show the impact of this. The presentation then goes on to touch on first-order logic, specifically 'not' and 'accumulate', and details our ruleflow work; the normal screenshots are supplied for explaining the rest of the capabilities of the system. I also did a populated BRMS demo at the end.
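The rule-as-view idea can be spelled out in a few lines of plain Java (illustrative fact types, nothing Drools-specific): take the cross product of the fact "tables", keep only the rows where the rule's conditions hold, and those surviving rows are exactly the activations sitting on the agenda, one consequence firing per row.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RuleAsViewSketch {
    static class Customer {
        final String name;
        Customer(String name) { this.name = name; }
    }

    static class Order {
        final String customer;
        final int amount;
        Order(String customer, int amount) { this.customer = customer; this.amount = amount; }
    }

    // The 'view' for one rule: cross product of the fact tables, filtered
    // by the rule's conditions (a join plus a literal constraint here).
    static List<String> agendaView(List<Customer> customers, List<Order> orders) {
        List<String> rows = new ArrayList<>();
        for (Customer c : customers) {        // cross product ...
            for (Order o : orders) {
                if (c.name.equals(o.customer) && o.amount > 100) {
                    rows.add(c.name + "/" + o.amount); // ... kept rows = tuples
                }
            }
        }
        return rows;
    }

    public static void main(String[] args) {
        List<Customer> customers = Arrays.asList(new Customer("ann"), new Customer("bob"));
        List<Order> orders = Arrays.asList(
                new Order("ann", 150), new Order("ann", 50), new Order("bob", 200));
        System.out.println(agendaView(customers, orders)); // [ann/150, bob/200]
    }
}
```

In SQL terms this is `SELECT ... FROM Customer c, Order o WHERE c.name = o.customer AND o.amount > 100` - the engine's job is keeping that view's rows up to date incrementally as facts are inserted, retracted and updated.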

Do give me feedback on my approach to helping people understand production rule systems via the SQL and views analogy - I'd certainly like to try and improve the slides to explain this better.

You can get the slides here

Tuesday, November 06, 2007

Drools now has 1725 unit and integration tests

One of the great things about Open Source is that we are totally open and transparent, so it's very easy to judge the quality of the software and the effort that has gone into QA. On that note we'd like to bring to everyone's attention that Drools now has 1725 unit and integration tests - which I think is high by anyone's standard - and none of these tests were produced by code generation. The count is shown on our Hudson build's test results page, https://hudson.jboss.org/hudson/job/drools/983/testReport/.

Our Hudson build server, https://hudson.jboss.org/hudson/job/drools/, builds Drools after every commit and makes distribution zips publicly available here, so you can always get the latest trunk build for your own testing.

Irish Java Technology Conference - 9th of November

I'm at the Irish Java Technology Conference doing a talk on Drools. My talk is this Friday the 9th of November from 11:00 to 12:15, where I'll be demoing our BRMS.

More great articles

John Dunning has produced a fantastic blog entry summarising his performance characteristics research at the EBRD, the European Bank for Reconstruction and Development: "Benchmarking Drools", a follow-up to his "Banking on Drools" post. In it he takes a simple problem and solves it a number of different ways, benchmarking each; he details all the variations with several performance charts comparing the approaches. Someone else has recently taken a variation (making it harder) of his benchmarks and run it under JRules, with very surprising results - I'll provide more details later on once the results have been better verified :) I'll also try to put together a zip so you can run this benchmark on Drools and JRules yourself, as luckily ILOG have now made their software easily obtainable for trial purposes. Let's put it this way: which system do you think scales easily to over 500K, even 1 million, objects, and which one didn't :)

Steve Shabino has started a blog on his experience with Drools; his team has an extreme problem to solve involving the need to reason over 2 million objects in a stateful session. I'm really looking forward to seeing his findings.

The mysterious Yaakov Kohen has started blogging again, with an insightful post on the problems with today's academic benchmarks.