All things Artificial Intelligence related: Rules, Processes, Events, Agents, Planning, Ontologies and more :)
To be honest, I've also had the (possibly wrong) impression that Drools lost some performance in my drools-solver runs somewhere on trunk between 4 and 5. I can very easily make a "benchmark testcase" from the drools-solver examination (or another) example. It runs for about 7 minutes. Is there some easy way to get Hudson to run it only weekly (with a Maven profile and a separate build job, I'm thinking) and then to aggregate those results over time?
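Something like the following profile sketch could gate the long benchmark behind its own build; the profile id and include pattern are made up here, not an existing convention in the drools-solver build:

```xml
<!-- pom.xml: hypothetical profile that only runs long-running benchmark tests -->
<profile>
    <id>benchmark</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <configuration>
                    <includes>
                        <!-- assumed naming convention for the slow benchmark tests -->
                        <include>**/*Benchmark.java</include>
                    </includes>
                </configuration>
            </plugin>
        </plugins>
    </build>
</profile>
```

A separate Hudson job with a weekly cron-style schedule could then run `mvn test -Pbenchmark`; aggregating the timings over time would still need a plugin or some custom reporting.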
BTW, the "Bytecode compiled Rete (currently interpreted)" feature should really make a difference. For drools-solver specifically I still have the experience that accumulate and other "from" or "collect" uses hurt performance, because it looks like they aren't purely forward chained. It would be nice to have the option to make them purely forward chained too, because drools-solver must not be mixed with backward chaining.
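For illustration, a sketch of the kind of construct meant here, an accumulate feeding a score rule; the fact types and field names are hypothetical, not the examination example's actual model:

```
// Hypothetical fact types (Student, Exam, Overbooked), for illustration only.
rule "studentOverbooked"
when
    $student : Student()
    // This accumulate is the pattern in question: it is re-evaluated
    // on demand rather than being purely forward chained.
    $total : Number( intValue > 4 ) from accumulate(
        Exam( student == $student, $d : duration ),
        sum( $d )
    )
then
    insertLogical( new Overbooked( $student ) );
end
```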
Hudson is on a shared host, so it wouldn't get reliable results. If we ever get enough worthwhile benchmarks, I might be able to rig up a machine at home that runs regularly and publishes results.
The new algorithm allows for something we call "true modify", which isn't implemented yet, and it's this that I hope will improve performance some more. Traditional Rete handles a modify as retract + assert, whereas true modify is a single modification that prunes the tree as it progresses. For modifications that result in small changes it should hopefully give a nice boost and put less pressure on the GC.
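A toy sketch of the difference, nothing like Drools internals, just counting node evaluations in a flat chain of alpha-style tests. `retractAssert` re-tests every node twice, while the modify variant skips nodes whose test outcome can't have changed:

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration only: contrast "retract + assert" handling of a fact
// change with a single in-place modify that prunes unaffected nodes.
public class ModifySketch {
    // A fact with one mutable field.
    static class Fact { int value; Fact(int v) { value = v; } }

    // An alpha-style node: a simple threshold test.
    static class Node {
        final int threshold;
        Node(int t) { threshold = t; }
        boolean test(Fact f) { return f.value > threshold; }
    }

    // Classic approach: retract walks all nodes, assert walks them all again.
    static int retractAssert(List<Node> nodes, Fact f, int newValue) {
        int work = 0;
        for (Node n : nodes) { n.test(f); work++; }   // retract pass
        f.value = newValue;
        for (Node n : nodes) { n.test(f); work++; }   // assert pass
        return work;
    }

    // "True modify" idea: only revisit nodes whose outcome actually flips.
    static int trueModify(List<Node> nodes, Fact f, int newValue) {
        int work = 0;
        int old = f.value;
        f.value = newValue;
        for (Node n : nodes) {
            boolean before = old > n.threshold;
            boolean after  = newValue > n.threshold;
            if (before == after) continue;            // prune: outcome unchanged
            n.test(f); work++;
        }
        return work;
    }

    public static void main(String[] args) {
        List<Node> nodes = new ArrayList<>();
        for (int t = 0; t < 100; t += 10) nodes.add(new Node(t));
        Fact f = new Fact(42);
        int w1 = retractAssert(nodes, f, 55);  // evaluates all 10 nodes twice
        f.value = 42;
        int w2 = trueModify(nodes, f, 55);     // only the threshold-50 node flips
        System.out.println(w1 + " " + w2);     // prints "20 1"
    }
}
```

For a small change only one node out of ten needs re-evaluation, which is the hoped-for boost (and fewer transient tuples for the GC to collect).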
It seems it's only the Waltz benchmark that favours drools/4. The banking test showed 5 to have a slight advantage (see the post for details). We've got the data for WaltzDB, and most of it for Manners, and in those tests drools/5 seems to have better performance. The WaltzDB results go out Wednesday.