Saturday, December 29, 2012

Rolo The Robot says: Happy New Year!


Hi Everyone! This is probably my last post of the year, and on this occasion I want to share with you all a side project that I've been working on in my spare time with my father. In less than 10 days we managed to build a small robot which runs all its logic inside the Drools Rule Engine. The main goal of this first step was to prove that it is possible to build a low-cost robot entirely controlled by the rule engine and the CEP features provided by Drools Fusion. We didn't include any process execution in this first stage, but it is definitely on the roadmap. This post briefly explains what we have working now and what's planned for the future, because this is just the beginning. The interesting side of the project is making the robot completely autonomous, which means that it runs entirely on its own, without needing to be connected to a computer or to a power outlet in the wall.
rolo brand


Introduction


We started building this first prototype with the following goals in mind:
  • Demonstrate that the Drools & jBPM Platform can help us build a reliable and declarative environment to code the robot's internal knowledge.
  • Demonstrate that a robot can be constructed on top of the Rule & Process Engine on a small, portable platform. Some important points from our perspective are:
    • It needs to run without an external computer
    • It needs to be autonomous and run on batteries to have freedom of mobility
    • It can be monitored and contacted wirelessly
    • It needs to react in near real time and process information without long delays. We are running the tests with 100-millisecond lapses now (because that's more than enough at this stage), but the performance can be improved to support lower latency.
  • Test different hardware options to decide which components are best for building different types of robots
  • Push the limits and mix the Rule/Process Engine arena with the Hardware/Electronics/Robotics arena.
  • Incrementally build a framework to speed up the initial steps
  • And, of course, make it open source to improve collaboration and to join forces with other people who are interested in the same topics.


Hardware


Rolo!
My father (Jose Salatino), "the electronics geek", helped me with the entire hardware side of the robot. I started by looking at the Lego NXT and WEDO platforms to see if we could reuse some of the cool things they have designed, but the NXT runs J2ME, which is dated at this point. I'm looking forward to seeing if they release something new soon. The Lego WEDO and Power Functions look promising, but they have several limitations, such as the reduced number of devices that you can handle via the USB port and some expensive pieces that you need to buy to make everything work. When my father started playing with Arduino we found a lot of advantages which helped us get everything working in almost no time. On the other hand, the Lego motors & sensors gave us a great and scalable environment to create advanced prototypes. For that reason, we chose to use Arduino as a central hub to control a set of sensors, motors and actuators, whether they are Lego or not.
The following figure shows the wiring between the different components that we are using; most of the components can be changed without affecting the software architecture:
Hardware Wiring

This is a summary of the components that we are using; please note that there is a lot of room for improvement, so this setup is in no way set in stone:
  • 1 x Raspberry Pi Model B
  • 1 x Arduino Uno
  • 2 x Lego NXT Servo Motor
  • 1 x SR04 Ultra Sonic Sensor (Distance Sensor)
  • 1 x SG90 180 Servo Motor
  • 2 x Battery Pack (10 AA batteries) -> we are working on this, don't worry ;)
  • 1 x USB Wireless Dongle
  • 1 x LDR Sensor (Light Sensor)


Hardware Roadmap


From the hardware perspective there is a lot to do. We will start by researching the I2C protocol to replace all the serial communications. We know that I2C is the way to go, but we haven't had time yet to do all the necessary tests. We currently have a hardware/physical limitation on the number of devices that we can set up. We want to push the platform's limits, so we will be looking to add more motors and sensors to increase the robot's complexity and see how far we can go.


Software


From the software perspective we have a bunch of things to solve, but this section gives a quick overview of what has been done so far. We need to understand that the Raspberry Pi is not a PC; it's an ARM machine, which is a completely different architecture. For Java that's not supposed to be a problem, but it is. When you want to access the serial port or use the USB port to transmit data, you start facing common issues with native libraries that are not compiled for the ARM platform. Once we managed to solve those issues, we needed a way to interact with the Arduino board, which is programmed in C/C++. Luckily for us there is software called Firmata which exposes the whole board via the serial port. Using this software we can read and write digital/analog information through the board's pins. This helps us a lot, because we can load a standard sketch onto the Arduino which lets us write/read all the information that we need to control the motors and read the sensors. Unfortunately, as with every standard, we hit a sensor it doesn't cover (the SR04 UltraSonic Sensor), and for that reason we provide a slightly modified version of the Firmata sketch, which can be found inside the project's source repository. From the Java perspective, there is a library called Processing (an open source programming language and environment for people who want to create images, animations, and interactions) which has a number of sub-libraries, one of them for interacting with Firmata. I borrowed two classes from Processing and customized them to my particular needs. From the beginning I wanted to use Processing because I believe it has a lot of potential to be mixed with the Process and Rules Engine, but this initial stage is not taking advantage of it.
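To give an idea of the level at which we talk to the board, this is roughly what driving Firmata from Java looks like. It's only a sketch: the method names follow Processing's cc.arduino.Arduino class, but the simplified constructor (our customized copy drops the PApplet dependency) and the pin assignments are illustrative.

// Sketch only: based on Processing's cc.arduino.Arduino Firmata client.
// The constructor shown here is the simplified one from our customized
// copy (no PApplet), and the pin numbers are just examples.
public class FirmataTaster {
    public static void main(String[] args) throws Exception {
        // The Arduino Uno usually shows up as /dev/ttyACM0 on the Raspberry Pi
        Arduino board = new Arduino("/dev/ttyACM0", 57600);

        board.pinMode(13, Arduino.OUTPUT);    // onboard LED
        board.digitalWrite(13, Arduino.HIGH); // switch it on

        int light = board.analogRead(0);      // LDR light sensor on analog pin 0
        System.out.println("Light level: " + light);
    }
}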
The following figure shows, from a high-level perspective, the different software components that run to bring Rolo to life:
Software Components
As you can see, the Rolo Server exposes and receives information via JMS, which allows us to build a monitor to see that information and to send imperative commands or information about the world to the robot. The Rolo Server is basically a single Drools/jBPM knowledge session right now, but a more robust schema with multiple sessions for different purposes will be adopted in future stages.
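The HornetQSessionWriter that you will see in the rules below is essentially a thin wrapper around a HornetQ core producer. Here is a minimal sketch of the idea; the class shape and the address name are my assumptions, only the HornetQ API calls are the real thing:

import org.hornetq.api.core.TransportConfiguration;
import org.hornetq.api.core.client.*;
import org.hornetq.core.remoting.impl.netty.NettyConnectorFactory;

// Sketch of a HornetQSessionWriter-style helper for pushing notifications
// from the robot to the monitor application.
public class NotificationWriter {
    private final ClientSession session;
    private final ClientProducer producer;

    public NotificationWriter(String address) throws Exception {
        ServerLocator locator = HornetQClient.createServerLocatorWithoutHA(
                new TransportConfiguration(NettyConnectorFactory.class.getName()));
        ClientSessionFactory factory = locator.createSessionFactory();
        session = factory.createSession();
        producer = session.createProducer(address);
    }

    public void write(String text) throws Exception {
        ClientMessage message = session.createMessage(true); // durable
        message.getBodyBuffer().writeString(text);
        producer.send(message);
    }
}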
The rules currently have access to all the motor and sensor information, allowing us to write rules using those values. All the sensor input data is treated as events, and for this reason we can use all the Drools Fusion temporal operators.
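To make that possible, each sensor report type is declared as an event, and the readings are inserted through their own entry point. The declaration for the distance reports used in the rule below looks like this:

declare DistanceReport
    @role( event )
end

Each new reading then enters the session through something like ksession.getWorkingMemoryEntryPoint("distance-sensor").insert(report), so the window:time operators see a stream of timestamped events rather than plain facts.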
The following rule is a simple example of what is being done inside the robot right now:

rule "Something too close - Robot Go Back"
   when
        $r: RoloTheRobot()
        $m: Motor(  )
        UltraSonicSensor( $sensor: name )
        $n: Number( doubleValue < 30) from accumulate (
                    DistanceReport( sensorName == $sensor, $d: distance )
                                    over window:time( 300ms )
                                    from entry-point "distance-sensor", average($d))

   then
       notifications.write("Process-SOMETHING_TOO_CLOSE:"+$n);
       $m.start(120, DIRECTION.BACKWARD);
       Match item = ( Match ) kcontext.getMatch();
              final Motor motor = $m;
              final HornetQSessionWriter notif = notifications;
              ((AgendaItem)item).setActivationUnMatchListener( new ActivationUnMatchListener() {

                    public void unMatch(Session session,
                                        Match match) {
                        System.out.println(" Stop Motor");

                        motor.stop();
                        try{
                            notif.write("Stopping Motor because avg over: 30");
                        } catch(Exception e){
                            System.out.println("ERROR sending notification!!!");
                        }

                   }
                } );
end
This rule checks the average distance received from a distance sensor (in this case the UltraSonic Sensor): if the average distance over the last 300ms is less than 30cm, all the motors are started at a fixed speed to move away from the obstacle. Averaging lets us be sure that there really is something in front of the robot, instead of reacting to the first measurement that matches the condition. Different functions can be used to correct bad reads from the sensors and to improve the overall behavior. Notice that after starting the motor we register an ActivationUnMatchListener, which causes the motor to stop as soon as the rule no longer matches. You will see in the video that the robot goes backward until the average distance received in the last 300ms is over 30cm.
There is another rule which uses the Light Sensor to find a way out of dark places.
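Its shape is similar to the rule above; here is a simplified sketch, where the LightReport event, the threshold and the reaction are illustrative rather than the actual rule from the repository:

rule "Too dark - Find the light"
when
    $m: Motor()
    LightSensor( $sensor: name )
    // Average light level over the last 300ms, below an arbitrary threshold
    $n: Number( doubleValue < 100 ) from accumulate(
            LightReport( sensorName == $sensor, $l: level )
                over window:time( 300ms )
                from entry-point "light-sensor",
            average($l) )
then
    notifications.write("Process-TOO_DARK:" + $n);
    $m.start(120, DIRECTION.FORWARD); // keep moving until the readings get brighter
end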

Software Roadmap

After a well-deserved holiday, I will be working on improving the code base to allow all the software to run without an Arduino board or any other specific hardware. The main idea is to have an environment where we can simulate virtual motors and sensors, as sketched below. This will allow us to improve the software without being tied to the hardware's progress. It will also allow you to collaborate with the project; if I get enough contributions I can do weekly videos showing how the robot behaves using them :)
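The abstraction needed for that is small; something along these lines (a sketch of the intended design, not committed code):

// Sketch of the hardware abstraction that would let virtual and real
// devices be swapped without touching the rules. Names are illustrative,
// except DIRECTION, which already appears in the rule above.
public interface Motor {
    void start(int speed, DIRECTION direction);
    void stop();
}

public class VirtualMotor implements Motor {
    private final String name;

    public VirtualMotor(String name) {
        this.name = name;
    }

    public void start(int speed, DIRECTION direction) {
        // No hardware attached: just report what the real motor would do
        System.out.println(name + " -> start(" + speed + ", " + direction + ")");
    }

    public void stop() {
        System.out.println(name + " -> stop()");
    }
}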
So, take the following list as a brain dump of the things that I need to do on the project:
  1. Improve the infrastructure code: JMS message encoding, Monitor App, Simulation App
  2. Create more rules and processes to enable Rolo to do different things, such as recognizing the environment/room where it's running and interacting with different objects
  3. Mock a coordinate system and a model to store the different objects recognized in the environment
  4. Use Processing to draw, in a 3D environment and in real time, what is being sensed by Rolo
  5. Enable Rolo to ask questions using the Human Task Services provided by jBPM
  6. Define the requirements for actuators and how to use them
  7. Video streaming and image analysis

Video

Finally, let me introduce you to Rolo The Robot!



Notice that in the last 20 seconds of the video you can see the Rolo Client/Monitor application, which shows all the notifications that are being sent from the robot. You can also see a small control panel which allows us to send commands and watch the values being captured from the sensors.
Rolo says: Happy New Year to you all!
Stay tuned!

Monday, December 17, 2012

jBPM Console NG (Update): Rules + Processes + Events


Hi everyone! I'm back with another update on the jBPM Console NG. Yesterday we did a quick demo of the console's current features at the JBUG London meetup. Today I've decided to explain the demo in more depth and also walk through the last slides from the presentation, which describe some scenarios where events and rules influence the execution of our business processes.

Introduction


The main idea of the demo is to show how rules, processes and events can be used to monitor our business processes and influence their execution. In order to understand the runtime behavior we obviously need to understand how rules and events work, but I will start by explaining the business use case, to show what we are trying to achieve.
The Business Process that we want to execute looks like the following image:
Release Process
This is just a normal process: it includes human interactions and system interactions. We will handle the human interactions with the Human Task Services, and the system-to-system interactions will be handled with different WorkItemHandler implementations.
The process is about releasing artifacts. In order to make a release, the files from a specific artifact need to be staged. We have three directories through which we move the files to be released, and in each one they are processed accordingly. Basically, we will pick a set of files from a repository that has the following directory structure:
Directory Structure
The sequence will be: Origin (where the original files are placed for the release process) -> Stage (reviewed by a person) -> Test (automatically tested) -> Production.
Notice that if the automatic tests fail, a special path is followed and a person will be in charge of fixing the issues and moving the files back to the Staging area.

Keeping our process as simple as possible

We don't want to complicate our business process; we want to keep the process definition as clear and simple as possible. We don't want to add tons of activities to check different situations that don't describe the normal flow of actions. But at the same time, we want to enforce some extra requirements and deal with exceptional business situations. To recognize situations where we want to enforce different business policies, or to recognize business exceptions, we can start using rules. If we want to recognize situations that involve time intervals, we can bring Fusion into the picture.
As I've explained in previous posts, there are several ways to analyze our process executions using rules, but from a very high-level perspective we can do the following:
  • Analyze a single process and its contextual information, to execute some actions or influence the process state
  • Analyze a group of processes running in the same context as a logical group, and execute an action that can relate to one particular instance: create one or a group of new instances, terminate/abort one or a group of running instances, create one or a group of human tasks, or execute one or a set of actions.
To demonstrate this, we have chosen three different things that we can do without adding more complexity to our process definition:
  1. If an instance of the process goes 2 or more times through the Fix Issues branch, we want to get a warning or notify someone about the situation so that an action can be taken (see the rule sketch after this list). Imagine the pain of doing this kind of check inside the business process, probably adding a new process variable to count the executions of each path: a real nightmare that complicates the process definition.
    Paths and Activities Evaluations
  2. If an instance is doing a release with a set of files, or pointing to a specific repository, we must not allow two process instances to work with the same resources. If you think about this restriction, which involves multiple process instances, it becomes clear that the logic checking it cannot be placed inside a process definition, because it's not a restriction that applies per instance. If you think about these kinds of situations, you will see that there are a lot of similar cases where you can apply more intelligent restrictions to a set of process instances. The main problem is that with a "normal/old" process engine your application needs to handle these things itself, or once again you need to start hacking to make it work. Most of the time, using traditional BPMSs, you don't even think about how to handle these scenarios, because the tooling doesn't even support them.
    Multi Process Instance Evaluations
  3. In some situations we want to address cross-cutting concerns that appear in multiple processes in the same way. Sometimes a task is performed in several business processes, but we don't want to include it in the process definition, because it's a generic task that is not related to the business goal of that process; it's related to the work needed to keep things running. In such cases, we can create an ad-hoc task to deal with the particular situation. Here the example shows a task that is created to improve the performance of an automated task when its execution takes longer than expected. We can define the SLAs using rules and dynamically create a human task when it's needed.
    Ad-Hoc Task
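For the first scenario, the shape of such a rule is roughly the following. This is only a sketch: it reuses the process event model described in the next section, and the notification call is illustrative.

rule "Fix Issues executed 2 or more times - Warn someone"
when
    $p: WorkflowProcessInstanceImpl( $id: id )
    // Count how many times this instance has entered the Fix Issues node
    $n: Number( intValue >= 2 ) from accumulate(
            $e: ProcessNodeTriggeredEvent( processInstance.id == $id,
                                           nodeInstance.nodeName == "Fix Issues" )
                from entry-point "process-events",
            count($e) )
then
    // Illustrative call; the real notification service API may differ
    rulesNotificationService.notify("Process " + $id
            + " went through Fix Issues " + $n + " times");
end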

jBPM Console NG - technical side

Let's analyze, from the technical perspective, how the infrastructure gives us a way to handle situations like the ones described before. Before going into the rules that identify and react to the different situations, we need to understand how to generate the data that the rule engine will use.
First of all, we need to notify the rule engine about the process instances, so it handles them as facts. For this reason we attach a process event listener to our sessions:
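The listener itself lives in the project's repository; the following is a condensed sketch of it, reconstructed from the behavior described below against the Drools/jBPM 5.x API, so treat the exact names as approximate:

import org.drools.event.process.DefaultProcessEventListener;
import org.drools.event.process.ProcessCompletedEvent;
import org.drools.event.process.ProcessEvent;
import org.drools.event.process.ProcessNodeLeftEvent;
import org.drools.event.process.ProcessNodeTriggeredEvent;
import org.drools.event.process.ProcessStartedEvent;
import org.drools.event.process.ProcessVariableChangedEvent;
import org.drools.runtime.KnowledgeRuntime;
import org.drools.runtime.rule.FactHandle;
import org.drools.runtime.rule.WorkingMemoryEntryPoint;

public class ProcessInstanceFactListener extends DefaultProcessEventListener {

    public void afterProcessStarted(ProcessStartedEvent event) {
        // Expose the process instance to the rule engine as a regular fact...
        event.getKnowledgeRuntime().insert(event.getProcessInstance());
        // ...and stream the event itself for temporal reasoning
        processEvents(event).insert(event);
    }

    public void afterProcessCompleted(ProcessCompletedEvent event) {
        // The instance is done: retract its fact from the session
        KnowledgeRuntime runtime = event.getKnowledgeRuntime();
        FactHandle handle = runtime.getFactHandle(event.getProcessInstance());
        if (handle != null) {
            runtime.retract(handle);
        }
    }

    public void afterVariableChanged(ProcessVariableChangedEvent event) {
        // Keep the fact in sync with the process variables
        KnowledgeRuntime runtime = event.getKnowledgeRuntime();
        FactHandle handle = runtime.getFactHandle(event.getProcessInstance());
        if (handle != null) {
            runtime.update(handle, event.getProcessInstance());
        }
    }

    public void beforeNodeTriggered(ProcessNodeTriggeredEvent event) {
        // Node-level events go through a dedicated entry point
        processEvents(event).insert(event);
    }

    public void afterNodeLeft(ProcessNodeLeftEvent event) {
        processEvents(event).insert(event);
    }

    private WorkingMemoryEntryPoint processEvents(ProcessEvent event) {
        return event.getKnowledgeRuntime()
                    .getWorkingMemoryEntryPoint("process-events");
    }
}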
This process event listener is in charge of inserting, updating and retracting the ProcessInstance fact in the knowledge session where the process is running. It also keeps the fact up to date when process variables are modified inside the process, and it generates and inserts the Drools Fusion events that can be used for temporal reasoning.
The expected results when we attach this ProcessEventListener to our sessions are:
  • Every time we create a process instance, the ProcessInstance object becomes available to the rule engine, so we can write rules about it.
  • When a process instance is completed, it is automatically retracted from the rule engine's context.
  • When a process variable is modified/updated, the ProcessInstance fact is updated as well.
  • Every time an activity is executed, Drools Fusion events are created and inserted into the session before and after the task is executed. We as users have the responsibility to declare these types as events, so the engine can tag them with the corresponding timestamp (look at the rules file).
Inside a session that has this listener attached we will be able to:
  • Write rules about process instances and their internal status, including process variables
  • Write rules that identify situations where we want to measure the time between different activities of the same process or a group of processes
  • Influence the business processes' execution based on different scenarios
  • If we insert more business context into the session, mix all the information generated by the process executions with that context to recognize more advanced scenarios
  • Mix all of the above

Resources and Git Backend

One more important thing, if you want to try this alpha version, is to understand that we now pick up all the resources used by the runtime from a Git repository. This means that our backend repository, in this case, is github.com. We store all our assets in this repository, and we build up different sessions using the resources located in the remote repository. This gives us a lot of advantages, but the integration is not finished yet. In the future you will be able to point to different repositories and fetch resources on demand to build new runtimes. For now you need to understand that forms, processes, rules and all the configuration resources are picked up from a remote repository, abstracting our application away from where the resources are stored.

Rules, Processes & Events

Once we have all the data inside our session we can start writing our rules.
The complete rules file used for this demo can be found here:
Here are some things that we need to understand about this drl file:
  • Event Declarations: We need to inform the rule engine which facts will be treated as Events. Notice the first lines after the imports:
    declare ProcessStartedEvent
         @role(event)
    end
    In this case we are declaring that every inserted ProcessStartedEvent must be handled as an event, which is a special type of fact.
  • We can make services available for the rules to use. For this example I'm injecting the services as globals:
    global RulesNotificationService rulesNotificationService;
    global TaskServiceEntryPoint taskService;
    The TaskServiceEntryPoint allows us to create and manage tasks from rules. The RulesNotificationService exposes the rule executions to the outside world. It's a quick way to notify users about certain situations; you can think of it as a simple log service for what is happening inside our sessions.
  • Then you can write rules about processes and the events generated by the processes:

    rule "Fix Issues Task pending for more than 30 seconds"
    when
        $w1: WorkflowProcessInstanceImpl( $id: id )
        $onEntry: ProcessNodeTriggeredEvent(
                      processInstance.id == $id,
                      $nid: nodeInstance.id,
                      nodeInstance.nodeName == "Fix Issues" ) from entry-point "process-events"
        $onExit: ProcessNodeLeftEvent(
                      this after[30s] $onEntry,
                      processInstance.id == $id,
                      nodeInstance.id == $nid,
                      nodeInstance.nodeName == "Fix Issues" ) from entry-point "process-events"
    then
        ....

    This rule matches situations where a particular node of our business process (Fix Issues) takes more than 30 seconds to execute. Notice that the process instance events are inserted into a special entry-point called "process-events". I suggest you take a look at the other rules in the demo, so you can get an idea of the kinds of things that can be done in this environment.

DEMO


jBPM Console NG update 14/12/2012 from salaboy on Vimeo.

Full Presentation at JBUG London




Stay tuned for more updates about the console and the book!

Sunday, December 16, 2012

Barcelona JUG - jBPM5 Developer Guide Presentation (19/12/12)


Hi everyone, I'm going to give a presentation in Barcelona about the jBPM5 Developer Guide book. There is no defined venue yet, but it will be next Wednesday (the 19th) at 7pm somewhere in the city. I will keep you posted! If you are interested in attending, please drop me a comment so we can make the necessary arrangements. This will be a Barcelona JUG meetup, so feel free to invite as many friends as you want, and please help us spread the word.


Here are some links from the Barcelona JUG Group that you can follow to see updates about this and future meetups:
Google groups - http://bit.ly/BarcelonaJUG 

Update

Hi everyone, the meetup for tomorrow is confirmed and we now have a venue. The meetup will take place at the Facultad de Informatica de Barcelona:
Edifici B6 del Campus Nord C/Jordi Girona Salgado,1-3 08034 BARCELONA Espanya
The talk will start at 7pm, so see you there with all your friends!

Spanish: 

Hi everyone! I will be in Barcelona presenting the jBPM5 Developer Guide book. There is no venue defined yet, but over the course of tomorrow we will publish the location. We are sure it will be on Wednesday, December 19th, at 7pm somewhere in the city. The event is organized by the Barcelona JUG, so feel free to invite as many friends as you can, and help us spread the word.


Update:
The event and the venue are confirmed!
See you tomorrow, Wednesday, at 7pm, in Edificio A6, Aula 102 of the Facultad de Informática de Barcelona!
Edifici B6 del Campus Nord C/Jordi Girona Salgado,1-3 08034 BARCELONA Espanya
Bring your friends!


Saturday, December 15, 2012

6.0 Alpha - Annotation Driven development with Multi Version Loading

Drools & jBPM 6.0 Alpha should be out at the end of next week. 6.0 introduces convention-based projects that remove the need for boilerplate code: literally just drop in the drl or bpmn2 files and get going. Further, we now allow rules and processes to be published as Maven artifacts, in Maven repositories. These artifacts can either be resolved via the classpath or downloaded dynamically on the fly. We even support side-by-side version loading out of the box, via the Maven ReleaseId conventions.

As a little taster, here is a new screenshot showing the annotation-driven development. The lines below are all that's needed to dynamically load a module from a local or remote Maven repository and start working with it. KieSession is the new, shorter name for StatefulKnowledgeSession. Kie is an acronym for "Knowledge Is Everything", but I'll talk about Kie in another blog; expect to start hearing a lot about it soon :)
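The screenshot shows the annotation-driven style; the programmatic equivalent looks roughly like this (a sketch against the 6.0 API as it later settled, so package names may differ slightly in the alpha, and the GAV is made up):

import org.kie.api.KieServices;
import org.kie.api.builder.ReleaseId;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class Taster {
    public static void main(String[] args) {
        KieServices ks = KieServices.Factory.get();
        // Resolve the module by its Maven GAV; the GAV here is hypothetical
        ReleaseId releaseId = ks.newReleaseId("org.example", "my-rules", "1.0.0");
        KieContainer kContainer = ks.newKieContainer(releaseId);

        // Create the default session defined in the module's kmodule.xml
        KieSession kSession = kContainer.newKieSession();
        kSession.fireAllRules();
        kSession.dispose();
    }
}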



And here is a complete example screenshot: create the drl, define the kmodule and start using them.

(click image to enlarge)


Friday, December 14, 2012

Score flexibility in Planner, shown with vehicle routing


Do we want to minimize distance or minimize time? Should trucks return to their depot after delivering their items?
It depends on what's best for your business. Luckily, changing the score function in Planner is easy, as shown in this demo.