My experiences with a JMS interchange between HornetQ and ActiveMQ over Apache ServiceMix

I recently had a project where I had to deploy some Apache Camel routes over ServiceMix that had to connect to a HornetQ messaging system deployed on a JBoss Application Server.

My first intention was to wrap the HornetQ libraries as OSGi bundles and deploy them on Apache ServiceMix, but it was a real nightmare, with numerous class loading problems. Terrible! There is an example of a use case similar to mine, written by Torsten Mielke, but I couldn’t establish the connection between Apache ServiceMix 6.0.0 and JBoss EAP 6.3. On the other hand, there is a HornetQ issue to make it OSGi compliant, but it didn’t work out in my case either.

My final approach to the problem was to set up HornetQ bridges between JBoss EAP 6.3 and the JMS installation of ServiceMix: Apache ActiveMQ. You can find the code of my proof of concept in one of my GitHub repositories. I honestly think that this solution undermines the concept of an ESB, because the routing is done outside of it, but it works!
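For reference, a JMS bridge of this kind is declared in the messaging subsystem of the EAP standalone-full.xml. This is only a hand-written sketch, not the exact contents of my repository: the queue names, the remote context properties and the broker URL are assumptions you would have to adapt.

```xml
<jms-bridge name="activemq-bridge">
    <source>
        <!-- local HornetQ queue inside EAP -->
        <connection-factory name="ConnectionFactory"/>
        <destination name="jms/queue/SourceQueue"/>
    </source>
    <target>
        <!-- remote ActiveMQ broker, looked up through its JNDI context -->
        <connection-factory name="ConnectionFactory"/>
        <destination name="dynamicQueues/TargetQueue"/>
        <context>
            <property key="java.naming.factory.initial"
                      value="org.apache.activemq.jndi.ActiveMQInitialContextFactory"/>
            <property key="java.naming.provider.url" value="tcp://localhost:61616"/>
        </context>
    </target>
    <!-- DUPLICATES_OK avoids requiring XA on the ActiveMQ side -->
    <quality-of-service>DUPLICATES_OK</quality-of-service>
    <failure-retry-interval>1000</failure-retry-interval>
    <max-retries>-1</max-retries>
    <max-batch-size>10</max-batch-size>
    <max-batch-time>100</max-batch-time>
</jms-bridge>
```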

Routing Oracle AQ messages using Apache Camel in ServiceMix: the XA option

I wrote in my last post about how to route messages between software based on Oracle Advanced Queuing (AQ) and Apache ActiveMQ, using Camel over ServiceMix. Today, I’d like to write about a different option from the ones I discussed before: the XA option.

There are situations where, in case of any exception, you have to guarantee that no message is lost and that no message is received twice; these are the situations where the X/Open XA standard comes into action.

I started my research on this issue by looking for documentation and examples. Two elements were very useful at this point: an Oracle White Paper about XA and Oracle controlled Distributed Transactions, which allowed me to review the XA topic, and a very good sample of a similar situation that shows how to use XA transactions with Camel across ActiveMQ and WebSphere MQ, written by Torsten Mielke. You can find the resulting code of my XA test on my GitHub repository.

The test has a processor, copied from Torsten’s example, that simulates an exception and allows you to see how the exchange is recovered. But the interesting issue here is trying to recover after an Oracle database crash or a network failure in the middle of a message exchange. I’ve made many tests; for the first ones I used an Oracle 11.2 test database on a Microsoft Windows operating system and Apache ServiceMix 6.0, also on Windows. In this environment, I issued shutdown abort and shutdown immediate commands to the database and the Geronimo recovery manager didn’t recover the Oracle branches left in prepared state, so I had to commit force the in-doubt transactions.
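For the record, that manual resolution can be done from SQL*Plus along these lines (the transaction id shown is, of course, illustrative):

```sql
-- In-doubt distributed transactions left in prepared state
select local_tran_id, state from dba_2pc_pending;

-- Force the outcome of one of them (the id is illustrative)
commit force '1.23.456';
```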

I wasn’t happy with the previous solution, which is not recommended by Oracle (read the previously linked white paper) and isn’t transparent for the user. My final environment wasn’t Windows-based but Linux, so I ran other tests using another Oracle 11.2 database installed on Red Hat Linux and Apache ServiceMix 6.0, also on Linux. In this case, the Geronimo recovery manager was capable of recovering prepared transactions and I couldn’t reproduce the error. An Oracle database configuration problem? An operating system issue? That’s what I suppose.

Routing Oracle AQ messages using Apache Camel in ServiceMix

I’ve been working with Apache ServiceMix lately and I expect to carry on doing so in the coming months. One of the issues I’ve faced is how to route messages between software based on Oracle Advanced Queuing (AQ) and other systems, using one of the main components of Apache ServiceMix: Camel. In this post, I’d like to start talking about the approach I’ve selected. I’m still working on XA, so this is just the beginning! The code of my proof of concept is on GitHub.

The first topic I had to study was how to deploy the libraries needed to work with Oracle AQ: aqapi.jar and ojdbc6.jar. I started by embedding the jar files in the bundle, but this approach implies that the libraries have to be attached to each bundle, so I finally decided to create a feature that wraps the files and converts them to OSGi bundles. This is the first module of my code repository, called feature.

Then, I started to work on a basic connection, using the Camel JMS component, by injecting an Oracle AQ connection factory bean. You have my code in the second module of the project, called basic-test. But this solution is not scalable, because the component creates a JMS connection, which implies a JDBC one, each time a message is sent, and this is an expensive process, so I investigated the use of connection pools.
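Such a connection factory bean can be declared in Blueprint roughly like this (a sketch: the JDBC URL is illustrative, and AQjmsFactory is provided by the wrapped aqapi.jar):

```xml
<!-- Oracle AQ JMS connection factory built from a JDBC URL (URL is illustrative) -->
<bean id="aqConnectionFactory" class="oracle.jms.AQjmsFactory"
      factory-method="getQueueConnectionFactory">
    <argument value="jdbc:oracle:thin:@localhost:1521:XE"/>
    <argument>
        <bean class="java.util.Properties"/>
    </argument>
</bean>

<!-- Camel JMS component wired to that factory -->
<bean id="oracleaq" class="org.apache.camel.component.jms.JmsComponent">
    <property name="connectionFactory" ref="aqConnectionFactory"/>
</bean>
```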

The question I had to study at this point was whether I just had to create a JDBC connection pool or a JMS one. I selected the second option, because a JMS connection not only implies a JDBC connection, as I said before, but also other JMS API machinery. A very, very basic benchmark showed me that my test environment (Apache ServiceMix 6.0.0, Oracle 11.2.0 database on virtual machines) took 3 seconds to send 100 messages in the performance test, versus 14 seconds in the basic one.
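One way to pool the JMS connections, sketched under the assumption that ActiveMQ’s pooling library is available in the container, is to wrap the Oracle AQ connection factory bean:

```xml
<!-- Pooled wrapper around the Oracle AQ connection factory
     (pool size is arbitrary; aqConnectionFactory is the injected AQ factory) -->
<bean id="pooledConnectionFactory"
      class="org.apache.activemq.pool.PooledConnectionFactory"
      init-method="start" destroy-method="stop">
    <property name="maxConnections" value="8"/>
    <property name="connectionFactory" ref="aqConnectionFactory"/>
</bean>
```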

On the other hand, the message-receiving test results weren’t so impressive. I tested competing consumers and asynchronous parameters (which cannot be used in all use cases). My configuration is in the module performance-test of the GitHub code repository, where you can play with the configuration parameters of the bundle and draw your own conclusions.
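Both options are plain endpoint parameters of the Camel JMS component; a hypothetical consumer route could look like this (the component id oracleaq and the queue names are assumptions):

```xml
<route>
    <!-- concurrentConsumers enables competing consumers;
         asyncConsumer processes exchanges asynchronously -->
    <from uri="oracleaq:queue:TEST_QUEUE?concurrentConsumers=5&amp;asyncConsumer=true"/>
    <to uri="activemq:queue:TARGET_QUEUE"/>
</route>
```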

JAAS, Wildfly and Microsoft Active Directory

Some time ago, I wrote about a Java EE Web application that made use of Microsoft Active Directory, through Java Authentication and Authorization Service (JAAS), as its security mechanism. The program was deployed on a Glassfish 4.0 application server. I’ve recently moved this application to a Red Hat Wildfly 9.0.1 server and I’d like to share a couple of issues that I think can be helpful for those working on the same topic.

The first one is the configuration of the LDAP realm. Here you have an excerpt of my standalone-full.xml file:

    <security-realm name="LdapRealm">
      <authentication>
        <ldap connection="AdConnection" base-dn="OU=TestOU,DC=test,DC=local" recursive="true">
          <username-filter attribute="sAMAccountName"/>
        </ldap>
      </authentication>
      <authorization>
        <ldap connection="AdConnection">
          <group-search group-name-attribute="cn">
            <principal-to-group group-attribute="memberOf"/>
          </group-search>
        </ldap>
      </authorization>
    </security-realm>

    <outbound-connections>
      <ldap name="AdConnection" url="ldap://"
        search-dn="CN=testapp,CN=Users,DC=test,DC=local" search-credential="password"/>
    </outbound-connections>

The CLI script that creates this XML structure on Wildfly 9.0.1 is:

/core-service=management/ldap-connection=AdConnection:add(url="ldap://", \
  search-dn="CN=testapp,CN=Users,DC=test,DC=local", \
  search-credential="password")
/core-service=management/security-realm=LdapRealm:add()
/core-service=management/security-realm=LdapRealm/authentication=ldap:add(connection=AdConnection, \
  base-dn="OU=TestOU,DC=test,DC=local", \
  recursive="true", \
  username-attribute="sAMAccountName")
/core-service=management/security-realm=LdapRealm/authorization=ldap:add(connection=AdConnection)
/core-service=management/security-realm=LdapRealm/authorization=ldap/ \
  group-search=principal-to-group:add(group-name-attribute="cn", \
  group-attribute="memberOf")

The second issue is the configuration of a security domain, a concept that Glassfish does not require: there you just set up the name of the realm in the login-config element of the web.xml file. Wildfly ignores this configuration and requires instead its specific jboss-web.xml file in the WEB-INF folder of the application:

<?xml version="1.0" encoding="UTF-8"?>
<jboss-web version="7.1">
    <security-domain>test-domain</security-domain>
</jboss-web>

Finally, here you have the CLI script that creates the security domain in Wildfly:

/subsystem=security/security-domain=test-domain:add()
/subsystem=security/security-domain=test-domain/authentication=classic:add \
 (login-modules=[{code="RealmDirect", \
  flag="required", \
  module-options=[("realm"=>"LdapRealm")]}])

Setting up a JMS bridge between Weblogic and ActiveMQ

Almost four years ago, I wrote about how to set up a JMS bridge between Weblogic and HornetQ. Lately, I’ve had to research how to do the same job with ActiveMQ. Here you have my findings. As it was for HornetQ, the first step was to copy the client libraries to a folder residing in a filesystem of my Weblogic server; in the case of ActiveMQ these files are:

  • activemq-client-5.10.0.jar
  • hawtbuf-1.10.jar
  • slf4j-api-1.7.5.jar
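In setDomainEnv.sh, pointing the PRE_CLASSPATH variable at these jars looks roughly like this (the folder /opt/activemq-libs is an assumption; adjust it to your own filesystem):

```shell
# Sketch of the lines added to setDomainEnv.sh; the library folder is illustrative
ACTIVEMQ_LIB=/opt/activemq-libs
PRE_CLASSPATH="${ACTIVEMQ_LIB}/activemq-client-5.10.0.jar:${ACTIVEMQ_LIB}/hawtbuf-1.10.jar:${ACTIVEMQ_LIB}/slf4j-api-1.7.5.jar"
export PRE_CLASSPATH
```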

Later on, I set the PRE_CLASSPATH variable, pointing it to these libraries, in the setDomainEnv script. Creating messaging destinations is more or less the same job as it was for HornetQ; you just have to change the initial context factory and connection url parameters:

  <!-- the connection url and JNDI names are illustrative -->
  <jms-bridge-destination>
    <name>JMS Bridge Destination-Target</name>
    <initial-context-factory>org.apache.activemq.jndi.ActiveMQInitialContextFactory</initial-context-factory>
    <connection-url>tcp://localhost:61616</connection-url>
    <connection-factory-jndi-name>ConnectionFactory</connection-factory-jndi-name>
    <destination-jndi-name>TestQueue</destination-jndi-name>
  </jms-bridge-destination>

But the former configuration is not enough, because ActiveMQ doesn’t have its own JNDI provider (I had ActiveMQ as a JMS provider for a ServiceMix ESB) and requires a file with the mappings between physical destinations and the JNDI ones. But how do you configure the properties file in the context of a Weblogic messaging bridge? Here is the trick: create a JAR archive containing the file and put the JAR in the CLASSPATH (the same way as it was described before for activemq-client-5.10.0.jar). The steps to create the JAR archive are:

  • Create a file called with the following entries:

connectionFactoryNames=ConnectionFactory,XAConnectionFactory
queue.TestQueue=TestQueue

  • Create the archive with the following command:

jar cvf jndi.jar

I found this trick in this post, after having tested many different configurations, including the setup of a Weblogic Foreign JMS Server, without success. Finally, I’d like to point out that I am not completely happy with this setup because it has an obvious drawback: you have to modify and redeploy the jndi.jar archive every time you add a new queue or topic, so suggestions are welcome!


SQL Server data sources in JBoss AS 7

Last week, I set up a SQL Server 2008 data source on a JBoss AS 7.1.1 server, in order to be used by a Java EE application, so I’d like to share what I’ve learned.

The first step was to install the driver. There are two ways to do this; the quick one is simply to deploy the JDBC driver (sqljdbc4.jar) as a regular deployment, by typing this command in the CLI (the name parameter is optional, but I found it useful):

deploy C:\software\drivers\sqljdbc4.jar --name=sqlserver

The second option is to install the JDBC driver as a core module, which is what I finally did. This one was a bit more laborious. First of all, I turned off the server and set up a directory structure under the JBoss modules folder, in my case C:\jboss-as-7.1.1.Final\modules\com\microsoft\sqlserver\main. After that, I copied the driver sqljdbc4.jar there and created a file called module.xml with the following content:

  <?xml version="1.0" encoding="UTF-8"?>
  <module xmlns="urn:jboss:module:1.0" name="">
      <resources>
          <resource-root path="sqljdbc4.jar"/>
      </resources>
      <dependencies>
          <module name="javax.api"/>
          <module name="javax.transaction.api"/>
      </dependencies>
  </module>

The key here is to create a directory structure that matches the module name. The final step of this option was to start the server and run the following CLI command:

/subsystem=datasources/jdbc-driver=sqlserver:add(driver-name="sqlserver", \
  driver-module-name="", \
  driver-class-name="")
Once I had the driver configured, I created the data source by using this CLI command (for this sample, I set up a local SQL Server EXPRESS instance, with a test database and a test user):

data-source add --name=TestDS --jndi-name=java:/jdbc/Test --driver-name=sqlserver --connection-url=jdbc:sqlserver://localhost\SQLEXPRESS;databaseName=Test --user-name=test --password=test --min-pool-size=10 --max-pool-size=50 --pool-use-strict-min=true --pool-prefill=true --jta=true --use-ccm=true --prepared-statements-cache-size=32

The data source has to be enabled:

data-source enable --name=TestDS

Finally, I tested the configuration with the following command:

/subsystem=datasources/data-source=TestDS:test-connection-in-pool
Creating a XA data source was slightly different. The first step was to check out that my SQL Server installation was properly configured, by reviewing the chapter titled Configuration Instructions of this article. After that, I ran these commands:

xa-data-source add --name=TestDS --jndi-name=java:/jdbc/Test/XA --driver-name=sqlserver --user-name=test --password=test --min-pool-size=10 --max-pool-size=50 --pool-use-strict-min=true --pool-prefill=true --jta=true --use-ccm=true --prepared-statements-cache-size=32 --same-rm-override=false
xa-data-source enable --name=TestDS

A final tip: if you decide to deploy the driver, instead of installing it as a core module, you have to add the parameter --xa-datasource-class to the xa-data-source add command, with the value

Oracle AQ: working with PL/SQL asynchronous notifications

I like Oracle Streams Advanced Queuing (AQ), it’s reliable and fast. I’ve been working with this technology for the last four years, with 10g and 11g database versions; most of the time, I’ve had to interact with Java EE systems through Java Message Service (JMS), which is fully supported by Oracle AQ. JMS has Message Driven Beans (MDB) as the standard way to consume messages in Java EE; as its counterpart on the Oracle database, you can register asynchronous notifications to PL/SQL procedures. To be perfectly honest, I’ve always considered that the configuration of this functionality is a bit tricky, because sometimes you don’t get an error, it simply doesn’t work. That’s why I’m writing this post, to present a simple example of PL/SQL asynchronous notifications that you can download from GitHub.

The first point that I’d like to deal with is the signature of the callback procedure, standalone or belonging to a package, that will consume the messages:

procedure receive_message_callback (
   context  raw,
   reginfo$_reg_info,
   descr$_descriptor,
   payload  raw,
   payloadl number
);

I think that the key here is the type of the payload argument: raw or varchar2, depending on the type of the message. I’ve prepared my sample with my personal Oracle Database Express Edition 11g Release 2, where I couldn’t work with Oracle AQ JMS types, so I’ve used a custom data type as the payload of the messages, which implies that the payload argument type has to be raw. But if you work, for example, with JMS Text Messages (SYS.AQ$_JMS_TEXT_MESSAGE in Oracle AQ), the payload argument type has to be varchar2.

The second issue is that the configuration needed varies depending on whether the destination is a queue or a topic.
In a queue, each message is consumed by just one consumer, so you simply have to register the callback procedure. Here you have an excerpt of my code:

   dbms_aqadm.create_queue_table (queue_table => 'queues_qt',
                                  queue_payload_type => 'TESTAQ.MESSAGES_T');

   dbms_aqadm.create_queue (queue_name => 'test_queue',
                            queue_table => 'queues_qt');

   dbms_aqadm.start_queue (queue_name => 'test_queue');


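The registration call itself follows this pattern (a sketch: the testaq schema and the names come from the excerpt above, so check them against your own installation):

```sql
-- Register the PL/SQL callback for the queue; note the schema prefix
-- on both the queue name and the callback procedure.
begin
   dbms_aq.register (
      sys.aq$_reg_info_list(
         sys.aq$_reg_info ('testaq.test_queue',
                           dbms_aq.namespace_aq,
                           'plsql://testaq.receive_message_callback',
                           hextoraw('FF'))),
      1);
end;
/
```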
The tip to remember here is not to forget the schema name prefix before queue and callback procedure names.

In a topic, each message can be consumed by one or several subscribers, and each of them can process the message in a different way, for example sending an email instead of processing it with a PL/SQL procedure. So, you first have to register a subscriber, an agent in Oracle AQ terminology, and then register the PL/SQL consumer. Here you have an excerpt of my code:

   -- It's a topic, so multiple_consumers parameter is specified.
   dbms_aqadm.create_queue_table (queue_table => 'topics_qt',
                                  queue_payload_type => 'TESTAQ.MESSAGES_T',
                                  multiple_consumers => true);

   dbms_aqadm.create_queue (queue_name => 'test_topic',
                            queue_table => 'topics_qt');

   dbms_aqadm.start_queue (queue_name => 'test_topic');

   dbms_aqadm.add_subscriber (queue_name => 'test_topic',
                              subscriber => sys.aq$_agent(
                                               name => 'demo_subscriber',
                                               address => null,
                                               protocol => 0));


The tip to remember here is not to forget to put the name of the subscriber after the name of the topic, when you’re registering the callback procedure.
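Following the same pattern as for queues, the registration for a topic could be sketched like this, with the subscriber name appended after the topic name (the testaq schema is an assumption taken from the excerpt):

```sql
-- Register the callback for one subscriber of the topic; the consumer
-- name goes after the topic name, separated by a colon.
begin
   dbms_aq.register (
      sys.aq$_reg_info_list(
         sys.aq$_reg_info ('testaq.test_topic:demo_subscriber',
                           dbms_aq.namespace_aq,
                           'plsql://testaq.receive_message_callback',
                           hextoraw('FF'))),
      1);
end;
/
```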