Notes on Message Queues

Russell Bateman
March 2015
last update:

Here are some notes I'm putting down on message queueing.

I made a slide presentation of this: Message Queueing

Here's some more, and more reasonable, ActiveMQ code using the stomp.py client. First, sending a message:

import stomp

conn = stomp.Connection10()

conn.start()
conn.connect()

conn.send( '/topic/SampleTopic', 'Simples Assim' )

conn.disconnect()

Next, receiving messages by subscribing to the topic (first not so durable, then durable):

import stomp
import time

class SampleListener( object ):
    def on_message( self, headers, message ):
        print message

conn = stomp.Connection10()
conn.set_listener( 'SampleListener', SampleListener() )

conn.start()

durable = True    # flip this to compare the two subscription styles

if not durable:
    # - This is not so durable... -------------------------
    conn.connect()
    conn.subscribe( '/topic/SampleTopic' )
else:
    # - This is better... ---------------------------------
    conn.connect( headers={ 'client-id' : 'SampleClient' } )
    conn.subscribe( destination='/topic/SampleTopic', headers={ 'activemq.subscriptionName' : 'SampleSubscription' } )
# -----------------------------------------------------

time.sleep( 1 )
conn.disconnect()

A fuller quick-start page can be found here.

I ended up doing a quick iFriday presentation on this, as noted earlier. It actually garnered a couple of votes, though not my own, as I voted for Dakota's test efforts.

Here's something relevant to our case: we'll want messages to persist across reboots of the server. By default, Stomp sends messages as non-persistent, so we must explicitly tell the Stomp library to add "persistent:true" to every send request whose message should survive an ActiveMQ restart. (This is the opposite of the default for messages submitted via JMS.) See ActiveMQ: The STOMP Guide.
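For instance, something along these lines would do it. This is a minimal sketch, assuming the same old-style stomp.py API used above and a made-up queue name; exactly how the header is passed varies with the stomp.py version:

import stomp

conn = stomp.Connection10()
conn.start()
conn.connect()

# The 'persistent' header is what tells ActiveMQ to write the message to its
# store so that it survives a broker restart.
conn.send( '/queue/SampleQueue', 'a message we want to keep',
           headers={ 'persistent' : 'true' } )

conn.disconnect()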

All my batch-runner code was committed and pushed with the message, "Added throttle to slow down compound queries to search server. Added more stats to batch_runner main()."

Two classes of queues...

The two classes of queues can't really be mixed semantically.

The publish-subscribe model

...where messages are published to a named topic and every active subscriber to that topic receives its own copy of each message. A subscriber that wants messages published while it was disconnected needs a durable subscription, as in the code above.

The event model

...where messages are sent to a queue and each message is consumed by exactly one of the consumers reading from that queue; the broker distributes messages among competing consumers and removes a message once it has been acknowledged.
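With ActiveMQ over Stomp, the choice between the two comes down to the destination prefix. Here's a minimal sketch, assuming the same stomp.py API as above and made-up destination names:

import stomp

conn = stomp.Connection10()
conn.start()
conn.connect()

# Publish-subscribe: every subscriber to the topic gets a copy of this.
conn.send( '/topic/SampleTopic', 'broadcast to all subscribers' )

# Event (point-to-point): only one consumer of the queue gets this.
conn.send( '/queue/SampleQueue', 'work item for exactly one consumer' )

conn.disconnect()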

Note that Kafka can be used for both models.
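For example, with the kafka-python client (my assumption; nothing above uses it) the model is chosen by consumer group id: consumers sharing a group id split a topic's messages between them, like the event model, while consumers in different groups each see every message, like publish-subscribe. The topic name, group ids and broker address below are all made up:

from kafka import KafkaConsumer

# Event-model behaviour: these two consumers share a group id, so each
# message on the topic goes to only one of them.
worker_a = KafkaConsumer( 'SampleTopic', group_id='workers', bootstrap_servers='localhost:9092' )
worker_b = KafkaConsumer( 'SampleTopic', group_id='workers', bootstrap_servers='localhost:9092' )

# Publish-subscribe behaviour: a consumer in a group of its own sees every message.
auditor = KafkaConsumer( 'SampleTopic', group_id='audit', bootstrap_servers='localhost:9092' )

for record in auditor:
    print record.value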