Open source RabbitMQ: core server and tier 1 (built-in) plugins

Overview

RabbitMQ Server

RabbitMQ is a feature-rich, multi-protocol messaging broker. It supports:

  • AMQP 0-9-1
  • AMQP 1.0
  • MQTT 3.1.1
  • STOMP 1.0 through 1.2

Installation

Tutorials & Documentation

Some key doc guides include

Commercial Support

Getting Help from the Community

Contributing

See CONTRIBUTING.md and our development process overview.

Questions about contributing, internals and so on are very welcome on the mailing list.

Licensing

RabbitMQ server is licensed under the MPL 2.0.

Building From Source and Packaging

Copyright

(c) 2007-2021 VMware, Inc. or its affiliates.

Issues
  • Several clients run into `basic.publish` framing errors when transient flow control kicks in

    opened by michaelklishin 59
  • Hung pids/queues on node failure

    (This is with RabbitMQ 3.5.3, Erlang 16B03)

    I've got a three node RabbitMQ cluster as part of an OpenStack deployment. To trigger the following, all I have to do is force power off one of the three machines. This reproduces 100% of the time for me. I'm going to cut down info to a minimum to try and keep the issue readable. If you need more information anywhere, please ask.

    I'm going to focus on one particular queue, conductor_fanout_5aa9bb2eb11744e2a0b88559c6146155. Note that there are several other queues stuck in the same way that I am omitting.

    Here's the setup. Before I triggered the issue, I took note of the current state of the queue:

    # rabbitmqctl list_queues name pid slave_pids
    ...
    conductor_fanout_5aa9bb2eb11744e2a0b88559c6146155       <[email protected]>    [<[email protected]>, <[email protected]>]
    ...
    

    On the master node (mac5254005e6a60), we see the queue declared and mirrored to the other two nodes. This is the last time the master logs about this queue.

    =INFO REPORT==== 13-Jul-2015::22:27:34 ===
    Mirrored queue 'conductor_fanout_5aa9bb2eb11744e2a0b88559c6146155' in vhost '/': Adding mirror on node '[email protected]': <20600.1360.0>
    
    =INFO REPORT==== 13-Jul-2015::22:27:34 ===
    Mirrored queue 'conductor_fanout_5aa9bb2eb11744e2a0b88559c6146155' in vhost '/': Adding mirror on node '[email protected]': <20595.1830.0>
    

    So far so good.

    Then I hard power off the mac525400f5a30a node. What happens next is that the remaining slave (mac525400797585) thinks the other slave has died, which is true, but it also thinks the master has died, which is not true. It then promotes itself to the master for the queue:

    =INFO REPORT==== 13-Jul-2015::22:38:20 ===
    Mirrored queue 'conductor_fanout_5aa9bb2eb11744e2a0b88559c6146155' in vhost '/': Slave <[email protected]> saw deaths of mirrors <[email protected]> <[email protected]>
    
    =INFO REPORT==== 13-Jul-2015::22:38:20 ===
    Mirrored queue 'conductor_fanout_5aa9bb2eb11744e2a0b88559c6146155' in vhost '/': Promoting slave <[email protected]> to master
    

    At this point, the node that was the original master has the following hung chain of processes:

    [{pid,<5436.1878.0>},
     {registered_name,[]},
     {current_stacktrace,[{supervisor2,shutdown,2,
                                       [{file,"src/supervisor2.erl"},{line,1078}]},
                          {supervisor2,do_terminate,2,
                                       [{file,"src/supervisor2.erl"},{line,1039}]},
                          {supervisor2,terminate_children,3,
                                       [{file,"src/supervisor2.erl"},{line,1033}]},
                          {gen_server,terminate,6,
                                      [{file,"gen_server.erl"},{line,719}]},
                          {proc_lib,init_p_do_apply,3,
                                    [{file,"proc_lib.erl"},{line,239}]}]},
     {initial_call,{proc_lib,init_p,5}},
     {dictionary,[{'$ancestors',[rabbit_tcp_client_sup,rabbit_sup,<5436.625.0>]},
                  {'$initial_call',{supervisor2,init,1}}]},
     {message_queue_len,0},
     {links,[<5436.829.0>]},
     {monitors,[{process,<5436.1879.0>}]},
     {monitored_by,[]},
     {heap_size,233}]
    [{pid,<5436.1879.0>},
     {registered_name,[]},
     {current_stacktrace,[{supervisor2,shutdown,2,
                                       [{file,"src/supervisor2.erl"},{line,1078}]},
                          {supervisor2,do_terminate,2,
                                       [{file,"src/supervisor2.erl"},{line,1039}]},
                          {supervisor2,terminate_children,3,
                                       [{file,"src/supervisor2.erl"},{line,1033}]},
                          {gen_server,terminate,6,
                                      [{file,"gen_server.erl"},{line,719}]},
                          {proc_lib,init_p_do_apply,3,
                                    [{file,"proc_lib.erl"},{line,239}]}]},
     {initial_call,{proc_lib,init_p,5}},
     {dictionary,[{'$ancestors',[<5436.1878.0>,rabbit_tcp_client_sup,rabbit_sup,
                                 <5436.625.0>]},
                  {'$initial_call',{supervisor2,init,1}}]},
     {message_queue_len,0},
     {links,[<5436.1881.0>]},
     {monitors,[{process,<5436.1882.0>}]},
     {monitored_by,[<5436.1878.0>]},
     {heap_size,233}]
    [{pid,<5436.1882.0>},
     {registered_name,[]},
     {current_stacktrace,[{supervisor2,wait_dynamic_children,5,
                                       [{file,"src/supervisor2.erl"},{line,1207}]},
                          {supervisor2,terminate_dynamic_children,3,
                                       [{file,"src/supervisor2.erl"},{line,1144}]},
                          {gen_server,terminate,6,
                                      [{file,"gen_server.erl"},{line,719}]},
                          {proc_lib,init_p_do_apply,3,
                                    [{file,"proc_lib.erl"},{line,239}]}]},
     {initial_call,{proc_lib,init_p,5}},
     {dictionary,[{'$ancestors',[<5436.1879.0>,<5436.1878.0>,
                                 rabbit_tcp_client_sup,rabbit_sup,<5436.625.0>]},
                  {'$initial_call',{supervisor2,init,1}}]},
     {message_queue_len,0},
     {links,[]},
     {monitors,[{process,<5436.1883.0>}]},
     {monitored_by,[<5436.1879.0>]},
     {heap_size,233}]
    [{pid,<5436.1883.0>},
     {registered_name,[]},
     {current_stacktrace,[{supervisor2,shutdown,2,
                                       [{file,"src/supervisor2.erl"},{line,1078}]},
                          {supervisor2,do_terminate,2,
                                       [{file,"src/supervisor2.erl"},{line,1039}]},
                          {supervisor2,terminate_children,3,
                                       [{file,"src/supervisor2.erl"},{line,1033}]},
                          {gen_server,terminate,6,
                                      [{file,"gen_server.erl"},{line,719}]},
                          {proc_lib,init_p_do_apply,3,
                                    [{file,"proc_lib.erl"},{line,239}]}]},
     {initial_call,{proc_lib,init_p,5}},
     {dictionary,[{'$ancestors',[<5436.1882.0>,<5436.1879.0>,<5436.1878.0>,
                                 rabbit_tcp_client_sup,rabbit_sup,<5436.625.0>]},
                  {'$initial_call',{supervisor2,init,1}}]},
     {message_queue_len,0},
     {links,[<5436.1885.0>,<5436.1884.0>]},
     {monitors,[{process,<5436.1886.0>}]},
     {monitored_by,[<5436.1882.0>]},
     {heap_size,376}]
    [{pid,<5436.1886.0>},
     {registered_name,[]},
     {current_stacktrace,
         [{gen,do_call,4,[{file,"gen.erl"},{line,211}]},
          {gen_server2,call,3,[{file,"src/gen_server2.erl"},{line,336}]},
          {delegate,safe_invoke,2,[{file,"src/delegate.erl"},{line,203}]},
          {delegate,'-safe_invoke/2-lc$^0/1-0-',2,
              [{file,"src/delegate.erl"},{line,200}]},
          {delegate,'-safe_invoke/2-lc$^0/1-0-',2,
              [{file,"src/delegate.erl"},{line,200}]},
          {delegate,invoke,2,[{file,"src/delegate.erl"},{line,126}]},
          {rabbit_amqqueue,notify_down_all,2,
              [{file,"src/rabbit_amqqueue.erl"},{line,633}]},
          {rabbit_channel,notify_queues,1,
              [{file,"src/rabbit_channel.erl"},{line,1630}]}]},
     {initial_call,{proc_lib,init_p,5}},
     {dictionary,
         [{{xtype_to_module,topic},rabbit_exchange_type_topic},
          {'$ancestors',
              [<5436.1883.0>,<5436.1882.0>,<5436.1879.0>,<5436.1878.0>,
               rabbit_tcp_client_sup,rabbit_sup,<5436.625.0>]},
          {{xtype_to_module,fanout},rabbit_exchange_type_fanout},
          {msg_size_for_gc,18060},
          {process_name,
              {rabbit_channel,
                  {<<"192.168.200.9:59289 -> 192.168.200.7:5672">>,1}}},
          {'$initial_call',{gen,init_it,6}}]},
     {message_queue_len,185},
     {links,[]},
     {monitors,
         [{process,<5436.1899.0>},
          {process,<5436.1914.0>},
          {process,<5436.1914.0>},
          {process,<5436.1889.0>}]},
     {monitored_by,[<5436.857.0>,<5436.1883.0>,<5436.1889.0>]},
     {heap_size,46422}]
    [{pid,<5436.1914.0>},
     {registered_name,[]},
     {current_stacktrace,
         [{rabbit_mirror_queue_master,'-stop_all_slaves/2-lc$^1/1-1-',1,
              [{file,"src/rabbit_mirror_queue_master.erl"},{line,211}]},
          {rabbit_mirror_queue_master,stop_all_slaves,2,
              [{file,"src/rabbit_mirror_queue_master.erl"},{line,211}]},
          {rabbit_mirror_queue_master,delete_and_terminate,2,
              [{file,"src/rabbit_mirror_queue_master.erl"},{line,199}]},
          {rabbit_amqqueue_process,'-terminate_delete/3-fun-1-',6,
              [{file,"src/rabbit_amqqueue_process.erl"},{line,251}]},
          {rabbit_amqqueue_process,terminate_shutdown,2,
              [{file,"src/rabbit_amqqueue_process.erl"},{line,276}]},
          {gen_server2,terminate,3,[{file,"src/gen_server2.erl"},{line,1131}]},
          {gen_server2,handle_msg,2,[{file,"src/gen_server2.erl"},{line,1026}]},
          {proc_lib,wake_up,3,[{file,"proc_lib.erl"},{line,249}]}]},
     {initial_call,{proc_lib,init_p,5}},
     {dictionary,
         [{{xtype_to_module,direct},rabbit_exchange_type_direct},
          {'$ancestors',
              [<5436.1913.0>,rabbit_amqqueue_sup_sup,rabbit_sup,<5436.625.0>]},
          {process_name,
              {rabbit_amqqueue_process,
                  {resource,<<"/">>,queue,
                      <<"conductor_fanout_5aa9bb2eb11744e2a0b88559c6146155">>}}},
          {'$initial_call',{gen,init_it,6}},
          {guid,{{3921839934,3021920886,3257830969,1505589835},0}}]},
     {message_queue_len,2},
     {links,[<5436.1913.0>]},
     {monitors,[{process,<6438.1360.0>}]},
     {monitored_by,[<5436.1886.0>,<5436.754.0>,<5436.1886.0>,<5436.636.0>]},
     {heap_size,2586}]
    
    

    This state does not change with time. It's the same immediately after triggering the issue as it is many hours later, so I'm convinced it's hung indefinitely.
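
    For reference, the per-process details shown above can be collected on the affected node with erlang:process_info/2 via rabbitmqctl eval (a hedged sketch, not from the original report; the pid is the stuck one identified further down):

    # inspect a suspect process from the node that hosts it
    rabbitmqctl eval 'erlang:process_info(c:pid(0,1914,0), [current_stacktrace, message_queue_len, links, monitors, monitored_by]).'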

    The serious operational impact is that the "conductor" queue seems to be totally wedged in this state. These messages also sat in the queue for hours:

    # rabbitmqctl list_queues messages name | grep conductor
    0       conductor.mac5254005e6a60.example.org
    0       conductor.mac525400797585.example.org
    0       conductor.mac525400f5a30a.example.org
    0       conductor_fanout_0633312f0f4945c494e79a7f23de94ad
    0       conductor_fanout_3d22a7cd7a6641b19924ca80743cedb5
    0       conductor_fanout_5aa9bb2eb11744e2a0b88559c6146155
    185     conductor
    

    In the end, I killed pid <5436.1914.0>, which was stuck in stop_all_slaves at the end of the chain. Immediately, the messages drained out of the conductor queue and things seemed to go back to normal:

    # rabbitmqctl eval 'erlang:exit(c:pid(0,1914,0),kill).'
    true
    # rabbitmqctl list_queues messages name | grep conductor
    0       conductor
    0       conductor.mac5254005e6a60.example.org
    0       conductor.mac525400797585.example.org
    0       conductor.mac525400f5a30a.example.org
    0       conductor_fanout_0633312f0f4945c494e79a7f23de94ad
    0       conductor_fanout_3d22a7cd7a6641b19924ca80743cedb5
    0       conductor_fanout_5aa9bb2eb11744e2a0b88559c6146155
    
    

    I don't really understand why the slave is promoting itself to master while the master is still alive. I'm guessing that's the crux of the problem?

    bug effort-medium 
    opened by jeckersb 51
  • Switch to Lager for logging

    Tried to make Lager work with all default logging locations and settings. Log rotation now has a 2-second sleep to let Lager finish all writes to the old file and open a new one. Added support for configuring Lager from .config so it is not overwritten by the default rabbit config. Needs review.

    opened by hairyhum 43
  • Windows: RabbitMQ spawns wmic periodically and wmiprvse leaks resources

    RabbitMQ spawns wmic in an infinite loop, wmiprvse "eats" CPU, and memory is slowly leaking. While this is happening, CPU load is up to 50% (4-core i5) and memory very slowly leaks from the system. A screenshot of tracing and finding out who spawns the process is attached. When RabbitMQ is off, everything is OK.

    Installed version of rabbitmq - 3.6.11 with enabled management plugin and default configuration (OS Win 10 x64). No consumers/producers connected to mq.

    (screenshot: repeated wmic launches captured while tracing process creation)

    In the screenshot above you can see that processes are spawned all the time; executing the wmic request takes some time and CPU.

    bug effort-low pkg-windows 
    opened by lasfromhell 41
  • Track and expose the timestamp property of the first msg in a queue

    Summary

    The timestamp property is part of AMQP but until now has not been used within Rabbit, only passed through. This patch makes the timestamp of the head message visible in the management stats, thereby (typically) identifying the oldest message in the queue. Monitoring systems can then create alerts when service times exceed, or are about to exceed, prescribed limits.
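
    As a rough illustration of how a monitoring system might consume this (not part of the patch; the head_message_timestamp field name is an assumption based on this proposal, and the credentials and queue name are placeholders):

    # poll the management API and extract the head message timestamp for a queue
    curl -s -u guest:guest 'http://localhost:15672/api/queues/%2F/orders' \
      | grep -o '"head_message_timestamp":[^,}]*'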

    Requirement

    Service times are a common aspect of a service level agreement between organisations or parts of an organisation. As such, this is particularly relevant where Rabbit is being used for integration across zones of responsibility such as application support teams or external service providers.

    Alternatives

    Tracking this directly via the message timestamp has a number of advantages over alternatives such as monitoring queue depths:

    • Unambiguous: a large queue depth, a slow ingress rate or an increasing reject count may indicate a significant problem or may just indicate a bursty sending pattern, a temporary slowdown in the consumer or other transient problem. Monitoring such statistics can result in false alarms while still failing to detect a few aged and unprocessed messages.
    • Flexible: because the AMQP timestamp is provided explicitly, it can reflect the time a series of activities started rather than the single activity associated with a specific message.
    • Efficient: this monitoring does not involve message processing and so is much more efficient than alternatives such as peeking at the head message or (in some JMS systems) using a QueueBrowser to watch the front of the queue.

    History

    • 6/2009 Feature implemented in a JMS system using QueueBrowser at large financial services organisation
    • 12/2010 Suggested as a RabbitMQ feature in customer meeting with VMware/Pivotal
    • 4/2011 RabbitMQ replaces JMS system, service time alerts are missed by ops staff
    • 2/2014 Discussed possible implementation with Rabbit team member
    • 7/2014 v1 implementation, reviewed by Rabbit team member - performance concerns as it extended the message_properties record; also impacted backing_queue_behaviour
    • 10/2014 v2 much less intrusive version leaving the timestamp as a backing queue prop
    • 2/2015 RabbitMQ moves to GitHub and invites contribs! Temptation proves too great.

    References

    TODO

    • The current testing script (relying on the management plugin) needs replacing with native rabbit-server tests. However, this isn't particularly straightforward as there are currently no similar tests of queue stats (only channel stats) to extend/adapt, so I will attempt this once the feature is accepted in principle.
    effort-low enhancement 
    opened by alexethomas 39
  • Dead lettering more than two times results in a crashed queue

    We have a similar setup to #161 where we publish messages into (unique) timeout queues that drop the messages back into their original queues for retrying.
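
    For context, a timeout queue of that shape can be declared roughly like this (an illustrative sketch with made-up names, mirroring the arguments visible in the crash dump below):

    # a per-message timeout queue that dead-letters expired messages back via a DLX
    rabbitmqadmin declare queue name=toq-gen0-example durable=false \
      arguments='{"x-message-ttl": 9000, "x-expires": 10000, "x-dead-letter-exchange": "example.dlx", "x-dead-letter-routing-key": "example-queue"}'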

    Our setup started to lose messages after we updated to RabbitMQ 3.5.3. Looking at the logs we found this:

    =ERROR REPORT==== 23-Jun-2015::11:47:19 ===
    ** Generic server <0.678.0> terminating
    ** Last message in was {drop_expired,1}
    ** When Server state == 
    [...]
    ** Reason for termination ==
    ** {{case_clause,{value,{<<"count">>,signedint,1}}},
        [{rabbit_dead_letter,x_death_event_key,3,[]},
         {rabbit_dead_letter,ensure_xdeath_event_count,2,[]},
         {rabbit_dead_letter,ensure_xdeath_event_count,2,[]},
         {rabbit_dead_letter,'-group_by_queue_and_reason/1-fun-0-',3,[]},
         {lists,foldl,3,[{file,"lists.erl"},{line,1197}]},
         {rabbit_dead_letter,group_by_queue_and_reason,1,[]},
         {rabbit_dead_letter,update_x_death_header,2,[]},
         {rabbit_dead_letter,'-make_msg/5-fun-2-',8,[]}]}
    
    =ERROR REPORT==== 23-Jun-2015::11:47:19 ===
    Restarting crashed queue 'toq-gen2-REDACTED_QUEUE_NAME-96aba311-624f-4dc0-802b-70138917816c' in vhost '/'.
    

    We were able to reproduce this every time a message was about to drop out of the timeout queue (toq) for the third time (hence the "gen2" part of the queue name in the logs).

    Full log (names of queues and exchanges redacted):

    =ERROR REPORT==== 23-Jun-2015::11:47:19 ===
    ** Generic server <0.678.0> terminating
    ** Last message in was {drop_expired,1}
    ** When Server state == {q,
                             {amqqueue,
                              {resource,<<"/">>,queue,
                               <<"toq-gen2-REDACTED_QUEUE_NAME-96aba311-624f-4dc0-802b-70138917816c">>},
                              false,false,none,
                              [{<<"x-message-ttl">>,signedint,9000},
                               {<<"x-dead-letter-exchange">>,longstr,
                                <<"REDACTED_EXCHANGE.dlx">>},
                               {<<"x-expires">>,signedint,10000},
                               {<<"x-dead-letter-routing-key">>,longstr,
                                <<"REDACTED_QUEUE_NAME">>}],
                              <0.678.0>,[],[],[],undefined,[],[],live},
                             none,false,rabbit_priority_queue,
                             {passthrough,rabbit_variable_queue,
                              {vqstate,
                               {0,{[],[]}},
                               {0,{[],[]}},
                               {delta,undefined,0,undefined},
                               {0,{[],[]}},
                               {1,
                                {[],
                                 [{msg_status,0,
                                   <<156,89,98,190,170,144,189,166,217,107,135,227,
                                     40,191,240,59>>,
                                   {basic_message,
                                    {resource,<<"/">>,exchange,<<>>},
                                    [<<"toq-gen2-REDACTED_QUEUE_NAME-96aba311-624f-4dc0-802b-70138917816c">>],
                                    {content,60,
                                     {'P_basic',undefined,undefined,
                                      [{<<"x-message-id">>,longstr,
                                        <<"96aba311-624f-4dc0-802b-70138917816c">>},
                                       {<<"x-puka-delivery-tag">>,signedint,4},
                                       {<<"x-death">>,array,
                                        [{table,
                                          [{<<"count">>,signedint,1},
                                           {<<"exchange">>,longstr,<<>>},
                                           {<<"time">>,timestamp,1435052830},
                                           {<<"queue">>,longstr,
                                            <<"toq-gen1-REDACTED_QUEUE_NAME-96aba311-624f-4dc0-802b-70138917816c">>},
                                           {<<"reason">>,longstr,<<"expired">>},
                                           {<<"routing-keys">>,array,
                                            [{longstr,
                                              <<"toq-gen1-REDACTED_QUEUE_NAME-96aba311-624f-4dc0-802b-70138917816c">>}]}]},
                                         {table,
                                          [{<<"count">>,signedint,1},
                                           {<<"exchange">>,longstr,<<>>},
                                           {<<"routing-keys">>,array,
                                            [{longstr,
                                              <<"toq-gen0-REDACTED_QUEUE_NAME-96aba311-624f-4dc0-802b-70138917816c">>}]},
                                           {<<"queue">>,longstr,
                                            <<"toq-gen0-REDACTED_QUEUE_NAME-96aba311-624f-4dc0-802b-70138917816c">>},
                                           {<<"reason">>,longstr,<<"expired">>},
                                           {<<"time">>,timestamp,1435052826}]}]},
                                       {<<"x-origin-queue">>,longstr,
                                        <<"REDACTED_QUEUE_NAME">>}],
                                      undefined,undefined,undefined,undefined,
                                      undefined,undefined,undefined,undefined,
                                      undefined,undefined,undefined},
                                     <<32,0,0,0,3,242,12,120,45,109,101,115,115,97,
                                       103,101,45,105,100,83,0,0,0,36,57,54,97,98,
                                       97,51,49,49,45,54,50,52,102,45,52,100,99,48,
                                       45,56,48,50,98,45,55,48,49,51,56,57,49,55,
                                       56,49,54,99,19,120,45,112,117,107,97,45,100,
                                       101,108,105,118,101,114,121,45,116,97,103,
                                       73,0,0,0,4,7,120,45,100,101,97,116,104,65,0,
                                       0,3,26,70,0,0,1,136,5,99,111,117,110,116,73,
                                       0,0,0,1,8,101,120,99,104,97,110,103,101,83,
                                       0,0,0,0,4,116,105,109,101,84,0,0,0,0,85,137,
                                       43,30,5,113,117,101,117,101,83,0,0,0,150,
                                       116,111,113,45,103,101,110,49,45,95,116,101,
                                       115,116,95,112,97,121,109,101,110,116,95,
                                       114,101,102,117,110,100,95,111,110,95,102,
                                       105,110,97,108,95,100,101,97,116,104,46,112,
                                       121,58,58,116,101,115,116,95,112,117,98,108,
                                       105,115,104,95,111,110,95,102,105,110,97,
                                       108,95,100,101,97,116,104,95,48,48,52,57,48,
                                       101,54,53,45,49,57,98,49,45,52,55,52,53,45,
                                       98,56,49,100,45,99,53,57,100,49,52,52,56,53,
                                       57,54,101,45,57,54,97,98,97,51,49,49,45,54,
                                       50,52,102,45,52,100,99,48,45,56,48,50,98,45,
                                       55,48,49,51,56,57,49,55,56,49,54,99,6,114,
                                       101,97,115,111,110,83,0,0,0,7,101,120,112,
                                       105,114,101,100,12,114,111,117,116,105,110,
                                       103,45,107,101,121,115,65,0,0,0,155,83,0,0,
                                       0,150,116,111,113,45,103,101,110,49,45,95,
                                       116,101,115,116,95,112,97,121,109,101,110,
                                       116,95,114,101,102,117,110,100,95,111,110,
                                       95,102,105,110,97,108,95,100,101,97,116,104,
                                       46,112,121,58,58,116,101,115,116,95,112,117,
                                       98,108,105,115,104,95,111,110,95,102,105,
                                       110,97,108,95,100,101,97,116,104,95,48,48,
                                       52,57,48,101,54,53,45,49,57,98,49,45,52,55,
                                       52,53,45,98,56,49,100,45,99,53,57,100,49,52,
                                       52,56,53,57,54,101,45,57,54,97,98,97,51,49,
                                       49,45,54,50,52,102,45,52,100,99,48,45,56,48,
                                       50,98,45,55,48,49,51,56,57,49,55,56,49,54,
                                       99,70,0,0,1,136,5,99,111,117,110,116,73,0,0,
                                       0,1,8,101,120,99,104,97,110,103,101,83,0,0,
                                       0,0,12,114,111,117,116,105,110,103,45,107,
                                       101,121,115,65,0,0,0,155,83,0,0,0,150,116,
                                       111,113,45,103,101,110,48,45,95,116,101,115,
                                       116,95,112,97,121,109,101,110,116,95,114,
                                       101,102,117,110,100,95,111,110,95,102,105,
                                       110,97,108,95,100,101,97,116,104,46,112,121,
                                       58,58,116,101,115,116,95,112,117,98,108,105,
                                       115,104,95,111,110,95,102,105,110,97,108,95,
                                       100,101,97,116,104,95,48,48,52,57,48,101,54,
                                       53,45,49,57,98,49,45,52,55,52,53,45,98,56,
                                       49,100,45,99,53,57,100,49,52,52,56,53,57,54,
                                       101,45,57,54,97,98,97,51,49,49,45,54,50,52,
                                       102,45,52,100,99,48,45,56,48,50,98,45,55,48,
                                       49,51,56,57,49,55,56,49,54,99,5,113,117,101,
                                       117,101,83,0,0,0,150,116,111,113,45,103,101,
                                       110,48,45,95,116,101,115,116,95,112,97,121,
                                       109,101,110,116,95,114,101,102,117,110,100,
                                       95,111,110,95,102,105,110,97,108,95,100,101,
                                       97,116,104,46,112,121,58,58,116,101,115,116,
                                       95,112,117,98,108,105,115,104,95,111,110,95,
                                       102,105,110,97,108,95,100,101,97,116,104,95,
                                       48,48,52,57,48,101,54,53,45,49,57,98,49,45,
                                       52,55,52,53,45,98,56,49,100,45,99,53,57,100,
                                       49,52,52,56,53,57,54,101,45,57,54,97,98,97,
                                       51,49,49,45,54,50,52,102,45,52,100,99,48,45,
                                       56,48,50,98,45,55,48,49,51,56,57,49,55,56,
                                       49,54,99,6,114,101,97,115,111,110,83,0,0,0,
                                       7,101,120,112,105,114,101,100,4,116,105,109,
                                       101,84,0,0,0,0,85,137,43,26,14,120,45,111,
                                       114,105,103,105,110,45,113,117,101,117,101,
                                       83,0,0,0,104,95,116,101,115,116,95,112,97,
                                       121,109,101,110,116,95,114,101,102,117,110,
                                       100,95,111,110,95,102,105,110,97,108,95,100,
                                       101,97,116,104,46,112,121,58,58,116,101,115,
                                       116,95,112,117,98,108,105,115,104,95,111,
                                       110,95,102,105,110,97,108,95,100,101,97,116,
                                       104,95,48,48,52,57,48,101,54,53,45,49,57,98,
                                       49,45,52,55,52,53,45,98,56,49,100,45,99,53,
                                       57,100,49,52,52,56,53,57,54,101>>,
                                     rabbit_framing_amqp_0_9_1,
                                     [<<"REDACTED_MESSAGE_BODY">>]},
                                    <<156,89,98,190,170,144,189,166,217,107,135,
                                      227,40,191,240,59>>,
                                    false},
                                   false,false,false,false,queue_index,
                                   {message_properties,1435052839418192,false,
                                    55}}]}},
                               1,
                               {0,nil},
                               {0,nil},
                               {0,nil},
                               {qistate,
                                "/var/lib/rabbitmq/mnesia/REDACTED_NODE_NAME/queues/BB5UQYHDZNVEPMDAZY48D2OBL",
                                {{dict,0,16,16,8,80,48,
                                  {[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],
                                   []},
                                  {{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],
                                    []}}},
                                 []},
                                undefined,0,65536,
                                #Fun<rabbit_variable_queue.2.117761292>,
                                #Fun<rabbit_variable_queue.3.48316793>,
                                {0,nil},
                                {0,nil}},
                               {undefined,
                                {client_msstate,msg_store_transient,
                                 <<185,208,160,140,133,214,105,177,135,249,236,77,
                                   148,196,41,229>>,
                                 {dict,0,16,16,8,80,48,
                                  {[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],
                                   []},
                                  {{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],
                                    []}}},
                                 {state,286792,
                                  "/var/lib/rabbitmq/mnesia/REDACTED_NODE_NAME/msg_store_transient"},
                                 rabbit_msg_store_ets_index,
                                 "/var/lib/rabbitmq/mnesia/REDACTED_NODE_NAME/msg_store_transient",
                                 <0.393.0>,290889,282695,294986,299083}},
                               false,0,1,55,0,0,0,infinity,1,0,0,55,0,0,
                               {rates,0.1294337302079736,0.0,0.0,0.0,
                                {1435,52831,419020}},
                               {0,nil},
                               {0,nil},
                               {0,nil},
                               {0,nil},
                               0,0,0,0}},
                             {state,
                              {queue,[],[],0},
                              {inactive,1435052830418220,9010,1.0}},
                             10000,undefined,undefined,
                             {erlang,#Ref<0.0.0.3748>},
                             {state,fine,5000,undefined},
                             {0,nil},
                             9000,
                             {erlang,#Ref<0.0.0.3756>},
                             1435052839418192,
                             {state,
                              {dict,1,16,16,8,80,48,
                               {[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},
                               {{[],
                                 [[<0.639.0>|#Ref<0.0.0.3755>]],
                                 [],[],[],[],[],[],[],[],[],[],[],[],[],[]}}},
                              delegate},
                             {resource,<<"/">>,exchange,<<"REDACTED_EXCHANGE.dlx">>},
                             <<"REDACTED_QUEUE_NAME">>,
                             undefined,undefined,1,running}
    ** Reason for termination ==
    ** {{case_clause,{value,{<<"count">>,signedint,1}}},
        [{rabbit_dead_letter,x_death_event_key,3,[]},
         {rabbit_dead_letter,ensure_xdeath_event_count,2,[]},
         {rabbit_dead_letter,ensure_xdeath_event_count,2,[]},
         {rabbit_dead_letter,'-group_by_queue_and_reason/1-fun-0-',3,[]},
         {lists,foldl,3,[{file,"lists.erl"},{line,1197}]},
         {rabbit_dead_letter,group_by_queue_and_reason,1,[]},
         {rabbit_dead_letter,update_x_death_header,2,[]},
         {rabbit_dead_letter,'-make_msg/5-fun-2-',8,[]}]}
    
    =ERROR REPORT==== 23-Jun-2015::11:47:19 ===
    Restarting crashed queue 'toq-gen2-REDACTED_QUEUE_NAME-96aba311-624f-4dc0-802b-70138917816c' in vhost '/'.
    
    bug effort-low 
    opened by riyad 39
  • Channels sometimes enter permanent flow control state

    We've heard from a user that, after upgrading to 3.5.0, channels enter the flow control state and stay there. What you see on the list is in part a red herring.

    A later report off-list suggests that

    • Channels are credit-blocked on pids on remote nodes (as suggested by the 1st pid segment)
    • Eliminating mirroring via a policy makes the issue go away

    So this may be related to changes in bug26527.
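
    One rough way to spot the symptom from the CLI (not from the original report; connection state is only an approximation of per-channel flow state):

    # list connection states and look for ones reporting flow
    rabbitmqctl list_connections name state | grep flow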

    bug 
    opened by michaelklishin 39
  • systemd notification: missing shell escape can cause startup failures

    Tried to debug a failing rabbitmq-server start in docker (see https://github.com/rabbitmq/chef-cookbook/issues/435).

    The shellout shenanigans in https://github.com/rabbitmq/rabbitmq-server/blob/e07ca0eacc0f2db77685a48253b3457a22c0e269/src/rabbit.erl#L433 are not using proper shell escaping:

    What it does:

    [[email protected] /]# systemctl show --property=ActiveState -.slice
    systemctl: invalid option -- '.'
    
    

    What it should do:

    systemctl show --property=ActiveState \\-.slice
    ActiveState=inactive
    

    Result:

    
    [[email protected] /]# systemctl status rabbitmq-server.service
    ● rabbitmq-server.service - RabbitMQ broker
       Loaded: loaded (/usr/lib/systemd/system/rabbitmq-server.service; enabled; vendor preset: disabled)
       Active: activating (start) since Sat 2017-04-22 14:49:26 UTC; 14min ago
     Main PID: 3169 (beam.smp)
       CGroup: /docker/01b3d8c9ee668b5396d2d374bb279181567a5f73840432192d5bd9bb62b14eea/system.slice/rabbitmq-server.service
               ├─3169 /usr/lib64/erlang/erts-5.10.4/bin/beam.smp -W w -A 64 -P 1048576 -t 5000000 -stbt db -zdbbl 32000 -K true -- -root /usr/li...
               ├─3393 inet_gethost 4
               └─3394 inet_gethost 4
               ‣ 3169 /usr/lib64/erlang/erts-5.10.4/bin/beam.smp -W w -A 64 -P 1048576 -t 5000000 -stbt db -zdbbl 32000 -K true -- -root /usr/li...
    
    Apr 22 14:49:28 01b3d8c9ee66 rabbitmq-server[3169]: ##  ##      Licensed under the MPL.  See http://www.rabbitmq.com/
    Apr 22 14:49:28 01b3d8c9ee66 rabbitmq-server[3169]: ##  ##
    Apr 22 14:49:28 01b3d8c9ee66 rabbitmq-server[3169]: ##########  Logs: /var/log/rabbitmq/[email protected]
    Apr 22 14:49:28 01b3d8c9ee66 rabbitmq-server[3169]: ######  ##        /var/log/rabbitmq/[email protected]
    Apr 22 14:49:28 01b3d8c9ee66 rabbitmq-server[3169]: ##########
    Apr 22 14:49:28 01b3d8c9ee66 rabbitmq-server[3169]: Starting broker...
    Apr 22 14:49:30 01b3d8c9ee66 rabbitmq-server[3169]: systemd unit for activation check: "-.slice"
    Apr 22 14:49:30 01b3d8c9ee66 rabbitmq-server[3169]: Unexpected status from systemd "systemctl: invalid option -- '.'\n"
    Apr 22 14:49:30 01b3d8c9ee66 rabbitmq-server[3169]: systemd READY notification failed, beware of timeouts
    Apr 22 14:49:30 01b3d8c9ee66 rabbitmq-server[3169]: completed with 0 plugins.
    
    

    => never notifies systemd, "start" hangs forever.

    "-.slice" is probably a CentOS7 thing.

    [[email protected] /]# rpm -qf /usr/lib/systemd/system/-.slice 
    systemd-219-30.el7_3.8.x86_64
    
    pkg-deb pkg-rpm usability 
    opened by rmoriz 35
  • rabbitmq-server won't be able to read a RABBITMQ_ENABLED_PLUGINS_FILE it created if umask is strict

    Some environments require a strict umask, for example PCI DSS compliant hosts, or simply when somebody is concerned with security and uses strict defaults.

    The rabbitmq-plugins command must be run as root. If root's umask is 0027, it creates /etc/rabbitmq/enabled_plugins with permissions root:root 0640. rabbitmq-server, running as the rabbitmq user, cannot read it.

    It should either always use a pre-defined umask of 0022, or make the rabbitmq user the owner or group owner of /etc/rabbitmq/enabled_plugins, so the server can read it.
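
    Until then, two possible workarounds (an illustrative sketch, assuming the server runs as the rabbitmq user/group as in the Debian/RPM packages; not an official recommendation):

    # run the command with a known-good umask for this invocation only
    ( umask 0022; rabbitmq-plugins enable rabbitmq_management )
    # or fix the file up afterwards so the rabbitmq user can read it
    chgrp rabbitmq /etc/rabbitmq/enabled_plugins
    chmod 0640 /etc/rabbitmq/enabled_plugins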

    Example of error:

    [email protected]:~# umask
    0027
    [email protected]:~# rabbitmq-plugins enable rabbitmq_management 
    The following plugins have been enabled:
      mochiweb
      webmachine
      rabbitmq_web_dispatch
      amqp_client
      rabbitmq_management_agent
      rabbitmq_management
    
    Applying plugin configuration to [email protected] failed.
    Error: {cannot_read_enabled_plugins_file,"/etc/rabbitmq/enabled_plugins",
               eacces}
    [email protected]:~# ls -l /etc/rabbitmq/enabled_plugins
    -rw-r----- 1 root root 23 Feb 27 18:12 /etc/rabbitmq/enabled_plugins
    
    effort-low usability 
    opened by selivan 34
  • Queues become unbound to default exchange, but in an inconsistent way

    Hi. I have a 3-node cluster, and today I encountered weird and buggy behaviour. I'm using the latest version (3.7.11), and it seems that I have reproduced what is described here. The end result displayed these weird symptoms:

    1. some of my queues did not appear bound to the default exchange (they didn't show up in api/bindings)
    2. even though they were not bound to the default exchange, publishing to the default exchange with the queue name as routing key was successful (the message got into the queue).
    3. publishing to an exchange that has no other bindings and has the default exchange as its alternate-exchange, with the queue name as routing key, resulted in an unroutable error (which, before the incident, resulted in a successful publish to the queue); see the sketch after this list
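
    A rough sketch of the setup in symptom 3 (illustrative names; "" denotes the default exchange):

    # an exchange with no bindings whose alternate-exchange is the default exchange
    rabbitmqadmin declare exchange name=my-ae-exchange type=fanout \
      arguments='{"alternate-exchange": ""}'
    # publishing to it with the queue name as routing key would normally reach the queue
    rabbitmqadmin publish exchange=my-ae-exchange routing_key=my-queue payload=hello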

    So there is an inconsistency in the behaviour. The solution was to delete the queue, re-declare it as non-durable, delete it, and re-declare it as durable again.

    It seems like a race condition in the cluster shutdown causes some inconsistencies in the durable queue registry (making the queue still publishable directly via the default exchange, but not via an alternate-exchange pointing to the default exchange).

    While moving to quorum queues in 3.8.0 may solve some of the race itself, I think it might be worth taking a look into how such an inconsistency is possible in the queue registry (possibly a bug in the implementation of the default exchange?).

    I hope this information is helpful, and thank you in advance!

    duplicate 
    opened by Avivsalem 31
  • Fix recovery when terms are accidentally empty

    This is a fix for an issue that occurs when shutting down a node (via SIGTERM) while the queues, and more specifically the queue index, are recovering. When that happens, rabbit_recovery_terms has already started, and when it starts it calls dets:open_file/2, which creates an empty recovery.dets file. After the node is down and restarted again, the node thinks the shutdown was clean because the recovery file is there, except it is empty and therefore the queues have lost all their state.

    This results in RabbitMQ thinking there are 0 messages in all classic queues.

    To avoid this issue, we consider a shutdown to be dirty in the case where we have a recovery file BUT we do not find our state in the recovery terms.

    To reliably reproduce the issue this fixes:

    • Start a node

    • Fill it with many messages (800k is more than enough)

    • Wait a little and then kill the node via Ctrl+C twice (to force dirty recovery next start)

    • Start the node again

    • While it says "Starting broker", after waiting about 5 seconds, send a SIGTERM (killall beam.smp) to shutdown the node "cleanly"

    • Start the node again

    • Management will show 0 messages in all classic queues
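
    The steps above, roughly as shell commands (an illustrative sketch; how the ~800k messages get published is left out, e.g. use a load-generation tool):

    rabbitmq-server -detached                 # start the node
    # ... publish ~800k messages to a classic queue ...
    kill -9 $(pgrep -x beam.smp)              # dirty stop (the report uses Ctrl+C twice)
    rabbitmq-server -detached                 # start again; queue index recovery begins
    sleep 5 && killall -TERM beam.smp         # SIGTERM while it still says "Starting broker"
    rabbitmq-server -detached                 # start once more
    rabbitmqctl list_queues name messages     # classic queues now report 0 messages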

    Types of Changes

    What types of changes does your code introduce to this project? Put an x in the boxes that apply

    • [x] Bug fix (non-breaking change which fixes issue #NNNN)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause an observable behavior change in existing systems)
    • [ ] Documentation improvements (corrections, new content, etc)
    • [ ] Cosmetic change (whitespace, formatting, etc)

    Checklist

    Put an x in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask on the mailing list. We're here to help! This is simply a reminder of what we are going to look for before merging your code.

    • [x] I have read the CONTRIBUTING.md document
    • [x] I have signed the CA (see https://cla.pivotal.io/sign/rabbitmq)
    • [ ] All tests pass locally with my changes
    • [ ] I have added tests that prove my fix is effective or that my feature works
    • [ ] I have added necessary documentation (if appropriate)
    • [ ] Any dependent changes have been merged and published in related repositories
    opened by lhoguin 0
  • Close stream socket if client doesn't follow authentication protocol

    Before this commit, sending garbage data to the server stream port caused the RabbitMQ node to eat more and more memory. In this commit, we fix it by expecting the client to go through the proper authentication sequence. Otherwise, the server closes the socket.
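
    A crude way to exercise the pre-fix behaviour (an illustrative sketch; 5552 is assumed to be the default stream plugin port):

    # push random bytes at the stream listener without ever authenticating
    head -c 10485760 /dev/urandom | nc localhost 5552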

    Pair: @mkuratczyk

    opened by ansd 0
  • Creating a queue with invalid name by moving messages

    It is possible to create a queue with a name longer than the maximum character limit by moving the messages.

    I tried to create a queue with the same name via the "Add a new queue" panel and it doesn't let me create it. However, it is possible while moving messages. Also, it is not possible to delete the queue using the UI or HTTP API because the request returns “414 Request-URI Too Large”.

    Reproducing steps:

    • Create a queue
    • Publish a random message
    • Copy the response of https://swapi.dev/api/people/ to use it as the name
    • Move the message to a destination queue with the name of the copied JSON.
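
    As noted above, once such a queue exists it cannot be deleted over HTTP because the name no longer fits in the request URI (a sketch; credentials and the $LONG_NAME variable are placeholders):

    # rejected with 414 Request-URI Too Large, per the report
    curl -i -u guest:guest -X DELETE "http://localhost:15672/api/queues/%2F/$LONG_NAME"
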
    opened by kaanyilgin 2
  • Global counters

    Introduces global counters using the new seshat library. Also adds some missing metrics to the stream plugin.

    Requires https://github.com/rabbitmq/osiris/pull/30 & https://github.com/rabbitmq/ra/pull/221

    Types of Changes

    • [ ] Bug fix (non-breaking change which fixes issue #NNNN)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause an observable behavior change in existing systems)
    • [ ] Documentation improvements (corrections, new content, etc)
    • [ ] Cosmetic change (whitespace, formatting, etc)

    Checklist

    • [x] I have read the CONTRIBUTING.md document
    • [x] I have signed the CA (see https://cla.pivotal.io/sign/rabbitmq)
    • [x] All tests pass locally with my changes
    • [ ] I have added tests that prove my fix is effective or that my feature works
    • [ ] I have added necessary documentation (if appropriate)
    • [ ] Any dependent changes have been merged and published in related repositories
    opened by dcorbacho 0
  • Replace classic queue index with a modern implementation

    This is a preview of the work I have been doing on the modern classic queue index. Tests now pass (most of the time; there's at least one flaky test). There has been no real benchmarking yet; however, the backing_queue_SUITE tests run faster than before, despite the tests now moving 4+ times as many messages.
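
    For anyone wanting to reproduce the suite comparison: erlang.mk-based projects usually expose per-suite common_test targets, so something like the following should work (an assumption about this repo's build setup):

    # run only the backing queue suite against this branch
    make ct-backing_queue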

    Types of Changes

    What types of changes does your code introduce to this project? Put an x in the boxes that apply

    • [x] Bug fix (non-breaking change which fixes issue #NNNN)
    • [ ] New feature (non-breaking change which adds functionality)
    • [x] Breaking change (fix or feature that would cause an observable behavior change in existing systems)
    • [ ] Documentation improvements (corrections, new content, etc)
    • [ ] Cosmetic change (whitespace, formatting, etc)

    Checklist

    Put an x in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask on the mailing list. We're here to help! This is simply a reminder of what we are going to look for before merging your code.

    • [x] I have read the CONTRIBUTING.md document
    • [x] I have signed the CA (see https://cla.pivotal.io/sign/rabbitmq)
    • [x] All tests pass locally with my changes
    • [ ] I have added tests that prove my fix is effective or that my feature works
    • [ ] I have added necessary documentation (if appropriate)
    • [ ] Any dependent changes have been merged and published in related repositories

    Further Comments

    To be benchmarked!

    opened by lhoguin 11
  • Improvement request: Faster Quorum Queue startup

    Hi,

    Currently when a quorum queue starts up it will go through all the snapshot files:

    https://github.com/rabbitmq/ra/blob/94ea25111b6f9f795e4f8b30fb3018b74426a4ec/src/ra_log.erl#L940

    This is very inefficient for large queues; it can take minutes to read through all of these one by one. It is not necessary for this to be done serially: it could be done through a worker pool, or queues could even start a few workers for themselves. Care would probably need to be taken so that long queues don't hold up short queues from starting if the worker pool solution is implemented.

    On my machine a single queue does around 15-20 MB/s, while it should be possible to do 200 MB/s (which I tried with more queues, and it works).

    This can especially be a problem after a "full" restart, during which long quorum queues show up in a "NaN" state until they recover.

    With some redesign, this segref operation could probably be made to take a few milliseconds instead of reading all snapshots, if the information could somehow be stored in the file header.

    opened by luos 9
  • WIP: Only register the first web-dispatch listener

    WIP because I have not done any testing yet. Big log at the bottom showing the issue.

    The problem: on certain dual IPv4 and IPv6 environments, web-dispatch will start a single listener (for management, for example) but will register two listeners in the Rabbit Mnesia table. Later, RabbitMQ will crash when trying to stop or suspend them (rabbitmq-upgrade drain) because only one of the two listeners exists in Ranch.

    The fix: we register only the first listener, because that's the one we used to create the Ranch ref (see rabbit_networking:ranch_ref/1).

    Types of Changes

    What types of changes does your code introduce to this project? Put an x in the boxes that apply

    • [x] Bug fix (non-breaking change which fixes issue #NNNN)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause an observable behavior change in existing systems)
    • [ ] Documentation improvements (corrections, new content, etc)
    • [ ] Cosmetic change (whitespace, formatting, etc)

    Checklist

    Put an x in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask on the mailing list. We're here to help! This is simply a reminder of what we are going to look for before merging your code.

    • [x] I have read the CONTRIBUTING.md document
    • [x] I have signed the CA (see https://cla.pivotal.io/sign/rabbitmq)
    • [ ] All tests pass locally with my changes
    • [ ] I have added tests that prove my fix is effective or that my feature works
    • [ ] I have added necessary documentation (if appropriate)
    • [ ] Any dependent changes have been merged and published in related repositories

    Further Comments

    Present in at least 3.8.11, tested on Windows.

    @dumbbell is all over the git blame for the original implementation, so a review would be great. Also, I am not sure this is the right fix; I am wondering if it worked before and if perhaps something else broke it. Or perhaps the drain command did not exist at the time.

    RabbitMQ and Ranch disagreeing on what listeners are available:

    PS C:\Windows\system32> rabbitmqctl.bat eval 'rabbit_networking:node_listeners(node()).'
    [{listener,[email protected],http,"WinDev2104Eval",
               {0,0,0,0,0,0,0,0},
               15672,
               [{cowboy_opts,[{sendfile,false}]},{port,15672}]},
     {listener,[email protected],http,"WinDev2104Eval",
               {0,0,0,0},
               15672,
               [{cowboy_opts,[{sendfile,false}]},{port,15672}]},
     {listener,[email protected],clustering,"WinDev2104Eval",
               {0,0,0,0,0,0,0,0},
               25672,[]},
     {listener,[email protected],amqp,"WinDev2104Eval",
               {0,0,0,0,0,0,0,0},
               5672,
               [{backlog,128},
                {nodelay,true},
                {linger,{true,0}},
                {exit_on_close,false}]},
     {listener,[email protected],amqp,"WinDev2104Eval",
               {0,0,0,0},
               5672,
               [{backlog,128},
                {nodelay,true},
                {linger,{true,0}},
                {exit_on_close,false}]}]
    PS C:\Windows\system32> rabbitmqctl.bat eval 'ets:tab2list(ranch_server).'
    [{{addr,{acceptor,{0,0,0,0},5672}},{{0,0,0,0},5672}},
     {{addr,{acceptor,{0,0,0,0,0,0,0,0},5672}},{{0,0,0,0,0,0,0,0},5672}},
     {{addr,{acceptor,{0,0,0,0,0,0,0,0},15672}},{{0,0,0,0},15672}},
     {{conns_sup,{acceptor,{0,0,0,0},5672}},<11164.749.0>},
     {{conns_sup,{acceptor,{0,0,0,0,0,0,0,0},5672}},<11164.734.0>},
     {{conns_sup,{acceptor,{0,0,0,0,0,0,0,0},15672}},<11164.621.0>},
    ...
    

    25672 is the distribution port, so it's expected that it's not in Ranch. 5672 has two listeners in both cases. 15672 has one in Ranch and two in RabbitMQ: this is the problem.

    opened by lhoguin 1
  • WIP Tracing: serialize timestamp value as a long

    I took the opportunity to simplify the message-to-headers conversion. The code may look silly to some, but it is now immediately obvious what's going on and we can special-case certain keys, whereas previously it was a moderately complex foldl/3 that said nothing about the intent.

    The message properties record hasn't changed in years so the benefit of iterating over its fields is negligible.

    References #2991.

    opened by michaelklishin 0
  • Timestamp is int32 on a received trace message

    Copy from https://groups.google.com/g/rabbitmq-users/c/YipXsgD7XpM

    We've set up vhost tracing to queue all published messages into a separate queue. When consuming messages from said queue in dotnet (probably irrelevant), the timestamp of the original message is delivered/read as an int32. Shouldn't this be a long/int64?

    It seems to me (I don't really know the language, so I may be wrong) that this line is responsible for the incorrect forwarding.

    => This issue prevents us from sending a timestamp with millisecond resolution instead of seconds, as the resulting value is larger than a 32-bit value.
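
    To illustrate why this matters (not from the original report): a millisecond-resolution Unix timestamp no longer fits in a signed 32-bit integer, so it has to travel as a 64-bit ("long") value.

    %% in an Erlang shell: the current time in ms exceeds the signed 32-bit maximum
    1> erlang:system_time(millisecond) > (1 bsl 31) - 1.
    true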

    opened by bollhals 10
Releases (v3.8.17)