# RabbitMQ performance chart

*Published 2012-10-16. Figures quoted from the RabbitMQ blog:*
http://www.rabbitmq.com/blog/2012/04/25/rabbitmq-performance-measurements-part-2/

## Some Simple Scenarios

auto-ack: **44824** msg/s

This first scenario is the simplest: just one producer and one consumer, so we have a baseline.

no-consume: **53710** msg/s

Of course we want to produce impressive figures, so we can go a bit faster than that: if we don't consume anything, then we can publish faster.

max publish: **149910** msg/s

This uses a couple of the cores on our server, but not all of them. So for the best headline-grabbing rate, we start a number of parallel producers, all publishing into nothing.

max consume: **64315** msg/s

Of course, consuming is rather important! So for the headline consuming rate, we publish to a large number of consumers in parallel.

To some extent this quest for large numbers is a bit silly; we're more interested in relative performance. So let's revert to one producer and one consumer.

mandatory: **22402** msg/s

Now let's try publishing with the mandatory flag set. We drop to about 40% of the non-mandatory rate. The reason for this is that the channel we're publishing to can no longer just asynchronously stream messages at queues; it has to synchronously check with the queues to make sure they're still there. (Yes, mandatory publishing could probably be made faster, but it's not very heavily used.)

immediate: **22035** msg/s

The immediate flag gives us almost exactly the same drop in performance. This isn't hugely surprising: it has to make the same synchronous check with the queue.

ack: **32005** msg/s

Scrapping the rarely-used mandatory and immediate flags, let's try turning on acknowledgements for delivered messages. We still see a performance drop compared to delivering without acknowledgements (the server has to do more bookkeeping, after all), but it's less noticeable.

ack-confirm: **26103** msg/s

Now we turn on publish confirms as well. Performance drops a little more, but we're still at over 60% of the speed with neither acks nor confirms.

a-c-persist: **4725** msg/s

Finally, we enable message persistence.
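The publish-side flags and the ack scenario above could be reproduced with the Python pika client, as a minimal sketch; pika itself, the queue name `perf-test`, a localhost broker, and the function names are all assumptions, since the post shows no code:

```python
def publish_mandatory(n=1000, queue="perf-test"):
    """Publish n messages with mandatory=True: the broker must be able to
    route every message to a queue, which forces the per-message synchronous
    check described above (the slower 'mandatory' scenario)."""
    import pika  # deferred import: only needed when the sketch is actually run
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()
    ch.queue_declare(queue=queue)
    for i in range(n):
        ch.basic_publish(exchange="", routing_key=queue,
                         body=str(i).encode(), mandatory=True)
    conn.close()


def consume_with_acks(queue="perf-test"):
    """Consume with manual acknowledgements (the 'ack' scenario).
    Passing auto_ack=True instead would give the faster 'auto-ack'
    baseline, at the cost of losing unprocessed messages on a crash."""
    import pika  # deferred import, as above
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()

    def on_message(channel, method, properties, body):
        # Explicitly acknowledge each delivery; this is the extra
        # bookkeeping the post blames for the rate drop.
        channel.basic_ack(delivery_tag=method.delivery_tag)

    ch.basic_consume(queue=queue, on_message_callback=on_message,
                     auto_ack=False)
    ch.start_consuming()
```

Running `consume_with_acks()` blocks in `start_consuming()`, so in practice the producer and consumer would live in separate processes, as in the benchmark's one-producer/one-consumer setup.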
The rate becomes much lower, since we're throwing all those messages at the disk as well.
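The final `a-c-persist` scenario (publish confirms plus persistence) could look like this with the Python pika client; again, the client choice, queue name, and localhost broker are assumptions for the sake of the sketch:

```python
def publish_confirmed_persistent(n=1000, queue="perf-test"):
    """Publish n persistent messages with publisher confirms enabled:
    the 'a-c-persist' scenario. delivery_mode=2 marks each message
    persistent, and the queue is declared durable so the messages
    actually survive a broker restart; both together push every
    message to disk, which is why this is the slowest scenario."""
    import pika  # deferred import: only needed when the sketch is actually run
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()
    ch.queue_declare(queue=queue, durable=True)
    ch.confirm_delivery()  # broker now confirms each publish to this channel
    props = pika.BasicProperties(delivery_mode=2)  # 2 = persistent
    for i in range(n):
        ch.basic_publish(exchange="", routing_key=queue,
                         body=str(i).encode(), properties=props)
    conn.close()
```

With pika's `BlockingConnection`, enabling `confirm_delivery()` makes each `basic_publish` wait for the broker's confirm, which mirrors the per-message cost the benchmark measures.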