I'm getting close to deploying an application built on Rails 3.1.x and started running some performance tests. After fiddling with ab for a bit, I'm seeing some…
There is some pretty low-hanging fruit that almost always yields worthwhile performance gains: use include and join where appropriate, and make sure you're using empty? over any? where possible, to avoid SELECTs when you just need a COUNT.
You need to be careful not to spend too much time optimizing Ruby routines. Unless you're doing something with a huge amount of data or processing (e.g. image resizing), you probably won't see very significant gains from optimizing loops or minimizing memory usage. And if you find certain pages are problematic, dig into your logs and see what is happening during those requests.
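As a rough sketch of what that looks like in practice (the Order model, its owner association, and the current_user helper below are invented for illustration, not taken from the question):

    # Eager-load the association up front instead of issuing one query per row
    # (:include in classic finders, .includes on Rails 3 relations):
    orders = Order.includes(:owner).where(:status => "open")

    # empty? can answer "are there any?" with a SELECT COUNT(*) when the
    # association hasn't been loaded yet, instead of pulling every row back:
    if current_user.orders.empty?
      # render the "no orders yet" state
    end

    # ...whereas passing a block to any? forces all of the records to load:
    current_user.orders.any? { |o| o.paid? }

The difference shows up directly in your log: a single COUNT query versus a full SELECT of every column for every row.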
And if you're not already, autoscaling applications like HireFireApp are great for letting you handle loads of requests by scaling horizontally without the cost of running extraneous dynos during slow periods.
PS: There is a new Heroku Add-On called Blitz that lets you test a concurrent load of up to 5,000 users.
The most comprehensive single answer is to use something like New Relic to instrument your application and find the slow spots. Then you can apply optimizations or caching to your code to smooth out those slow spots. As a Heroku customer, you get a New Relic install for free - it's an add-on you can attach to your deployment from the Heroku console.
Once you have an understanding of what's slowing you down, you can start to approach it. Heroku handles most of the dev-ops end of performance tuning, so you don't need to do anything there. However, you'll still be able to make large gains by optimizing database queries and adding fragment- and action-level caching where appropriate.
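For example, a rough sketch of what that can look like on the controller side - the ProductsController and its queries are invented for illustration, not taken from the question:

    class ProductsController < ApplicationController
      # Action caching stores the whole rendered response but still runs
      # before_filters, so things like authentication keep working:
      caches_action :show, :expires_in => 10.minutes

      def index
        # Low-level caching for an expensive query the view reuses:
        @top_products = Rails.cache.fetch("products/top", :expires_in => 5.minutes) do
          Product.order("sales_count DESC").limit(10).all
        end
      end
    end

Fragment caching itself lives in the views - the cache helper wrapped around an expensive chunk of markup - while Rails.cache.fetch is handy for anything several views or actions reuse.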
I've spent some time tuning my app on Heroku, and have also done performance tuning of Rails apps in a variety of other settings.
When I run ab -n 300 -c 75 ...myapp.com.... (a backup of my main site, on the free Cedar plan with Unicorn), I get:
Requests per second: 132.11 [#/sec] (mean)
Time per request: 567.707 [ms] (mean)
Time per request: 7.569 [ms] (mean, across all concurrent requests)
(This is against a home page that doesn't do anything intense, so I'm providing it only as a "how fast could Heroku be on the free plan with a very simple page?" example, not a "your apps should be this fast" target.)
Here is my Rails Performance Tuning 101 checklist:
Measure the browser/page load time first (the browser makes lots of requests; ab is only telling you about one of them, and usually your main page request is not the issue). Get page-load baseline numbers from tools like www.webpagetest.org or www.gtmetrix.com for the public-facing pages, or from browser tools like YSlow, Google Page Speed, or Dynatrace for the private pages. If you look at the page load waterfall diagram (the 'Net' panel in Chrome/Firefox), it usually shows that your HTML loads quickly (under a second), but then everything else takes 1-3 seconds to load. Follow the YSlow/Page Speed recommendations on how to improve (and make sure you are using the Rails 3.1 asset pipeline to its full extent).
Read through your log files/New Relic to find the sweet spot of the 'slowest/most frequently hit' requests, and profile what happens for each one (is it slow Ruby / heavy memory usage, or lots of queries?). You need a reliable way to detect and monitor performance issues, rather than just changing things at random. Once you have identified some target areas, create test scripts to help with before/after testing, to prove that your change helps and to detect whether a regression creeps in.
Lack of indexes on db columns is one of the most common issues, and the easiest to address. Run EXPLAIN on the target queries, or look through your slow query log, to see what the query planner is doing. Add indexes for foreign keys, search columns, or primary data (covering indexes) as appropriate, and retest with actual production data to prove that it makes a difference. (You can run EXPLAIN on Heroku, as well as run queries to find missing or unused indexes; there's a migration sketch after this checklist.)
Most poorly performing Rails apps suffer from N+1 queries, because it's so easy to write order.owner.address.city and not think about what happens when that's inside a loop. N+1 queries aren't necessarily slow queries, so they don't show up in the slow query log; it's just that there are lots of them, and it's more efficient to fetch the data all at once. Use :include or .includes() to eager-load that data, or look at doing your query another way (there's a before/after sketch after this checklist).
Analyze the flow of your app and look for caching opportunities. If the user bounces back and forth between the index page and a details page, and back again, perhaps an AJAX view of the details, without leaving the index page, would give them the data they need in a faster way. I wrote some more thoughts about that on my blog.
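To make the index item above concrete, here's a hedged migration sketch - the orders table and the columns being indexed are illustrative, not from the original app:

    class AddIndexesToOrders < ActiveRecord::Migration
      def change
        add_index :orders, :user_id                 # foreign key used in joins and lookups
        add_index :orders, [:status, :created_at]   # composite index for a common filter + sort
      end
    end

Once it's deployed, re-run EXPLAIN on the query (for example from heroku pg:psql) to confirm the planner actually uses the new index.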
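And for the N+1 item, a before/after sketch using invented Order/owner/address associations:

    # N+1: one query for the orders, then another per order for its owner and address
    Order.limit(50).each do |order|
      puts order.owner.address.city
    end

    # Eager loaded: the same data in a handful of queries, regardless of row count
    Order.includes(:owner => :address).limit(50).each do |order|
      puts order.owner.address.city
    end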
I gave a presentation on these techniques and other ideas in Chicago at this year's WindyCityRails conference. You can see the video here on my www.RailsPerformance.com blog. What I love about Heroku is that you have to be scalable from the start. When you look at the discussions on the mailing list, you see that most people are aware of the performance best practices and how to get the most out of the platform. I also like how, if you want to stay cheap, you learn the performance tuning tricks that will get you the most bang for your buck.
Good luck!
As nothing has come up yet, I'll provide an answer for the PostgreSQL part. I can't assist with Ruby.
You can find excellent starting points for optimizing performance at the PostgreSQL wiki.