In the past, I used the Microsoft Web Application Stress Tool and Pylot to stress test web applications. I'd written a simple home page script, a login script, and a site walkthrough (on an ecommerce site, adding a few items to a cart and checking out).
Just hitting the homepage hard with a handful of developers would almost always locate a major problem. More scalability problems would surface at the second stage, and even more after launch.
The tools I used were Microsoft Homer (aka the Microsoft Web Application Stress Tool) and Pylot.
The reports generated by these tools never made much sense to me, and I would spend many hours trying to figure out what kind of concurrent load the site would be able to support. It was always worth it because the stupidest bugs and bottlenecks would always come up (for instance, web server misconfigurations).
What have you done, what tools have you used, and what success have you had with your approach? The part that is most interesting to me is coming up with some kind of a meaningful formula for calculating the number of concurrent users an app can support from the numbers reported by the stress test application.
Here's another vote for JMeter.
JMeter is an open-source load testing tool, written in Java. It's capable of testing a number of different server types (for example web, web services, databases; just about anything that uses requests).
It does, however, have a steep learning curve once you start getting into complicated tests, but it's well worth it. You can get up and running very quickly, and depending on what sort of stress-testing you want to do, that might be fine.
Pros:
- Open-Source/Free tool from the Apache project (helps with buy-in)
- Easy to get started with, and easy to use once you grasp the core concepts (i.e. how to create a request, how to create an assertion, how to work with variables, etc.).
- Very scalable. I've run tests with 11 machines generating load on the server to the tune of almost a million hits/hour. It was much easier to setup than I was expecting.
- Has an active community and good resources to help you get up and running. Read the tutorials first and play with it for a while.
Cons:
- The UI is written in Swing. (ugh!)
- JMeter works by parsing the response text returned by the server. So if you're looking to validate any sort of javascript behaviours, you're out of luck.
- Learning curve is steep for non-programmers. If you're familiar with regular expressions, you're already ahead of the game.
- There are large numbers of (insert expletive) idiots in the support forum asking stupid questions that could be easily solved if they'd give the documentation even a cursory glance. ('How do I use JMeter to stress-test my Windows GUI' shows up quite frequently).
- Reporting 'out of the box' leaves much to be desired, particularly for larger tests. In the test I mentioned above, I ended up having to write a quick console app to do some of the 'xml-logfile' to 'html' conversions (a sketch of that kind of post-processing is below). That was a few years ago though, so it's probable that this would no longer be required.
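A small post-processing script still covers most of that gap. Below is a minimal sketch, assuming the XML flavour of JMeter's .jtl log (where each httpSample element carries the elapsed time in its t attribute, the label in lb, and success in s); the file name is just an example:

# Minimal sketch: summarize a JMeter XML .jtl results file per label.
# Assumes the XML log format where each <httpSample> element has
# t (elapsed ms), lb (label) and s ("true"/"false" success) attributes.
import xml.etree.ElementTree as ET
from collections import defaultdict

def summarize(jtl_path):
    times = defaultdict(list)    # label -> elapsed times in ms
    failures = defaultdict(int)  # label -> failed sample count
    for _, elem in ET.iterparse(jtl_path):
        if elem.tag in ("httpSample", "sample"):
            label = elem.get("lb", "unknown")
            times[label].append(int(elem.get("t", "0")))
            if elem.get("s") != "true":
                failures[label] += 1
            elem.clear()  # keep memory use flat on large logs
    for label, values in sorted(times.items()):
        values.sort()
        p90 = values[max(int(len(values) * 0.9) - 1, 0)]
        print("%s: %d samples, avg %.0f ms, 90th pct %d ms, %d failures"
              % (label, len(values), sum(values) / float(len(values)), p90, failures[label]))

if __name__ == "__main__":
    summarize("results.jtl")  # example path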
I've used The Grinder. It's open source, pretty easy to use, and very configurable. It is Java based and uses Jython for the scripts. We ran it against a .NET web application, so don't think it's a Java only tool (by their nature, any web stress tool should not be tied to the platform it uses).
We did some neat stuff with it... we were building a web-based telecom application, so one cool use I set up was to mimic dialing a number through our web application, then use an auto-answer tool we had (which was basically a tutorial app from Microsoft for connecting to their RTC LCS server... which is what Microsoft Office Communicator connects to on a local network... modified to just pick up calls automatically). This then allowed us to use it instead of an expensive telephony tool called The Hammer (or something like that).
Anyway, we also used the tool to see how our application held up under high load, and it was very effective in finding bottlenecks. The tool has built-in reporting to show how long requests are taking, but we never used it. The logs can also store all the responses, and you can add custom logging.
I highly recommend this tool, very useful for the price... but expect to do some custom setup with it (it has a built-in proxy to record a script, but it may need customization for capturing something like sessions... I know I had to customize it to use a unique session per thread).
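For reference, a Grinder 3 script is plain Jython: you wrap an HTTPRequest in a Test so timings are recorded, and define a TestRunner class that each worker thread instantiates. A minimal sketch (the URL and test name are placeholders) looks roughly like this:

# Minimal Grinder 3 script sketch (Jython).
# The Test wrapper makes The Grinder record timing statistics for the request.
from net.grinder.script import Test
from net.grinder.plugin.http import HTTPRequest

home_page_test = Test(1, "Home page")
request = home_page_test.wrap(HTTPRequest())

class TestRunner:
    # Each worker thread creates one TestRunner; __call__ is executed once per run.
    def __call__(self):
        request.GET("http://www.example.com/")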
A little late to this party. I agree that Pylot is the best up-and-coming open source tool out there. It's simple to use and is actively worked on by a great guy (Corey Goldberg). As the founder of OpenQA, I'm also happy that Pylot now is listed on our home page and uses some of our infrastructure (namely the forums).
However, I also recently decided that the entire concept of load testing was flawed: emulating HTTP traffic, with applications as complex as they have become, is a pain in the butt. That's why I created the commercial tool BrowserMob. It's an external load testing service that uses Selenium to control real web browsers when playing back load.
The approach obviously requires a ton more hardware than normal load testing techniques, but hardware is actually pretty cheap when you are using cloud computing. And a nice side effect of this is that the scripting is much easier than normal load testing. You don't have to do any advanced regex matching (like JMeter requires) to extract out cookies, .NET session state, Ajax request parameters, etc. Since you're using real browsers, they just do what they are supposed to do.
Sorry to blatantly pitch a commercial product, but hopefully the concept is interesting to some folks and at least gets them thinking about some new ways to deal with load testing when you have access to a bunch of extra hardware!
I've used JMeter. Besides testing the web server you can also test your database backend, messaging services and email servers.
For simple usage, I prefer ab (Apache Benchmark) and siege; the latter is needed because ab doesn't support cookies and would create endless sessions on a dynamic site.
Both are simple to start:
ab -c n -t 30 url
siege -b -c n -t 30s url
siege can also run against multiple URLs (e.g. from a file with siege -f urls.txt).
The last siege version turns verbose mode on in siegerc, which is annoying; you can only disable it by editing that file (/usr/local/etc/siegerc).
As this question is still open, I might as well weigh in.
The good news is that over the past 5 or so years the Open Source tools have really matured and taken off in the space, the bad news is there are so many of them out there.
Here are my thoughts:
Jmeter vs Grinder
Jmeter is driven by an XML-style specification that is constructed via a GUI.
Grinder uses Jython scripting within a multi-threaded Java framework, so it is more oriented to programmers.
Both tools will handle HTTP and HTTPS and have a proxy recorder to get you started. Both tools use the Controller model to drive multiple test agents so scalability is not an issue (given access to the Cloud).
Which is better?
A hard call, as the learning curve is steep with both tools once you get into the more complicated scripting requirements: URL rewriting, correlation, providing unique data per Virtual User, and simulating first-time or returning users (by manipulating the HTTP headers).
That said, I would start with Jmeter, as this tool has a huge following and there are many examples and tutorials on the web for using it. If and when you come to a 'road block', that is, something you can't 'easily' do with Jmeter, then have a look at the Grinder. The good news is that both these tools have the same Java requirement, and a 'mix and match' solution is not out of the question.
Something new to add – Headless browsers running multiple instances of Selenium WebDriver.
This is a relatively new approach because it relies on the availability of resources that can now be provisioned from the Cloud. With this approach, a Selenium (WebDriver) script is run inside a headless browser (i.e. WebDriver driver = new HtmlUnitDriver()) across multiple threads.
From experience around 25 instances of 'headless browsers' can be executed from the Amazon M1 Small Instance.
What this means is that all of the correlation, url rewriting issues disappear as you repurpose your functional testing scripts to become performance testing scripts.
Scalability is compromised, as more VMs will be needed to drive the load compared with an HTTP driver such as the Grinder or Jmeter. That said, if you are looking to drive 500 Virtual Users, then 20 Amazon Small Instances (6 cents an hour each), at a cost of just $1.20 per hour, give you load that is very close to the real user experience.
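To make that concrete, here is a rough sketch of the same idea in Python: a functional Selenium script repurposed as a load driver by running it in several headless browsers from a thread pool. It assumes headless Chrome with the Python Selenium bindings rather than the Java HtmlUnitDriver mentioned above, and the URL, user count and iteration count are placeholders:

# Rough sketch: run a functional Selenium script concurrently in headless
# browsers to generate load. Assumes the Python selenium package and a
# local chromedriver; all numbers and the URL are placeholders.
import time
from concurrent.futures import ThreadPoolExecutor
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

TARGET_URL = "http://example.com/"
VIRTUAL_USERS = 25     # roughly what one small cloud instance can drive
ITERATIONS = 10        # page loads per virtual user

def virtual_user(user_id):
    options = Options()
    options.add_argument("--headless")
    driver = webdriver.Chrome(options=options)
    timings = []
    try:
        for _ in range(ITERATIONS):
            start = time.time()
            driver.get(TARGET_URL)   # full page load, client-side scripts included
            timings.append(time.time() - start)
    finally:
        driver.quit()
    return user_id, timings

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        for uid, timings in pool.map(virtual_user, range(VIRTUAL_USERS)):
            print("user %d: avg page load %.2fs over %d loads"
                  % (uid, sum(timings) / len(timings), len(timings)))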
We recently started using Gatling for load testing. I would highly recommend trying out this tool for load testing. We had used SOASTA and JMeter in the past. Our main reasons for considering Gatling are the following:
- Recorder to record the scenario
- Uses Akka and Netty, which gives better performance compared to JMeter's threading model
- A Scala DSL, which is much more maintainable than JMeter's XML
- Easy to write tests; don't be scared that it's Scala.
- Reporting
Let me give you a simple example of the code using Gatling:
// your code starts here
val scn = scenario("Scenario")
.exec(http("Page")
.get("http://example.com"))
// injecting 100 users on the above scenario
setUp(scn.inject(atOnceUsers(100)))
However, you can make it as complicated as necessary. One of the features that stands out in Gatling is its reporting, which is very detailed.
Here are some links:
Gatling
Gatling Tutorial
I recently gave talk on it, you can go through the talk here:
https://docs.google.com/viewer?url=http%3A%2F%2Ffiles.meetup.com%2F3872152%2FExploring-Load-Testing-with-Gatling.pdf
This is an old question, but I think newer solutions are worthy of a mention. Check out LoadImpact: http://www.loadimpact.com.
I tried WebLoad; it's a pretty neat tool. It comes with a test-script IDE which allows you to record user actions on a website. It also draws a graph as it performs the stress test on your web server. Try it out, I highly recommend it.
Having tried everything mentioned here, I found curl-loader best for my purposes: a very easy interface, real-time monitoring, and useful statistics, from which I build performance graphs. All the features of libcurl are included.
BlazeMeter has a Chrome extension for recording sessions and exporting them to JMeter (it currently requires a login). You also have the option of paying them to run it on their cluster of JMeter servers (their pricing seems much better than LoadImpact, which I've just stopped using).
I don't have any association with them, I just like the look of their service, although I haven't used the paid version yet.
You asked this question almost a year ago and I don't know if you are still looking for another way of benchmarking your website. However, since this question is still not marked as solved, I would like to suggest the free web service LoadImpact (btw. not affiliated). Just got this link via Twitter and would like to share this find. They create a reasonably good overview, and for a few bucks more you get the "full impact mode". This probably sounds strange, but good luck pushing and breaking your service :)
I found IBM Page Detailer also an interesting tool to work with.
I've used openSTA.
This allows a session with a web site to be recorded and then played back via a relatively simple script language.
You can easily test web services and write your own scripts.
It allows you to put scripts together in a test in any way you want and configure the number of iterations, the number of users in each iteration, the ramp up time to introduce each new user and the delay between each iteration. Tests can also be scheduled in the future.
It's open source and free.
It produces a number of reports which can be saved to a spreadsheet. We then use a pivot table to easily analyse and graph the results.
We use the Microsoft tool mentioned - Microsoft Web Application Stress Tool. It is the easiest tool I have used. It is limited in many ways, including only being able to hit port 80 on manually created tests. But, its ease of use means it actually gets used.
We supplement the load from this tool with other tools including OpenSTA and link check spiders.
JMeter looks good from my initial evaluation; I hope to include it in our continuous integration going forward. But JMeter is complex and non-trivial to roll out.
I'd suggest opening another question regarding interpreting the MS stress tool results.
Visual Studio Test Edition 2010 (2008 is good too). This is a really easy and powerful tool for creating web/load tests.
The bonus with this tool when using against Windows servers is that you get integrated access to all the perfmon server stats in your report. Really useful.
The other bonus is that with Visual Studio project you can integrate a "Performance Session" that will profile the code execution of your website.
If you are serving web pages from a Windows server, this is the best tool out there.
However, a separate and expensive licence is required to use several machines to load test the application.
We have developed a process that treats load and performance measurement as a first-class concern; as you say, leaving it to the end of the project tends to lead to disappointment...
So, during development, we include very basic multi-user testing (using selenium), which checks for basic craziness like broken session management, obvious concurrency issues, and obvious resource contention problems. Non-trivial projects include this in the continuous integration process, so we get very regular feedback.
For projects that don't have extreme performance requirements, we include basic performance testing in our testing; usually, we script out the tests using BadBoy, and import them into JMeter, replacing the login details and other thread-specific things. We then ramp these up to the level that the server is dealing with 100 requests per second; if the response time is less than 1 second, that's usually sufficient. We launch and move on with our lives.
For projects with extreme performance requirements, we still use BadBoy and JMeter, but put a lot of energy into understanding the bottlenecks on the servers in our test rig (web and database servers, usually). There's a good tool for analyzing Microsoft performance logs (PAL) which helps a lot with this. We typically find unexpected bottlenecks, which we optimize if possible; that gives us an application that is as fast as it can be on "1 web server, 1 database server". We then usually deploy to our target infrastructure, and use one of the "JMeter in the cloud" services to re-run the tests at scale.
Again, the PAL reports help to analyze what happened during the tests; you often see very different bottlenecks on production environments.
The key is to make sure you don't just run your stress tests, but also that you collect the information you need to understand the performance of your application.
There are a lot of good tools mentioned here. I wonder whether tools are really an answer to the question: "How do you stress test a web application?" The tools don't really provide a method to stress a web app. Here's what I know:
Stress testing shows how a web app fails while serving responses to an increasing population of users. Stress testing shows how the web app functions while it fails. Most web apps today, especially the social/mobile web apps, are integrations of services. For example, when Facebook had its outage in May 2011 you could not log onto Pepsi.com's web app. The app didn't fail entirely; just a big portion of its normal function became unavailable to users.
Performance testing shows a Web app's ability to maintain response times independent of how many users are concurrently using the app. For example, an app that handles 10 transactions per second with 10 concurrent users should handle 20 transactions per second at 20 users. If the app handles less than 20 transactions per second the response times are growing longer and the app is not able to achieve linear scalability.
Also, in the above example, the transactions-per-second count should include only successful operations of a test use case/workflow. Failures typically happen in shorter timespans and will make the TPS measurement overly optimistic. Failures are important to a stress and performance test since they generate load on the app too.
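To connect this to the original question about a formula for concurrent users: a common rule of thumb is Little's Law, users = throughput x (response time + think time), applied to the successful-transaction rate measured above. A tiny sketch of the calculation (the numbers are made-up examples, not measurements):

# Little's Law sketch: concurrent users = throughput * (response time + think time).
# All inputs below are made-up examples, not measured values.
def supported_users(throughput_tps, response_time_s, think_time_s=0.0):
    # Estimate how many concurrent users a system sustains at this throughput.
    return throughput_tps * (response_time_s + think_time_s)

# 20 successful transactions/second, 1s average response time, 9s user think time
print(supported_users(20, 1.0, 9.0))  # -> 200.0 concurrent users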
I wrote up the PushToTest methodology in the TestMaker User Guide at http://www.pushtotest.com/pushtotest-testmaker-6-methodology. TestMaker comes in two flavors: Open Source (GPL) Community version and TestMaker Enterprise (commercial with great professional support.)
-Frank
Take a look at LoadBooster (https://www.loadbooster.com). It utilizes the headless scriptable browsers PhantomJS/CasperJS to test web sites. PhantomJS will parse and render every page and execute the client-side script. The headless-browser approach makes it easier to write test scenarios for complex, AJAX-heavy Web 2.0 apps: browser navigation, mouse clicks and keystrokes in the browser, or waiting until an element exists in the DOM. LoadBooster supports Selenium HTML scripts too.
Disclaimer: I work for LoadBooster.
Try ZebraTester which is much easier to use than jMeter. I have used jMeter for a long time but the total setup time for a load test was always an issue. Although ZebraTester isn't open source, the time that I have saved in the last six months makes up for it. They also have a SaaS portal which can be used for quickly running tests using their load generators.
One more note: for our web application, I found that we had huge performance issues due to contention between threads over locks... so the moral was to think over the locking scheme very carefully. We ended up having worker threads throttle the number of concurrent requests using an asynchronous HTTP handler; otherwise the application would just get overwhelmed and crash and burn. It meant a huge backlog could pile up, but at least the site would stay up.
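The throttling idea itself is not ASP.NET-specific. Here is a generic sketch of the pattern in Python (a bounded semaphore that queues excess requests instead of letting them swamp the workers); it is purely illustrative, not the original handler code:

# Generic sketch of request throttling: only MAX_IN_FLIGHT requests run at
# once, the rest queue up in a backlog instead of overwhelming the app.
# Purely illustrative; not the original ASP.NET async handler.
import threading
import time

MAX_IN_FLIGHT = 8                      # tune to what the back end can handle
gate = threading.BoundedSemaphore(MAX_IN_FLIGHT)

def handle_request(request_id):
    with gate:                         # blocks (queues) once MAX_IN_FLIGHT are active
        time.sleep(0.05)               # stand-in for the real work (DB calls, locks, ...)
        return "request %d done" % request_id

if __name__ == "__main__":
    threads = [threading.Thread(target=handle_request, args=(i,)) for i in range(100)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("all 100 requests completed without swamping the worker pool")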
Take a look at TestComplete.
I second the OpenSTA suggestion. I would just add that it allows you to monitor the server you're testing using SNMP. We keep track of processor load, memory used, bytes sent, etc. The only downside is that if you find something broken and want to do a fix, it relies on several open-source libraries that are no longer kept up, so getting a compiling version of the source is trickier than with most OSS.
I played with JMeter. One thing it could not test was ASP.NET WebForms; the viewstate broke my tests. I am not sure why, but there are a couple of tools out there that don't handle viewstate right. My current project is ASP.NET MVC and JMeter works well with it.
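The underlying issue is correlation: a WebForms page carries hidden __VIEWSTATE (and usually __EVENTVALIDATION) fields that the tool must read from each response and echo back on the next POST; a naive replay sends a stale value and the server rejects it. A rough sketch of what that correlation looks like, using Python's requests library purely for illustration (URL, form fields and credentials are examples):

# Illustration of viewstate correlation: extract the hidden __VIEWSTATE and
# __EVENTVALIDATION fields from the page and resubmit them with the POST.
# The URL, form field names and credentials are examples only.
import re
import requests

session = requests.Session()
page = session.get("http://example.com/Login.aspx")

def hidden_field(html, name):
    # WebForms renders hidden inputs like: <input ... id="__VIEWSTATE" value="..." />
    match = re.search(r'id="%s"[^>]*value="([^"]*)"' % name, html)
    return match.group(1) if match else ""

form = {
    "__VIEWSTATE": hidden_field(page.text, "__VIEWSTATE"),
    "__EVENTVALIDATION": hidden_field(page.text, "__EVENTVALIDATION"),
    "UserName": "testuser",   # example form fields
    "Password": "secret",
}
response = session.post("http://example.com/Login.aspx", data=form)
print(response.status_code)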
I had good results with FunkLoad (a minimal script sketch is below):
- easy to script user interaction
- reports are clear
- can monitor server load
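For context, a FunkLoad test is an ordinary unittest-style Python class that the bench runner replays under load; a minimal sketch (the config section/option names and URL follow the usual FunkLoad .conf layout and are placeholders here) looks roughly like this:

# Minimal FunkLoad test sketch: the bench runner replays this unittest-style
# class to generate load. The config section/option and URL are placeholders.
import unittest
from funkload.FunkLoadTestCase import FunkLoadTestCase

class HomePage(FunkLoadTestCase):
    def test_home(self):
        server_url = self.conf_get('main', 'url')     # read from the test's .conf file
        self.get(server_url, description='Get home page')

if __name__ == '__main__':
    unittest.main()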
At the risk of being accused of shameless self-promotion, I would like to point out that in my quest for a free load testing tool I went to this article: http://www.devcurry.com/2010/07/10-free-tools-to-loadstress-test-your.html
Either I couldn't get the throughput I wanted, or I couldn't get the flexibility I wanted. AND I wanted to easily aggregate the results of multiple load test generation hosts in post test analysis.
I tried out every tool on the list and to my frustration found that none of them quite did what I wanted to be able to do. So I built one and am sharing it.
Here it is: http://sourceforge.net/projects/loadmonger
PS: No snide comments on the name from folks who are familiar with urban slang. I wasn't but am slightly more worldly now.
I vote for jMeter too, and I want to add a few points to @PeterBernier's answer.
The main question that load testing answers is: how many concurrent users can my web application support? In order to get a proper answer, load testing should represent real application usage as closely as possible.
Keeping the above in mind, jMeter has many building blocks (Logical Controllers, Config Elements, Pre-Processors, Listeners, ...) which can help you with this.
You can mimic real-world situations with jMeter; for example, you can:
- Configure jMeter to act as a real browser, by configuring concurrent resource download, browser cache, HTTP headers, request timeouts, cookie management, HTTPS support, encoding, AJAX support, ...
- Configure jMeter to generate user requests, by defining the number of users per second, ramp-up time, scheduling, ...
- Configure lots of clients with jMeter on them, to do a distributed load test.
- Process responses to find out whether the server is responding correctly during the test (for example, assert that a response contains a given text).
Please consider:
- It is easy to start a real web application test with jMeter in minutes. jMeter has a very easy tool which records your test scenario (known as the HTTP(S) Test Script Recorder).
- jMeter has lots of plugins at http://jmeter-plugins.org.
- The jMeter UI is Swing-based and has seen good changes as of jMeter 3.2. On the other hand, please consider that the jMeter GUI should only be used for building and debugging tests; it is not good practice to use GUI mode for the actual test run (see https://www.blazemeter.com/blog/5-ways-launch-jmeter-test-without-using-jmeter-gui). Configure and test your scenario, then run it in non-GUI mode (e.g. jmeter -n -t plan.jmx -l results.jtl).
- There are lots of tools in jMeter for showing reports (known as listeners), but they are not meant to be enabled during the test. You must run your test and generate the result (.jtl) files, then use these tools to analyze the results. Please have a look at https://www.blazemeter.com/blog/jmeter-listeners-part-1-basic-display-formats or https://www.tutorialspoint.com/jmeter/jmeter_listeners.htm.
https://www.blazemeter.com/jmeter has very good and practical information to help you configure your test environment.
Source: https://stackoverflow.com/questions/7492/performing-a-stress-test-on-web-application