DB (SQL) automated stress/load tools?

粉色の甜心 2021-01-02 16:07

I want to measure the performance and scalability of my DB application. I am looking for a tool that would allow me to run many SQL statements against my DB, taking the DB to its limits.

4 Answers
  • 2021-01-02 16:30

    JMeter from Apache can handle different server types. I use it for load tests against web applications; others on the team use it for DB calls. It can be configured in many ways to generate the load you need. It can be run in console mode and even clustered across different client machines to minimize client overhead (which would otherwise skew the results).

    It's a Java application and a bit complex at first sight, but we still love it. :-)

  • 2021-01-02 16:32

    The SQL Load Generator is another such tool:

    http://sqlloadgenerator.codeplex.com/

    I like it, but it doesn't yet have an option to save the test setup.

  • 2021-01-02 16:36

    Did you check Bristlecone, an open-source tool from Continuent? I don't use it myself, but it works with Postgres and seems able to do what you're asking for. (Sorry, as a new user I cannot give you the direct link to the tool page, but Google will get you there ;o])

  • 2021-01-02 16:52

    We never really found an adequate solution for stress testing our mainframe DB2 database, so we ended up rolling our own. It just consists of a bank of 30 PCs running Linux with DB2 Connect installed.

    29 of the boxes run a script which simply waits for a starter file to appear on an NFS mount, then starts executing fixed queries against the data. Because these queries (and the data in the database) are fixed, we can easily compare each run against previous successful ones.

    The 30th box runs two scripts in succession (the second is the same as on all the other boxes). The first empties and then repopulates the database tables with our known data, then creates the starter file that allows all the other machines (and itself) to continue.

    This is all done with bash and DB2 Connect, so it is fairly easy to maintain (and free).
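    The original scripts are bash, but the coordination pattern can be sketched in Python, with threads standing in for the 30 PCs and a temp directory standing in for the NFS mount (paths, query bodies, and worker count here are hypothetical):

```python
import os
import tempfile
import threading
import time

SHARED_DIR = tempfile.mkdtemp()                 # stands in for the NFS mount
STARTER = os.path.join(SHARED_DIR, "start.flag")
NUM_WORKERS = 5                                 # the real setup used 30 boxes

def run_fixed_queries(worker_id):
    # Placeholder for the fixed DB2 queries; results are written somewhere
    # they can be diffed against a previous successful run.
    out = os.path.join(SHARED_DIR, f"worker{worker_id}.out")
    with open(out, "w") as f:
        f.write(f"worker {worker_id}: fixed query results\n")

def worker(worker_id):
    # Poll the shared mount until the coordinator drops the starter file.
    while not os.path.exists(STARTER):
        time.sleep(0.05)
    run_fixed_queries(worker_id)

def coordinator():
    # First script: reset the tables to known data (simulated here), then
    # create the starter file so every box (including this one) proceeds.
    time.sleep(0.2)  # pretend to empty and repopulate the tables
    open(STARTER, "w").close()
    run_fixed_queries(0)

threads = [threading.Thread(target=worker, args=(i,))
           for i in range(1, NUM_WORKERS)]
for t in threads:
    t.start()
coordinator()
for t in threads:
    t.join()
print(sorted(os.listdir(SHARED_DIR)))
```

    The starter file doubles as the synchronization barrier: no worker issues a query until the coordinator has finished resetting the data, which is what makes runs comparable.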

    We also have another variant to do random queries based on analysis of production information collected over many months. It's harder to check the output against a known successful baseline but, in that circumstance, we're only looking for functional and performance problems (so we check for errors and queries that take too long).
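    The check for "errors and queries that take too long" in that variant amounts to something like the following sketch (the threshold, query names, and costs are made up, and `time.sleep` stands in for actually executing a query):

```python
import time

SLOW_THRESHOLD = 0.05  # seconds; a real cutoff would come from your SLAs

def run_query(q):
    # Stand-in for executing a randomly generated query against the DB.
    time.sleep(q["cost"])
    if q.get("bad"):
        raise RuntimeError("SQL error")

def check(queries):
    errors, slow = [], []
    for q in queries:
        start = time.perf_counter()
        try:
            run_query(q)
        except RuntimeError as e:
            errors.append((q["name"], str(e)))
            continue
        elapsed = time.perf_counter() - start
        if elapsed > SLOW_THRESHOLD:
            slow.append((q["name"], elapsed))
    return errors, slow

queries = [
    {"name": "q1", "cost": 0.01},
    {"name": "q2", "cost": 0.08},              # too slow
    {"name": "q3", "cost": 0.0, "bad": True},  # fails
]
errors, slow = check(queries)
print("errors:", [n for n, _ in errors], "slow:", [n for n, _ in slow])
```

    With no fixed baseline to diff against, flagging failures and over-threshold durations like this is the only automated check available.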

    We're currently examining whether we can consolidate all those physical servers into virtual machines, both on the mainframe running zLinux (which will use the shared-memory HyperSockets for TCP/IP, essentially removing the network delays) and on Intel platforms with VMware, to free up some of that hardware.

    It's an option you should examine if you don't mind a little bit of work up front since it gives you a great deal of control down the track.
