How to send 4000+ requests in exactly 1 second?

无人及你 2021-01-17 08:29

I have an HTTP GET request. I need to send the request to the application server more than 4,000 times in exactly 1 second.

I'm sending

4 Answers
  • 2021-01-17 09:15

    Do you want to stick with JMeter? Otherwise, httperf is a decent tool and easy to use:

    httperf --server=www.example.com --rate=4000 --num-conns=4000
    

    for instance.

    Hope this helps a bit, although not entirely what you asked for.

  • 2021-01-17 09:17

    Consider adding a Synchronizing Timer as a child of your HTTP Request - this way you can be sure your requests are kicked off at exactly the same moment.

    Also, 4000 threads is quite a "heavy" load, so I would suggest following the recommendations from the 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure guide to get the most performance out of your JMeter instance(s).
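    As a rough sketch (the script path, plan.jmx and results.jtl are placeholders, and exact defaults vary by JMeter version), recent JMeter releases let you raise the heap through the HEAP environment variable when running in non-GUI mode, which is how a 4000-thread plan would normally be driven:

    # Assumption: plan.jmx holds the HTTP Request with its Synchronizing Timer
    # Recent JMeter versions honour the HEAP variable; older ones need bin/jmeter edited instead
    HEAP="-Xms2g -Xmx4g" ./bin/jmeter -n -t plan.jmx -l results.jtl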

  • 2021-01-17 09:19

    How about starting by checking whether the server is configured correctly to cope with such load? Requests can be of any type. If they are static requests, then work to ensure that the absolute minimum number of them hit your origin server, through caching policies or architecture, such as the following (a quick way to check the resulting headers is sketched after this list):

    • If you have returning users and no CDN, make sure your cache policy stores at the client and expires with your build schedule. This avoids repeat requests from returning visitors.
    • If you have no returning users and no CDN, make sure that your cache policy is set to at least 120% of the maximum page-to-page delay visible in your logs for a given user set.
    • If you have a CDN, make sure all static request headers, 301 & 404 headers are set to allow your CDN to cache your requests, expiring with your new build push schedule.
    • If you do not have a CDN, consider a model where you place all static resources on a dedicated server where everything on that server is marked for caching at the client at a high level. You can also front that one server with Varnish or Squid as a caching proxy to take the load.
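    As a quick illustration (www.example.com and the asset path are placeholders, and the header values below are only an example of a long client-side TTL), the effect of such a caching policy can be checked from the command line:

    # Hypothetical origin and asset; inspect which caching headers a static resource actually returns
    curl -sI https://www.example.com/static/app.css | grep -iE 'cache-control|expires|etag'
    # A client-side cache tied to the build/push schedule might answer with something like:
    #   Cache-Control: public, max-age=604800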

    Ultimately I would suspect a design issue at play with this high a consistent request level. 4000 requests per second becomes 14,400,000 requests per hour and 345,600,000 per 24-hour period.

    On a process basis, I would also suggest a minimum of three load generators: two for primary load and one for a control virtual user of a single virtual user|thread for your business process. In your current model, with everything on one load generator, you have no control element to determine the overhead imposed by a potentially overloaded load generator. The use of the control element will help you determine whether you have a load-generator-imposed skew in your driving of load. Essentially, resource exhaustion on the generator acts as a speed brake on the load you can drive.

    Go for a deliberate underload philosophy on your load generators. Adding another load generator is cheaper than the expense of political capital when someone attacks your test for lack of a control element and you need to re-run your test. It is also far less expensive than chasing an engineering ghost which appears as a slow system but which is really an overloaded load generator.

  •

    1. If one PC is not enough, use distributed testing in JMeter (a sketch of the remote-run command follows this list).
    2. Keep in mind that, while in theory you can send 4000 requests per second, they will spend some time on the way to the server, so there is a chance they will not all arrive within 1 second. To avoid this, use a high-bandwidth LAN (for example, you can host your server in the Azure cloud and install JMeter in the cloud too).
    3. If you have no success with JMeter, try Tank. This tool specializes in high load, and it should be possible to send even 10k requests in 1 second from 1 machine.
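    As a hedged sketch of point 1 (the host IPs, plan.jmx and results.jtl are placeholders, and each remote box must already be running jmeter-server):

    # Drive the load from two remote JMeter engines listed with -R; -n runs non-GUI
    ./bin/jmeter -n -t plan.jmx -R 10.0.0.11,10.0.0.12 -l results.jtl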