How to reliably reproduce curl_multi timeout while testing public proxies

Submitted by 末鹿安然 on 2019-12-01 15:33:41

I've gotten reproducible behavior and I'm waiting for badger on GitHub to reply. Running a tool like Ettercap may help you gather more information.

To me it looks like the problem is not with curl itself: if the connections are refused, you are probably making too many concurrent connections to the proxy servers. You might be blacklisted, either permanently or for some period.

Check that by running your curl tests from your current IP and collecting statistics: how many connections were established, how many were refused, how many timed out. Do this several times and take an average. Then switch to a server with a different IP and compare the statistics there. On the first run from a fresh IP you should see much better numbers, and repeated runs from that new IP will probably get worse again.

It is a good idea not to use the whole pool of proxies for these statistics. Select a slice of them, test it from your current IP, and repeat the check from a new IP. That way, if the real cause is that you are abusing the service, you don't get yourself blacklisted at all the proxies and still have a group of 'untouched' proxies to test from the new IP. Be aware that even if the proxies' IPs are in different locations, they can belong to the same service provider, which probably keeps one abuse list for all of its proxy servers. So if your request volume gets you flagged in one country, you can be blocked in another country as well, even before you connect to that country's proxy.
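As a rough sketch of collecting those statistics (the file name proxies.txt, the target URL, and the 10-second timeout are all assumptions; adjust them to your setup), you can classify each attempt by curl's exit code:

```shell
#!/bin/sh
# Count connection outcomes for a slice of proxies.
# proxies.txt is a hypothetical file with one host:port per line.
ok=0; refused=0; timeout=0; other=0
while read -r proxy; do
  curl --silent --output /dev/null --proxy "$proxy" \
       --connect-timeout 10 https://example.com/
  case $? in
    0)  ok=$((ok + 1)) ;;
    7)  refused=$((refused + 1)) ;;   # CURLE_COULDNT_CONNECT
    28) timeout=$((timeout + 1)) ;;   # CURLE_OPERATION_TIMEDOUT
    *)  other=$((other + 1)) ;;
  esac
done < proxies.txt
echo "ok=$ok refused=$refused timeout=$timeout other=$other"
```

Run it several times from one IP, average the counts, then repeat from a different IP and compare.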

If you still want to check whether the problem is in curl, you can set up a test environment with multiple servers. This test environment you can pass to the curl maintainer so he can replicate the error. You can use Docker to create 10, 20, or 100 proxy servers and connect to them to see whether curl has a problem or not.

you will need Docker; it can be installed on Win/Mac/Linux
pick a proxy image to create the proxies from
create a network (for example named tutorial) for the containers (bridge should be OK)
attach the containers to that network with --network
it is good to set each proxy container's address with --ip
let each proxy container read its config and write an error log (so you can see why it disconnected, if that happens) by mounting the error log/config files/directories with --volume
and all the proxy containers should be running
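Those steps could look roughly like this. Note the image name 3proxy/3proxy, the subnet, the config paths, and the count of 10 are all assumptions here; substitute your preferred proxy image and its documented config location:

```shell
# Create a user-defined bridge network with a known subnet so we can pin IPs.
docker network create --driver bridge --subnet 172.25.0.0/24 tutorial

# Start 10 proxy containers, each with its own IP, config, and log directory.
for i in $(seq 1 10); do
  docker run --detach \
    --name "proxy$i" \
    --network tutorial \
    --ip "172.25.0.$((10 + i))" \
    --volume "$PWD/conf/proxy$i.cfg:/etc/3proxy/3proxy.cfg:ro" \
    --volume "$PWD/logs/proxy$i:/var/log/3proxy" \
    3proxy/3proxy
done
```

Using a fixed subnet keeps the --ip assignments predictable between runs, which makes the environment easier to replicate.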

You can connect to a proxy running inside a container in two ways. If you would like to have curl outside these containers, you need to publish the proxies' ports from the containers to the outside world (curl, in your case) with -p.
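For example, mapping each proxy's listening port to a distinct host port (the port 3128 and the 3proxy/3proxy image are assumptions; use whatever port your proxy image actually listens on):

```shell
# Publish container port 3128 on a different host port per proxy,
# then point curl on the host at the published ports.
docker run --detach --name proxy1 --network tutorial -p 3128:3128 3proxy/3proxy
docker run --detach --name proxy2 --network tutorial -p 3129:3128 3proxy/3proxy

curl --proxy http://127.0.0.1:3128 https://example.com/
curl --proxy http://127.0.0.1:3129 https://example.com/
```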

or

Alternatively, you may use another container image that has Linux + curl, for example Alpine Linux + curl, and connect it to the same network the same way as the proxies. If you do that, you don't need to publish (expose) the proxies' ports, and you don't need to think about which port number to expose for each particular proxy.
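A sketch of that approach (the proxy name proxy1 and port 3128 are assumptions carried over from a setup where the proxies joined a network named tutorial):

```shell
# Throwaway Alpine container on the same network: install curl inside it
# and talk to a proxy by its container name, no published ports needed.
docker run --rm --network tutorial alpine:latest sh -c \
  'apk add --no-cache curl && curl --proxy http://proxy1:3128 https://example.com/'
```

Docker's embedded DNS resolves container names on user-defined networks, so the curl container can address each proxy by name instead of IP.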

At each step you can issue the command

docker ps -a

to see all containers and their status.

To stop and remove all containers (not the images they come from, just the containers), in case some container exited with errors:

docker stop $(docker ps -aq) && docker rm $(docker ps -aq)

or to stop and remove a particular container:

docker stop <container-id>
docker rm <container-id>

To see all containers connected to the bridge network (the default):

docker network inspect bridge

If you confirm that the problem really occurs even when connecting to proxies on your local machine, then it is something the curl maintainer can replicate.

Just put all the commands from above (create the proxies, connect them to the network, etc.) into a file, for example a replicate.sh script starting with

#!/bin/sh

and your commands below it.
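A minimal replicate.sh might look like this (the 3proxy/3proxy image, the subnet, the IPs, and port 3128 are all assumptions; replace them with whatever your proxy image uses):

```shell
#!/bin/sh
set -e  # stop on the first failing command

# Recreate the test network and proxies from scratch.
docker network rm tutorial 2>/dev/null || true
docker network create --driver bridge --subnet 172.25.0.0/24 tutorial

for i in $(seq 1 10); do
  docker run --detach --name "proxy$i" --network tutorial \
    --ip "172.25.0.$((10 + i))" 3proxy/3proxy
done

# Run the failing curl test from a container on the same network.
docker run --rm --network tutorial alpine:latest sh -c \
  'apk add --no-cache curl && curl --proxy http://172.25.0.11:3128 https://example.com/'
```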

Save that file and then issue the command

chmod +x ./replicate.sh

to make it executable.

You can run it to double-check that everything works as expected:

./replicate.sh

and send it to the curl maintainer so he can replicate the environment in which you experienced the problem.

If you don't want to type out a lot of commands like docker run for each proxy, you can use Docker Compose instead, which lets you define the whole testing environment in one file.
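A sketch of such a docker-compose.yml (the service names, the 3proxy/3proxy image, and the subnet are assumptions, mirroring the docker run flags described above):

```yaml
version: "3.7"
services:
  proxy1:
    image: 3proxy/3proxy
    networks:
      tutorial:
        ipv4_address: 172.25.0.11
  proxy2:
    image: 3proxy/3proxy
    networks:
      tutorial:
        ipv4_address: 172.25.0.12

networks:
  tutorial:
    driver: bridge
    ipam:
      config:
        - subnet: 172.25.0.0/24
```

Then docker-compose up -d starts the whole environment, and docker-compose down tears it back down.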

If you run a lot of containers, you can limit the resources (for example, the memory) each of them consumes; that may help when you run so many proxies.
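For example, capping each proxy container's memory and CPU (the 64m and 0.25 figures are arbitrary assumptions; tune them to your machine):

```shell
# --memory caps RAM and --cpus limits CPU share for this container.
docker run --detach --name proxy1 --network tutorial \
  --memory 64m --cpus 0.25 3proxy/3proxy
```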
