Question
I'm establishing an HttpURLConnection to a web server with basically the following two methods:
private HttpURLConnection establishConnection(URL url) {
    HttpURLConnection conn = null;
    try {
        conn = (HttpURLConnection) url.openConnection();
        conn = authenticate(conn);
        conn.setRequestMethod(httpMethod);
        conn.setConnectTimeout(50000);
        conn.connect();
        input = conn.getInputStream();
        return conn;
    } catch (IOException e1) {
        e1.printStackTrace();
    }
    return null;
}
private HttpURLConnection authenticate(HttpURLConnection conn) {
    String userpass = webServiceUserName + ":" + webServicePassword;
    byte[] authEncBytes = Base64.encodeBase64(userpass.getBytes());
    String authStringEnc = new String(authEncBytes);
    conn.setRequestProperty("Authorization", "Basic " + authStringEnc);
    return conn;
}
This works quite well: the server sends some XML file and I can continue with it. The problem I'm encountering is that I have to do about ~220 of these requests, and they add up to about 25 s of processing time. The data is used in a web page, so a 25 s response time is not really acceptable.
The code above takes about 86,000,036 ns (~86 ms) per request, so I'm searching for a way to improve the speed somehow. I tried using the org.apache.http.* package, but that was a bit slower than my current implementation.
Thanks
Markus
Edit: input = conn.getInputStream();
is responsible for ~82-85 ms of that delay. Is there any way "around" it?
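(Editor's note: getInputStream() is where the request is actually sent and the response awaited, so the per-request latency itself is hard to remove; what can be done is overlapping the waits. A minimal sketch of fanning requests out over a thread pool, with the actual HTTP work left as a pluggable function so it can be whatever establishConnection plus stream reading does:)

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;

public class ParallelFetch {
    // Runs one fetch task per URL on a fixed-size pool and collects the
    // results in input order. The fetcher is pluggable: in the real code it
    // would wrap establishConnection(url) and read the resulting stream.
    public static <T> List<T> fetchAll(List<String> urls,
                                       Function<String, T> fetcher,
                                       int poolSize) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        try {
            List<Future<T>> futures = new ArrayList<>();
            for (String url : urls) {
                futures.add(pool.submit(() -> fetcher.apply(url)));
            }
            List<T> results = new ArrayList<>();
            for (Future<T> f : futures) {
                // bound the wait so one stuck request cannot hang the page
                results.add(f.get(60, TimeUnit.SECONDS));
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }
}
```

With ~220 requests at ~86 ms each, a pool of 20 would in the ideal case cut the wall-clock time to roughly 220/20 × 86 ms ≈ 1 s, though server-side limits and connection setup overhead will eat into that.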
Edit 2: I used the connection manager as well:
PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
cm.setMaxTotal(200);
cm.setDefaultMaxPerRoute(20);
HttpHost localhost = new HttpHost(webServiceHostName, 443);
cm.setMaxPerRoute(new HttpRoute(localhost), 50);

CredentialsProvider credsProvider = new BasicCredentialsProvider();
credsProvider.setCredentials(
        new AuthScope(webServiceHostName, 443),
        new UsernamePasswordCredentials(webServiceUserName, webServicePassword));

httpclient = HttpClients.custom()
        .setConnectionManager(cm)
        .setDefaultCredentialsProvider(credsProvider)
        .build();
But the runtime increases to ~40 s, and I get a warning from my Tomcat after every request that the cookie was rejected because of an "Illegal path attribute".
Answer 1:
You may be able to get a substantial boost by downloading a number of files in parallel.
I had a project where I had to download 20 resources from a server over a satellite backhaul (around 700ms round-trip delay). Downloading them sequentially took around 30 seconds; 5 at a time took 6.5 seconds, 10 at a time took 3.5 seconds, and all 20 at once was a bit over 2.5 seconds.
Here is an example which performs multiple downloads concurrently and, if supported by the server, uses connection keep-alive.
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.http.HttpEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.apache.http.protocol.BasicHttpContext;
import org.apache.http.protocol.HttpContext;
import org.apache.http.util.EntityUtils;

public class Downloader {

    private static final int MAX_REQUESTS_PER_ROUTE = 10;
    private static final int MAX_REQUESTS_TOTAL = 50;
    private static final int MAX_THREAD_DONE_WAIT = 60000;

    public static void main(String[] args) throws IOException,
            InterruptedException {
        long startTime = System.currentTimeMillis();

        // create connection manager and http client
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        cm.setDefaultMaxPerRoute(MAX_REQUESTS_PER_ROUTE);
        cm.setMaxTotal(MAX_REQUESTS_TOTAL);
        CloseableHttpClient httpclient = HttpClients.custom()
                .setConnectionManager(cm).build();

        // list of download items
        List<DownloadItem> items = new ArrayList<DownloadItem>();
        items.add(new DownloadItem("http://www.example.com/file1.xml"));
        items.add(new DownloadItem("http://www.example.com/file2.xml"));
        items.add(new DownloadItem("http://www.example.com/file3.xml"));
        items.add(new DownloadItem("http://www.example.com/file4.xml"));

        // create and start download threads
        DownloadThread[] threads = new DownloadThread[items.size()];
        for (int i = 0; i < items.size(); i++) {
            threads[i] = new DownloadThread(httpclient, items.get(i));
            threads[i].start();
        }

        // wait for all threads to complete
        for (int i = 0; i < items.size(); i++) {
            threads[i].join(MAX_THREAD_DONE_WAIT);
        }

        // use content
        for (DownloadItem item : items) {
            System.out.println("uri: " + item.uri + ", status-code: "
                    + item.statusCode + ", content-length: "
                    + item.content.length);
        }

        // done with http client
        httpclient.close();

        System.out.println("Time to download: "
                + (System.currentTimeMillis() - startTime) + "ms");
    }

    static class DownloadItem {
        String uri;
        byte[] content;
        int statusCode;

        DownloadItem(String uri) {
            this.uri = uri;
            content = null;
            statusCode = -1;
        }
    }

    static class DownloadThread extends Thread {
        private final CloseableHttpClient httpClient;
        private final DownloadItem item;

        public DownloadThread(CloseableHttpClient httpClient, DownloadItem item) {
            this.httpClient = httpClient;
            this.item = item;
        }

        @Override
        public void run() {
            try {
                HttpGet httpget = new HttpGet(item.uri);
                HttpContext context = new BasicHttpContext();
                CloseableHttpResponse response = httpClient.execute(httpget,
                        context);
                try {
                    item.statusCode = response.getStatusLine().getStatusCode();
                    HttpEntity entity = response.getEntity();
                    if (entity != null) {
                        item.content = EntityUtils.toByteArray(entity);
                    }
                } finally {
                    response.close();
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}
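(Editor's note: one caveat in the example above is that Thread.join(MAX_THREAD_DONE_WAIT) returns void whether the thread finished or the timeout elapsed, so a slow download can leave item.content null and the printout would then throw a NullPointerException. A small self-contained sketch of the distinction; finishedWithin is a hypothetical helper, not part of the answer's code:)

```java
public class JoinTimeout {
    // Thread.join(timeout) gives no indication of whether the thread
    // actually terminated or the timeout simply expired. Checking
    // isAlive() afterwards distinguishes the two cases, so the caller
    // can skip items whose download never completed.
    public static boolean finishedWithin(Thread t, long millis)
            throws InterruptedException {
        t.join(millis);
        return !t.isAlive();
    }
}
```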
Answer 2:
Without knowing what kind of work your web requests do, I assume that more than 99% of the 25 seconds consists of network time and waiting around for various resources to respond (disk systems, LDAP servers, name servers, etc.).
The Speed of Light
I see you use a userid/password against the web server. Is this an external web server? If so, the network distance itself could account for the 86 ms. With many requests you start to feel the restriction of the speed of light.
The way to optimize your program is to minimize all the waiting time stacking up. This might be done by running requests in parallel, or by allowing multiple requests in one round-trip (if you can change the web server).
Connection pooling itself won't solve the problem if you still run the requests in sequence.
A possible solution
Based on the further description in the comments, you might use the following sequence:
- Request the overview XML.
- Extract list of devices from overview XML.
- Request device details for all devices in parallel.
- Collect responses from all requests.
- Run through XML again, and this time update with the responses.
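The fan-out in steps 3-4 can be sketched as follows (a hedged outline: fetchDetail stands in for the real per-device HTTP call, and the pool size cap of 20 is an assumption to stay within typical server-side connection limits):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Function;

public class DeviceDetails {
    // Issues one detail request per device id in parallel and joins the
    // results, keeping input order so the caller can update the overview
    // XML in a single pass afterwards (step 5).
    public static Map<String, String> detailsFor(List<String> deviceIds,
            Function<String, String> fetchDetail) {
        int poolSize = Math.max(1, Math.min(deviceIds.size(), 20));
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        try {
            // step 3: fire off all requests without waiting
            Map<String, CompletableFuture<String>> pending = new LinkedHashMap<>();
            for (String id : deviceIds) {
                pending.put(id, CompletableFuture.supplyAsync(
                        () -> fetchDetail.apply(id), pool));
            }
            // step 4: collect the responses
            Map<String, String> out = new LinkedHashMap<>();
            pending.forEach((id, fut) -> out.put(id, fut.join()));
            return out;
        } finally {
            pool.shutdown();
        }
    }
}
```

The total time then approaches that of the slowest single request per batch rather than the sum of all of them.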
Source: https://stackoverflow.com/questions/23343248/performance-issue-with-httpurlconnection