[Android] Volley Source Code Analysis (Part 3)


The previous article looked at the Request source code; this one moves on to RequestQueue.

RequestQueue class diagram:

RequestQueue is a request dispatch queue. It holds multiple NetworkDispatcher dispatchers and a single CacheDispatcher.

Main fields (a sketch of their declarations follows this list):

mSequenceGenerator: the request sequence number generator (an AtomicInteger).

mWaitingRequests: a staging area for requests that already have a duplicate request in flight. It acts as a waiting map keyed by request URL: if an identical request has already been dispatched, later duplicates are parked here so that the same request is not sent repeatedly.

mCurrentRequests: the set of requests currently being processed by this queue.

mCacheQueue: the cache triage queue; only requests that are allowed to be cached are placed here.

mNetworkQueue: the queue of requests that need to go over the network.

mCache: the cache used to store and retrieve responses; entries are persisted on disk.

mNetwork: the Network interface that actually performs the HTTP requests.

mDelivery: delivers parsed responses (and errors) back to the request issuer, on the main thread by default.

mDispatchers: the array of network request dispatchers; each dispatcher is a thread (four by default).

mCacheDispatcher: the cache dispatcher thread.
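
For reference, the declarations behind these fields look roughly like this (a sketch; generics and initializers may differ slightly across Volley versions):

private AtomicInteger mSequenceGenerator = new AtomicInteger();

// Staging area, keyed by cacheKey, for duplicate requests that are already in flight.
private final Map<String, Queue<Request<?>>> mWaitingRequests =
        new HashMap<String, Queue<Request<?>>>();

// Every request currently known to this queue (used by cancelAll(), finish(), etc.).
private final Set<Request<?>> mCurrentRequests = new HashSet<Request<?>>();

// Priority queues consumed by the dispatcher threads.
private final PriorityBlockingQueue<Request<?>> mCacheQueue =
        new PriorityBlockingQueue<Request<?>>();
private final PriorityBlockingQueue<Request<?>> mNetworkQueue =
        new PriorityBlockingQueue<Request<?>>();

private final Cache mCache;               // disk-backed response cache
private final Network mNetwork;           // performs the actual HTTP requests
private final ResponseDelivery mDelivery; // posts parsed results back to the caller

private NetworkDispatcher[] mDispatchers; // network dispatcher thread pool (4 threads by default)
private CacheDispatcher mCacheDispatcher; // the single cache dispatcher thread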

Main methods:

start(): starts all dispatcher threads.

/**
 * Starts the dispatchers in this queue.
 */
public void start() {
    stop();  // Make sure any currently running dispatchers are stopped.
    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();

    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}
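
Normally Volley.newRequestQueue(context) constructs the queue and calls start() for you, but the wiring can also be done by hand. A minimal sketch, assuming the stock DiskBasedCache, BasicNetwork and HurlStack implementations and an available Context named context:

File cacheDir = new File(context.getCacheDir(), "volley"); // "volley" is the default cache dir name
RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir),
        new BasicNetwork(new HurlStack()));
queue.start(); // spins up the single CacheDispatcher plus the NetworkDispatcher pool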

stop(): stops all dispatcher threads.

/**
 * Stops the cache and network dispatchers.
 */
public void stop() {
    if (mCacheDispatcher != null) {
        mCacheDispatcher.quit();
    }
    for (int i = 0; i < mDispatchers.length; i++) {
        if (mDispatchers[i] != null) {
            mDispatchers[i].quit();
        }
    }
}

cancelAll(tag): cancels all requests in the queue that carry the given tag.

/**
 * Cancels all requests in this queue with the given tag. Tag must be non-null
 * and equality is by identity.
 */
public void cancelAll(final Object tag) {
    if (tag == null) {
        throw new IllegalArgumentException("Cannot cancelAll with a null tag");
    }
    cancelAll(new RequestFilter() {
        @Override
        public boolean apply(Request<?> request) {
            return request.getTag() == tag;
        }
    });
}
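
A typical usage sketch: tag each request with its host component (an Activity here) and cancel everything for that tag when the component stops. DemoActivity, the URL and the empty listeners are only illustrative:

public class DemoActivity extends Activity {
    private RequestQueue mQueue; // assumed to be created elsewhere, e.g. Volley.newRequestQueue(this)

    private void load() {
        StringRequest request = new StringRequest(Request.Method.GET, "http://example.com/data",
                new Response.Listener<String>() {
                    @Override
                    public void onResponse(String response) { /* use the response */ }
                },
                new Response.ErrorListener() {
                    @Override
                    public void onErrorResponse(VolleyError error) { /* handle the error */ }
                });
        request.setTag(this);   // tag the request with this Activity instance
        mQueue.add(request);
    }

    @Override
    protected void onStop() {
        super.onStop();
        mQueue.cancelAll(this); // cancels every request whose tag == this (identity comparison)
    }
}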

add(Request): adds a request to the dispatch queue. There is one key branch: if the request should not be cached, it is added only to mNetworkQueue. Otherwise, the queue checks whether the request's cacheKey already exists in mWaitingRequests: if it does, the request is staged in mWaitingRequests; if not, the cacheKey is recorded in mWaitingRequests and the request is added to mCacheQueue. The cacheKey is simply the request URL.

/**
 * Adds a Request to the dispatch queue.
 * @param request The request to service
 * @return The passed-in request
 */
public <T> Request<T> add(Request<T> request) {
    // Tag the request as belonging to this queue and add it to the set of current requests.
    request.setRequestQueue(this);
    synchronized (mCurrentRequests) {
        mCurrentRequests.add(request);
    }

    // Process requests in the order they are added.
    request.setSequence(getSequenceNumber());
    request.addMarker("add-to-queue");

    // If the request is uncacheable, skip the cache queue and go straight to the network.
    if (!request.shouldCache()) {  // Not cacheable: add it to the network queue only.
        mNetworkQueue.add(request);
        return request;
    }

    // Insert request into stage if there's already a request with the same cache key in flight.
    synchronized (mWaitingRequests) {
        String cacheKey = request.getCacheKey();
        if (mWaitingRequests.containsKey(cacheKey)) {
            // There is already a request in flight. Queue up.
            Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
            if (stagedRequests == null) {
                stagedRequests = new LinkedList<Request<?>>();
            }
            stagedRequests.add(request);
            mWaitingRequests.put(cacheKey, stagedRequests);
            if (VolleyLog.DEBUG) {
                VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
            }
        } else {
            // Insert 'null' queue for this cacheKey, indicating there is now a request in
            // flight.
            mWaitingRequests.put(cacheKey, null);
            mCacheQueue.add(request);
        }
        return request;
    }
}
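
To illustrate the two branches above with a sketch (queue, listener and errorListener are placeholders): shouldCache() defaults to true, so the first request takes the mWaitingRequests/mCacheQueue path, while the second opts out and goes straight into mNetworkQueue:

StringRequest cacheable = new StringRequest(Request.Method.GET, "http://example.com/feed",
        listener, errorListener);
queue.add(cacheable);              // shouldCache() == true -> staged by cacheKey, then mCacheQueue

StringRequest uncacheable = new StringRequest(Request.Method.GET, "http://example.com/feed",
        listener, errorListener);
uncacheable.setShouldCache(false); // opt out of caching
queue.add(uncacheable);            // skips the cache entirely -> mNetworkQueue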

finish(request): called when processing of the given request has finished. The request is removed from mCurrentRequests; if the request is cacheable, any requests staged under its cacheKey are removed from mWaitingRequests and moved into mCacheQueue.

/**
 * Called from {@link Request#finish(String)}, indicating that processing of the given request
 * has finished.
 *
 * <p>Releases waiting requests for <code>request.getCacheKey()</code> if
 *      <code>request.shouldCache()</code>.</p>
 */
void finish(Request<?> request) {
    // Remove from the set of requests currently being processed.
    synchronized (mCurrentRequests) {
        mCurrentRequests.remove(request);
    }

    if (request.shouldCache()) {
        synchronized (mWaitingRequests) {
            String cacheKey = request.getCacheKey();
            Queue<Request<?>> waitingRequests = mWaitingRequests.remove(cacheKey);
            if (waitingRequests != null) {
                if (VolleyLog.DEBUG) {
                    VolleyLog.v("Releasing %d waiting requests for cacheKey=%s.",
                            waitingRequests.size(), cacheKey);
                }
                // Process all queued up requests. They won't be considered as in flight, but
                // that's not a problem as the cache has been primed by 'request'.
                mCacheQueue.addAll(waitingRequests);
            }
        }
    }
}
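
For context, the caller side lives in Request: the dispatchers call request.finish("...") (as seen in the run() methods below), and that method hands control back to the queue, roughly like this (simplified sketch, event-log bookkeeping omitted):

// Inside Request<T> (simplified):
void finish(final String tag) {
    if (mRequestQueue != null) {
        mRequestQueue.finish(this); // removes this request from mCurrentRequests and releases
                                    // any duplicates staged under the same cacheKey
    }
    // ... marker/event-log bookkeeping omitted ...
}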

These notes were written while reading the source, so the early parts follow the code literally; the more code you read, the deeper the understanding becomes. Having now read CacheDispatcher and NetworkDispatcher as well, here is a summary of how mCurrentRequests, mWaitingRequests, mCacheQueue, mNetworkQueue, mCache, mDispatchers and mCacheDispatcher relate to each other and work together to process a request.

1. When a caller adds a request via RequestManager.getRequestQueue().add(request), i.e. by calling RequestQueue.add(request) (source listed above), the method first adds the request to mCurrentRequests, marking it as a request currently being processed.

2. It then checks whether the request can be cached (via the request's mShouldCache flag, which defaults to true). If not, the request goes straight into mNetworkQueue and is handled by one of the dispatcher threads in mDispatchers. If it can be cached (the default), the request's cacheKey (whose value is the request URL) is looked up in the mWaitingRequests map. If the key is already present, an identical request was issued earlier, so this one is parked in the waiting queue; because the result is cacheable, once the earlier request finishes (see the finish() method above) the waiting duplicates can be served straight from the cache, which avoids sending the same request multiple times. If the cacheKey is not in mWaitingRequests, it is recorded there to mark the request as in flight, and the request is added to mCacheQueue for processing.

3. The requests in mCacheQueue are scheduled by mCacheDispatcher, a thread dedicated to dispatching cacheable requests. It is created and started in RequestQueue's start() method:

mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
mCacheDispatcher.start();

CacheDispatcher class diagram:

CacheDispatcher's run() method:

@Override
public void run() {
    if (DEBUG) VolleyLog.v("start new dispatcher");
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

    // Make a blocking call to initialize the cache.
    mCache.initialize();

    while (true) {
        try {
            // Get a request from the cache triage queue, blocking until
            // at least one is available.
            final Request<?> request = mCacheQueue.take();
            request.addMarker("cache-queue-take");

            // If the request has been canceled, don't bother dispatching it.
            if (request.isCanceled()) {
                request.finish("cache-discard-canceled");
                continue;
            }

            // Attempt to retrieve this item from cache.
            Cache.Entry entry = mCache.get(request.getCacheKey());
            if (entry == null) {
                request.addMarker("cache-miss");
                // Cache miss; send off to the network dispatcher.
                mNetworkQueue.put(request);
                continue;
            }

            // If it is completely expired, just send it to the network.
            if (entry.isExpired()) {
                request.addMarker("cache-hit-expired");
                request.setCacheEntry(entry);
                mNetworkQueue.put(request);
                continue;
            }

            // We have a cache hit; parse its data for delivery back to the request.
            request.addMarker("cache-hit");
            Response<?> response = request.parseNetworkResponse(
                    new NetworkResponse(entry.data, entry.responseHeaders));
            request.addMarker("cache-hit-parsed");

            if (!entry.refreshNeeded()) {
                // Completely unexpired cache hit. Just deliver the response.
                mDelivery.postResponse(request, response);
            } else {
                // Soft-expired cache hit. We can deliver the cached response,
                // but we need to also send the request to the network for
                // refreshing.
                request.addMarker("cache-hit-refresh-needed");
                request.setCacheEntry(entry);

                // Mark the response as intermediate.
                response.intermediate = true;

                // Post the intermediate response back to the user and have
                // the delivery then forward the request along to the network.
                mDelivery.postResponse(request, response, new Runnable() {
                    @Override
                    public void run() {
                        try {
                            mNetworkQueue.put(request);
                        } catch (InterruptedException e) {
                            // Not much we can do about this.
                        }
                    }
                });
            }

        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
            continue;
        }
    }
}

Inside this while loop the dispatcher keeps taking requests from mCacheQueue. If a request has been cancelled, it moves on to the next one. Otherwise it looks up mCache for a cached response for the request. If there is no cached entry, or the entry has fully expired, the request is put into mNetworkQueue for a network round trip. If a valid entry exists, the cached response is parsed. If the entry is not soft-expired (Cache.Entry has two fields, ttl and softTtl, which are compared against the current time to decide whether the entry is expired or only "soft-expired"), the parsed result is delivered directly to the caller via mDelivery. Otherwise the cached response is delivered as an intermediate result, and the request is also put into mNetworkQueue to refresh the response over the network.
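
The two expiry checks above boil down to comparing ttl and softTtl against the current time. A simplified sketch of the relevant part of Cache.Entry (other fields such as data and responseHeaders omitted):

// Inside Cache.Entry (simplified):
public long ttl;     // hard expiry, epoch millis
public long softTtl; // soft expiry, epoch millis

/** True if this entry is fully expired and must be refetched. */
public boolean isExpired() {
    return this.ttl < System.currentTimeMillis();
}

/** True if this entry may be served but should also be refreshed from the network. */
public boolean refreshNeeded() {
    return this.softTtl < System.currentTimeMillis();
}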

4. As seen in step 3, when a request's response is missing from mCache or the cached entry has expired (either fully expired or soft-expired), the request is put into mNetworkQueue for a network request. The requests in mNetworkQueue are dispatched by several NetworkDispatcher threads, which are created and started in RequestQueue's start() method:

 

// Create network dispatchers (and corresponding threads) up to the pool size.
for (int i = 0; i < mDispatchers.length; i++) {
    NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
            mCache, mDelivery);
    mDispatchers[i] = networkDispatcher;
    networkDispatcher.start();
}

 

The NetworkDispatcher class:

 

 

NetworkDispatcher's run() method:

@Override
public void run() {
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
    Request<?> request;
    while (true) {
        try {
            // Take a request from the queue.
            request = mQueue.take();
        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
            continue;
        }

        try {
            request.addMarker("network-queue-take");

            // If the request was cancelled already, do not perform the
            // network request.
            if (request.isCanceled()) {
                request.finish("network-discard-cancelled");
                continue;
            }

            addTrafficStatsTag(request);

            // Perform the network request.
            NetworkResponse networkResponse = mNetwork.performRequest(request);
            request.addMarker("network-http-complete");

            // If the server returned 304 AND we delivered a response already,
            // we're done -- don't deliver a second identical response.
            if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                request.finish("not-modified");
                continue;
            }

            // Parse the response here on the worker thread.
            Response<?> response = request.parseNetworkResponse(networkResponse);
            request.addMarker("network-parse-complete");

            // Write to cache if applicable.
            // TODO: Only update cache metadata instead of entire record for 304s.
            if (request.shouldCache() && response.cacheEntry != null) {
                mCache.put(request.getCacheKey(), response.cacheEntry);
                request.addMarker("network-cache-written");
            }

            // Post the response back.
            request.markDelivered();
            mDelivery.postResponse(request, response);
        } catch (VolleyError volleyError) {
            parseAndDeliverNetworkError(request, volleyError);
        } catch (Exception e) {
            VolleyLog.e(e, "Unhandled exception %s", e.toString());
            mDelivery.postError(request, new VolleyError(e));
        }
    }
}

This thread keeps taking requests from mNetworkQueue. If a request has been cancelled it moves on to the next one; otherwise mNetwork performs the network request, choosing between HttpClient and HttpURLConnection depending on the environment (exactly how that choice is made is covered later). The response is then parsed, written to the cache if the request is cacheable, and finally delivered to the caller.
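
As a quick preview of that choice (to be covered in detail later): Volley.newRequestQueue selects the HttpStack by API level, roughly as follows (stack and userAgent are local variables of that method):

// Inside Volley.newRequestQueue (simplified):
if (stack == null) {
    if (Build.VERSION.SDK_INT >= 9) {
        // Gingerbread and above: HttpURLConnection is preferred.
        stack = new HurlStack();
    } else {
        // Prior to Gingerbread, HttpURLConnection was unreliable, so fall back to HttpClient.
        stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
    }
}
Network network = new BasicNetwork(stack);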

The addTrafficStatsTag() method is used for traffic accounting; request.getTrafficStatsTag() returns the hash code of the host portion of the request URL.

 

@TargetApi(Build.VERSION_CODES.ICE_CREAM_SANDWICH)
private void addTrafficStatsTag(Request<?> request) {
    // Tag the request (if API >= 14)
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.ICE_CREAM_SANDWICH) {
        TrafficStats.setThreadStatsTag(request.getTrafficStatsTag());
    }
}
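
The tag itself is computed once in the Request constructor; the helper that derives it from the URL's host looks roughly like this (simplified):

// Inside Request<T> (simplified): hash of the URL's host, or 0 if the URL cannot be parsed.
private static int findDefaultTrafficStatsTag(String url) {
    if (!TextUtils.isEmpty(url)) {
        Uri uri = Uri.parse(url);
        if (uri != null) {
            String host = uri.getHost();
            if (host != null) {
                return host.hashCode();
            }
        }
    }
    return 0;
}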

 

With that, one complete request cycle is covered, and this is arguably the core of Volley. That's it for this part; the rest will be covered in later articles.

 

 
