Source Code Analysis of the Volley Framework

Posted by 加冰雪碧


Volley is an asynchronous networking and image loading framework that Google announced at Google I/O 2013. It is very well designed and highly extensible, which makes it well worth studying. This article focuses on analyzing its source code; basic usage of Volley is not covered here, so if anything is unclear please refer to the official samples and documentation.

Although Volley's code base is not large, it is easy to lose track of which class does what after reading for a while. The original article included a class diagram here to refer back to whenever the relationships between classes become unclear (the image is not reproduced in this text).


Most of the time we start using Volley from the methods of the Volley.java class, but I think it is worth understanding the structure of requests and responses first; that makes the rest of the code easier to follow. So let's begin with the Request class, which represents a request:

Request

First, let's look at its fields:

    // Default encoding for POST and PUT parameters
    private static final String DEFAULT_PARAMS_ENCODING = "UTF-8";

    // HTTP method of this request
    private final int mMethod;

    // HTTP methods supported by Volley
    public interface Method {
        int DEPRECATED_GET_OR_POST = -1;
        int GET = 0;
        int POST = 1;
        int PUT = 2;
        int DELETE = 3;
        int HEAD = 4;
        int OPTIONS = 5;
        int TRACE = 6;
        int PATCH = 7;
    }

    // URL of this request
    private final String mUrl;

    // Redirect URL used when a 3xx status code is returned
    private String mRedirectUrl;

    // Unique identifier (ID) of this request
    private String mIdentifier;

    // Default tag for traffic statistics
    private final int mDefaultTrafficStatsTag;

    // Listener invoked when an error response occurs
    private final Response.ErrorListener mErrorListener;

    // Sequence number of this request
    private Integer mSequence;

    // The request queue this request belongs to
    private RequestQueue mRequestQueue;

    // Whether responses to this request can be cached
    private boolean mShouldCache = true;

    // Whether this request has been canceled
    private boolean mCanceled = false;

    // Whether a response has already been delivered for this request
    private boolean mResponseDelivered = false;

    // Retry policy for this request
    private RetryPolicy mRetryPolicy;

    // The cache entry associated with this request, if any
    private Cache.Entry mCacheEntry = null;

    // An opaque tag used to cancel this request
    private Object mTag;

Some fields have been omitted here, and some of those listed will not show up in our main walkthrough; feel free to dig deeper into the details if you are interested. For now, just keep these fields in mind. Next, let's look at the constructor:
    public Request(int method, String url, Response.ErrorListener listener) {
        mMethod = method;
        mUrl = url;
        mIdentifier = createIdentifier(method, url);
        mErrorListener = listener;
        setRetryPolicy(new DefaultRetryPolicy());

        mDefaultTrafficStatsTag = findDefaultTrafficStatsTag(url);
    }

Every other constructor ends up calling this one, directly or indirectly, and it stores the essential fields. The createIdentifier method builds a unique identifier from the HTTP method, the URL, the current time and a few other elements. The retry policy is not central to our walkthrough, so we will not dig into it; likewise, the default traffic-stats tag is simply the hash code of the URL's host and is not relevant to the main flow. Most of the other methods in this class are getters and setters for the fields above, so let's focus on the following methods:
abstract protected Response<T> parseNetworkResponse(NetworkResponse response);

As the name suggests, this method parses the data in the network response, but only a subclass knows what format to parse, so the work is forced into subclasses by making the method abstract.

abstract protected void deliverResponse(T response);

Subclasses implement this method to deliver the parsed result to the listener registered with the subclass; again, only the subclass knows how to handle it, so it is abstract as well.
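To make these two hooks concrete, here is a minimal sketch of a custom request that treats the response body as a plain string, much like Volley's built-in StringRequest does. The class name SimpleStringRequest and its listener field are ours for illustration only; parseNetworkResponse, deliverResponse, Response.success and HttpHeaderParser are the framework's real APIs.

public class SimpleStringRequest extends Request<String> {
    private final Response.Listener<String> mListener;

    public SimpleStringRequest(String url, Response.Listener<String> listener,
            Response.ErrorListener errorListener) {
        super(Method.GET, url, errorListener);
        mListener = listener;
    }

    @Override
    protected Response<String> parseNetworkResponse(NetworkResponse response) {
        // Decode the raw bytes and attach the cache metadata parsed from the headers.
        String parsed;
        try {
            parsed = new String(response.data, HttpHeaderParser.parseCharset(response.headers));
        } catch (UnsupportedEncodingException e) {
            parsed = new String(response.data);
        }
        return Response.success(parsed, HttpHeaderParser.parseCacheHeaders(response));
    }

    @Override
    protected void deliverResponse(String response) {
        // Runs on the main thread (see ExecutorDelivery later in this article).
        mListener.onResponse(response);
    }
}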
    protected Map<String, String> getParams() throws AuthFailureError {
        return null;
    }

When a POST or PUT request needs to carry data, override this method and return a map representing the parameters.
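As an illustration, a sketch of such an override in a POST request subclass (the field names and values below are made up):

    @Override
    protected Map<String, String> getParams() throws AuthFailureError {
        Map<String, String> params = new HashMap<String, String>();
        params.put("username", "volley");   // hypothetical form field
        params.put("password", "123456");   // hypothetical form field
        return params;
    }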
    public byte[] getBody() throws AuthFailureError {
        Map<String, String> params = getParams();
        if (params != null && params.size() > 0) {
            return encodeParameters(params, getParamsEncoding());
        }
        return null;
    }

getBody uses the map returned above and turns it into an encoded byte array.
    private byte[] encodeParameters(Map<String, String> params, String paramsEncoding) {
        StringBuilder encodedParams = new StringBuilder();
        try {
            for (Map.Entry<String, String> entry : params.entrySet()) {
                encodedParams.append(URLEncoder.encode(entry.getKey(), paramsEncoding));
                encodedParams.append('=');
                encodedParams.append(URLEncoder.encode(entry.getValue(), paramsEncoding));
                encodedParams.append('&');
            }
            return encodedParams.toString().getBytes(paramsEncoding);
        } catch (UnsupportedEncodingException uee) {
            throw new RuntimeException("Encoding not supported: " + paramsEncoding, uee);
        }
    }

This is how the parameters are encoded.
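As a quick illustration of what this produces, assume a couple of made-up parameters (a LinkedHashMap keeps the iteration order deterministic):

    Map<String, String> params = new LinkedHashMap<String, String>();
    params.put("user", "volley demo");
    params.put("lang", "zh-CN");
    // encodeParameters(params, "UTF-8") returns the bytes of:
    //   user=volley+demo&lang=zh-CN&
    // URLEncoder turns the space into '+', and the loop leaves a trailing '&',
    // which servers generally tolerate.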

There is also an important finish method; the logging statements have been removed here:

    void finish(final String tag) {
        if (mRequestQueue != null) {
            mRequestQueue.finish(this);
        }
    }

As you can see, it simply calls the request queue's finish method, which we will come back to later.

That covers the Request class in broad strokes; a rough impression is enough for now. Let's move on to the Response class, which represents a response.

Response

    public final T result;

    public final Cache.Entry cacheEntry;

    public final VolleyError error;

The Response class is very simple; the only useful information it carries is the three fields above: the parsed result data, the cache entry, and an object holding any error that occurred. We won't go deeper here.

Now that we have a rough understanding of requests and responses, let's look at the Volley class, our usual entry point:

Volley

    public static RequestQueue newRequestQueue(Context context) {
        return newRequestQueue(context, null);
    }

This is the overload we use most often; it delegates through the following chain of overloads:
    public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
        return newRequestQueue(context, stack, -1);
    }

    public static RequestQueue newRequestQueue(Context context, int maxDiskCacheBytes) {
        return newRequestQueue(context, null, maxDiskCacheBytes);
    }


    public static RequestQueue newRequestQueue(Context context, HttpStack stack, int maxDiskCacheBytes) {
        // getCacheDir() returns the /data/data/<application package>/cache directory
        File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);

        String userAgent = "volley/0";
        try {
            String packageName = context.getPackageName();
            PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
            userAgent = packageName + "/" + info.versionCode;
        } catch (NameNotFoundException e) {
        }

        if (stack == null) {
            if (Build.VERSION.SDK_INT >= 9) {
                stack = new HurlStack();
            } else {
                // Prior to Gingerbread, HttpUrlConnection was unreliable.
                // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
                stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
            }
        }

        Network network = new BasicNetwork(stack);

        RequestQueue queue;
        if (maxDiskCacheBytes <= -1) {
            // No maximum size specified
            queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
        } else {
            // Disk cache size specified
            queue = new RequestQueue(new DiskBasedCache(cacheDir, maxDiskCacheBytes), network);
        }

        queue.start();

        return queue;
    }

All of these end up in the last overload. A quick word on its parameters: HttpStack encapsulates the strategy used to access the network, and maxDiskCacheBytes is the maximum size of the local disk cache. In our typical call, stack is null and maxDiskCacheBytes is -1. Inside the method, a network stack is first created according to the SDK version: below API 9 it uses HttpClientStack, which, as the name suggests, wraps HttpClient; on API 9 and above it uses HurlStack, which wraps HttpURLConnection. Feel free to dive into those two classes, but we won't focus on them here. The method then creates the Network used to perform network requests and the DiskBasedCache used for the local cache, and constructs a RequestQueue from these two objects. Let's take a quick look at Network and DiskBasedCache:

public NetworkResponse performRequest(Request<?> request) throws VolleyError;

The Network interface is very simple: this method performs the given request and returns a NetworkResponse object, which we will come back to later.
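Because Network is just an interface, it is easy to swap in other implementations, for example a stub for tests. A minimal sketch (the canned JSON payload here is made up):

    Network fakeNetwork = new Network() {
        @Override
        public NetworkResponse performRequest(Request<?> request) throws VolleyError {
            // Answer every request with a canned body instead of touching the wire.
            return new NetworkResponse("{\"ok\":true}".getBytes());
        }
    };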

public interface Cache {
    /**
     * Retrieves an entry from the cache.
     * @param key Cache key
     * @return An {@link Entry} or null in the event of a cache miss
     */
    public Entry get(String key);

    /**
     * Adds or replaces an entry to the cache.
     * @param key Cache key
     * @param entry Data to store and metadata for cache coherency, TTL, etc.
     */
    public void put(String key, Entry entry);

    /**
     * Performs any potentially long-running actions needed to initialize the cache;
     * will be called from a worker thread.
     */
    public void initialize();

    /**
     * Invalidates an entry in the cache.
     * @param key Cache key
     * @param fullExpire True to fully expire the entry, false to soft expire
     */
    public void invalidate(String key, boolean fullExpire);

    /**
     * Removes an entry from the cache.
     * @param key Cache key
     */
    public void remove(String key);

    /**
     * Empties the cache.
     */
    public void clear();

    /**
     * Data and metadata for an entry returned by the cache.
     */
    public static class Entry {
        /** The data returned from cache. */
        public byte[] data;

        /** ETag for cache coherency. */
        public String etag;

        /** Date of this response as reported by the server. */
        public long serverDate;

        /** The last modified date for the requested object. */
        public long lastModified;

        /** TTL for this record. */
        public long ttl;

        /** Soft TTL for this record. */
        public long softTtl;

        /** Immutable response headers as received from server; must be non-null. */
        public Map<String, String> responseHeaders = Collections.emptyMap();

        /** True if the entry is expired. */
        public boolean isExpired() {
            return this.ttl < System.currentTimeMillis();
        }

        /** True if a refresh is needed from the original data source. */
        public boolean refreshNeeded() {
            return this.softTtl < System.currentTimeMillis();
        }
    }
}

DiskBasedCache implements the Cache interface, so for our purposes it is enough to glance at the methods the interface declares. It is a straightforward abstraction of cache operations, plus an inner class representing a cache entry.
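Since RequestQueue only depends on the Cache interface, the disk cache can be swapped out entirely. For example, Volley's toolbox ships a NoCache implementation whose methods are all no-ops; a sketch of wiring it in when assembling a queue by hand:

    // Build a queue that never touches disk by using the toolbox NoCache implementation.
    Network network = new BasicNetwork(new HurlStack());
    RequestQueue queue = new RequestQueue(new NoCache(), network);
    queue.start();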

Back to the main line in the Volley class: the last step calls start() on the newly created RequestQueue. Looking at the class as a whole, the design is excellent. It is essentially wide open for extension, everything is programmed against interface contracts, and if we want to add another way of performing network requests we only need to implement the HttpStack interface. Customizing our own request queue is just as easy.
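As a sketch of that extension point, here is a hypothetical HttpStack that wraps an existing stack and logs each URL before delegating. The class name and the logging are ours for illustration; the performRequest signature follows the HttpStack interface in this version of Volley:

    public class LoggingHttpStack implements HttpStack {
        private final HttpStack mDelegate;

        public LoggingHttpStack(HttpStack delegate) {
            mDelegate = delegate;
        }

        @Override
        public HttpResponse performRequest(Request<?> request, Map<String, String> additionalHeaders)
                throws IOException, AuthFailureError {
            // Log the URL, then hand the work to the wrapped stack.
            VolleyLog.d("performing request: %s", request.getUrl());
            return mDelegate.performRequest(request, additionalHeaders);
        }
    }

    // Usage: Volley.newRequestQueue(context, new LoggingHttpStack(new HurlStack()));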

RequestQueue

The Volley class creates a RequestQueue object, so let's start from its constructors:

    public RequestQueue(Cache cache, Network network) {
        this(cache, network, DEFAULT_NETWORK_THREAD_POOL_SIZE);
    }

    public RequestQueue(Cache cache, Network network, int threadPoolSize) {
        this(cache, network, threadPoolSize,
                new ExecutorDelivery(new Handler(Looper.getMainLooper())));
    }

    public RequestQueue(Cache cache, Network network, int threadPoolSize,
            ResponseDelivery delivery) {
        mCache = cache;
        mNetwork = network;
        mDispatchers = new NetworkDispatcher[threadPoolSize];
        mDelivery = delivery;
    }

The constructors we call all funnel into the third one. A quick word on its parameters: Cache and Network are already familiar; threadPoolSize is the number of network dispatcher threads, with DEFAULT_NETWORK_THREAD_POOL_SIZE defaulting to 4; and the ResponseDelivery interface is responsible for delivering responses. Its methods are:

    // Delivers a response parsed from the network or the cache
    public void postResponse(Request<?> request, Response<?> response);

    // Same as above, but runs the given Runnable after the response has been delivered
    public void postResponse(Request<?> request, Response<?> response, Runnable runnable);

    // Delivers an error
    public void postError(Request<?> request, VolleyError error);

Here we use one of its implementations, ExecutorDelivery, which is introduced later in this article.
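For reference, here is roughly what the shorter constructors expand to if you assemble a queue by hand; the handler-based ExecutorDelivery is exactly what the two-argument constructor supplies for you (cacheDir is assumed to be some existing directory):

    RequestQueue queue = new RequestQueue(
            new DiskBasedCache(cacheDir),                               // Cache
            new BasicNetwork(new HurlStack()),                          // Network
            4,                                                          // threadPoolSize
            new ExecutorDelivery(new Handler(Looper.getMainLooper()))); // ResponseDelivery
    queue.start();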

The Volley class calls RequestQueue's start method; let's look at it now:

    public void start() {
        stop();  // Make sure any currently running dispatchers are stopped.
        // Create the cache dispatcher and start it.
        mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
        mCacheDispatcher.start();

        // Create network dispatchers (and corresponding threads) up to the pool size.
        for (int i = 0; i < mDispatchers.length; i++) {
            NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                    mCache, mDelivery);
            mDispatchers[i] = networkDispatcher;
            networkDispatcher.start();
        }
    }

The method is not long: it creates one cache dispatcher and, with the default pool size, four network dispatchers. Both classes extend Thread, and start() launches these threads right away.

In normal use, after creating a RequestQueue we add requests to it. Before looking at the add method, let's look at a few fields:

    // If a cacheable request is already in flight, subsequent duplicate requests are staged in this map
    private final Map<String, Queue<Request<?>>> mWaitingRequests =
            new HashMap<String, Queue<Request<?>>>();

    // The set of requests currently being processed and not yet finished
    private final Set<Request<?>> mCurrentRequests = new HashSet<Request<?>>();

    // Unbounded priority queue of requests served from the local cache
    private final PriorityBlockingQueue<Request<?>> mCacheQueue =
        new PriorityBlockingQueue<Request<?>>();

    // Unbounded priority queue of requests served over the network
    private final PriorityBlockingQueue<Request<?>> mNetworkQueue =
        new PriorityBlockingQueue<Request<?>>();

Then the add method:


    public <T> Request<T> add(Request<T> request) {
        // Tag the request as belonging to this queue and add it to the set of current requests.
        request.setRequestQueue(this);
        synchronized (mCurrentRequests) {
            mCurrentRequests.add(request);
        }

        // Process requests in the order they are added.
        request.setSequence(getSequenceNumber());
        request.addMarker("add-to-queue");

        // If the request is uncacheable, skip the cache queue and go straight to the network.
        if (!request.shouldCache()) {
            mNetworkQueue.add(request);
            return request;
        }

        // Insert request into stage if there's already a request with the same cache key in flight.
        synchronized (mWaitingRequests) {
            String cacheKey = request.getCacheKey();
            if (mWaitingRequests.containsKey(cacheKey)) {
                // There is already a request in flight. Queue up.
                Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
                if (stagedRequests == null) {
                    stagedRequests = new LinkedList<Request<?>>();
                }
                stagedRequests.add(request);
                mWaitingRequests.put(cacheKey, stagedRequests);
                if (VolleyLog.DEBUG) {
                    VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
                }
            } else {
                // Insert 'null' queue for this cacheKey, indicating there is now a request in
                // flight.
                mWaitingRequests.put(cacheKey, null);
                mCacheQueue.add(request);
            }
            return request;
        }
    }

It executes the following steps:

1. First, inject the current request queue into the request that was passed in.

2. Add the request to the mCurrentRequests set.

3. Check whether the request is cacheable; if not, put it straight onto the network queue.

4. If it is cacheable, check whether mWaitingRequests already contains the request's cache key. If it does not, put an entry with that cache key and a null value into mWaitingRequests and add the request to the cache queue.

5. If mWaitingRequests does contain the key, take the queue stored as its value (creating one if it does not exist yet) and enqueue the current request there.

As you can see, all add does is put the request onto the cache queue or the network queue. So how does a request actually get executed? Let's look at NetworkDispatcher and CacheDispatcher next.

NetworkDispatcher

    public NetworkDispatcher(BlockingQueue<Request<?>> queue,
            Network network, Cache cache,
            ResponseDelivery delivery) {
        mQueue = queue;
        mNetwork = network;
        mCache = cache;
        mDelivery = delivery;
    }

First the constructor: all four parameters were injected when the NetworkDispatcher was created; if any of them are unfamiliar, refer back to the earlier sections.

Since it extends Thread, what we care about most is the run method:

    public void run() {
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
        while (true) {
            long startTimeMs = SystemClock.elapsedRealtime();
            Request<?> request;
            try {
                // Take a request from the queue.
                request = mQueue.take();
            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }

            try {
                request.addMarker("network-queue-take");

                // If the request was cancelled already, do not perform the
                // network request.
                if (request.isCanceled()) {
                    request.finish("network-discard-cancelled");
                    continue;
                }

                addTrafficStatsTag(request);

                // Perform the network request.
                NetworkResponse networkResponse = mNetwork.performRequest(request);
                request.addMarker("network-http-complete");

                // If the server returned 304 AND we delivered a response already,
                // we're done -- don't deliver a second identical response.
                if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                    request.finish("not-modified");
                    continue;
                }

                // Parse the response here on the worker thread.
                Response<?> response = request.parseNetworkResponse(networkResponse);
                request.addMarker("network-parse-complete");

                // Write to cache if applicable.
                // TODO: Only update cache metadata instead of entire record for 304s.
                if (request.shouldCache() && response.cacheEntry != null) {
                    mCache.put(request.getCacheKey(), response.cacheEntry);
                    request.addMarker("network-cache-written");
                }

                // Post the response back.
                request.markDelivered();
                mDelivery.postResponse(request, response);
            } catch (VolleyError volleyError) {
                volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                parseAndDeliverNetworkError(request, volleyError);
            } catch (Exception e) {
                VolleyLog.e(e, "Unhandled exception %s", e.toString());
                VolleyError volleyError = new VolleyError(e);
                volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                mDelivery.postError(request, volleyError);
            }
        }
    }

Let's walk through its main flow:

1. Take a request from mQueue, which is the RequestQueue's network queue. Since it is a blocking queue, run blocks whenever there is nothing to take.

2. Check whether the request has been canceled; if so, call its finish method, which indirectly calls RequestQueue's finish.

3. Use the Network implementation to perform the network request and wrap the result in a NetworkResponse.

4. If the server returned 304 and a response has already been delivered, just finish.

5. Parse the NetworkResponse into the Response class we looked at at the beginning.

6. If the request is cacheable and the cache entry obtained from the network is not null, store it through the Cache implementation.

7. Hand the result to the ResponseDelivery instance for delivery.

That covers this class in broad strokes; now let's look at CacheDispatcher's run method.

CacheDispatcher

    public void run() {
        if (DEBUG) VolleyLog.v("start new dispatcher");
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

        // Make a blocking call to initialize the cache.
        mCache.initialize();

        while (true) {
            try {
                // Get a request from the cache triage queue, blocking until
                // at least one is available.
                final Request<?> request = mCacheQueue.take();
                request.addMarker("cache-queue-take");

                // If the request has been canceled, don't bother dispatching it.
                if (request.isCanceled()) {
                    request.finish("cache-discard-canceled");
                    continue;
                }

                // Attempt to retrieve this item from cache.
                Cache.Entry entry = mCache.get(request.getCacheKey());
                if (entry == null) {
                    request.addMarker("cache-miss");
                    // Cache miss; send off to the network dispatcher.
                    mNetworkQueue.put(request);
                    continue;
                }

                // If it is completely expired, just send it to the network.
                if (entry.isExpired()) {
                    request.addMarker("cache-hit-expired");
                    request.setCacheEntry(entry);
                    mNetworkQueue.put(request);
                    continue;
                }

                // We have a cache hit; parse its data for delivery back to the request.
                request.addMarker("cache-hit");
                Response<?> response = request.parseNetworkResponse(
                        new NetworkResponse(entry.data, entry.responseHeaders));
                request.addMarker("cache-hit-parsed");

                if (!entry.refreshNeeded()) {
                    // Completely unexpired cache hit. Just deliver the response.
                    mDelivery.postResponse(request, response);
                } else {
                    // Soft-expired cache hit. We can deliver the cached response,
                    // but we need to also send the request to the network for
                    // refreshing.
                    request.addMarker("cache-hit-refresh-needed");
                    request.setCacheEntry(entry);

                    // Mark the response as intermediate.
                    response.intermediate = true;

                    // Post the intermediate response back to the user and have
                    // the delivery then forward the request along to the network.
                    mDelivery.postResponse(request, response, new Runnable() {
                        @Override
                        public void run() {
                            try {
                                mNetworkQueue.put(request);
                            } catch (InterruptedException e) {
                                // Not much we can do about this.
                            }
                        }
                    });
                }

            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }
        }
    }

The method is fairly long, so let's go through it piece by piece:

1. As before, check whether the request has been canceled; if so, finish it.

2. Try to fetch this request's cache entry; on a cache miss, put the request onto the network queue.

3. If an entry was found, check whether it has fully expired; if so, put the request onto the network queue as well.

4. If it has not expired, build a NetworkResponse from the cached entry and parse it into a Response object.

5. Still in the unexpired case, check whether the entry needs a refresh (soft expiry); if so, deliver the cached response as an intermediate result and then forward the request to the network queue.

6. If no refresh is needed, simply deliver the response through the ResponseDelivery instance.

Both NetworkDispatcher and CacheDispatcher use ResponseDelivery, and in our case the instance is an ExecutorDelivery. Let's take a quick look at it:

ExecutorDelivery

    @Override
    public void postResponse(Request<?> request, Response<?> response) {
        postResponse(request, response, null);
    }

    @Override
    public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
        request.markDelivered();
        request.addMarker("post-response");
        mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
    }

As you can see, all delivery goes through mResponsePoster; let's look at how it is defined:

    public ExecutorDelivery(final Handler handler) {
        // Make an Executor that just wraps the handler.
        mResponsePoster = new Executor() {
            @Override
            public void execute(Runnable command) {
                handler.post(command);
            }
        };
    }

mResponsePoster is created in the constructor. It is not really a thread pool; it is an Executor that simply wraps a Handler, so executing a task boils down to calling handler.post. Remember where this handler comes from? It is the Handler created in RequestQueue's constructor with the main thread's Looper, so whatever is posted runs on the main thread. According to the code above, what gets executed is a ResponseDeliveryRunnable; here is its definition:

    private class ResponseDeliveryRunnable implements Runnable {
        private final Request mRequest;
        private final Response mResponse;
        private final Runnable mRunnable;

        public ResponseDeliveryRunnable(Request request, Response response, Runnable runnable) {
            mRequest = request;
            mResponse = response;
            mRunnable = runnable;
        }

        @SuppressWarnings("unchecked")
        @Override
        public void run() {
            // If this request has canceled, finish it and don't deliver.
            if (mRequest.isCanceled()) {
                mRequest.finish("canceled-at-delivery");
                return;
            }

            // Deliver a normal response or error, depending.
            if (mResponse.isSuccess()) {
                mRequest.deliverResponse(mResponse.result);
            } else {
                mRequest.deliverError(mResponse.error);
            }

            // If this is an intermediate response, add a marker, otherwise we're done
            // and the request can be finished.
            if (mResponse.intermediate) {
                mRequest.addMarker("intermediate-response");
            } else {
                mRequest.finish("done");
            }

            // If we have been provided a post-delivery runnable, run it.
            if (mRunnable != null) {
                mRunnable.run();
            }
        }
    }

The key line: when the response is successful, the request's deliverResponse method is called. If you recall, this is an abstract method, so let's take one of its subclasses, JsonRequest, and see what actually happens.

JsonRequest

    @Override
    protected void deliverResponse(T response) {
        mListener.onResponse(response);
    }

It calls a method on mListener. Where does mListener come from?

    public JsonRequest(int method, String url, String requestBody, Listener<T> listener,
            ErrorListener errorListener) {
        super(method, url, errorListener);
        mListener = listener;
        mRequestBody = requestBody;
    }

It is passed in through the constructor: a callback. At this point the whole pipeline is clear. There is one loose end left, though: RequestQueue's finish method, which is called indirectly by the request's finish method. Let's take a look.

RequestQueue

    <T> void finish(Request<T> request) {
        // Remove from the set of requests currently being processed.
        synchronized (mCurrentRequests) {
            mCurrentRequests.remove(request);
        }
        synchronized (mFinishedListeners) {
            for (RequestFinishedListener<T> listener : mFinishedListeners) {
                listener.onRequestFinished(request);
            }
        }

        if (request.shouldCache()) {
            synchronized (mWaitingRequests) {
                String cacheKey = request.getCacheKey();
                Queue<Request<?>> waitingRequests = mWaitingRequests.remove(cacheKey);
                if (waitingRequests != null) {
                    if (VolleyLog.DEBUG) {
                        VolleyLog.v("Releasing %d waiting requests for cacheKey=%s.",
                                waitingRequests.size(), cacheKey);
                    }
                    // Process all queued up requests. They won't be considered as in flight, but
                    // that's not a problem as the cache has been primed by 'request'.
                    mCacheQueue.addAll(waitingRequests);
                }
            }
        }
    }

The main thing it does is remove the request from mCurrentRequests. The important part, though, is mWaitingRequests, which holds the staged duplicate requests: when a request finishes, they are all moved onto the cache queue so they can be served from the cache that was just primed.


That completes the walkthrough of Volley's main source path. The code is beautifully written: it programs against interfaces and is extremely extensible.
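Putting the pipeline together, a typical call site looks roughly like this; the URL and the listener bodies are placeholders, while Volley.newRequestQueue, StringRequest and RequestQueue.add are the real toolbox APIs:

    RequestQueue queue = Volley.newRequestQueue(context);

    StringRequest request = new StringRequest(Request.Method.GET, "http://example.com/api",
            new Response.Listener<String>() {
                @Override
                public void onResponse(String response) {
                    // Delivered on the main thread via ExecutorDelivery.
                }
            },
            new Response.ErrorListener() {
                @Override
                public void onErrorResponse(VolleyError error) {
                    // Error callback, also on the main thread.
                }
            });

    queue.add(request);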

The original article closed with Volley's overall architecture diagram and request flow diagram; the images are not reproduced here.



