[Repost]

A Brief Analysis of the Android Volley Library Source Code (HTTP Request Part)

This article only analyses a few simple use cases of the HTTP Request part of Volley.

Source tree

First, a script was used to generate the directory tree of the project's source code:

    -[ android ]
      -[ volley ]
        |- AuthFailureError.java
        |- Cache.java
        |- CacheDispatcher.java
        |- DefaultRetryPolicy.java
        |- ExecutorDelivery.java
        |- InternalUtils.java
        |- Network.java
        |- NetworkDispatcher.java
        |- NetworkError.java
        |- NetworkResponse.java
        |- NoConnectionError.java
        |- ParseError.java
        |- RedirectError.java
        |- Request.java
        |- RequestQueue.java
        |- Response.java
        |- ResponseDelivery.java
        |- RetryPolicy.java
        |- ServerError.java
        |- TimeoutError.java
        |- VolleyError.java
        |- VolleyLog.java
        |
        -[ toolbox ]
          |- AndroidAuthenticator.java
          |- Authenticator.java
          |- BasicNetwork.java
          |- ByteArrayPool.java
          |- ClearCacheRequest.java
          |- DiskBasedCache.java
          |- HttpClientStack.java
          |- HttpHeaderParser.java
          |- HttpStack.java
          |- HurlStack.java
          |- ImageLoader.java
          |- ImageRequest.java
          |- JsonArrayRequest.java
          |- JsonObjectRequest.java
          |- JsonRequest.java
          |- NetworkImageView.java
          |- NoCache.java
          |- PoolingByteArrayOutputStream.java
          |- RequestFuture.java
          |- StringRequest.java
          |- Volley.java

As you can see, the Volley source is laid out rather loosely: classes belonging to different functional modules are not separated into different packages. By comparison, the source layout of UIL is more orderly and sensible.

Inferring the project architecture from a common use case

The simplest usage example given on the official site looks like this:

    final TextView mTextView = (TextView) findViewById(R.id.text);
    ...
    // Instantiate the RequestQueue.
    RequestQueue queue = Volley.newRequestQueue(this); // 1. Create a queue
    String url = "http://www.google.com";

    // Request a string response from the provided URL.
    StringRequest stringRequest = new StringRequest(Request.Method.GET, url, // 2. Create a request and write its listeners
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                // Display the first 500 characters of the response string.
                mTextView.setText("Response is: " + response.substring(0, 500));
            }
        }, new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                mTextView.setText("That didn't work!");
            }
        });

    // Add the request to the RequestQueue.
    queue.add(stringRequest); // 3. Put the request on the queue to be executed

Together with the diagram below:

(Figure: the Volley architecture diagram)

we can get a rough idea of how Volley is used (see the comments) and how it is structured internally. The rest of this article walks through this use case at the source level.

The Volley class

The Volley class provides four static methods that make it convenient to create a new queue. Of these,

    public static RequestQueue newRequestQueue(Context context) {
        return newRequestQueue(context, null);
    }

eventually ends up calling:

    // Called with context, stack = null, maxDiskCacheBytes = -1
    public static RequestQueue newRequestQueue(Context context, HttpStack stack, int maxDiskCacheBytes) {
        File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);

        String userAgent = "volley/0"; // 1. Build the user agent
        try {
            String packageName = context.getPackageName();
            PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
            userAgent = packageName + "/" + info.versionCode;
        } catch (NameNotFoundException e) {
        }

        if (stack == null) {
            if (Build.VERSION.SDK_INT >= 9) { // 2. Choose which HTTP client to use
                stack = new HurlStack();
            } else {
                // Prior to Gingerbread, HttpUrlConnection was unreliable.
                // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
                stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
            }
        }

        Network network = new BasicNetwork(stack);

        RequestQueue queue;
        if (maxDiskCacheBytes <= -1) {
            // No maximum size specified
            queue = new RequestQueue(new DiskBasedCache(cacheDir), network); // 3. Create the queue
        } else {
            // Disk cache size specified
            queue = new RequestQueue(new DiskBasedCache(cacheDir, maxDiskCacheBytes), network);
        }

        queue.start(); // 4. Start the queue
        return queue;
    }

A few things worth noting:

  1. Volley decides, based on the SDK version, whether to use java.net.HttpURLConnection (Build.VERSION.SDK_INT >= 9) or org.apache.http.client.HttpClient.

  2. The queue is started immediately after it is created.

  3. The stack is responsible for sending the request (com.android.volley.Request) and obtaining the response (org.apache.http.HttpResponse); the network is responsible for analysing and processing that response and wrapping it into a NetworkResponse (com.android.volley.NetworkResponse). A sketch of plugging in a custom stack follows this list.
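
Since the stack is the pluggable piece, one way to see this division of labour is to wrap the default HurlStack and hand the wrapper to newRequestQueue. This is only a minimal sketch, assuming the two-argument newRequestQueue(Context, HttpStack) overload implied above; the HeaderStack class and the X-App-Version header are made up for illustration:

    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;

    import org.apache.http.HttpResponse;

    import com.android.volley.AuthFailureError;
    import com.android.volley.Request;
    import com.android.volley.toolbox.HttpStack;
    import com.android.volley.toolbox.HurlStack;

    // Hypothetical wrapper: it only sends the request and obtains the raw HttpResponse,
    // which is exactly the stack's job; turning that HttpResponse into a NetworkResponse
    // is still left to BasicNetwork.
    public class HeaderStack implements HttpStack {
        private final HttpStack mDelegate = new HurlStack();

        @Override
        public HttpResponse performRequest(Request<?> request, Map<String, String> additionalHeaders)
                throws IOException, AuthFailureError {
            Map<String, String> headers = new HashMap<String, String>(additionalHeaders);
            headers.put("X-App-Version", "1"); // made-up header, for illustration only
            return mDelegate.performRequest(request, headers);
        }
    }

    // Usage: RequestQueue queue = Volley.newRequestQueue(context, new HeaderStack());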

For now let's ignore the network-related details and look at how the queue is implemented and how requests are scheduled.

RequestQueue

Let's first look at RequestQueue's constructors:

    public RequestQueue(Cache cache, Network network) {
        this(cache, network, DEFAULT_NETWORK_THREAD_POOL_SIZE);
    }

which calls:

    public RequestQueue(Cache cache, Network network, int threadPoolSize) {
        this(cache, network, threadPoolSize,
                new ExecutorDelivery(new Handler(Looper.getMainLooper())));
    }

A new face appears here: ExecutorDelivery. Judging by the name, we can guess that it is responsible for dispatching request results back to the main thread, or in other words for running the callbacks (listeners) on the main thread. The chain continues with:

    public RequestQueue(Cache cache, Network network, int threadPoolSize,
            ResponseDelivery delivery) {
        mCache = cache;
        mNetwork = network;
        mDispatchers = new NetworkDispatcher[threadPoolSize];
        mDelivery = delivery;
    }

Another new face appears here: NetworkDispatcher. Given that the array's length is the threadPoolSize parameter, and recalling the Volley architecture diagram above, we can guess that a NetworkDispatcher is a worker thread which loops, waiting for requests on the queue and executing them through the network.
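
Since the three-argument constructor is public, nothing stops us from wiring a queue by hand instead of going through Volley.newRequestQueue. A minimal sketch, assuming a hypothetical cache directory name "volley" and a pool of two network threads (the default DEFAULT_NETWORK_THREAD_POOL_SIZE is 4):

    import java.io.File;

    import android.content.Context;

    import com.android.volley.Network;
    import com.android.volley.RequestQueue;
    import com.android.volley.toolbox.BasicNetwork;
    import com.android.volley.toolbox.DiskBasedCache;
    import com.android.volley.toolbox.HurlStack;

    public final class Queues {
        // Builds a RequestQueue with 2 network dispatcher threads instead of the default 4.
        public static RequestQueue newSmallQueue(Context context) {
            File cacheDir = new File(context.getCacheDir(), "volley"); // hypothetical directory name
            Network network = new BasicNetwork(new HurlStack());
            RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network, 2);
            queue.start(); // don't forget to start the dispatchers, as newRequestQueue does
            return queue;
        }
    }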

After the RequestQueue has been instantiated, its start() method is called:

    public void start() {
        stop();  // Make sure any currently running dispatchers are stopped.
        // Create the cache dispatcher and start it.
        mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
        mCacheDispatcher.start();

        // Create network dispatchers (and corresponding threads) up to the pool size.
        for (int i = 0; i < mDispatchers.length; i++) {
            NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                    mCache, mDelivery);
            mDispatchers[i] = networkDispatcher;
            networkDispatcher.start();
        }
    }

Correspondingly, there is:

    public void stop() {
        if (mCacheDispatcher != null) {
            mCacheDispatcher.quit();
        }
        for (int i = 0; i < mDispatchers.length; i++) {
            if (mDispatchers[i] != null) {
                mDispatchers[i].quit();
            }
        }
    }

The logic here is quite simple:

  1. Before starting, stop any old tasks (i.e. interrupt all worker threads).

  2. Start one worker thread responsible for the cache.

  3. Start n worker threads responsible for the network.

  4. The worker threads then keep waiting for requests coming from the queue (a sketch of this wait/quit pattern follows the list).
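
Both dispatcher types follow the same blocking-consumer pattern: take() blocks on a BlockingQueue until a request arrives, and quit() sets a flag and interrupts the thread so that the blocked take() throws InterruptedException and the loop can exit. A stripped-down sketch of that pattern (not Volley's actual classes; the names are made up):

    import java.util.concurrent.BlockingQueue;

    // Hypothetical minimal dispatcher illustrating the take()/interrupt() lifecycle
    // used by both CacheDispatcher and NetworkDispatcher.
    class LoopingDispatcher<T> extends Thread {
        private final BlockingQueue<T> queue;
        private volatile boolean quit = false;

        LoopingDispatcher(BlockingQueue<T> queue) {
            this.queue = queue;
        }

        void quitSafely() {
            quit = true;
            interrupt(); // wakes up a thread blocked in queue.take()
        }

        @Override
        public void run() {
            while (true) {
                T item;
                try {
                    item = queue.take(); // blocks until an item is available
                } catch (InterruptedException e) {
                    if (quit) {
                        return; // the interrupt came from quitSafely(): exit the loop
                    }
                    continue; // not quitting: keep waiting
                }
                handle(item);
            }
        }

        void handle(T item) {
            // process the item; in Volley this is where the request is executed/delivered
        }
    }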

Request

Next, queue.add(stringRequest); is executed and the request is added to the queue. The code is shown below:

    public <T> Request<T> add(Request<T> request) {
        // Tag the request as belonging to this queue and add it to the set of current requests.
        request.setRequestQueue(this);
        synchronized (mCurrentRequests) {
            mCurrentRequests.add(request);
        }

        // Process requests in the order they are added.
        request.setSequence(getSequenceNumber());
        request.addMarker("add-to-queue"); // Markers indicate the request's current state; in practice they are used for logging.

        // If the request is uncacheable, skip the cache queue and go straight to the network.
        if (!request.shouldCache()) {
            mNetworkQueue.add(request);
            return request;
        }

        // Insert request into stage if there's already a request with the same cache key in flight.
        synchronized (mWaitingRequests) {
            String cacheKey = request.getCacheKey();
            if (mWaitingRequests.containsKey(cacheKey)) {
                // There is already a request in flight. Queue up.
                Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
                if (stagedRequests == null) {
                    stagedRequests = new LinkedList<Request<?>>();
                }
                stagedRequests.add(request);
                mWaitingRequests.put(cacheKey, stagedRequests);
                if (VolleyLog.DEBUG) {
                    VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
                }
            } else {
                // Insert 'null' queue for this cacheKey, indicating there is now a request in
                // flight.
                mWaitingRequests.put(cacheKey, null);
                mCacheQueue.add(request);
            }
            return request;
        }
    }

The logic here is:

  1. Perform some bookkeeping on the newly added request.

  2. If the request should not be cached, add it straight to the network queue.

  3. Check by cache key whether an identical request is already in flight. If so, put the new request on the waiting list; presumably, when the in-flight request finishes, some method removes the key from the waiting list and processes the waiting requests in turn (a sketch of that step follows this list). If not, add the request to the cache queue.
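
That guess is essentially what RequestQueue.finish(Request) does: it removes the request from the current set and, if the request is cacheable, drains any requests staged under the same cache key back onto the cache queue (by then the cache has been primed by the finished request). The following is a simplified sketch from memory, not the verbatim source:

    // Simplified sketch of the draining logic in RequestQueue.finish(Request);
    // field names follow the ones seen above, but this is not a verbatim copy.
    <T> void finish(Request<T> request) {
        synchronized (mCurrentRequests) {
            mCurrentRequests.remove(request);
        }

        if (request.shouldCache()) {
            synchronized (mWaitingRequests) {
                String cacheKey = request.getCacheKey();
                Queue<Request<?>> waitingRequests = mWaitingRequests.remove(cacheKey);
                if (waitingRequests != null) {
                    // The finished request has primed the cache, so the staged
                    // requests can all be served from the cache queue.
                    mCacheQueue.addAll(waitingRequests);
                }
            }
        }
    }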

CacheDispatcher

Assume this URI is being requested for the first time, so the corresponding request is put on the cache queue. The cache worker thread (cache dispatcher) notices that there is a request on the cache queue, immediately dequeues it and processes it. Let's look at CacheDispatcher's run method:

    public class CacheDispatcher extends Thread {
        ...
        private final Cache mCache; // The "new DiskBasedCache(cacheDir)" passed in earlier
        ...

        public void quit() {
            mQuit = true;
            interrupt();
        }

        @Override
        public void run() {
            if (DEBUG) VolleyLog.v("start new dispatcher");
            Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

            // Make a blocking call to initialize the cache.
            mCache.initialize();

            Request<?> request;
            while (true) {
                // release previous request object to avoid leaking request object when mQueue is drained.
                request = null; // Ensure the last request can be reclaimed promptly once it is done.
                try {
                    // Take a request from the queue.
                    request = mCacheQueue.take(); // Blocks
                } catch (InterruptedException e) {
                    // We may have been interrupted because it was time to quit.
                    if (mQuit) {
                        return; // Exit point
                    }
                    continue;
                }

                try {
                    request.addMarker("cache-queue-take");

                    // If the request has been canceled, don't bother dispatching it.
                    if (request.isCanceled()) {
                        request.finish("cache-discard-canceled");
                        continue;
                    }

                    // Attempt to retrieve this item from cache.
                    Cache.Entry entry = mCache.get(request.getCacheKey()); // On a cache miss, hand the request straight to the network queue.
                    if (entry == null) {
                        request.addMarker("cache-miss");
                        // Cache miss; send off to the network dispatcher.
                        mNetworkQueue.put(request);
                        continue;
                    }

                    // If it is completely expired, just send it to the network.
                    if (entry.isExpired()) { // The cache entry has expired; hand the request straight to the network queue.
                        request.addMarker("cache-hit-expired");
                        request.setCacheEntry(entry);
                        mNetworkQueue.put(request);
                        continue;
                    }

                    // We have a cache hit; parse its data for delivery back to the request.
                    request.addMarker("cache-hit");
                    Response<?> response = request.parseNetworkResponse( // Wrap the cache entry as a response
                            new NetworkResponse(entry.data, entry.responseHeaders));
                    request.addMarker("cache-hit-parsed");

                    if (!entry.refreshNeeded()) {
                        // Completely unexpired cache hit. Just deliver the response.
                        mDelivery.postResponse(request, response);
                    } else {
                        // Soft-expired cache hit. We can deliver the cached response,
                        // but we need to also send the request to the network for
                        // refreshing.
                        request.addMarker("cache-hit-refresh-needed");
                        request.setCacheEntry(entry);

                        // Mark the response as intermediate.
                        response.intermediate = true;

                        // Post the intermediate response back to the user and have
                        // the delivery then forward the request along to the network.
                        final Request<?> finalRequest = request;
                        mDelivery.postResponse(request, response, new Runnable() { // Deliver the response to the user and, at the same time, put the request on the network queue for refreshing.
                            @Override
                            public void run() {
                                try {
                                    mNetworkQueue.put(finalRequest);
                                } catch (InterruptedException e) {
                                    // Not much we can do about this.
                                }
                            }
                        });
                    }
                } catch (Exception e) {
                    VolleyLog.e(e, "Unhandled exception %s", e.toString());
                }
            }
        }
    }
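
The two expiry checks above are driven by two timestamps in Cache.Entry: a hard TTL and a soft TTL. A minimal sketch of the relevant fields and checks, assuming the standard Cache.Entry layout (simplified, not the full class):

    import java.util.Map;

    // Simplified view of com.android.volley.Cache.Entry's expiry fields and checks.
    public class Entry {
        public byte[] data;                         // cached body
        public Map<String, String> responseHeaders; // cached headers
        public long ttl;     // hard expiry time, in epoch millis
        public long softTtl; // soft expiry time, in epoch millis

        // Past the hard TTL: the entry is unusable, so the request goes to the network.
        public boolean isExpired() {
            return this.ttl < System.currentTimeMillis();
        }

        // Past the soft TTL: the entry can still be delivered as an intermediate
        // response, but a refresh request is also sent to the network.
        public boolean refreshNeeded() {
            return this.softTtl < System.currentTimeMillis();
        }
    }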

Next, let's look at the mDelivery.postResponse method.

ExecutorDelivery

As shown above, mDelivery is an instance of ExecutorDelivery (passed in when the RequestQueue is created).

ExecutorDelivery is initialized as follows:

    public ExecutorDelivery(final Handler handler) {
        // Make an Executor that just wraps the handler.
        mResponsePoster = new Executor() { // java.util.concurrent.Executor
            @Override
            public void execute(Runnable command) {
                handler.post(command);
            }
        };
    }

For background on java.util.concurrent.Executor, see [this article](); it is not covered here.
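
The key point is that ExecutorDelivery simply posts to whatever Looper its Handler is bound to. RequestQueue passes in a main-looper Handler, which is why listeners run on the UI thread; handing it a Handler on another thread would move the callbacks there. A hedged sketch (the thread name "volley-callbacks" and the Deliveries class are made up):

    import android.os.Handler;
    import android.os.HandlerThread;

    import com.android.volley.ExecutorDelivery;
    import com.android.volley.ResponseDelivery;

    public final class Deliveries {
        // Builds a delivery whose callbacks run on a dedicated background thread
        // instead of the main thread.
        public static ResponseDelivery onBackgroundThread() {
            HandlerThread callbackThread = new HandlerThread("volley-callbacks"); // made-up name
            callbackThread.start();
            return new ExecutorDelivery(new Handler(callbackThread.getLooper()));
        }
    }

Such a delivery could then be passed to the four-argument RequestQueue constructor shown earlier.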

The code of postResponse is shown below:

    @Override
    public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
        request.markDelivered(); // Mark the request as delivered
        request.addMarker("post-response");
        mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable)); // Run ResponseDeliveryRunnable on the handler supplied at construction time
    }

ResponseDeliveryRunnable is a private inner class of ExecutorDelivery; depending on the outcome of the request it invokes the corresponding listener method:

@SuppressWarnings("rawtypes") private class ResponseDeliveryRunnable implements Runnable {  private final Request mRequest;  private final Response mResponse;  private final Runnable mRunnable;  public ResponseDeliveryRunnable(Request request, Response response, Runnable runnable) {   mRequest = request;   mResponse = response;   mRunnable = runnable;  }  @SuppressWarnings("unchecked")  @Override  public void run() { // 在主线程中执行   // If this request has canceled, finish it and don't deliver.   if (mRequest.isCanceled()) {    mRequest.finish("canceled-at-delivery"); // 会调用 RequestQueue的finish方法    return;   }   // Deliver a normal response or error, depending.   if (mResponse.isSuccess()) {    mRequest.deliverResponse(mResponse.result); //调用 listener的onResponse(response)   } else {    mRequest.deliverError(mResponse.error);   }   // If this is an intermediate response, add a marker, otherwise we're done   // and the request can be finished.   if (mResponse.intermediate) {    mRequest.addMarker("intermediate-response");   } else {    mRequest.finish("done");   }   // If we have been provided a post-delivery runnable, run it.   if (mRunnable != null) {    mRunnable.run();   }    } }  

Next, let's go back and look at how NetworkDispatcher processes the network queue.

NetworkDispatcher

The source of NetworkDispatcher is shown below:

    public class NetworkDispatcher extends Thread {
        private final Network mNetwork; // A BasicNetwork instance
        ...
        private final BlockingQueue<Request<?>> mQueue; // The network queue
        ...

        public void quit() {
            mQuit = true;
            interrupt();
        }

        @TargetApi(Build.VERSION_CODES.ICE_CREAM_SANDWICH)
        private void addTrafficStatsTag(Request<?> request) { // Makes it easy to attribute Volley's network traffic
            ...
        }

        @Override
        public void run() {
            Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
            Request<?> request;
            while (true) {
                long startTimeMs = SystemClock.elapsedRealtime();
                // release previous request object to avoid leaking request object when mQueue is drained.
                request = null;
                try {
                    // Take a request from the queue.
                    request = mQueue.take(); // 1. Blocking take of a request from the network queue
                } catch (InterruptedException e) {
                    // We may have been interrupted because it was time to quit.
                    if (mQuit) {
                        return;
                    }
                    continue;
                }

                try {
                    request.addMarker("network-queue-take");

                    // If the request was cancelled already, do not perform the
                    // network request.
                    if (request.isCanceled()) {
                        request.finish("network-discard-cancelled");
                        continue;
                    }

                    addTrafficStatsTag(request);

                    // Perform the network request.
                    NetworkResponse networkResponse = mNetwork.performRequest(request); // 2. Execute the request, blocking, on the network object
                    request.addMarker("network-http-complete");

                    // If the server returned 304 AND we delivered a response already,
                    // we're done -- don't deliver a second identical response.
                    if (networkResponse.notModified && request.hasHadResponseDelivered()) { // 304 means the resource has not been modified
                        request.finish("not-modified");
                        continue;
                    }

                    // Parse the response here on the worker thread.
                    Response<?> response = request.parseNetworkResponse(networkResponse); // 3. Convert the NetworkResponse into a Response
                    request.addMarker("network-parse-complete");

                    // Write to cache if applicable.
                    // TODO: Only update cache metadata instead of entire record for 304s.
                    if (request.shouldCache() && response.cacheEntry != null) {
                        mCache.put(request.getCacheKey(), response.cacheEntry); // 4. Put the response into the cache
                        request.addMarker("network-cache-written");
                    }

                    // Post the response back.
                    request.markDelivered();
                    mDelivery.postResponse(request, response); // 5. Deliver the result through the delivery
                } catch (VolleyError volleyError) {
                    volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                    parseAndDeliverNetworkError(request, volleyError);
                } catch (Exception e) {
                    VolleyLog.e(e, "Unhandled exception %s", e.toString());
                    VolleyError volleyError = new VolleyError(e);
                    volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                    mDelivery.postError(request, volleyError);
                }
            }
        }

        private void parseAndDeliverNetworkError(Request<?> request, VolleyError error) {
            error = request.parseNetworkError(error);
            mDelivery.postError(request, error);
        }
    }

NetworkDispatcher's processing flow is much the same as CacheDispatcher's; see the comments. An introduction to TrafficStats can be found here.
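
The body of addTrafficStatsTag is elided above; on API 14+ it essentially just tags the worker thread's socket traffic so that it shows up under the request's tag in per-app network statistics. A sketch of roughly what that call looks like (simplified, from memory):

    // Simplified sketch of the elided addTrafficStatsTag body: tag this worker
    // thread's sockets so Volley traffic can be attributed per request tag.
    @TargetApi(Build.VERSION_CODES.ICE_CREAM_SANDWICH)
    private void addTrafficStatsTag(Request<?> request) {
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.ICE_CREAM_SANDWICH) {
            android.net.TrafficStats.setThreadStatsTag(request.getTrafficStatsTag());
        }
    }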

The Network side involves quite a lot of detail and is not analysed in this article. Once Volley's main path is clear, the remaining parts are just feature extensions built on the modules along that path.

Summary

In summary, Volley's rough framework is as follows:

  1. A RequestQueue contains two internal queues, the cache queue and the network queue; one cache dispatcher and n network dispatchers, all of which extend Thread and are responsible for serving requests from the cache and from the network respectively; and one delivery, responsible for dispatching the results.

  2. The cache dispatcher runs on its own thread. It loops, waiting for requests on the cache queue, takes them, executes them, and hands the results to the delivery.

  3. The n network dispatchers each run on their own thread. A network dispatcher loops, waiting for requests on the network queue, takes them, executes them, hands the result to the delivery and writes it to the cache.

  4. The delivery passes the results to the corresponding listener callbacks on the main thread.
