
Writing an AsyncLoader to enqueue long running operations

Andrea Boschin
posted on Mar 10, 2010
Tags: asyncloader, wcf, wcf-ria-services, andrea-boschin

I think some of you may have developed an application that requires a lot of roundtrips to the server to retrieve the data to be displayed to the user. Every time the application goes to the server, it may have to wait for a long running query to finish its work, perhaps because the data is extracted from a huge database. Then it has to download the data and finally display it on the screen.

If you have already dealt with this kind of interaction, you will know that the two connection limit of the web browser can become evident. For those of you who are not aware of this limitation: according to the RFC 2616 specification, compliant browsers should not hammer the network and are limited to two simultaneous connections to the server (per domain). This limitation probably dates back to the days before broadband existed, but today it can be annoying to deal with.

The main concern in this kind of situation does not come from the slow download of the data from the server, simply because waiting a bunch of seconds more than the time required to extract the data from the database is not a real problem. The problem is that you want to leave the user free to interact with the interface, and this means that other queries, usually short ones triggered by the user, end up enqueued behind the long running queries.

Download source code

How to deal with this?

In my experience, when you run multiple requests from the browser through the network stack, they are automatically enqueued by the browser and executed two by two. Multiple requests are common, for instance when you download an HTML page with multiple images: all of them are downloaded asynchronously, honouring the two connection limit. So when a short query is invoked after two long queries, it has to wait for at least one of the long queries to complete before it can start.

The best solution, to me, is to reserve one of the two channels for the long running queries and leave the other free, so that the short queries are satisfied as soon as possible. I finally figured out that keeping them separated is really possible if I make sure the long running queries are called one by one, through a private queue: they keep one of the two channels busy, but the other is always available.

For this task I've created an Application Service that I can attach to the ApplicationLifetimeObjects collection of the application. This kind of service was introduced in Silverlight 3.0 and is notified by the plugin runtime about the Start and Stop events of the application. An Application Service simply implements the IApplicationService interface. Here is the interface itself:

 public interface IApplicationService
 {
     void StartService(ApplicationServiceContext context);
     void StopService();
 }

As you can understand, the runtime calls the StartService method when the application starts and the StopService method just before the application ends its life. To install the service you need to add it to the ApplicationLifetimeObjects collection in the App.xaml file.

 <Application xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
              xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" 
              xmlns:code="clr-namespace:SilverlightPlayground.MultiAsyncLoad.Code"
              x:Class="SilverlightPlayground.MultiAsyncLoad.App">
     <Application.ApplicationLifetimeObjects>
         <code:AsyncJobsManager />
     </Application.ApplicationLifetimeObjects>
 </Application>
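A note before going on: the samples later in the article reach the service through an AsyncJobsManager.Current property. Its implementation is in the downloadable source; as a minimal sketch, assuming the instance created from App.xaml simply publishes itself when the runtime starts it, it may look like this:

 // Hypothetical accessor: the instance declared in App.xaml publishes
 // itself when the runtime invokes StartService (the base class,
 // ThreadedService, is introduced below).
 public static AsyncJobsManager Current { get; private set; }
 
 public override void StartService(ApplicationServiceContext context)
 {
     Current = this;
     base.StartService(context);
 }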

The AsyncJobsManager service

The idea behind the solution is to have a thread started when the application starts. This thread is responsible for checking the items enqueued in a local queue and triggering, one by one, the work they have to accomplish. When someone needs to make a call to the server, it calls the "Enqueue" method of the service and the call is added at the end of the waiting queue. When there is room for the call, the service starts it and waits for it to end before running another one. Obviously during these stages the service has to notify the status changes to the caller, so it can give feedback to the user and handle the return value if there is one.

So the first thing to do is to build the skeleton of the service. For this purpose I've created a ThreadedService abstract class that is responsible for starting a thread and making it end gracefully when the application terminates. The ThreadedService uses an instance of ManualResetEvent to control the main loop of the thread procedure: this event is set when the thread must stop and close. We will see in a moment why I've used this class.

 public abstract class ThreadedService : IApplicationService
 {
     // set when the application is stopping, to let the thread exit
     private ManualResetEvent ExitEvent { get; set; }
     protected WaitHandle[] ExitHandles { get; set; }
  
     public ThreadedService()
     {
         this.ExitEvent = new ManualResetEvent(false);
         this.ExitHandles = new WaitHandle[] { this.ExitEvent };
     }
   
     // the main body of the thread, supplied by the inheriting class
     protected abstract void ThreadProc();
  
     public virtual void StartService(ApplicationServiceContext context)
     {
         ThreadStart listenThreadStart = new ThreadStart(ThreadProc);
         Thread thread = new Thread(listenThreadStart);
         thread.Start();
     }
   
     public virtual void StopService()
     {
         // signal the thread to end gracefully
         this.ExitEvent.Set();
     }
 }

The main body of the thread is the ThreadProc abstract method, which has to be overridden by the inheriting class. Typically this procedure makes a WaitAny() on the ExitEvent with a short timeout (about 100 ms), then tries to do its job and returns to the WaitAny; when the ExitEvent is set, the procedure quits. For this service I've decided not to use this pattern, because the service may be mostly busy making calls instead of waiting for the exit event, so quitting might not be immediate. I've instead introduced another event, WorkCompleted, that notifies the main procedure that a call has been completed and there is room to run another call. Here is the main body:

 protected override void ThreadProc()
 {
     while (true)
     {
         // block until either the exit event or WorkCompleted is signaled
         int index = WaitHandle.WaitAny(this.ExitHandles);
         WaitHandle handle = this.ExitHandles.ElementAt(index);
  
         // anything other than WorkCompleted is the exit event: quit
         if (!handle.Equals(this.WorkCompleted))
             return;
   
         this.TryExecuteAction();
     }
 }
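For the WaitAny above to see both events, the inheriting class has to extend the ExitHandles array prepared by the base constructor. Here is a minimal sketch of that wiring, where the choice of an AutoResetEvent (which resets itself after releasing the waiting thread) is my assumption:

 // Signaled each time a queued call completes, making room for the next.
 private AutoResetEvent WorkCompleted { get; set; }
 
 public AsyncJobsManager()
 {
     this.WorkCompleted = new AutoResetEvent(false);
 
     // wait on both the inherited ExitEvent and WorkCompleted, so the
     // WaitAny in ThreadProc wakes up on either signal
     this.ExitHandles = this.ExitHandles
         .Concat(new WaitHandle[] { this.WorkCompleted })
         .ToArray();
 }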

The TryExecuteAction method is also called when an item is enqueued. It starts by trying to set a flag before starting the work: if it is able to set the flag the work can start, otherwise it waits for another chance. The operation that reads and sets the isRunning flag has to be atomic. Don't forget we are running in a multithreaded environment, so reading the value of a variable does not mean its value is still the same a millisecond after the read. To make this read-and-set atomic we use the Interlocked class provided by the framework: its Exchange method swaps the value of a variable with another value and returns the original one, all in a single atomic operation.

We cannot simply use a lock here, because we need a flag that stays set to indicate our channel (the one reserved for the long running queries) is busy, and while its work is running we must continue to accept other requests or wait for the exit event. We do use a lock to control the concurrency on the queue instead, because another thread may be enqueuing something while we are dequeuing and the Queue object is not thread safe. And here is another snippet of code (probably the most complex):

 private void TryExecuteAction()
 {
     // atomically take ownership of the reserved channel
     if (Interlocked.Exchange(ref _isRunning, 1) == 0)
     {
         QueuedItem action;
  
         lock (_lockObj) // <<== lock on the queue
         {
             if (this.Queue.Count == 0)
             {
                 // nothing to do: release the channel
                 Interlocked.Exchange(ref _isRunning, 0);
                 return;
             }
  
             action = this.Queue.Dequeue();
         }
 
         // the work runs asynchronously on the UI thread; the callback
         // releases the channel and signals that another call can start
         Deployment.Current.Dispatcher.BeginInvoke(
             () => action.Action(succeeded =>
             {
                 Interlocked.Exchange(ref _isRunning, 0);
                 this.WorkCompleted.Set();
             }, action.ID));
     }
 }
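For completeness, here is a rough sketch of the supporting members and of the Enqueue method; the exact shapes live in the downloadable source, so treat these declarations as assumptions consistent with the snippet above (the delegate type stored in the item is explained right below):

 // Supporting members assumed by TryExecuteAction above.
 private readonly object _lockObj = new object();
 private int _isRunning; // 0 = channel free, 1 = a call is in flight
 private Queue<QueuedItem> Queue = new Queue<QueuedItem>();
 
 // The item stored in the waiting queue: the action to run plus a
 // generated identifier the caller can use to track the operation.
 public class QueuedItem
 {
     public Guid ID { get; private set; }
     public Action<Action<bool>, Guid> Action { get; private set; }
 
     public QueuedItem(Action<Action<bool>, Guid> action)
     {
         this.ID = Guid.NewGuid();
         this.Action = action;
     }
 }
 
 // Enqueue adds the item under the queue lock, then calls
 // TryExecuteAction in case the reserved channel is currently idle.
 public Guid Enqueue(Action<Action<bool>, Guid> action)
 {
     QueuedItem item = new QueuedItem(action);
 
     lock (_lockObj)
     {
         this.Queue.Enqueue(item);
     }
 
     this.TryExecuteAction();
     return item.ID;
 }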

This snippet opens the last question of the article: what code has to be run? The honest answer is: I don't know. If I predict the call the client will make, I'm writing a vertical solution to a single problem and the service will not be reusable across different projects. This is where delegates and lambdas come into the scene. What I have to schedule is an arbitrary piece of code, and the best way to do that is with a delegate, particularly the Action delegate provided by the framework. My Action will be quite complex:

Action<Action<bool>, Guid> myAction;

This means: a method that takes an argument of type Action<bool> (another delegate) and another of type Guid (the id of the item in the queue). The Action<bool> delegate is passed in by my service and needs to be called by the code when the task is complete; its bool parameter indicates success (true) or failure (false). Here is how the method has to look:

 AsyncJobsManager.Current.Enqueue(
     (callback, guid) =>
     {
         callback(true); // this means success
     });

The Guid parameter is a generated identifier, useful to keep a reference to the action that is running. In my sample I use this id to maintain a collection bound to the user interface that shows the status of the operations. Thanks to this guid I'm able to know the source of the event and the previous and current status.
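As an illustration only (none of these types belong to the service), the tracking could be as simple as an ObservableCollection looked up by id:

 // Hypothetical status entry bound to the UI, keyed by the Guid that
 // the service generated for the enqueued operation.
 public class JobStatus
 {
     public Guid ID { get; set; }
     public string Status { get; set; } // e.g. "Queued", "Running", "Done"
 }
 
 private ObservableCollection<JobStatus> Jobs = new ObservableCollection<JobStatus>();
 
 private void SetStatus(Guid id, string status)
 {
     // find the entry matching the operation and update it; raising
     // PropertyChanged from JobStatus would refresh the UI binding
     JobStatus job = this.Jobs.FirstOrDefault(j => j.ID == id);
     if (job != null)
         job.Status = status;
 }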

Working with the service

After adding the service to the ApplicationLifetimeObjects we are ready to use it without any concern about starting and stopping its activity, because the service is managed by the runtime. We only have to enqueue operations and wait for them to end. I've already shown a sample call, but in the next box I will show you a more realistic example:

 void EnqueueButton_Click(object sender, RoutedEventArgs e)
 {
     AsyncJobsManager.Current.Enqueue(
         (callback, id) =>
         {
             DummyDomainContext dc = new DummyDomainContext();
  
             dc.GenerateRandom(
                 result =>
                 {
                     if (!result.HasError)
                     {
                         this.SetResult(id, result.Value);
                         callback(true);
                     }
                     else
                         callback(false);
                 }, null);
         });
 }

This sample enqueues an operation against a WCF RIA Services domain service. The GenerateRandom method is a fake operation that waits for a random timeout, to simulate a long running query, then generates a random number and returns it. In this snippet I take advantage of the callback of the RIA Services call to invoke the completion callback after the conclusion of the network operation; I also use the HasError property to notify success or failure. In a more evolved version you may pass the Exception generated by the network call to the service, to handle errors gracefully.

Even if the service is quite complex in its internal workings, I found it very useful for letting the user continue his work without unwanted waits. And the most curious thing is that you can use it not only for network related activities, but also for other kinds of long-running background work.
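For example, a purely local job can be enqueued the same way; since the service invokes the action on the UI thread, a hypothetical CPU-bound task should push its work to a background thread, for instance via the ThreadPool:

 AsyncJobsManager.Current.Enqueue(
     (callback, id) =>
     {
         // offload the heavy local work to a worker thread so the
         // UI thread (where the action is invoked) is not blocked
         ThreadPool.QueueUserWorkItem(state =>
         {
             try
             {
                 // ... some long running local computation ...
                 callback(true);
             }
             catch
             {
                 callback(false);
             }
         });
     });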

