This article is compatible with the latest version of Silverlight.
Note from SilverlightShow: this article topic has been requested by Raymond Monette. Thanks Raymond!
There is a control in HTML markup that has not changed since its first introduction, probably in the very first version of the prince of markup languages. As you have surely guessed, I'm referring to the <input type="file" /> element which, for many years, has been the way to implement file uploads in HTML, but which is far from perfect and far from offering a friendly user experience.
Of course, many third-party libraries and vendors have built their own controls to work around the problems of this important element, and many of them have succeeded using AJAX, Flash, and other Rich Internet technologies. In this article I want to cover this matter by creating a simple, reusable control that lets the developer implement a better file-uploading experience using Silverlight.
Download Source Code
Let's get creative
Looking at the ancestor of all file-upload controls, we can see many limitations. First of all, it can upload only a single file at a time, and this is a problem because users often need to select many files and send them to the server together (think of multiple email attachments, for example). Another important concern is the progress of the upload: when we are sending huge files there is no feedback about the progress of the operation, so the user cannot tell whether the upload will ever finish. It may seem strange, but this is the main source of pain for me.
Using Silverlight it is possible to create a first-class control with advanced features. So, imagine implementing the control as an area where the user can drag files, many at a time, and the files are automatically enqueued for transfer to the server. Of course, we need to deal with some limitations of Silverlight. The plug-in can only make two concurrent calls to the server, so the control has to respect this limit and avoid overloading the network. Another problem is that Silverlight has no way to be notified of the progress of an upload. Since this is an important feature, the only available solution is to break each file into multiple chunks of data and send them one at a time, updating the progress percentage every time a chunk has been received by the server. This feature certainly complicates the work to be done, because the server has to receive the separate chunks and rebuild the original file.
With these ideas in mind, it is time to start the work, beginning with a simple control to catch the files to upload, then moving down through the lower layers of the logic until we arrive at the server script.
Make things natural with Drag & Drop
Silverlight 4.0 introduced a new set of useful features, and one of them is the ability to drag files from the client file system onto the plug-in surface, letting the developer detect the drop, access an object with information about each file, and obtain a Stream to read its content. The result is pretty similar to selecting multiple files with the OpenFileDialog, but from the user's point of view it feels much more natural.
The control I'm about to explain is a simple area containing a ListBox. When the user drops files onto the area, they are added to the ListBox and a ProgressBar is displayed next to each file name. For this purpose I've created a templated control. Everything starts when the AllowDrop property is set to true on the control. The code handles three events: two to change the state of the control to Normal or Hover, and one to detect when the user finally drops the files. The state is used to give the user visual feedback while dragging.
this.AllowDrop = true;
this.DragOver += (s, e) => VisualStateManager.GoToState(this, "Hover", true);
this.DragLeave += (s, e) => VisualStateManager.GoToState(this, "Normal", true);
this.Drop += new DragEventHandler(RootElement_Drop);
The dropped files are wrapped in an UploadFile class and added to an internal component we will discuss in the next paragraph. The ListBox is bound to the collection of files so it can display the elements and the properties exposed by the UploadFile class. One of them is the Percentage of the upload, which changes as the transfer progresses. To add the files, the code scans the FileInfo collection returned by the GetData method:
private void RootElement_Drop(object sender, DragEventArgs e)
{
    if (e.Data.GetDataPresent("FileDrop"))
    {
        VisualStateManager.GoToState(this, "Normal", false);

        FileInfo[] files = (FileInfo[])e.Data.GetData("FileDrop");

        foreach (UploadFile file in from f in files
                                    where f.Exists
                                    select new UploadFile(f, this.ChunkSize))
            this.UploadManager.Add(file);
    }
}
The control itself is really straightforward, with no surprises. In fact, all the needed logic is implemented by an internal component called UploadManager. Having all the logic encapsulated in this component lets the developer use it without my drag-and-drop control. So, if you need a traditional OpenFileDialog instead, that is certainly possible.
The UploadManager
As you may have guessed, the bulk of the work is done by the UploadManager. Its working principle is simple: it manages a queue of files to upload. When something is added to the queue, it takes the files two at a time, breaks them into chunks, and sends the chunks to the server. Every time a file has been completely uploaded, it moves on to another file, and so on, until the queue is empty.
When I created this component I wanted to avoid having a thread monitoring the internal queue of files. That would probably be the simplest way to implement the component, but starting a thread for every instance could easily become greedy with system resources. So I took advantage of the threads the runtime already starts for asynchronous calls to the server: every time a completion callback is invoked, I check whether other files are waiting to be uploaded. So, when an item is added to the collection, it starts a chain of uploads and callbacks that ends only when the queue is empty again. In the Add method I call the ProcessQueue method to start this process if it is idle:
public void Add(UploadFile file)
{
    if (!this.Files.Contains(file))
    {
        file.Reset();
        this.Files.Add(file);
        this.ProcessQueue();
    }
}
Then, in the ProcessQueue method, up to two files are selected from the queue using a LINQ query. The query excludes the completed files and the ones that are still uploading. Then, for each selected file, the UploadChunk method starts the chunked send process:
private void ProcessQueue()
{
    if (this.UploadingFiles.Count < 2)
    {
        foreach (UploadFile file in (from f in this.Files
                                     where f.Percentage < 100.0 && !this.UploadingFiles.Contains(f)
                                     select f).Take(2 - this.UploadingFiles.Count))
        {
            this.UploadingFiles.Add(file);
            this.UploadChunk(file);
        }
    }
}
Finally, the UploadChunk method reads a block from an UploadFile instance and uses an HttpWebRequest to call the server. The UploadFile class is responsible for tracking the progress of the upload. The callback calls UploadChunk again to continue the chain of operations:
private void UploadChunk(UploadFile file)
{
    byte[] chunk = file.ReadChunk();

    if (chunk == null)
    {
        this.UploadingFiles.Remove(file);
        this.ProcessQueue();
    }
    else
    {
        HttpWebRequest rq = (HttpWebRequest)WebRequestCreator.ClientHttp.Create(this.UploadHandlerUri);
        rq.Method = "PUT";
        rq.BeginGetRequestStream(
            result => this.DoPutUpload(result, file, chunk), rq);
    }
}
The sole showstopper for this chain is an Exception, perhaps caused by a network problem or something similar. I've carefully trapped the exceptions, and I put the related file into a Failed state to prevent it from breaking the process. The UploadFile class has many properties one can use to display the current state of the file: the upload progress, the presence of an error, and the index of the chunk currently being uploaded.
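The article never shows the UploadFile class itself; its members can only be inferred from the calls made elsewhere in the listings (ReadChunk, Reset, Fail, Percentage, ChunkIndex, ChunkMax), so the following is a hypothetical sketch of its shape, not the sample's actual code:

```csharp
// Hypothetical sketch: member names are inferred from the calls seen in
// UploadManager and DoPutUpload; the real class in the sample may differ.
public class UploadFile : INotifyPropertyChanged
{
    private readonly FileInfo file;
    private readonly int chunkSize;
    private double percentage;

    public UploadFile(FileInfo file, int chunkSize)
    {
        this.file = file;
        this.chunkSize = chunkSize;
        this.ChunkMax = (int)Math.Ceiling((double)file.Length / chunkSize);
    }

    public FileInfo File { get { return this.file; } }
    public int ChunkIndex { get; private set; }
    public int ChunkMax { get; private set; }
    public Exception Error { get; private set; }

    public double Percentage
    {
        get { return this.percentage; }
        private set { this.percentage = value; this.OnPropertyChanged("Percentage"); }
    }

    // Returns the next chunk, or null when the whole file has been read.
    public byte[] ReadChunk()
    {
        if (this.ChunkIndex >= this.ChunkMax) return null;

        using (Stream stream = this.file.OpenRead())
        {
            stream.Seek((long)this.ChunkIndex * this.chunkSize, SeekOrigin.Begin);

            byte[] buffer = new byte[this.chunkSize];
            int read = stream.Read(buffer, 0, buffer.Length);
            if (read < buffer.Length) Array.Resize(ref buffer, read);

            this.ChunkIndex++;
            this.Percentage = 100.0 * this.ChunkIndex / this.ChunkMax;
            return buffer;
        }
    }

    public void Reset() { this.ChunkIndex = 0; this.Percentage = 0.0; }
    public void Fail(Exception ex) { this.Error = ex; }

    public event PropertyChangedEventHandler PropertyChanged;
    private void OnPropertyChanged(string name)
    {
        PropertyChangedEventHandler handler = this.PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(name));
    }
}
```

Raising PropertyChanged on Percentage is what lets the ProgressBar in the ListBox template update as chunks complete.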
Going to Server side
When I started to write the server-side part for the UploadManager, I wanted to find the simplest way to perform this operation without any special infrastructure. Since Silverlight is a cross-platform technology, I decided to avoid WCF, RIA Services and every other Windows-specific technology. My choice was the PUT method, which is probably one of the most primitive file-uploading strategies.
Starting with Silverlight 3.0, the new client networking stack supports methods such as PUT and DELETE, not only the usual GET and POST. With the PUT method a developer can access the raw input stream, and in the case of an ASP.NET server only a few lines of code are required for a generic HTTP handler (ashx). But this implementation is also simple with other technologies such as Java, PHP, and so on.
The only trouble is that the UploadManager needs to send some extra information that the server-side handler requires to reconstruct the chunks into the original file. The solution to this problem is also simple: I've made use of some custom HTTP headers, where I put vital information such as the file name, the chunk index, and the maximum chunk number. On the client side the code is the following:
private void DoPutUpload(IAsyncResult result, UploadFile file, byte[] chunk)
{
    HttpWebRequest rq = (HttpWebRequest)result.AsyncState;

    using (Stream stream = rq.EndGetRequestStream(result))
        stream.Write(chunk, 0, chunk.Length);

    rq.Headers["File-Name"] = file.File.Name;
    rq.Headers["Chunk-Index"] = file.ChunkIndex.ToString();
    rq.Headers["Chunk-Max"] = file.ChunkMax.ToString();

    rq.BeginGetResponse(
        r =>
        {
            try
            {
                WebResponse rs = rq.EndGetResponse(r);
                rs.Close();

                Deployment.Current.Dispatcher.BeginInvoke(
                    () => this.UploadChunk(file));
            }
            catch (WebException ex)
            {
                Deployment.Current.Dispatcher.BeginInvoke(
                    () => file.Fail(ex));
            }
        }, rq);
}
As you can see, after opening the request stream of the HttpWebRequest, I write the entire chunk into it. Then, using the Headers collection, I create the File-Name, Chunk-Index and Chunk-Max headers. On the server side, the file name will be used to match every chunk with the previously received chunks. The index and the maximum are instead used to determine whether the file is new (index == 1) or has been fully received (index == max).
Finally, here is the server-side code. It is all contained in the ProcessRequest method of an HttpHandler, where I read the headers and start accumulating the incoming chunks. When the first chunk arrives, I create a temporary file using Path.GetTempFileName(). This temporary file name is stored in a Cache entry keyed by the name of the file I'm receiving.
public void ProcessRequest(HttpContext context)
{
    string filename = context.Request.Headers["File-Name"];
    if (string.IsNullOrEmpty(filename)) throw new InvalidOperationException();

    int chunkIndex = int.Parse(context.Request.Headers["Chunk-Index"] ?? "-1");
    if (chunkIndex == -1) throw new InvalidOperationException();

    int chunkMax = int.Parse(context.Request.Headers["Chunk-Max"] ?? "-1");
    if (chunkMax == -1) throw new InvalidOperationException();

    using (Stream stream = this.GetStream(context, filename, chunkIndex))
    {
        byte[] data = new byte[context.Request.ContentLength];

        // Stream.Read may return fewer bytes than requested, so loop
        // until the whole request body has been consumed.
        int offset = 0, read;
        while (offset < data.Length &&
               (read = context.Request.InputStream.Read(data, offset, data.Length - offset)) > 0)
            offset += read;

        stream.Write(data, 0, offset);
    }

    if (chunkIndex == chunkMax)
    {
        string destination = Path.Combine(context.Server.MapPath("~/Upload"), filename);

        if (File.Exists(destination)) File.Delete(destination);
        File.Move((string)context.Cache[filename], destination);
        context.Cache.Remove(filename);
    }

    context.Response.End();
}
When the last chunk arrives, I write it to the temporary file, then I use the File.Move() method to move the file to its final destination on the server. In my case that is a folder in the project, but you will probably use a configuration key to determine your location. What I would recommend is not to pass the destination folder as a parameter on the query string or in the headers: that would be a serious security vulnerability, leaving an attacker free to save anything anywhere on your server's hard disk.
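The GetStream method used in the handler above is not shown in the article. A possible implementation, my own assumption based on the description of the Cache usage, creates the temporary file on the first chunk and appends to it on every subsequent one:

```csharp
// Hypothetical helper, not part of the article's listing: on the first chunk
// it creates a temp file and stores its path in the Cache under the file
// name; later chunks retrieve that path and append to the same file.
private Stream GetStream(HttpContext context, string filename, int chunkIndex)
{
    string tempFile;

    if (chunkIndex == 1)
    {
        tempFile = Path.GetTempFileName();
        context.Cache.Insert(filename, tempFile);
    }
    else
    {
        tempFile = (string)context.Cache[filename];
        if (tempFile == null) throw new InvalidOperationException();
    }

    return new FileStream(tempFile, FileMode.Append, FileAccess.Write);
}
```

Opening the stream in Append mode means the handler does not need to track the current file offset: each chunk is simply written after the previous one.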
What can we improve?
The sample attached to this article is far from perfect. I already mentioned the path where the files are saved, but there are a couple of things that really need to be improved. The first is the problem you have when someone stops the upload process in the middle of a transfer. In this case the server is not aware of the missing parts, so the temporary folder may grow beyond the capacity of your hard disk, and this may open the way to DoS attacks. In my sample I've used the cache removal callback, which notifies me when the entry has not been updated for a minute; in that case I delete the temporary file.
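The sample's exact code for this cleanup is not shown; the idea can be sketched with the Cache.Insert overload that accepts a CacheItemRemovedCallback and a sliding expiration (the one-minute timing and the entry layout here are my assumption):

```csharp
// Hypothetical sketch: each cache entry expires one minute after the last
// chunk touched it; when it is evicted, the orphaned temp file is deleted.
context.Cache.Insert(
    filename,
    tempFile,
    null,                                  // no cache dependency
    Cache.NoAbsoluteExpiration,
    TimeSpan.FromMinutes(1),               // sliding expiration
    CacheItemPriority.Normal,
    (key, value, reason) =>
    {
        // Clean up only abandoned uploads, not entries we removed
        // ourselves after a successful File.Move.
        if (reason != CacheItemRemovedReason.Removed)
        {
            string orphan = (string)value;
            if (File.Exists(orphan)) File.Delete(orphan);
        }
    });
```

Because the expiration is sliding, a file that is still being uploaded keeps refreshing its entry with every chunk, so only genuinely interrupted transfers are cleaned up.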
Another problem concerns the integrity of the file. In a real-world solution you would have to use a hashing algorithm to verify that the file sent is identical to the file received. I hope my sample will be a good starting point.
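As a sketch of the idea: the client could compute a hash of the whole file and send it in one more custom header (say, File-Hash; the header name is my invention, not part of the sample), and the handler could recompute the hash over the reassembled temporary file before the final File.Move:

```csharp
// Hypothetical integrity check, not in the sample: compare a SHA-256 hash
// sent by the client against one computed over the reassembled temp file.
private static bool VerifyHash(string tempFile, string expectedHashBase64)
{
    using (SHA256 sha = SHA256.Create())
    using (Stream stream = File.OpenRead(tempFile))
    {
        byte[] hash = sha.ComputeHash(stream);
        return Convert.ToBase64String(hash) == expectedHashBase64;
    }
}
```

If the check fails, the handler can delete the temporary file and return an error status, letting the client mark the UploadFile as failed and retry.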