I have an application with heavy IO operations such as file copying, zipping, and moving files around the file system, including copying to backup servers.
I built this program as single-threaded. It runs in 2 minutes.
I built another version with the Parallel Extensions, using Task, and it also runs in almost 2 minutes.
In other words, I didn't see a performance gain from Parallel Extensions, presumably due to the heavy IO.
Would I get the same results if I deployed the application to a blade server?
Do blade servers process IO faster, or on more channels, than my workstation?
Is there no benefit to using Parallel Extensions in IO-bound applications?
If all you're doing is copying or moving files across the system, then the parallelism provided by the TPL isn't going to do you much good. Moving, for example, really doesn't use any CPU; it simply changes the file's location in the disk's directory record structure.
File compression is a different story. Here you're loading data and using the CPU to compress it before saving it back out to disk. You might be able to use a pipeline or a parallel loop to load/compress/save the data more efficiently. Instead of having one thread compress each file in turn, you could have multiple threads working on different files.
The following code compresses a set of files sequentially and then in parallel. I get the following times on an i7 920 with an Intel X25 SSD, compressing 329 JPG images totalling 800 MB of data.
Sequential: 39901ms
Parallel: 12404ms
using System;
using System.Diagnostics;
using System.IO;
using System.IO.Compression;
using System.Threading.Tasks;

class Program
{
    static void Main(string[] args)
    {
        DirectoryInfo di = new DirectoryInfo(@"C:\temp");

        Stopwatch sw = new Stopwatch();
        sw.Start();
        foreach (FileInfo fi in di.GetFiles("*.jpg"))
        {
            Compress(fi);
        }
        sw.Stop();
        Console.WriteLine("Sequential: " + sw.ElapsedMilliseconds);

        Console.WriteLine("Delete the result files and then rerun...");
        Console.ReadKey();

        sw.Reset();
        sw.Start();
        // Each file is compressed independently, so the loop parallelizes cleanly.
        Parallel.ForEach(di.GetFiles("*.jpg"), fi => Compress(fi));
        sw.Stop();
        Console.WriteLine("Parallel: " + sw.ElapsedMilliseconds);
        Console.ReadKey();
    }

    public static void Compress(FileInfo fi)
    {
        // Skip hidden files and files that are already compressed.
        if ((File.GetAttributes(fi.FullName) & FileAttributes.Hidden) != FileAttributes.Hidden
            && fi.Extension != ".gz")
        {
            using (FileStream inFile = fi.OpenRead())
            using (FileStream outFile = File.Create(fi.FullName + ".gz"))
            using (GZipStream compress = new GZipStream(outFile, CompressionMode.Compress))
            {
                inFile.CopyTo(compress);
            }
        }
    }
}
For the compression code see How to: Compress Files
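The parallel loop above covers the second option; for the pipeline alternative mentioned earlier, here is a minimal producer/consumer sketch using BlockingCollection. The path, file filter, and consumer count are placeholders, not measured values.

using System;
using System.Collections.Concurrent;
using System.IO;
using System.IO.Compression;
using System.Threading.Tasks;

class PipelineSketch
{
    static void Main()
    {
        // Bounded so the producer can't race far ahead of the compressors.
        var files = new BlockingCollection<string>(boundedCapacity: 8);

        // Stage 1: enumerate the files to compress.
        Task producer = Task.Factory.StartNew(() =>
        {
            foreach (string path in Directory.GetFiles(@"C:\temp", "*.jpg"))
                files.Add(path);
            files.CompleteAdding();
        });

        // Stage 2: several consumers load, compress, and save concurrently.
        Task[] consumers = new Task[4];
        for (int i = 0; i < consumers.Length; i++)
        {
            consumers[i] = Task.Factory.StartNew(() =>
            {
                foreach (string path in files.GetConsumingEnumerable())
                {
                    using (FileStream inFile = File.OpenRead(path))
                    using (FileStream outFile = File.Create(path + ".gz"))
                    using (GZipStream gz = new GZipStream(outFile, CompressionMode.Compress))
                    {
                        inFile.CopyTo(gz);
                    }
                }
            });
        }

        Task.WaitAll(consumers);
        producer.Wait();
    }
}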
If you're moving files around on one physical device, you're not going to see much performance benefit from making multiple parallel IO requests to that one device. The device is already operating many orders of magnitude slower than the CPU, so requests made in parallel will still line up and be handled one by one by the device. Your parallel code is effectively serialized because it's all accessing a single device that can't really service more than one request at a time.
You might see a tiny perf improvement with parallel code if your disk controller implements "elevator seeks", "scatter-gather", or other out-of-order operations, but the perf difference will be relatively small.
Where you should find a more rewarding perf difference for file I/O is when you're moving files between many different physical devices. You should be able to move or copy a file on disk A to some other location on disk A while also copying a file on disk B to disk C. With many physical devices, you don't have all the parallel requests stacking up waiting for the one device to fill all the requests.
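As a tiny sketch of that idea, two copies that touch different physical disks can genuinely overlap, while the same two tasks pointed at one disk would just queue up (the drive letters here are hypothetical, with A: and B:/C: standing for separate physical disks):

using System.IO;
using System.Threading.Tasks;

class MultiDiskCopy
{
    static void Main()
    {
        Task a = Task.Factory.StartNew(() =>
            File.Copy(@"A:\data\file1.dat", @"A:\backup\file1.dat")); // disk A to disk A
        Task b = Task.Factory.StartNew(() =>
            File.Copy(@"B:\data\file2.dat", @"C:\backup\file2.dat")); // disk B to disk C
        Task.WaitAll(a, b); // the copies overlap because they queue on different devices
    }
}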
You'll probably see similar results with network I/O: If everything is going through one ethernet card / network segment you're not going to realize as much parallelism as when you have multiple ethernet cards and multiple network segments to work with.
I think the advantage of Parallel Extensions could be significant for CPU-bound operations. I don't know how it's supposed to help IO, though.
It all depends on whether you are CPU-bound or IO-bound. I would suggest doing some performance testing to see where your bottlenecks are.
If you find you are moving and compressing a lot of files (to different disks, since a move on the same disk is just a FAT table change), you might want to look at implementing a streaming file mover that compresses as it moves. This can save the extra IO of re-reading the files after moving them. I did this with moving and checksumming, and in my case it was a huge performance bump.
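A minimal sketch of such a streaming mover, compressing and checksumming in a single read pass; the method name and the choice of SHA-256 are mine, not from the original answer:

using System.IO;
using System.IO.Compression;
using System.Security.Cryptography;

class StreamingMover
{
    // Reads the source once, hashing and compressing as the bytes flow to
    // the destination, then deletes the source to complete the "move".
    public static byte[] MoveCompressed(string sourcePath, string destPath)
    {
        using (SHA256 sha = SHA256.Create())
        {
            using (FileStream source = File.OpenRead(sourcePath))
            using (FileStream dest = File.Create(destPath))
            using (GZipStream gzip = new GZipStream(dest, CompressionMode.Compress))
            using (CryptoStream hashing = new CryptoStream(gzip, sha, CryptoStreamMode.Write))
            {
                source.CopyTo(hashing); // single pass: hash + compress + write
            }
            File.Delete(sourcePath);
            return sha.Hash; // checksum of the original, uncompressed data
        }
    }
}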
Hope this helps.
I have an application implemented in WinForms that processes ~7,800 URLs in approximately 5 minutes (it downloads each URL, parses the content, looks for specific pieces of data, and if it finds what it's looking for does some additional processing of that data).
This specific application used to take 26 to 30 minutes to run, but changing the code over to the TPL (Task Parallel Library in .NET 4.0) brought it down to just 5. The computer is a Dell T7500 workstation with dual quad-core Xeon processors (3 GHz), 24 GB of RAM, and Windows 7 Ultimate 64-bit.
Though it's not exactly the same as your situation, this too is extremely IO-intensive. The documentation on the TPL states it was originally conceived for processor-bound problem sets, but that doesn't rule out using it in IO situations (as my application demonstrates). If you have at least 4 cores and you're not seeing your processing time drop significantly, then it's possible you have other implementation issues that are preventing the TPL from being efficient (locks, hard-drive contention, etc.). The book Parallel Programming with Microsoft .NET really helped me understand how your code needs to be modified to really take advantage of all that power.
Worth a look in my opinion.
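For illustration, here is a stripped-down sketch of that kind of TPL download loop; the URL list, marker string, and degree of parallelism are hypothetical stand-ins for the real application's logic:

using System;
using System.Collections.Concurrent;
using System.Net;
using System.Threading.Tasks;

class DownloadSketch
{
    static void Main()
    {
        string[] urls = { "http://example.com/a", "http://example.com/b" };
        var matches = new ConcurrentBag<string>();

        // Each iteration mostly waits on the network, so allowing more
        // concurrent iterations than cores keeps the pipeline busy.
        var options = new ParallelOptions { MaxDegreeOfParallelism = 16 };
        Parallel.ForEach(urls, options, url =>
        {
            using (var client = new WebClient())
            {
                string content = client.DownloadString(url);
                if (content.Contains("marker")) // stand-in for the real parsing
                    matches.Add(url);
            }
        });

        Console.WriteLine("Matched: " + matches.Count);
    }
}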
Source: https://stackoverflow.com/questions/5347801/parallel-extensions