
How To Build a Really Fast NAS - Part 6: The Vista (SP1) Difference


rogerbinns

Occasional Visitor
It is important to distinguish between what happens in user space versus what the kernel and network filesystem redirectors can do. This applies to Windows as well as to other operating systems such as Linux.

There is a read call which points to where the resulting data will be placed, how much to read and an implicit or explicit file offset. Traditionally the read call is used synchronously - control is not returned to the program until all the data has been read.

Any number can be supplied for how much to read, from one byte up to the file size (if there is sufficient memory to hold the results). The kernel then has to decide how to satisfy the read request. Typically a large request will be broken into smaller ones, and small requests will be turned into larger ones to perform readahead.

With local disks this is all easy. With a network filesystem there are tradeoffs to be made. If data is read ahead, it may be (undetectably) stale by the time the program reads it. If a large request is decomposed into smaller ones, they could all be sent over the wire at once, but this could saturate the network and overwhelm the server on the other end. You also have to use kernel memory to buffer the data coming back, and kernel memory is considered very expensive in Windows up to and including XP (NT was optimized to run in 4-8MB of memory; XP defaults to a 10MB file cache!).

What this means is that what you see in Process Monitor are requests to the kernel. The kernel can then satisfy those requests using any amount of behind-the-scenes asynchrony, breaking them into smaller requests, doing readahead, etc. You can only see what really happened by using a network sniffer. Windows kernel implementations tended to conserve kernel memory and avoid overwhelming servers, all of which changed in Vista when Microsoft recognised that more memory is available. Everything that is done in Vista could have been done in earlier versions of Windows and with earlier versions of SMB. They simply chose not to. (I can give you the long boring story if you want.)

If you really want to do a good benchmark, use read buffer sizes of something like 64MB (or even larger) from user space. The kernel can then break that up into pieces of its own choosing. (The CopyFile API tries to pick the sizes in user space.) Using one-byte sizes will tell you how well the kernel does readahead, as well as the overhead of user-to-kernel-to-user context switches. SMB has opportunistic locks, so it is possible to do any amount of readahead and know if the data becomes stale.
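To illustrate the kind of user-space measurement described above, here is a minimal Python sketch (file and buffer sizes are illustrative) that times reads of the same file at different buffer sizes. Run against a file on a network share, the larger buffers give the redirector room to choose its own split:

```python
import os
import tempfile
import time

def read_throughput(path, bufsize):
    """Read the whole file with a given buffer size; return MB/s."""
    start = time.perf_counter()
    total = 0
    # buffering=0 gives unbuffered I/O: roughly one syscall per read()
    with open(path, "rb", buffering=0) as f:
        while chunk := f.read(bufsize):
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / elapsed / 1e6

# Create a small test file and compare buffer sizes.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(8 * 1024 * 1024))  # 8 MB of test data

for bufsize in (4 * 1024, 64 * 1024, 8 * 1024 * 1024):
    print(f"{bufsize:>10} bytes: {read_throughput(tmp.name, bufsize):8.1f} MB/s")
```

On a local disk the differences are mostly syscall overhead; over SMB they also reflect how the redirector chooses to split or coalesce requests.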

Also look at the flags to CreateFile (the file open function), where you can give additional hints about whether the file will be accessed sequentially (which turns on readahead in Windows), whether the cache should be bypassed, etc.
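For comparison, the POSIX analogue of these CreateFile hints is `posix_fadvise`. A minimal sketch follows; the hint is purely advisory, and the call is absent on Windows, hence the guard:

```python
import os
import tempfile

with tempfile.NamedTemporaryFile() as tmp:
    tmp.write(b"some data")
    tmp.flush()
    fd = os.open(tmp.name, os.O_RDONLY)
    if hasattr(os, "posix_fadvise"):  # Linux and most Unixes; not Windows
        # Tell the kernel to expect sequential access, which typically
        # increases its readahead window for this file descriptor.
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)
    data = os.read(fd, 1024)
    os.close(fd)
```

`POSIX_FADV_DONTNEED` is the rough counterpart of bypassing the cache: it asks the kernel to drop the cached pages for a range.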

For historical reasons Windows programs have second-guessed the kernel to improve performance, which has led to the kernel trying to second-guess the programs, an unvirtuous cycle. Raymond Chen's blog has lots of amusing stories about this sort of thing. The Linux/Unix user-space APIs are considerably simpler (opening a file takes 3 parameters, not 7!), and programs don't try to optimize, so the kernel gets to do the right thing.

Write requests have a similar set of issues. If you do a 64MB write from user space, the kernel has to break it up and issue the pieces concurrently or sequentially. It can return before the data is on the platters, or wait (kernels usually return as soon as possible, before all the data is actually written, since most user-space write APIs are used synchronously).
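A small sketch of that distinction, using POSIX-style calls from Python: `write()` returns once the kernel has accepted the data into its cache, and only an explicit `fsync()` waits for it to reach stable storage:

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"x" * (1024 * 1024))  # returns once buffered, not once stored
os.fsync(fd)                        # blocks until the data reaches the disk
os.close(fd)
print(os.path.getsize(path))
```

The gap between those two calls is exactly where a network redirector gets to choose how eagerly it pushes data over the wire.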

There is a rather vibrant WAN optimization industry where the number one protocol that customers care about is SMB/CIFS. The really funny thing is that the appliances optimise the requests for maximum performance and all the techniques used could be done by Microsoft's kernel code, but generally aren't.
 
Thanks for your comments, Roger. But I'm unclear about what you are saying/suggesting about using Vista SP1 filecopy as a new NAS Chart benchmark. Yea or nay?

I know that a lot goes on behind the scenes. But in the end, many users look at the file copy speed reported by Vista and use it.
 
Congrats Tim on finally breaking the 100 MB/s barrier! :)

But this does leave me with this question: I do get that the Vista SP1 speed improvements are because of optimizations with the SMB2 protocol as opposed to the regular SMB protocol.

But does this mean this speed is only achievable from one Vista SP1 system to another Vista SP1 system (or another OS with the same basic kernel - Server 2008)? And (currently?) not with Linux? Does Samba for Linux support SMB2 yet? Will it ever? (knowing Linux a bit: yes, but when?)

If it doesn't, this may be an excellent reason for me to consider using Vista and Server 2008 for my home network instead of Linux like I had planned on. (and I had already read that the next version of Windows Home Server will also be based on the Vista kernel, so that's good news there also. :))
 
But this does leave me with this question: I do get that the Vista SP1 speed improvements are because of optimizations with the SMB2 protocol as opposed to the regular SMB protocol.
Not entirely correct. SMB2 is responsible for only part of the speed improvement. The use of larger record / block sizes is a large contributor to improved performance.

But does this mean this speed is only achievable from one Vista SP1 system to another Vista SP1 system (or another OS with the same basic kernel - Server 2008)? And (currently?) not with Linux? Does Samba for Linux support SMB2 yet? Will it ever? (knowing Linux a bit: yes, but when?)
As noted in the article, SMB2 is currently implemented in Vista SP1 and Windows Server 2008. I do not know the status of SMB2 implementation in other OSes. But again, SMB2 is only part of the reason for improved performance.

If it doesn't, this may be an excellent reason for me to consider using Vista and Server 2008 for my home network instead of Linux like I had planned on. (and I had already read that the next version of Windows Home Server will also be based on the Vista kernel, so that's good news there also. :))
I'm sure that's what Microsoft would like you to do. Server 2008 ain't cheap though...
 
Thanks for your comments, Roger. But I'm unclear about what you are saying/suggesting about using Vista SP1 filecopy as a new NAS Chart benchmark. Yea or nay?

I know that a lot goes on behind the scenes. But in the end, many users look at the file copy speed reported by Vista and use it.

If you are using file copy speed then you are really testing the interaction of the CopyFile API and whatever Microsoft are doing in the kernel redirector this week. The resulting number is only peripherally related to what the underlying hardware is capable of.

If you use something like iozone and give it large sizes then the Microsoft redirector has the opportunity to optimize for performance.

The line about SMB2 being technically required for larger reads is nonsense. SMB1 supports up to 64KB reads. If a program does a 1MB read, there is a trivial difference between it being split into 16 reads of 64KB using SMB1 versus one read of 1MB using SMB2. (This is because in both cases the returned data spans many Ethernet packets.) It was just easier for them to use larger sizes in the new codebase supporting SMB2 than in the old crufty code doing SMB1.
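The 16-reads-of-64KB arithmetic can be sketched as a simple splitting function (the names are illustrative, not any real redirector's code):

```python
def split_read(offset, length, max_io=64 * 1024):
    """Break one large read into protocol-sized (offset, length) pieces,
    e.g. SMB1's 64KB maximum read size."""
    pieces = []
    while length > 0:
        n = min(length, max_io)
        pieces.append((offset, n))
        offset += n
        length -= n
    return pieces

pieces = split_read(0, 1024 * 1024)   # a 1MB read
print(len(pieces))                    # 16 pieces of 64KB each
```

Whether those 16 pieces are sent one at a time or all at once is the performance question; the protocol limit itself costs almost nothing.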

It is trivial to make a faster SMB1 stack; that is exactly what the WAN optimizer people do, and Microsoft chooses not to do.

As for benchmarks, measuring relative performance tells you what the overhead of the various layers of indirection is. iperf should be able to tell you the maximum throughput of the link, limited by TCP stack and device driver performance. iozone run locally should give local disk performance; run remotely, it shows how much the SMB code, kernel networking and network traffic hurt things. There are some nice file benchmarks such as filebench and smbtorture, but they only work on Unix-like systems.

The real question is what you are hoping to get with a "really fast NAS". Generally the purpose is to serve multiple clients without their normal traffic adversely affecting each other. The defunct netbench benchmark used to test that very well and smbtorture does a good approximation.
 
But this does leave me with this question: I do get that the Vista SP1 speed improvements are because of optimizations with the SMB2 protocol as opposed to the regular SMB protocol.

That is not correct. Microsoft could make the kernel redirector perform equally well (well maybe a percent or two difference) for both SMB1 and SMB2. They chose not to touch SMB1 and only make improvements when using SMB2.

But does this mean this speed is only achievable from one Vista SP1 system to another Vista SP1 system (or other with the same basic kernel - Server 2008)?

You need the same basic post-XP kernel.

And (currently?) not with Linux? Does Samba for Linux support SMB2 yet? Will it ever? (knowing Linux a bit: yes, but when?)

Samba 4 has support for SMB2. Microsoft were also forced to publish the specs, so there is less costly reverse engineering and guesswork going on. Samba 4 was more of a fork than a follow-on, so its code is slowly being merged back into Samba 3.

If it doesn't, this may be an excellent reason for me to consider using Vista and Server 2008 for my home network instead of Linux like I had planned on. (and I had already read that the next version of Windows Home Server will also be based on the Vista kernel, so that's good news there also. :))

If you are using Windows Vista (or more recent) clients and file server copy performance is the most important thing to you, then you will get the best performance using Vista (or more recent) servers. If you are just doing general work (e.g. using Office, maybe software development, etc.) then you will find that Samba using SMB1 outperforms Windows on the same hardware, especially when there is concurrent access.

Another datapoint: When using Ubuntu as SMB client (kernel cifs redirector) talking to Ubuntu server (samba) I get 54 megabytes per second with the filesystem on both ends being tmpfs.
 
Not entirely correct. SMB2 is responsible for only part of the speed improvement. The use of larger record / block sizes is a large contributor to improved performance.
So this means that, if you were to re-do your Linux benches again, this time also altering the record/block size, you would reach comparable results?

I'm sure that's what Microsoft would like you to do. Server 2008 ain't cheap though...
I know. Don't think I'd buy Server 2008 for home use, unless I could use a student licence or something. If not, I'll set my hopes for Home Server 2008, which is said to be using the same kernel.

Thanks for the clarification!
 
That is not correct. Microsoft could make the kernel redirector perform equally well (well maybe a percent or two difference) for both SMB1 and SMB2. They chose not to touch SMB1 and only make improvements when using SMB2.
Ah. So SMB2 is evolutionary rather than revolutionary. :) I find it strange Microsoft chose to develop a new 'standard' rather than optimize the existing one, especially if you say they were forced to publish the specs (and they could have got the same result by just optimizing SMB1). Strange... What could have been the reason for them developing SMB2 then?

Samba 4 has support for SMB2. Microsoft were also forced to publish the specs, so there is less costly reverse engineering and guesswork going on. Samba 4 was more of a fork than a follow-on, so its code is slowly being merged back into Samba 3.
So in time, the SMB2 optimizations will also end up in Linux. Good.

If you are just doing general work (e.g. using Office, maybe software development, etc.) then you will find that Samba using SMB1 outperforms Windows on the same hardware, especially when there is concurrent access.
My main use (for now) is file serving/streaming. More in particular, I'm putting together a 'movie jukebox', meaning I'm currently ripping all my DVDs (1000+) and putting them all on hard disk. This obviously means I'm always copying files in excess of 4 GB across my network (and this will only increase once I start buying Blu-ray disks). That's why the 4 GB barrier matters so much to me. I'm constantly moving very large files on my network, and these kinds of differences in speed could save me a lot of time in the end.

Thanks for the explanation Roger!
 
So this means that, if you were to re-do your Linux benches again, this time also altering the record/block size, you would reach comparable results?

There are two block sizes involved. One is the block size used by the program (or the CopyFile API). The second is the set of block sizes used by the kernel redirector.

As an example, the program could request 32KB, and the kernel could read two 16KB blocks to satisfy that. Or the kernel could read 64KB (i.e. read more than requested), hoping the program would request the next 32KB anyway, thereby saving some network traffic. If the program requested 128KB and the protocol happened to have a limit of 64KB, as SMB1 does, then the kernel has to make two 64KB requests to satisfy it. It could send one, wait until it has the result, and then send the second request and wait for that result. This injects network latency into the reads. Or it could send both read requests at the same time (as the SMB1 and SMB2 protocols both allow) and not incur that extra latency.
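The difference between the two strategies is easy to simulate. In this sketch the network round trip is faked with a `sleep`; the concurrent variant finishes in roughly one round-trip time instead of eight:

```python
import time
from concurrent.futures import ThreadPoolExecutor

LATENCY = 0.05  # pretend each request costs 50 ms of network round trip

def remote_read(offset):
    time.sleep(LATENCY)  # stand-in for one wire round trip
    return offset        # stand-in for the returned data

offsets = list(range(0, 8 * 65536, 65536))  # eight 64KB pieces

# Strategy 1: send a request, wait for its reply, send the next one.
start = time.perf_counter()
sequential = [remote_read(o) for o in offsets]
t_seq = time.perf_counter() - start

# Strategy 2: put all eight requests on the wire at once.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    concurrent = list(pool.map(remote_read, offsets))
t_con = time.perf_counter() - start

print(f"sequential {t_seq:.2f}s, concurrent {t_con:.2f}s")
```

The same data is transferred either way; only the exposure to latency changes, which is why pipelining matters more than the protocol's maximum request size.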

Currently Microsoft claims that SMB2 has a larger maximum request/response size than SMB1 therefore they can get better performance. This claim is false as you can send multiple SMB1 requests and responses concurrently in order to transfer the same amount of information.

In practice, however, the SMB2 code is all new and only used when both ends are Vista (or above). It was easier to make this code perform better than to modify the existing SMB1 code. It also gave an angle for forcing upgrades on customers. Finally, SMB2 is 100% Microsoft design and implementation (SMB1 originated with IBM), so they could scare people with intellectual-property FUD.

Microsoft's SMB client just isn't very good. When I worked on WAN optimizers we frequently got better-than-LAN performance over a WAN due to various simple techniques such as how you break requests into pieces, coalescing them, caching, readahead, etc.
 
Ah. So SMB2 is evolutionary rather than revolutionary.

A bit of history will help explain it. SMB1 was originally designed by IBM and used in XENIX (a Microsoft flavour of UNIX!), DOS and OS/2. It later ended up in 16-bit Windows (aka Windows for Workgroups) and in 32-bit Windows (aka NT). Over time things accumulated in the protocol. For example, files ended up having long and short names. Text was "OEM" and Unicode. Authentication went through plain text, pitiful encryption, better encryption and Kerberos.

Many of the requests had 'infolevels'. For example, asking for information about a file (a stat call, to C programmers) would return information like size, creation and modification dates, extended attributes (an OS/2-ism), etc. The infolevel determined what the returned structure contained. In general Microsoft had a habit of taking whatever the normal structure in the kernel was and sending/receiving that. Then the structure would change in the next Windows release (hey, what about compressed size?) and that became a new infolevel; rinse and repeat (hey, what about encrypted size, what about offline storage, etc.). They also liked to add new requests over time. For example, the way Windows NT opened files didn't really map onto how they did it in the past. Each new operating system led to new read and write methods because they wanted to redefine the semantics of how memory could end up being used - remember that Microsoft's client and server were both kernel based.

Repeat this for 20 years and you have a rather large mess. It means that an SMB server has to support a lot of stuff that could be thrown at it. For example, depending on the age of the client, different infolevels would be requested. (Windows 98 used to be schizophrenic, sometimes doing Unicode and sometimes not!) Clients were simpler, since they would request what was normal for them and then fall back if the server didn't support it.

With SMB2 Microsoft did two things. One was a clear claim of intellectual property. Since they specified the protocol, they could also try to get patents on implementing it too, thereby shutting out Samba and similar projects. FUD could be used with customers to scare them away from non-Microsoft implementations. Fortunately the European Union and public opinion didn't take too kindly to this kind of thing and Microsoft ended up publicly licensing the protocol information.

The second thing was they didn't have to worry about backwards compatibility and so could cut out all the older crud in the protocol. For example there are only around 20 different requests in SMB2 versus over 100 in SMB1 and no infolevels. (It will be interesting to see how long it stays that way.)

SMB1 and SMB2 are not binary compatible but they are substantially similar and run over the same port. The client sends an SMB1 initial request indicating it supports SMB2 and the server can choose to continue speaking SMB1 (one of the 12 dialects) or switch to SMB2.
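That negotiation logic (not the wire format) can be sketched roughly like this; the dialect strings are the well-known ones, but the code itself is purely illustrative:

```python
# The server advertises what it speaks; given the client's list of
# dialects, it picks the best mutually supported one. "NT LM 0.12" is
# the classic SMB1 dialect; "SMB 2.002" is what an SMB2-capable client
# includes in its initial SMB1 negotiate request.
SERVER_DIALECTS = ["NT LM 0.12", "SMB 2.002"]

def negotiate(client_dialects):
    # Clients list dialects oldest-first, so scan from the newest end.
    for dialect in reversed(client_dialects):
        if dialect in SERVER_DIALECTS:
            return dialect
    return None

print(negotiate(["NT LM 0.12", "SMB 2.002"]))  # SMB2-capable client
print(negotiate(["NT LM 0.12"]))               # legacy client stays on SMB1
```

This is why a Vista client talking to a Samba 3 or XP server silently drops back to SMB1: the server simply never selects the SMB2 dialect.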

I find it strange Microsoft chose to develop a new 'standard' rather than to optimize the existing one, especially if you say they were forced to publish the specs. (and they could have got the same result by just optimizing SMB1). Strange... What could have been the reason for them developing SMB2 then?

There were two reasons. The first was their intellectual-property FUD campaign (remember them signing vague agreements with various weak Linux vendors over how Linux supposedly violates their patents, which somehow they forgot to prove). The second was having new code which they could use to force people to upgrade (similar to how DirectX 10 did), and the fact that developers prefer working on new code to crufty old stuff with almost 30 years of history in it.

So in time, the SMB2 optimizations will also end up in Linux. Good.

The important point is that the performance limitations are not limitations of the protocol, but mostly of the code in the kernel redirector client and, to a much lesser extent, of the actual running program. However, Microsoft's code is not open source, so we can't see it, and in any event we don't have the freedom to change it.

My main use (for now) is file serving/streaming. More in particular I'm putting together a 'movie jukebox'

Playing a DVD peaks at around 8 Mbits/sec. You could do that over wireless B! More modern codecs can use a lot less bandwidth. Blu-ray seems to have a normal maximum of 36 Mbits/sec (which fits in wireless G), although data could in theory peak at over 400 Mbits/sec.
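The arithmetic behind those figures is just bits-to-bytes division:

```python
def mbits_to_mbytes(mbits_per_sec):
    """Convert a link or stream rate in megabits/s to megabytes/s."""
    return mbits_per_sec / 8

print(mbits_to_mbytes(8))     # DVD peak: 1.0 MB/s
print(mbits_to_mbytes(36))    # Blu-ray typical max: 4.5 MB/s
print(mbits_to_mbytes(1000))  # gigabit ethernet ceiling: 125.0 MB/s
```

So even Blu-ray streaming uses under 4% of a gigabit link; it is the bulk file copies, not the playback, that need the bandwidth.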

That's why the 4 GB barrier matters so much to me.

That is a filesystem limitation. NTFS and the SMB1/2 protocols have supported larger file sizes for many years.

I'm constantly moving very large files on my network, and these kind of differences in speed could save me a lot of time in the end.

By far the best file transfer protocol especially for large files is ftp. It opens the TCP socket and just funnels the file through without any protocol overhead or other forms of framing (other than TCP obviously). Make sure you are using jumbo frames so that there are fewer network interrupts.
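That "funnel it through the socket" behaviour is easy to sketch; here a local socket pair stands in for the FTP data connection:

```python
import socket
import tempfile
import threading

def funnel(path, sock):
    # Stream the file straight through the socket with no per-block
    # framing - essentially what an FTP data connection does.
    with open(path, "rb") as f:
        while chunk := f.read(64 * 1024):
            sock.sendall(chunk)
    sock.close()  # EOF on the socket marks the end of the file

# Demo: a 1MB file pushed over a local socket pair.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"A" * 1_000_000)

a, b = socket.socketpair()
threading.Thread(target=funnel, args=(tmp.name, a)).start()

received = bytearray()
while data := b.recv(65536):
    received.extend(data)
print(len(received))
```

Compare this with SMB, where every read or write is a framed request/response pair; with FTP the only per-file overhead is the control-channel commands before and after the transfer.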

My preference is to use synchronization tools and just let them run unattended in the background. They will handle network disconnects, machines being put to sleep and resumed, etc. A good Windows-centric one is robocopy, while rsync started on Linux but works everywhere, although it has appeared slower to me on files bigger than 4GB.
 
A bit of history will help explain it. (...)
Wow! Thanks for the extensive history lesson. I always like to read about these kind of facts. I understand better now.

Playing a DVD peaks at around 8 Mbits/sec. You could do that over wireless B! More modern codecs can use a lot less bandwidth. Blu-ray seems to have a normal maximum of 36 Mbits/sec (which fits in wireless G), although data could in theory peak at over 400 Mbits/sec.
I know. The streaming part is not the problem here. It's the ripping part. See, to save time, I've opted to rip all the DVDs to ISO files, and not to compress the movies to DivX (otherwise this takes about one hour per title - at 1000+ titles... phew...). This leaves me with ISO files of about 4.6 to 8 GB per movie, depending on the length of the movie. Once a movie is put on the server hard disk, and as long as it stays in place and doesn't get moved around, I only need bandwidth for streaming, and that doesn't take much, as you very correctly stated. My main problem is I'm still experimenting with the ideal folder structure, and because of this a lot of large files get moved, not only from disk to disk but also over the network (because sometimes I also copy back and forth to other computers in the house). The reason I do this is mostly, but not exclusively, for backup purposes. My main point though is it saves me a lot of time if the file transfer is as fast as possible.

That is a filesystem limitation. NTFS and the SMB1/2 protocols have supported larger file sizes for many years.
This I knew. :)

By far the best file transfer protocol especially for large files is ftp. It opens the TCP socket and just funnels the file through without any protocol overhead or other forms of framing (other than TCP obviously).
Tell me about it. That's why I'm so pissed at Microsoft because they chose NOT to implement FTP in Windows Home Server. And because of the disk access system they created especially for this OS (Drive Extender), there's also no easy or good way of installing an FTP server additionally either. :(

I wish there was a way to disable this Drive Extender and simply work with drive letters again, sort of like having a 'mini 2003 server for home use', because that would allow me to use FTP. I have seriously considered buying Windows 200x Server in the past (and still do, actually) because of this 'flaw' (cause that's how I see it, as a flaw! MS will claim it's a feature. :rolleyes:), since I really would like to be able to use FTP. I know it's ideal for my specific use, and it bugs me immensely that this protocol was scrapped from the WHS features list. Had I known this beforehand when I bought WHS...

Make sure you are using jumbo frames so that there are fewer network interrupts.
About 90% of my network supports jumbo frames, but because of that remaining 10% (media extenders like Popcorn Hour) I chose not to enable this feature. An article Tim wrote some time ago (two years?) reported on the use of jumbo frames and didn't recommend it if not all devices on the network supported it. Maybe I should try it nonetheless and see what gives...

My preference is to use synchronization tools and just let them run unattended in the background. They will handle network disconnects, machines being put to sleep and resumed, etc. A good Windows-centric one is robocopy, while rsync started on Linux but works everywhere, although it has appeared slower to me on files bigger than 4GB.
Agreed. I was also planning on using rsync or a similar program once I had put everything in place on the server. Unfortunately this 'sorting of movies' task takes A LOT more time than I ever bargained for. Thanks a lot for your suggestions though!
 
An article Tim wrote some time ago (two years?) reported on the use of jumbo frames and didn't recommend it if not all devices on the network supported it. Maybe I should try it nonetheless and see what gives...
Not sure what article that was. A more recent and comprehensive article on Jumbo Frames is here. The important thing is that the switch(es) you use support jumbo frames. Otherwise the frames will be dropped.
As the article notes, mixing jumbo and non-jumbo capable devices will work ok for TCP, but not UDP.
 
Ah yes, that was exactly the article I meant. I guess time doesn't fly as fast as it sometimes seems. :)

In my specific case Tim, do you think using jumbo frames would cause me problems? Every device on my LAN is capable of jumbo frames (D-Link DIR-655 router and HP ProCurve 1400 switch), except for the Popcorn Hour media player. Doesn't media streaming use UDP?
 
Tim, first of all thanks for your kind references in the article :) I had pretty much concluded that Vista SP1 with SMB2 were at the heart of my speedy results, but had not had the time to go any deeper into my findings. What you've done here will likely stand as the best clinical overview, presented in an understandable format, of the advantages of SMB2 and SAMBA over SMB1. I've been incredibly busy with a few product launches over at cinevate.com but I'll be checking in here more regularly in the new year. One of the things I really like about your reviews is that you don't attempt to represent yourself as a guru on the subject, but rather share your explorations from the perspective of your curiosity. Excellent read!

Now if I can convince you to explore load balancing and 802.3ad behaviour on a few switches and NAS units....

Cheers,
 
In my specific case Tim, do you think using jumbo frames would cause me problems? Every device on my LAN is capable of jumbo frames (D-Link DIR-655 router and HP ProCurve 1400 switch), except for the Popcorn Hour media player. Doesn't media streaming use UDP?
I don't think so. But why not try it yourself? You'll know if you have problems soon enough...
 
Tim, thanks for taking a look at this stuff.

One thing I wanted to mention: I had found that when using the Process Monitor program you can see more information using the Advanced Output option under the Filter menu. This actually breaks the I/O requests down more and shows how they interact with the system, like Roger mentioned about large requests being broken down into smaller ones.

Looking at the Performance Monitor plots you provided, it seems you are actually saturating the network at many points during the file transfers. I saw similar results when testing with my 2-drive RAID 0 arrays here at home.

00Roush
 
Roger, thanks for all of the extra information. I do have a few things that I wanted to bring up/clarify...

You mentioned that Vista has no improvements to the SMB1.0 implementation. I sort of disagree with this due to what I have found in my testing. When testing on my home network between my Win XP Pro server and Vista SP1 client I found that Vista actually issues concurrent 32K requests when doing file copies to/from my Win XP machine. I posted a bit more info about that here... http://forums.smallnetbuilder.com/showthread.php?p=4840#post4840

Also you mentioned a post-XP kernel would be needed to see the higher speed. If you were talking about a client PC I would agree but when used on the server I think XP works just fine. I have tested with both XP Pro and Ubuntu as my server and found that with Vista as the client, both were capable of 80-100 MB/sec or higher for large file copies. So far I have seen better performance from XP but I think that might have to do with Vista using 32k requests. Just going off what I have seen with my home network.

From my understanding Vista will only use SMB2 to communicate when the other machine is running either Win Server 2008 or Vista. Anything else and Vista defaults to SMB1. So for the majority of the NAS units out there running SAMBA, SMB1 would be used to communicate.

00Roush
 
You mentioned that Vista has no improvements to the SMB1.0 implementation.

I should be clear that I was using network monitoring programs and was only interested in what was sent over the wire. I didn't see any appreciable difference in wire behaviour, although some programs changed (e.g. Explorer did some things in parallel, which resulted in parallel SMB1 requests).

I sort of disagree with this due to what I have found in my testing.

It is nice that you are seeing more concurrency, and that waiting for responses before sending the next requests is what kills throughput. However, if you were using Process Monitor, then you were seeing changes in user space, not in the kernel. Also relevant is that Vista has a new device driver model for network cards. In prior Windows versions a network card was handled by only one CPU, no matter how many CPUs were present. They also added Compound TCP, which roughly translates into faster recovery from packet loss. (Old-school TCP used to halve bandwidth usage and slowly climb back up on packet loss.)


When testing on my home network between my Win XP Pro server and Vista SP1 client I found that Vista actually issues concurrent 32K requests when doing file copies to/from my Win XP machine.

You have to determine what changed (which is rather tedious):
  • The Explorer (or equivalent) program
  • Libraries used by the program
  • Microsoft provided APIs such as CopyFileEx
  • The APIs that map from Win32 to the Windows (NT) Native API (did you know that Win32 is not the native API!). This is where the transition from user space to kernel space and back happens. It is also mostly what Process Monitor is showing you.
  • The kernel filesystem layer
  • The kernel redirector that generates the actual SMB traffic

It is the last component I claim that I saw no changes in. See the diagrams in Mark's article where you can see K and U showing what is in user space and what is in kernel space.

Also you mentioned a post-XP kernel would be needed to see the higher speed.

My point was that you have to use a post-XP kernel on both ends of the connection to use SMB2. There were several things being conflated in the discussion. In particular, the most egregious is the Microsoft propaganda that SMB performance could only be improved by using SMB2. There are many places performance can be improved, as shown in the list above. SMB1 has supported concurrency since the very beginning. Certainly Explorer appeared to have changed code when I looked.

From my understanding Vista will only use SMB2 to communicate when the other machine is running either Win Server 2008 or Vista. Anything else and Vista defaults to SMB1. So for the majority of the NAS units out there running SAMBA, SMB1 would be used to communicate.

That is correct, until the stable Samba version ships with SMB2 support.
 
Tim, I know as you do that this area of performance is very frustrating and counter-intuitive at times. Congratulations on finally getting the sort of performance one initially hopes for when going to gigabit, in approaching the limits of gigabit, and for organizing and presenting those findings.

I reported such behavior with Vista pre-SP1 on Tom's forums here:

http://www.tomshardware.co.uk/forum/page-21405_17_100.html#t90206

I observed significant differences in OSs, with the following pattern:

Vista or later, with large access size, is needed on the client side for great performance. (And by "great" I mean > 100 MB/s, and even > 115 MB/s, sustained for very large files far exceeding cache.)

Windows OSs of 2003 or later vintages (including XP-64, but not XP-32) could have great performance on the server side, but only for file pushes, not file pulls from the client.

The "LargeSystemCache" registry setting on XP or W2K can make a big improvement in its performance, but still not bring it up to the 2003 or later level.

Vista as a server could have a great performance for file pushes and pulls.

Vista and its drivers could have very erratic performance.

And of course the situation is different when you don't use Windows file transfers and use for example a third-party ftp implementation.
 
