Memory Mapped File Performance

In my last post I introduced Memory Mapped Files, and discussed how you could mark them as sparse.  Today, I would like to talk a little about performance.  Using LINQPad (my favourite tool!), I created a rough and ready test script which can be downloaded at the end of the article.

The idea was to check the raw read performance of getting an array of bytes from a memory mapped file.  As a control I used Array.Copy to copy the same number of bytes from one byte[] to another.  I was also only interested in accessing the same block of memory each time, and took measurements after an initial read, the idea being that I’d expect the data to be in memory and not on disk.
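The control can be sketched roughly as follows (a simplified illustration, not the actual test script — the block size, iteration count, and warm-up step are my assumptions):

```csharp
using System;
using System.Diagnostics;

class ArrayCopyControl
{
    static void Main()
    {
        const int blockSize = 32 * 1024;
        const int iterations = 100000;
        byte[] source = new byte[blockSize];
        byte[] destination = new byte[blockSize];

        // Warm up once so the first-touch cost is excluded, mirroring
        // the "measure after an initial read" approach described above.
        Array.Copy(source, destination, blockSize);

        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            Array.Copy(source, destination, blockSize);
        }
        sw.Stop();

        Console.WriteLine("Array.Copy: {0:F2} us per copy",
            sw.Elapsed.TotalMilliseconds * 1000.0 / iterations);
    }
}
```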

There are two ways of accessing data in a memory mapped file – using a MemoryMappedViewStream or a MemoryMappedViewAccessor.  The former is optimised for sequential access (e.g. when reading or writing a block of data), and the latter is optimised for random-access reading and writing.
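For reference, the two access styles look roughly like this (a minimal sketch; the map name "demo-map" and the sizes are placeholders, and CreateNew with a named map assumes Windows):

```csharp
using System.IO.MemoryMappedFiles;

class AccessStyles
{
    static void Main()
    {
        const int blockSize = 1024;
        byte[] buffer = new byte[blockSize];

        // A pagefile-backed map; real code would typically map a file on disk.
        using (MemoryMappedFile mmf = MemoryMappedFile.CreateNew("demo-map", blockSize))
        {
            // Sequential access: the view behaves like any other Stream.
            using (MemoryMappedViewStream stream = mmf.CreateViewStream())
            {
                stream.Read(buffer, 0, blockSize);
            }

            // Random access: read or write at arbitrary offsets.
            using (MemoryMappedViewAccessor accessor = mmf.CreateViewAccessor())
            {
                accessor.ReadArray(0, buffer, 0, blockSize);
            }
        }
    }
}
```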

Another area of concern for me was to determine how important it was to hold onto the stream/accessor.  That is, what is the cost of creating the stream/accessor on demand versus keeping hold of the disposable object and reusing it?  Clearly, it was highly likely that re-using the same stream/accessor would be faster, but doing so can complicate designs, so quantifying the relative cost was useful.
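The two usage patterns being compared can be sketched like this (illustrative only; method names and the iteration count are mine, not from the test script):

```csharp
using System.IO.MemoryMappedFiles;

static class UsagePatterns
{
    const int BlockSize = 1024;
    const int Iterations = 1000;

    // 'Single': one accessor is created up front and reused by every iteration.
    public static void SingleAccessor(MemoryMappedFile mmf, byte[] buffer)
    {
        using (MemoryMappedViewAccessor accessor = mmf.CreateViewAccessor())
        {
            for (int i = 0; i < Iterations; i++)
            {
                accessor.ReadArray(0, buffer, 0, BlockSize);
            }
        }
    }

    // 'Multiple': a fresh accessor is created and disposed on every iteration.
    public static void MultipleAccessors(MemoryMappedFile mmf, byte[] buffer)
    {
        for (int i = 0; i < Iterations; i++)
        {
            using (MemoryMappedViewAccessor accessor = mmf.CreateViewAccessor())
            {
                accessor.ReadArray(0, buffer, 0, BlockSize);
            }
        }
    }
}
```

The second pattern is simpler to design around (no long-lived disposable to manage), which is exactly why its overhead is worth quantifying.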

You can run the code yourself, but here is a summary of my findings, where each figure represents a factor of how much slower the operation is.  ‘Single’ indicates that every loop iteration used the same view or accessor, whereas ‘Multiple’ indicates that a new view or accessor was created for each iteration:

| Block Size (bytes) | Stream vs. Array (Single) | Stream vs. Array (Multiple) | Stream Multiple/Single | Accessor vs. Array (Single) | Accessor vs. Array (Multiple) | Accessor Multiple/Single | Accessor/Stream (Single) | Accessor/Stream (Multiple) |
|---|---|---|---|---|---|---|---|---|
| 32 | 3.3 | 400 | 121 | 12 | 400 | 33 | 4 | 1.01 |
| 1024 | 1.75 | 127 | 73 | 120 | 243 | 2 | 69 | 1.9 |
| 32768 | 1.03 | 9.7 | 9 | 163 | 172 | 1 | 158 | 17.6 |

All these figures are approximate, and your results may vary, but they allow us to draw some general conclusions:

  1. Accessors are always slower than streams when reading/writing data sequentially, and the difference gets worse as the size of the data increases.  Performance can therefore be greatly improved by always writing data sequentially and in large chunks; however, the choice matters less for small amounts of data (less than a kilobyte).
  2. There is a cost to creating and disposing accessors; this cost becomes acceptable when the size of the data being read is in the order of kilobytes, and quickly becomes negligible over 32K.  The benefit of keeping hold of a stream remains significant.
  3. When reading large blocks through a reused stream, performance is equivalent to raw memory access.
  4. The proportional cost of creating an accessor increases as the size of the buffer increases; this is the opposite of creating streams, where the cost is fixed and the relative cost therefore decreases.
  5. All of this needs to be taken alongside the reality that the slowest operation, reading 32K with a freshly created accessor, took a total of 0.49ms.  That’s slow compared to the ~3µs required to copy a 32K byte[], but it’s certainly not slow in the grand scheme of things.

These overall conclusions will hopefully help you when designing any systems that leverage memory mapped files.

The code is available as a Gist, or can be downloaded using the button below:
