Posted at 2007-08-25
Tags: cocoa objectivec performance
Performance Comparisons of Common Operations
by Mike Ash  

We all know that premature optimization is the root of all evil. But a recent conversation reminded me that we often don't really know the runtime costs of the code we write. While we should write foremost for correctness and clarity, it's good to have a rough idea of these costs, especially when we've gotten it into our heads that some operation is much more expensive than it really is. With that in mind, I compiled a list of common Cocoa operations and how much time they require at runtime.

I put together a Cocoa program to compute timings of a bunch of different operations. You can download that program here.
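The heart of it is nothing exotic. As a rough sketch (this is illustrative, not the exact code in the download, and TimeOperation is a hypothetical helper), each test times a large number of iterations and divides:

    #include <stdint.h>
    #include <mach/mach_time.h>
    
    // Hypothetical helper: run op() many times and return the
    // average cost per call in nanoseconds.
    static double TimeOperation(void (*op)(void), uint64_t iterations)
    {
        mach_timebase_info_data_t info;
        mach_timebase_info(&info);
        
        uint64_t start = mach_absolute_time();
        for(uint64_t i = 0; i < iterations; i++)
            op();
        uint64_t elapsed = mach_absolute_time() - start;
        
        // Convert mach time units to nanoseconds, then average.
        double nanoseconds = (double)elapsed * info.numer / info.denom;
        return nanoseconds / iterations;
    }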

I ran the program on my 2.66GHz Mac Pro with 3GB of RAM and the stock hard drive. I didn't bother to shut down any other apps because I'm lazy and because I have lots of idle CPU. Disk timings may have been affected by background activity. This produced the following chart:

Name                                     Iterations  Total time (sec)  Time per (ns)
IMP-cached message send                  1000000000               0.9            0.9
C++ virtual method call                  1000000000               1.4            1.4
Integer division                         1000000000               2.3            2.3
Objective-C message send                 1000000000               5.0            5.0
Float division with int conversion        100000000               0.9            9.2
Floating-point division                   100000000               0.9            9.3
16 byte memcpy                            100000000               2.9           29.5
16 byte malloc/free                       100000000               5.2           52.5
NSInvocation message send                  10000000               1.6          160.7
NSObject alloc/init/release                10000000               1.9          186.6
NSAutoreleasePool alloc/init/release       10000000               3.0          300.0
NSButtonCell creation                       1000000               5.2         5219.2
16MB malloc/free                             100000               1.0        10211.3
Read 16-byte file                            100000               2.0        19905.4
Zero-second delayed perform                  100000               3.0        30374.1
NSButtonCell draw                            100000               7.6        76167.0
pthread create/join                           10000               1.1       114887.3
1MB memcpy                                    10000               1.2       124217.6
Write 16-byte file                            10000               5.0       503798.5
Write 16-byte file (atomic)                   10000               9.9       989662.0
NSTask process spawn                           1000               5.5      5504646.1
Read 16MB file                                  100               2.9     29116230.5
Write 16MB file                                  30              10.0    334185067.2
Write 16MB file (atomic)                         30              10.0    334293782.2

All file operations use NSData. The APIs used by the rest are hopefully obvious.

In general, few of these results are surprising. However, some of them are still instructive.

IMP-cached messaging is the fastest. This is no shock: it's just a call through a C function pointer. C++ virtual dispatch is close behind, about 50% slower, which is to be expected since it just incurs one additional vtable lookup before calling through a C function pointer.
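For anyone unfamiliar with the technique, IMP caching looks something like this, with object and -doSomething standing in for whatever receiver and method you're sending:

    // Look the method up once...
    SEL sel = @selector(doSomething);
    IMP imp = [object methodForSelector:sel];
    
    // ...then call through the C function pointer directly,
    // skipping objc_msgSend's lookup on every iteration.
    for(unsigned i = 0; i < 1000000; i++)
        ((void (*)(id, SEL))imp)(object, sel);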

Objective-C message sends are slower, as one would expect, but still very reasonable. At 5 nanoseconds each, this comes out to an average of just over 13 cycles per message send, which is fantastically fast.

Malloc/free is considerably more expensive, as expected; a good amount of bookkeeping takes place there. However, at about 53ns per allocation, this isn't something which must be slavishly avoided.

NSInvocation is ridiculously slow, but once again it comes as no surprise. It has to do a lot more work, and the extra indirection it provides comes at a price: about 30 times the cost of a straight message send.
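The equivalent NSInvocation-based send, for comparison (again using the hypothetical -doSomething), has to build an invocation object and marshal the arguments:

    NSMethodSignature *sig = [object methodSignatureForSelector:@selector(doSomething)];
    NSInvocation *invocation = [NSInvocation invocationWithMethodSignature:sig];
    [invocation setSelector:@selector(doSomething)];
    [invocation setTarget:object];
    [invocation invoke]; // all the argument marshalling happens here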

The creation and destruction of an NSObject is significantly more expensive than a malloc/free, but still just peanuts in the grand scheme of things. An NSAutoreleasePool takes only a little more time to create and destroy, but you already knew that.
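Spelled out, the two operations being compared are:

    // NSObject alloc/init/release:
    NSObject *object = [[NSObject alloc] init];
    [object release];
    
    // NSAutoreleasePool alloc/init/release:
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    [pool release];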

Allocating large chunks of memory is considerably more expensive than small chunks. This is because once you hit a certain threshold, every allocation goes straight to the kernel and you pay for that in the form of syscall overhead. Ten microseconds is still pretty fast, but you might consider caching huge blocks of memory if you allocate a ton of them in a tight loop.

As expected, anything that hits the disk is tremendously slow. The fastest such operation, a 16-byte file read that no doubt stayed entirely in RAM cache for the entire test, still took 20 microseconds per read on average.

Delayed performs come in as surprisingly costly at 30 microseconds. Even so, this supports around 30,000 of them per second; hopefully you won't need anywhere near that many.
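That 30 microseconds buys you a call like this, which fires on the run loop's next pass (-tick is a hypothetical method):

    // Schedules a hypothetical -tick method to run on the next
    // pass through the current run loop.
    [self performSelector:@selector(tick) withObject:nil afterDelay:0.0];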

Creating threads is slow, over 100 microseconds. There's a reason we have thread pools.
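The create/join pair being measured, with a do-nothing thread body:

    #include <pthread.h>
    
    // Do-nothing thread body, just to measure the overhead.
    static void *ThreadFunc(void *arg)
    {
        return NULL;
    }
    
    // Create a thread and immediately wait for it to finish.
    pthread_t thread;
    pthread_create(&thread, NULL, ThreadFunc, NULL);
    pthread_join(thread, NULL);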

The large memcpy works out to about 8GB/sec. I believe the Mac Pro has a theoretical max memory bandwidth of about 20GB/sec, but of course the memcpy hits the bus with the data in both directions, so I am willing to call this extremely impressive performance.

Filling in the end of the table are all of the remaining file operations. We can see the cost of using the "atomic" flag for NSData writes; since the amount of data for a 16-byte file is trivial, the write can be thought of as a single operation, and the atomic swap adds a second operation, which doubles the time needed for the write. As expected, the cost of "atomic" becomes basically invisible for large files. The large write test comes in at 48.7MB/s, which is quite respectable. The large read test comes in at 550MB/s, which is obviously a result of OS caching.
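The flag in question is the atomically: parameter to NSData's write method (the path here is illustrative):

    NSData *data = [NSData dataWithBytes:"0123456789abcdef" length:16];
    
    // With atomically:YES, NSData writes to a temporary file and
    // renames it into place, so a crash can't leave a partial file.
    [data writeToFile:@"/tmp/testfile" atomically:YES];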

A surprise entry near the end of the table is the process spawn test. At over 5 milliseconds per spawn, this is a very expensive operation. This is definitely not one for tight loops.
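A minimal spawn, for reference; /usr/bin/true is an illustrative choice of executable, not necessarily what the test program runs:

    NSTask *task = [[NSTask alloc] init];
    [task setLaunchPath:@"/usr/bin/true"]; // illustrative executable
    [task launch];
    [task waitUntilExit];
    [task release];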

Soundbite lessons to take away:

- Objective-C message sends are cheap; don't contort your code to avoid them.
- Object allocation costs something, but not enough to avoid outside of tight inner loops.
- Anything that touches the disk is orders of magnitude slower than anything that doesn't.
- Threads are expensive to create, and processes vastly more so; reuse them rather than spawning them in a loop.


Comments:

I stand corrected, Mike. For what it’s worth, I discovered we were doing ObjC allocations in a similar situation anyway—looks like it was just my own preconceived notions and premature optimization at work.
I intended to run the tests on a PPC last night, but forgot on account of brain not working at 2 am. (It’s probably a good thing you distracted me from coding in that state.) Anyway, I’ve posted them at http://jens.ayton.se/blag/performance-co.. for space and formatting reasons. Spoiler: Objective-C message dispatch is not twice as fast as FP divide on PPC.

I’m not entirely satisfied about the accuracy of the smaller numbers, but the order is probably right.
Colin, thanks for the update. I hope you didn't take my jab badly; it aroused legitimate curiosity, and the results were a bit surprising, both in the allocation being slower and the drawing being faster than I expected. I had thought there would be about two orders of magnitude between them, not just one.

Ahruman, thanks for the comparison. For the record, I compiled the code above with -O0 because -O3 screws up the do-nothing loops as you noted, and -O0 doesn’t seem to hurt anything for this particular test. I also tried accelerated Objective-C method dispatch and to my surprise, there was no effect on the results. I didn’t try non-nil receivers because I believe that it’s not a useful measurement for real code.
Mike, accelerated dispatch is not implemented for x86. The private function rtp_init() (objc4/runtime/objc-rtp.m) does nothing if !defined(ppc). Looking into what rtp_init() actually does, the reason would seem to be that it’s easy to find a blr in (fixed-instruction-size) PPC code, but fiddly to do the equivalent for x86 code. Presumably the flag is quietly ignored on x86 so people don’t have to fiddle about setting up GCC_FAST_OBJC_DISPATCH_ppc.

Also, the export file has the non-nil entry points commented out, for both PPC and x86, with the helpful comment “non-nil entry points disabled for now”. I was really only interested in non-nil sends in the hope that they wouldn’t actually be much more efficient. Still, if they were, there might conceivably be an advantage in putting performance-sensitive code which can make that assumption in a category in a separate file with custom build flags. Possibly.

(Code extracts removed because Textile sucks.)
Were the divisions by constants or non-constants? It makes a big difference, on both x86 and PPC. See here for details: http://elliotth.blogspot.com/2007/07/sti..
Elliot, in your case compiler optimizations were probably converting the remainder operation to less costly operations. A quick check shows that x % 82 compiles to mulhwu r5,r31,r30; srwi r5,r5,6; mulli r5,r5,82; subf r5,r5,r31 (multiply high word unsigned, shift right, multiply by immediate, subtract) on PPC, while x % y compiles to divwu r5,r31,r30; mullw r5,r5,r30; subf r5,r5,r31 (divide word unsigned, multiply low word, subtract). The costly division is the explanation there.

However, as you can see from the PPC assembler listing on my entry (linked above), the (unoptimized) benchmark code was generating division instructions for both the floating-point and the integer cases. I fully expect this was also the case in Mike’s original x86 case.
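(To make the constant-versus-variable distinction above concrete, here is the difference in C; an optimizing compiler can strength-reduce the first but not the second:)

    // With a constant divisor, an optimizing compiler can replace the
    // divide with the multiply/shift sequence shown above.
    unsigned ModConstant(unsigned x)
    {
        return x % 82;
    }
    
    // With a variable divisor, it has no choice but a real divide.
    unsigned ModVariable(unsigned x, unsigned y)
    {
        return x % y;
    }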
Nice summary information! I would disagree somewhat on the conclusions, though: allocating objects often can be the make-or-break factor in terms of performance. See: http://www.metaobject.com/blog/2007/08/h..
Accelerated Objective-C dispatch on PPC is faster because it avoids the dyld stub (which is otherwise used for all cross-library calls). On i386 the dyld stub is much faster than the ppc equivalent, so we didn’t bother doing extra work to bypass it.
Marcel, anything can be the make-or-break factor in theory. Certainly in very specific types of code this can be the make-or-break factor in practice. But this falls solidly on the side of the line that says you shouldn’t even bother to think about it until after you have written an entire working program, determined it to be slow, and profiled it to find out why. Contrast with something like, say, spawning a new process in a loop which you know must execute at least 1000 times per second, where you should definitely be reconsidering your design from the start.

Greg, thanks for the information on the dyld stub. That certainly explains why I saw no difference.
Presumably large malloc()s are slow not only (and possibly not even mainly) due to "every allocation goes straight to the kernel and you pay for that in the form of syscall overhead", but also because every allocation requires a large number of pages to be created (zeroed? I'm not sure whether that happens, because I never assume it does), and potentially other pages need to be added to the laundry list to let those new pages become activated. Solaris on SPARC has some ridiculously large page sizes (like 4MB) in order to avoid cache-miss and paging operations happening so heavily in Oracle^Wapplications with large working data sets.

Nice investigation.
I’m not sure exactly what happens in the malloc itself, but I do know that the OS will put off a lot of the work until you actually start touching the pages. Since my test just does a malloc followed by an immediate free, without ever writing to the pages, a great deal of that overhead is skipped.
OK, so now I stop talking from a position of ignorance and refer to my Paper Singh, specifically section 8.15 “Memory Allocation in User Space”. It seems that OS X does have scalable allocation categories, not quite the same as but analogous to Solaris large pages (because they’re not backed by large pages in the hardware – they just adapt the bookkeeping). Your 16b malloc()s are handled by a special case called the tiny region allocator, the quantum for which is exactly 16 bytes. Your 16M malloc()s are on the border of the large and huge allocations, I think they’re huge though. I’m slightly interested in finding out whether creating a 16M zone and doing the large malloc()s into that zone improves anything, but I currently have no use for that information as I have no reason to optimise malloc() calls anywhere.

BTW you’re correct of course that pages are created/activated by faults, not by allocation. My mistake.
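(The zone experiment described above would look roughly like this; whether it actually helps is untested:)

    #include <malloc/malloc.h>
    
    // Create a dedicated zone and allocate the 16MB blocks from it.
    malloc_zone_t *zone = malloc_create_zone(16 * 1024 * 1024, 0);
    void *block = malloc_zone_malloc(zone, 16 * 1024 * 1024);
    malloc_zone_free(zone, block);
    malloc_destroy_zone(zone);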
My experience is that performance needs to be considered and designed for from the start. Not the “small efficiencies” whose pursuit has been characterized as the “root of all evil”, but the design choices that make good performance possible. Object-allocation patterns are definitely among the latter, not the former. This is not theory or corner cases, but hard won experience. YMMV.
Marcel, obviously personal experiences will differ. I’ve written my share of high-performance software and never had memory allocation of any kind be an important factor. I would assume that individual programming styles will influence these things a lot, since if you subconsciously write code that does something less often, then you will not run into that particular trouble as frequently.

In general I would say that if you’ve run into a problem a few times you should be experienced enough to know when to deal with it, and if you haven’t then you should follow the general principle of avoiding premature optimization, writing good, clean code, and waiting until you can profile to determine where the problems lie.
It’d be nice to see some Leopard numbers here. My tests on a 2.4 GHz iMac give about the same numbers for most things, but NSInvocation is twice as fast and objc message sends are slower. It’d be more interesting with numbers from the same computer.
Is it possible for you to also provide a compiled executable, simply for disassembly? (Or attach the disassembled Intel-syntax assembly code.)

Just to allow us to see what's actually being executed. If you provide -O3 and -O0 versions (I have no Objective-C compiler at the moment), I'm sure we can fix it up so that -O3 doesn't break any of the loops.
