mikeash.com: just this guy, you know?

NSBlog
"A failure in the hot air department"
Showing entries tagged "performance". Full blog index.

by Mike Ash
Tags: fridayqna hardware performance
Apple's newest mobile CPU, the A11, brings a new level of heterogeneous computing to iOS, with both high- and low-performance cores that are always on. With the release of the iPhone X, I set out to see if I could observe these heterogeneous cores in action.
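
As a rough sketch of one way to poke at this (my guess at a methodology, not necessarily the one the post uses): run the same CPU-bound loop at different quality-of-service classes and compare wall-clock times, on the theory that background work gets steered toward the efficiency cores.

    #import <Foundation/Foundation.h>

    // Sketch: time a fixed spin loop on a global queue of the given QoS class.
    static NSTimeInterval TimeSpinLoop(qos_class_t qos)
    {
        dispatch_semaphore_t done = dispatch_semaphore_create(0);
        __block NSTimeInterval elapsed = 0;

        dispatch_async(dispatch_get_global_queue(qos, 0), ^{
            NSDate *start = [NSDate date];
            volatile uint64_t x = 0;
            for(uint64_t i = 0; i < 100000000; i++)
                x += i;
            elapsed = -[start timeIntervalSinceNow];
            dispatch_semaphore_signal(done);
        });

        dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
        return elapsed;
    }

    // Compare TimeSpinLoop(QOS_CLASS_USER_INTERACTIVE) against
    // TimeSpinLoop(QOS_CLASS_BACKGROUND); a large gap suggests the two
    // runs landed on different kinds of cores.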

by Mike Ash
Tags: fridayqna performance cocoa objectivec iphone
Back in the mists of time, before Friday Q&A was a thing, I posted some articles running performance tests on common operations and discussing the results. The most recent one was from 2008, running on 10.5 and the original iPhone OS, and it's long past time to do an update.

by Mike Ash
Tags: fridayqna gcd performance sourcecode
Welcome back to another Friday Q&A. I'm off to C4 today (hope to see you there!) but I've prepared this in advance so everyone stuck at home (or worse, work) can at least have something interesting to read. Over the past four weeks I've introduced Grand Central Dispatch and discussed the various facilities it provides. In Part I I talked about the basics of GCD and how to use dispatch queues. In Part II I discussed how to use GCD to extract more performance from multi-core machines. In Part III I discussed GCD's event dispatching mechanism, and in Part IV I took care of various odds and ends that I hadn't covered before. This week I'm going to examine a practical application of using GCD to speed up the production of thumbnails for a large quantity of images, a topic suggested by Willie Abrams.
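
The overall shape of the approach looks roughly like this (a sketch only; ThumbnailForImageAtPath() and AddThumbnailToUI() are hypothetical placeholders, and the post's actual code differs in the details): each image is processed on a concurrent global queue, and each finished thumbnail is handed back to the main thread.

    #import <Foundation/Foundation.h>

    extern id ThumbnailForImageAtPath(NSString *path); // hypothetical helper
    extern void AddThumbnailToUI(id thumbnail);        // hypothetical helper

    static void GenerateThumbnails(NSArray *imagePaths)
    {
        dispatch_group_t group = dispatch_group_create();
        dispatch_queue_t work = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

        for(NSString *path in imagePaths)
        {
            dispatch_group_async(group, work, ^{
                id thumbnail = ThumbnailForImageAtPath(path); // CPU-heavy, runs in parallel
                dispatch_async(dispatch_get_main_queue(), ^{
                    AddThumbnailToUI(thumbnail);              // UI work stays on the main thread
                });
            });
        }

        // Get notified once every thumbnail has been generated.
        dispatch_group_notify(group, dispatch_get_main_queue(), ^{
            NSLog(@"all thumbnails generated");
        });
    }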

by Mike Ash
Tags: fridayqna gcd performance
It's that time of the week again. Over the past three weeks I've introduced you to the major pieces of Grand Central Dispatch, an exciting new API for parallel processing and event handling in Snow Leopard. The first week I covered basic concepts and dispatch queues. The second week I discussed how to use dispatch queues for parallel processing on multi-core computers. The third week I covered GCD's event handling system. This week I'm going to cover various odds and ends which I didn't get to before: dispatch queue suspension and targeting, semaphores, and one-time initialization.
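
To give a flavor of the last two items, here's a minimal sketch (illustrative only, not code from the post) showing dispatch_once for thread-safe one-time initialization and a semaphore used to cap how many blocks are in flight at once:

    #import <Foundation/Foundation.h>

    // One-time initialization: the block runs exactly once, no matter how many
    // threads call this function concurrently.
    static NSDictionary *SharedConfig(void)
    {
        static NSDictionary *config;
        static dispatch_once_t once;
        dispatch_once(&once, ^{
            config = [[NSDictionary alloc] init]; // load the real configuration here
        });
        return config;
    }

    // Semaphore as a counting limiter: allow at most four blocks in flight.
    static void EnqueueLimited(dispatch_queue_t queue, dispatch_block_t block)
    {
        static dispatch_semaphore_t limit;
        static dispatch_once_t once;
        dispatch_once(&once, ^{ limit = dispatch_semaphore_create(4); });

        dispatch_semaphore_wait(limit, DISPATCH_TIME_FOREVER); // blocks when four are already running
        dispatch_async(queue, ^{
            block();
            dispatch_semaphore_signal(limit);
        });
    }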

by Mike Ash
Tags: fridayqna gcd performance
Welcome back to another Friday Q&A. This week I continue the discussion of Grand Central Dispatch from the past two weeks. In the last two weeks I mainly focused on dispatch queues. This week I'm going to examine dispatch sources, how they work, and how to use them.
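
For a taste of the API before diving in, here's a minimal dispatch source sketch (a repeating timer; illustrative only, and the post's own examples differ):

    #include <dispatch/dispatch.h>
    #include <stdio.h>

    static dispatch_source_t StartHeartbeat(void)
    {
        dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
        dispatch_source_t timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, queue);
        dispatch_source_set_timer(timer,
                                  dispatch_time(DISPATCH_TIME_NOW, 0),
                                  NSEC_PER_SEC,       // fire roughly once per second
                                  NSEC_PER_SEC / 10); // with 100ms of allowed leeway
        dispatch_source_set_event_handler(timer, ^{
            printf("tick\n");
        });
        dispatch_resume(timer); // sources are created suspended
        return timer;
    }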

by Mike Ash
Tags: fridayqna gcd performance
Welcome back to Friday Q&A. Last week I discussed the basics of Grand Central Dispatch, an exciting new technology in Snow Leopard. This week I'm going to dive deeper into GCD and look at how you can use GCD to take advantage of multi-core processors to speed up computation. This post assumes that you've read last week's edition, so be sure to do that if you haven't already.
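
The central pattern of this installment, in sketch form: dispatch_apply() spreads the iterations of a loop across however many cores are available (the squaring here is just a stand-in workload):

    #include <dispatch/dispatch.h>

    static void SquareAll(double *values, size_t count)
    {
        dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
        dispatch_apply(count, queue, ^(size_t i) {
            values[i] = values[i] * values[i]; // iterations run concurrently on multiple cores
        });
    }

For very cheap per-iteration work like this, it's usually better to have each block handle a stride of iterations so the dispatch overhead doesn't swamp the actual computation.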

by Mike Ash
Tags: fridayqna gcd performance
Welcome back to Friday Q&A. This week's edition lines up with Apple's release of Snow Leopard, so I'm going to take this opportunity to open up the discussion on previously NDA'd technologies and talk about some of the cool stuff now available in Snow Leopard. For this week I'm going to start what I plan to be an ongoing series on Grand Central Dispatch, a topic suggested by Chris Liscio.
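
The pattern the series opens with, as a minimal sketch (ExpensiveWork() and UpdateUI() are hypothetical placeholders): push slow work onto a background queue, then hop back to the main queue to touch the UI.

    #import <Foundation/Foundation.h>

    extern NSData *ExpensiveWork(void);  // hypothetical
    extern void UpdateUI(NSData *data);  // hypothetical

    static void KickOffWork(void)
    {
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            NSData *result = ExpensiveWork();      // runs off the main thread
            dispatch_async(dispatch_get_main_queue(), ^{
                UpdateUI(result);                  // UI updates stay on the main thread
            });
        });
    }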

by Mike Ash
Tags: fridayqna memory performance
Welcome back to another Friday Q&A. Now that WWDC is behind us, I'm back on track to bring you more juicy highly-technical goodness. Maybe I can even get back to doing one a week.... This week I'm going to take André Pang's suggestion of discussing process memory statistics (the stuff you see in Activity Monitor or top) in Mac OS X.
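
For the curious, one way to read a couple of those numbers programmatically is to ask Mach directly (a minimal sketch using task_info(), not the full treatment the post gives):

    #include <mach/mach.h>
    #include <stdio.h>

    static void PrintMemoryStats(void)
    {
        struct mach_task_basic_info info;
        mach_msg_type_number_t count = MACH_TASK_BASIC_INFO_COUNT;
        kern_return_t kr = task_info(mach_task_self(), MACH_TASK_BASIC_INFO,
                                     (task_info_t)&info, &count);
        if(kr == KERN_SUCCESS)
            printf("resident: %llu bytes, virtual: %llu bytes\n",
                   (unsigned long long)info.resident_size,
                   (unsigned long long)info.virtual_size);
    }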

by Mike Ash
Tags: fridayqna chemicalburn performance threading
Welcome to another Friday Q&A, where all the women are strong, all the men are good-looking, and all the programmers are above average. This week, Phil Holland has suggested that I dissect an interesting piece of code from one of my screensavers, so we're going to take a look at ChemicalBurn's multithreaded routing code.

by Mike Ash
Tags: fridayqna rant performance
Welcome back to Friday Q&A, a bit early this week since I won't be around to post it at the usual time. This week I'm going to cheat a little bit and use a topic that I "suggested" myself. I'll be talking about what I like to call "holistic optimization", which is essentially how to look at optimization within the context of your entire project, rather than bit-swizzling, loop unrolling, and other micro-optimizations.

by Mike Ash
Tags: fridayqna performance cocoa ipc
Welcome back to another Friday Q&A. This week I'm going to take Erik's (no last name given) suggestion from my interprocess communication post and expand a bit on Distributed Objects, what makes it so cool, and the problems that it has.
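
The appeal, in a tiny sketch (the "MyServer" name and Calculator protocol are made up for illustration): ordinary Objective-C messages sent to a proxy get forwarded transparently to an object living in another process.

    #import <Foundation/Foundation.h>

    @protocol Calculator
    - (int)addNumber:(int)a toNumber:(int)b;
    @end

    // Server side: vend an object under a well-known name.
    static void VendCalculator(id<Calculator> calculator)
    {
        NSConnection *connection = [NSConnection defaultConnection];
        [connection setRootObject:calculator];
        [connection registerName:@"MyServer"];
        [[NSRunLoop currentRunLoop] run];
    }

    // Client side: grab a proxy and message it as if it were local.
    static void UseCalculator(void)
    {
        id<Calculator> proxy = (id<Calculator>)
            [NSConnection rootProxyForConnectionWithRegisteredName:@"MyServer" host:nil];
        NSLog(@"2 + 2 = %d", [proxy addNumber:2 toNumber:2]);
    }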

by Mike Ash
Tags: fridayqna performance nsoperationqueue
Welcome back to Friday Q&A, which this week is also Friday the Thirteenth! Be especially careful, as this is the first of two consecutive Friday the Thirteenths. For this first Friday the Thirteenth I'm going to talk about parallel software design using an "operations" approach (think NSOperation), as suggested by Nikita Zhuk way back when I first started this whole thing.
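
In sketch form, the operations approach boils down to describing work as discrete units and letting a queue schedule them (illustrative only; the download/parse operations here are hypothetical):

    #import <Foundation/Foundation.h>

    static void RunOperations(void)
    {
        NSOperationQueue *queue = [[NSOperationQueue alloc] init];

        NSOperation *download = [NSBlockOperation blockOperationWithBlock:^{
            // fetch some data
        }];
        NSOperation *parse = [NSBlockOperation blockOperationWithBlock:^{
            // process whatever the download produced
        }];
        [parse addDependency:download]; // parse won't start until download finishes

        [queue addOperation:download];
        [queue addOperation:parse];
        [queue waitUntilAllOperationsAreFinished];
    }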

by Mike Ash
Tags: fridayqna shark performance
Welcome back to Friday Q&A. This week I'm taking Jeff Johnson's idea to discuss optimization and profiling tools.

Friday Q&A 2008-12-19 at 2008-12-20 01:28
by Mike Ash
Tags: fridayqna threading parallelism performance
Great response last week. This week I'm going to merge Sam McDonald's question about how I got into doing multithreaded programming and Phil Holland's idea of talking about the different sorts of parallelism available.

by Mike Ash
Tags: performance cocoa objectivec iphone
I finally got a chance to run my performance comparison code on an iPhone, so we can see just how much horsepower this little device has. I'm still not able to load my own code onto the device, so I want to thank an anonymous benefactor for adapting my code to the new environment and gathering the results for me.

by Mike Ash
Tags: performance cocoa objectivec leopard
By popular demand, I have re-run my Performance Comparisons of Common Operations on the same hardware but running Leopard.

by Mike Ash
Tags: performance c++ stl casestudy
Those who know me from a programming standpoint know that I am a big opponent of needless optimization. But sometimes optimization is necessary, and when that comes I'm a big proponent of examining algorithms over twiddling low-level code. I recently had a good opportunity to perform algorithmic optimizations in a somewhat unconventional scenario, and this post will describe what I did.

by Mike Ash
Tags: cocoa garbagecollection performance
The move to garbage collection in Cocoa has been interesting. People have said that it's impossible, or impractical, or a bad idea, or doomed to failure, and one of the most common things trotted out is that GC is inevitably slow. However, I think that enabling garbage collection in your Cocoa app could actually be a good way to increase performance under the right conditions.

by Mike Ash
Tags: performance objectivec cocoa
We all know that premature optimization is the root of all evil. But a recent conversation brought to mind that we often don't really know the runtime costs of the code we write. While we should be writing foremost for correctness and clarity, having an idea of these speeds is good, especially when we get it into our heads that some operation is much more costly than it really is. With that in mind, I compiled a list of common Cocoa operations and how much time they require at runtime.
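
The measurement technique behind numbers like these is roughly this (a sketch, not the post's actual harness): time a tight loop of the operation with mach_absolute_time() and divide by the iteration count.

    #import <Foundation/Foundation.h>
    #include <mach/mach_time.h>

    static double NanosecondsPerIteration(NSUInteger iterations, void (^op)(void))
    {
        mach_timebase_info_data_t info;
        mach_timebase_info(&info);

        uint64_t start = mach_absolute_time();
        for(NSUInteger i = 0; i < iterations; i++)
            op();
        uint64_t elapsed = mach_absolute_time() - start;

        double nanoseconds = (double)elapsed * info.numer / info.denom;
        return nanoseconds / iterations; // average cost of a single operation
    }

    // Example: NanosecondsPerIteration(10000000, ^{ [@"hello" length]; });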

Autorelease is Fast at 2006-06-07 00:00
by Mike Ash
Tags: autorelease performance cocoa objectivec
If you've done much Cocoa programming, you've probably run into a situation where you needed to create a local autorelease pool because of some sort of loop. And you've probably run into advice telling you not to create and destroy the pool for every iteration, because that would be slow. I never believed that it could be significant, and I finally took the time to test it today. What's the verdict? Just as I thought, making autorelease pools is really fast.
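
For reference, the shape of the code being measured is roughly this (a sketch, not the original test): a fresh autorelease pool created and drained on every pass through a loop.

    #import <Foundation/Foundation.h>

    static void ProcessLines(NSArray *lines)
    {
        for(NSUInteger i = 0; i < [lines count]; i++)
        {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            NSString *upper = [[lines objectAtIndex:i] uppercaseString]; // autoreleased temporary
            // ...do something with upper...
            [pool release]; // temporaries die here, keeping peak memory flat
        }
    }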