Advanced programming tips, tricks and hacks for Mac development in C/Objective-C and Cocoa.

An Asteroids-style game in CoreAnimation, Part Two.

How would you write an arcade-style 2D game in CoreAnimation? I'll show you how to write a resolution independent, high-speed, model-view-controller designed, Asteroids-style arcade game using CoreAnimation as the screen renderer. In this second of four parts, I'll create basic objects in the game and their corresponding CoreAnimation Layers on screen.

We have a window

In the previous post, I proposed an Asteroids-style game in CoreAnimation and explained the basic design of the Quartzeroids2 game.

I showed the code to construct and scale the window. It has a blue gradient background and can switch between fullscreen and windowed modes.

Now, we need to put something in that window.

objectsonscreen.png

Objects placed on screen in this part.

Image Layers

Drawing in the window is done using CoreAnimation layers. If you looked closely at how the window background was drawn in Part One, you'll have noticed that it was a CALayer, drawn using a single image:

NSImage *image = [NSImage imageNamed:imageName];
[image
    drawInRect:NSRectFromCGRect([self bounds])
    fromRect:[image alignmentRect]
    operation:NSCompositeSourceOver
    fraction:1.0];

Since this single image is a PDF made from vector (not bitmapped) components, the layer can be drawn at any resolution without aliasing effects from resizing. In fact, the background PDF isn't even the right aspect ratio and Cocoa happily reshapes it for us. The added processing time to render a PDF, relative to a bitmap, doesn't really matter since CoreAnimation only renders the CALayer once, then reuses the existing texture.

Game Objects and Layers

Placing a game-related object on screen will require two different components: the GameObject and the GameObjectLayer.

GameObject

The GameObject is the version of the object as handled in the GameData. Since the game logic is responsible for deciding the size, positioning, speed, trajectory and in some cases the image and angle of rotation of the object, these properties will all be properties of the GameObject.

The GameObjects are held by the GameData object, which tracks them in its gameObjects dictionary, so any GameObject can be accessed at any time by its unique key.

Resolution independence:
The biggest quirk of how I decided to implement the GameObject is that it is totally resolution independent. All coordinates and sizes are measured in units where 1.0 is the height of the game window. So the coordinates (0, 0), (0.5 * GAME_ASPECT, 0.5) and (GAME_ASPECT, 1.0) are the bottom-left corner, center and top-right corner of the screen respectively (GAME_ASPECT is the window aspect ratio: the width of the window divided by the height).
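
As a rough sketch, the GameObject interface then amounts to little more than a list of these properties (the property names follow the ones used later in this post but the exact declarations are my guess, not the project's code):

@interface GameObject : NSObject
{
    double x;           // center x, in units where 1.0 equals the window height
    double y;           // center y
    double width;
    double height;
    double speed;       // distance travelled per second, in the same units
    double trajectory;  // direction of travel, in radians
    double angle;       // rotation applied to the object's image, in radians
    NSString *imageName;
    BOOL visible;
}
@property (nonatomic, assign) double x, y, width, height, speed, trajectory, angle;
@property (nonatomic, copy) NSString *imageName;
@property (nonatomic, assign) BOOL visible;

- (BOOL)updateWithTimeInterval:(NSTimeInterval)timeInterval;

@end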

With the GameObject being just a long list of Objective-C properties, most of the code in GameObject exists to set, modify or update those properties. The biggest common "update" that needs to be performed is to move the object according to its speed and trajectory and "wrap" the object if it goes off the edge of the screen:

- (BOOL)updateWithTimeInterval:(NSTimeInterval)timeInterval
{
    x += timeInterval * speed * cos(trajectory);
    y += timeInterval * speed * sin(trajectory);
    
    if (x > GAME_ASPECT + (0.5 + GAME_OBJECT_BOUNDARY_EXCESS) * width)
    {
        x = -0.5 * width;
    }
    else if (x < -(0.5 + GAME_OBJECT_BOUNDARY_EXCESS) * width)
    {
        x = GAME_ASPECT + 0.5 * width;
    }
    
    if (y > 1.0 + (0.5 + GAME_OBJECT_BOUNDARY_EXCESS) * height)
    {
        y = -0.5 * height;
    }
    else if (y < -(0.5 + GAME_OBJECT_BOUNDARY_EXCESS) * height)
    {
        y = 1.0 + 0.5 * height;
    }
    
    return NO;
}

This method returns "NO" to indicate that the object was not deleted during the update. The return value won't be used until next week, when we add more of the game logic.

The objects are allowed to exceed the edge of the bounds by GAME_OBJECT_BOUNDARY_EXCESS before they wrap. This ensures that they don't appear to vanish while a small portion is still visible onscreen.
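
For reference, the two constants used in this method would be simple defines along these lines (GAME_ASPECT follows from the 16:10 window described in Part One; the excess value here is a placeholder, not necessarily the project's actual value):

#define GAME_ASPECT (16.0 / 10.0)          // window width divided by window height
#define GAME_OBJECT_BOUNDARY_EXCESS 0.5    // extra fraction of the object's size allowed past the edge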

GameObjectLayer

The GameObjectLayer is a subclass of ImageLayer, using that class' code to render a single image to a CALayer. A GameObjectLayer contains a key that identifies its corresponding GameObject in the GameData. It observes the GameData's gameObjects dictionary for changes on that key and when any of the observed GameObject properties change, the GameObjectLayer will update itself accordingly.

The result is that the only substantial work required in the GameObjectLayer is updating itself when a change in the GameObject is observed.

The GameObjectLayer's observeValueForKeyPath:ofObject:change:context: method is smart enough to detect when the GameObject changes to NSNull (i.e. is deleted) and removes itself in response.

- (void)update
{
    GameObject *gameObject = [[[GameData sharedGameData] gameObjects] objectForKey:gameObjectKey];
    double gameHeight = [[GameData sharedGameData] gameHeight];

    NSString *gameObjectImageName = gameObject.imageName;
    double x = gameObject.x * gameHeight;
    double y = gameObject.y * gameHeight;
    double width = gameObject.width * gameHeight;
    double height = gameObject.height * gameHeight;
    double angle = gameObject.angle;
    BOOL visible = gameObject.visible;

    self.imageName = gameObjectImageName;
    self.bounds = CGRectMake(0, 0, width, height);
    self.position = CGPointMake(x, y);
    self.transform = CATransform3DMakeRotation(angle, 0, 0, 1.0);
    self.hidden = !visible;
}

Notice that the GameObjectLayer is not resolution independent, so the GameObject coordinates are multiplied through by the gameHeight to convert them to coordinates in the layer hierarchy.
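
The observation side isn't shown above. A rough sketch of one way it could be wired up follows; the key path, registration details and cleanup are my assumptions (and they rely on GameData issuing KVO change notifications for its gameObjects dictionary), so the project's actual code may differ:

- (id)initWithGameObjectKey:(NSString *)aKey
{
    self = [super init];
    if (self != nil)
    {
        gameObjectKey = [aKey copy];
        [[GameData sharedGameData]
            addObserver:self
            forKeyPath:[@"gameObjects." stringByAppendingString:gameObjectKey]
            options:0
            context:NULL];
        [self update];
    }
    return self;
}

- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
    change:(NSDictionary *)change context:(void *)context
{
    id gameObject =
        [[[GameData sharedGameData] gameObjects] objectForKey:gameObjectKey];
    if (gameObject == nil || gameObject == (id)[NSNull null])
    {
        // The GameObject has been deleted, so stop observing and remove the layer.
        [[GameData sharedGameData]
            removeObserver:self
            forKeyPath:[@"gameObjects." stringByAppendingString:gameObjectKey]];
        [self removeFromSuperlayer];
        return;
    }
    [self update];
}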

Controller code to bind them

The final element required to make GameObjects and GameObjectLayers work together is controller code to construct the GameObjectLayer for each GameObject as it appears.

I chose to do this by making the GameData send a GameObjectNewNotification every time a new GameObject is added to the gameObjects dictionary. The GameController observes this notification with the following method:

- (void)createImageLayerForGameObject:(NSNotification *)notification
{
    NSString *gameObjectKey = [notification object];
    
    GameObjectLayer *newLayer =
        [[[GameObjectLayer alloc]
            initWithGameObjectKey:gameObjectKey]
        autorelease];

    [CATransaction begin];
    [CATransaction
        setValue:[NSNumber numberWithBool:YES]
        forKey:kCATransactionDisableActions];
    [backgroundLayer addSublayer:newLayer];
    [CATransaction commit];
}

Implicit animation actions are disabled within the transaction so the layer doesn't fade in; it appears immediately.
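
For completeness, the two sides of that notification look roughly like this (GameObjectNewNotification and the key-as-notification-object convention come from the description above; the exact registration point and the newObjectKey variable are my assumptions):

// In the GameController's setup code, observe new-object notifications:
[[NSNotificationCenter defaultCenter]
    addObserver:self
    selector:@selector(createImageLayerForGameObject:)
    name:GameObjectNewNotification
    object:nil];

// In GameData, after a new GameObject is added to the gameObjects dictionary
// under the key newObjectKey:
[[NSNotificationCenter defaultCenter]
    postNotificationName:GameObjectNewNotification
    object:newObjectKey];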

Letting the view reinterpret the data

If you ran the program with the above code, the asteroid would not have the slightly soccer-ball-like texture shown in the screenshot at the top and it would not spin. The asteroid would be a smooth gradient circle. This is because the asteroid shown in the screenshot is made of two components: the non-rotating "asteroid-back", which provides a consistent light-source-like effect, and the "asteroid-front", which is a spinning second layer on top of the back layer.

asteroid.png

The GameData only contains the basic bounds information, which is mapped onto the "asteroid-back" by default. How can we add the second spinning layer for the purposes of display?

We could add the second layer as another object in the game but since I want this layer for the purposes of display (it has no real game-logic impact), I decided to handle it a different way.

After the [CATransaction commit]; line in the previous code sample, I include the code:

if ([gameObjectKey rangeOfString:GAME_ASTEROID_KEY_BASE].location == 0)
{
    AsteroidFrontLayer *asteroidFrontLayer =
        [[[AsteroidFrontLayer alloc]
            initWithGameObjectKey:gameObjectKey]
        autorelease];
    
    [CATransaction begin];
    [CATransaction
        setValue:[NSNumber numberWithBool:YES]
        forKey:kCATransactionDisableActions];
    [backgroundLayer addSublayer:asteroidFrontLayer];
    [CATransaction commit];
}

So I check whether the new GameObject was added to the gameObjects dictionary under a key that starts with GAME_ASTEROID_KEY_BASE. If so, I create a second layer that tracks the same underlying GameObject. This second layer is an AsteroidFrontLayer instead of the generic GameObjectLayer. The AsteroidFrontLayer class is a subclass of GameObjectLayer that overrides the imageName to be "asteroid-front" and applies a rotation to the layer on each update.
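
A sketch of that subclass might look like this (the ivar name and the spin increment are placeholders for illustration, not the project's values):

@interface AsteroidFrontLayer : GameObjectLayer
{
    double frontRotation; // accumulated spin applied to the front layer
}
@end

@implementation AsteroidFrontLayer

- (void)update
{
    [super update];
    
    // Always draw the front image, regardless of the GameObject's own imageName.
    self.imageName = @"asteroid-front";
    
    // Replace the transform with an ever-increasing rotation so the layer spins.
    frontRotation += 0.05;
    self.transform = CATransform3DMakeRotation(frontRotation, 0, 0, 1.0);
}

@end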

Conclusion

You can download the Quartzeroids2 Part 2 project (225kB) which demonstrates the classes presented in this post.

The project for this part shows the GameObject, GameObjectLayer and AsteroidFrontLayer in a simple, non-interactive display. To show everything on screen, the GameData class contains a newGame method which constructs some sample objects and then starts a timer running to call the updateWithTimeInterval: methods repeatedly.

Now that we can draw our objects on screen, Part 3 will add user-interaction and game logic.


An Asteroids-style game in CoreAnimation, Part One.

How would you write an arcade-style 2D game in CoreAnimation? Over the next few weeks, I'll show you how to write a resolution independent, high-speed, model-view-controller designed, Asteroids-style arcade game using CoreAnimation as the screen renderer. In this first of four parts, I'll detail the concept for the overall application and show you how to create a resolution independent window for the game.

quartzeroids2.png

A screenshot of the finished game

CoreAnimation as a game renderer

Drawing on the Mac is normally done through either CoreGraphics or OpenGL. OpenGL is fast, supports 2D and 3D and is already used in many games. However, OpenGL is a low-level, pure-C API, so even simple rendering and animating of 2D surfaces can require a whole library of code to accomplish.

CoreGraphics and the NSView hierarchy, by contrast, provide exactly that kind of well-written library. They are clean to use and support a wide range of features without needing to write them yourself. Sadly, CoreGraphics is not ideal for games since it does not easily render 30 fullscreen frames per second (the minimum standard for action games) — even with Quartz Extreme and the other improvements Apple has added since Mac OS X 10.0.

This is where CoreAnimation comes in. Apple introduced CoreAnimation in Mac OS X 10.5 (Leopard) to dramatically accelerate the animation of 2D rectangular textures. Since it integrates so well into the existing CoreGraphics and NSView hierarchy on the Mac, it suddenly gives CoreGraphics a way of delivering 30 (or more) fullscreen frames per second and makes writing a game possible using nothing more than the Foundation, AppKit and QuartzCore libraries.

Quartzeroids One

The application I'll present over the next few weeks will be similar to an experiment I wrote about 8 years ago. Back then, the application was named Quartzeroids and it was an experiment to see if NSViews were fast enough to use for a 2D Asteroids game. The code was not good — if you find it out in the wild, keep your distance and call animal control.

Were NSViews fast enough for this type of animation? No, not really. At 640 by 480, the game played at 15 frames per second on my iMac G3 500MHz. Hardware and operating-system improvements have helped (newer Intel Macs can now render this old 640 by 480 screen size at more than 30 frames per second) but even newer machines cannot maintain good frame rates at proper fullscreen resolutions of 1920 by 1200 using this approach.

Quartzeroids Two

Ever since Apple released CoreAnimation 15 months ago, I have wanted to revisit the idea. To further increase the difficulty, I decided the project should also contain the following features:

  • Resolution independence (all vector graphics and completely resizeable).
  • Proper model-view-controller implementation where the "game-state" is the model and the view is free to reinterpret the state for the purposes of display.
  • Fullscreen or windowed mode for display.

Rough design

A rough block diagram of the program would look like this:

rect3155.png

The decisions I think were most significant are:

  • There is not a one-to-one relationship between objects in the Game Data and Layers in the Window.
  • The updates to the Game Data are triggered by the Game Data's own timer, not directly by the controller (which can start and stop but nothing further).

The second point is most significant when the Game Data is compared to a normal application document. A normal document never updates itself — it only changes when instructions are sent from a controller. In a game however, choosing when to update is a part of the game logic (which I have included in the model) so the controller shown here is left with only the simplest level of control.

I spent some time deliberating about whether to make the GameData a normal document object or a singleton. A normal document style object can be swapped with another document or recreated. A singleton is lazier and simpler since it doesn't need to be connected to every object which needs to access it (since it is globally accessible). Ultimately, the convenience of a singleton won over the hypothetical flexibility of a normal document object since the game will only ever be strictly single instance.
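
The singleton accessor itself is the conventional lazy pattern (sketched here from the common idiom; the project's actual implementation may differ or add thread-safety):

+ (GameData *)sharedGameData
{
    static GameData *sharedInstance = nil;
    if (sharedInstance == nil)
    {
        sharedInstance = [[GameData alloc] init];
    }
    return sharedInstance;
}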

A resolution independent, constant aspect-ratio view

To get the code started for the project, I'll show you how the top level of the CALayer hierarchy (named backgroundLayer in the application since it shows the background for the game) is implemented inside the contentView for the window.

The two hardest constraints for the backgroundLayer are maintaining a constant aspect ratio and maintaining constant internal coordinates when the surrounding window resizes or reshapes.

- (void)updateContentViewFrame:(NSNotification *)notification
{
    double gameWidth = [[GameData sharedGameData] gameWidth];
    double gameHeight = [[GameData sharedGameData] gameHeight];
    
    NSSize contentSize = [contentView bounds].size;

    NSSize aspectSize = contentSize;
    double scale;
    if ((aspectSize.width / aspectSize.height) > (gameWidth / gameHeight))
    {
        scale = aspectSize.height / gameHeight;
        aspectSize.width = aspectSize.height * (gameWidth / gameHeight);
    }
    else
    {
        scale = aspectSize.width / gameWidth;
        aspectSize.height = aspectSize.width * (gameHeight / gameWidth);
    }
    
    [CATransaction begin];
    [CATransaction
        setValue:(id)kCFBooleanTrue
        forKey:kCATransactionDisableActions];
    backgroundLayer.transform = CATransform3DMakeScale(scale, scale, 1.0);
    backgroundLayer.frame =
        CGRectMake(
            0.5 * (contentSize.width - aspectSize.width),
            0.5 * (contentSize.height - aspectSize.height),
            aspectSize.width,
            aspectSize.height);
    [CATransaction commit];

    [contentView becomeFirstResponder];
}

The GameController adds this method as an observer of the contentView's NSViewFrameDidChangeNotification. The gameWidth and gameHeight methods on the GameData object are programmed to return values with a 16:10 ratio (matching common widescreen computer aspect ratios). The aspectSize calculated here is then the largest size that obeys this ratio and can fit inside the contentView.

Finally, the CATransform3DMakeScale transform, applied to the backgroundLayer before the frame is set, causes the backgroundLayer to always treat its internal coordinates as though they are gameWidth by gameHeight. This means that any layers our game adds to the backgroundLayer's sublayer hierarchy will always render at a constant fraction of the backgroundLayer's size. Order is important here: apply the transform after the frame is set and it will behave differently.

The other required behavior of the window is switching to fullscreen and back. This is another task which became extremely simple in Mac OS X 10.5:

- (IBAction)toggleFullscreen:(id)sender
{
    if ([contentView isInFullScreenMode])
    {
        [contentView exitFullScreenModeWithOptions:nil];
    }
    else
    {
        [contentView
            enterFullScreenMode:[[contentView window] screen]
            withOptions:nil];
        
        for (NSView *view in [NSArray arrayWithArray:[contentView subviews]])
        {
            [view removeFromSuperview];
            [contentView addSubview:view];
        }
    }
}

The "for" loop is an annoying necessity: if you don't do this for a CoreAnimation layer-enabled NSView, then all child NSViews will fail to update correctly after a switch to fullscreen.

Conclusion

You can download Quartzeroids2 Part 1 (20kB) which shows the full implementation of the window, contentView and backgroundLayer.

As simple as this game is, it is too big to describe in one post — this is as far as I'll get this week. I've presented a few goals, a rough design and the window management code for the game.

In the next post, I'll start to put objects in the game and layers in the window.


Breadth-first traversal of a graph of Objective-C objects

If you have a collection of interconnected objects in Objective-C, what is the best way to traverse them all in order from nearest to furthest? Are the NSMutableSet methods good for tracking already-visited nodes? How much overhead does an NSMutableArray FIFO queue impose relative to a depth-first search (which doesn't require a FIFO queue)? Does NSMutableArray perform better if objects are pushed onto the front, or the back? In this post, I present answers to these questions and more.

Graph theory

In programming, a graph is any collection of connected data objects.

If that information was new to you, then I suggest you read this Wikipedia page: Graph (computer science), then read every page to which it links, then for each of those pages, also read every page to which they link. Come back to this page once the irony is apparent.

The two common means of processing nodes in a graph are depth-first and breadth-first traversal. Breadth-first graph traversals are normally slightly harder to write than depth-first traversals because of the added requirement of maintaining a FIFO queue to handle the nodes still to be processed.

When all the nodes in the graph are Objective-C objects and an NSMutableArray is used to maintain the FIFO queue, what is the fastest way to traverse? I tested a few different approaches to see which would work best.

The graph used for testing

All my tests were on graphs where each node of the graph was connected to two more nodes until the center layer of the graph, at which point pairs of nodes were subsequently connected to a single node until the graph converged back to a single node.

An example of this graph structure (shown horizontally so layers are aligned in columns) with two increasing layers and two decreasing layers is:

graph.png

Since there are two increasing layers and two decreasing layers, I called this a "half-depth" of 2. I will refer to the size of all subsequent graphs by their "half-depth".

All nodes are objects of the following class:

@interface Node : NSObject
{
    NSSet *linkedNodes;
}
@property (nonatomic, retain) NSSet *linkedNodes;
@end

The Node objects don't store any useful information but that's not the point of these tests: I'm testing traversal performance only.

Initial approach

The basic algorithm is:

  1. create a set to track already-visited nodes
  2. create an array to queue nodes-to-process
  3. for every node (in order) in the nodes-to-process array:
    1. get the set of nodes linked to the current node
    2. exclude nodes that we've already visited from the linked set
    3. add non-excluded, linked nodes to the nodes-to-process array

The first code I wrote implementing this algorithm was:

NSMutableSet *visitedNodes = [NSMutableSet setWithObject:startingNode];
NSMutableArray *queue = [NSMutableArray arrayWithObject:startingNode];

while ([queue count] > 0)
{
    NSMutableSet *newNodes =
        [[((Node *)[queue lastObject]).linkedNodes mutableCopy] autorelease];
    [newNodes minusSet:visitedNodes];
    
    [visitedNodes unionSet:newNodes];
    [queue
        replaceObjectsInRange:NSMakeRange(0, 0)
        withObjectsFromArray:[newNodes allObjects]];
    
    [queue removeLastObject];
}

This code uses NSMutableSet's own set-operators to exclude already visited nodes. It also pushes new objects onto the front of the queue and pops each node for processing off the end.

So, how did this approach perform? In a word: terrible — and it's the fault of -[NSMutableSet minusSet:].

Lesson 1:
Only ever use [setOne minusSet:setTwo] if setOne is bigger than setTwo. This method runs in O(n) time, where n is the size of setTwo. In the above code, where visitedNodes can be orders of magnitude bigger than newNodes, it is much faster to iterate over newNodes and exclude nodes found in visitedNodes ourselves. That way we run in O(m) time, where m is the size of newNodes (which is actually small and constant for this test case).

The minusSet: method really should iterate over the smaller of the receiver and the parameter. Since it doesn't, we must avoid it whenever the parameter set is the larger of the two.

Front-to-back or back-to-front

The next problem to address: is it faster to push objects onto the front of an NSMutableArray and pop them off the end, or to push them onto the end and pop them off the front?

The following is the push onto front:

while ([queue count] > 0)
{
    NSSet *newNodes = ((Node *)[queue lastObject]).linkedNodes;
    for (Node *newNode in newNodes)
    {
        if (![visitedNodes containsObject:newNode])
        {
            [visitedNodes addObject:newNode];
            [queue insertObject:newNode atIndex:0];
        }
    }

    [queue removeLastObject];
}

This is the push onto back:

while ([queue count] > 0)
{
    NSSet *newNodes = ((Node *)[queue objectAtIndex:0]).linkedNodes;
    for (Node *newNode in newNodes)
    {
        if (![visitedNodes containsObject:newNode])
        {
            [visitedNodes addObject:newNode];
            [queue addObject:newNode];
        }
    }

    [queue removeObjectAtIndex:0];
}

The result, testing over graph half-depths of 8, 12, 16 and 20 (from 766 to 3145726 nodes) is that push onto end (the second one) is consistently 5% faster.

Lesson 2:
Adding to the end of an NSMutableArray is the only operation that works quickly — most other operations are consistently slower. If the number of operations is equal, favor algorithms that add to the end of the array.

Other questions tested

Will local NSAutoreleasePools help small loops like this?

Wrapping the body of the above loops in an NSAutoreleasePool to release storage locally resulted in a 25% increase in time taken.

The inside of these loops doesn't autorelease any memory (NSMutableArray and NSMutableSet maintain their own memory manually) so the autorelease pool is wasted effort.
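
For reference, the variation tested wrapped each iteration roughly like this (using the push-onto-back version of the loop from above):

while ([queue count] > 0)
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    
    NSSet *newNodes = ((Node *)[queue objectAtIndex:0]).linkedNodes;
    for (Node *newNode in newNodes)
    {
        if (![visitedNodes containsObject:newNode])
        {
            [visitedNodes addObject:newNode];
            [queue addObject:newNode];
        }
    }
    [queue removeObjectAtIndex:0];
    
    [pool release];
}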

Is an NSOperationQueue for the FIFO queue faster?

Moving the content of the loop into an NSOperation object and pushing each operation into an NSOperationQueue to traverse the graph made the overall traversal approximately 100 times slower.

The extra work involved in creating each operation object, then having NSOperationQueue distribute the jobs over different CPUs (I have a 2 x PPC G5 CPU machine) and waiting for each thread to start and end is a lot of overhead.

Additionally, while NSOperationQueue is a FIFO queue, the jobs only run in-order if we set the maxConcurrentOperationCount to 1, so we're not really gaining anything from using the NSOperationQueue.

NSOperationQueue is intended for independent (threadable) operations of a non-trivial size. The inside of this loop doesn't meet that expectation.

How much overhead does the FIFO queue impose?

The only way to examine this is to run a depth-first search using a recursive algorithm.

void RecursivelyTraverse(Node *node)
{
    // recursiveSet is an NSMutableSet, created before the traversal starts,
    // that tracks the nodes already visited.
    NSSet *newNodes = node.linkedNodes;
    for (Node *newNode in newNodes)
    {
        if (![recursiveSet containsObject:newNode])
        {
            [recursiveSet addObject:newNode];
            RecursivelyTraverse(newNode);
        }
    }
}

The result, over half-depths of 8, 12, 16 and 20 is that the recursive algorithm is consistently twice as fast.

Lesson 3:
Using NSMutableArray as a FIFO queue imposes a constant additional time per node. In this trivial test, the additional time amounted to half the computation for the node but if you have significant additional computations to perform per node, the overhead of the FIFO queue may be insignificant relative to the remainder of your algorithm.

Conclusion

You can download the test code I used: FIFOQueues.zip (46kB)

Even in such a simple algorithm, there are clearly some lessons to learn. The biggest surprise for me was that I couldn't trust minusSet: to work in the most efficient manner. I think I'll need to submit a bug for this to Apple.

NSMutableArrays are not symmetric front-to-back, although the difference is minor enough that it won't always matter.

Tools like NSOperationQueue are there to help multi-threading (to an extent) but don't work well on tiny snippets of code like this. Small loops like this would vectorize better than parallelize (SSE or Altivec) — but you can't vectorize loops with Objective-C method invocations so that still isn't an option here.


Interprocess communication: snooping, intercepting and subverting

When Xcode is running, Interface Builder seems to magically know which classes, methods and variables are available. Where does Interface Builder get this information? How can you recreate this effect when editing source files in an external editor without Xcode running? This is the story of how I investigated the communication between Xcode and Interface Builder, so that I could recreate it for myself.

Xcode and Interface Builder, sitting in a tree...

Prior to Mac OS X 10.5 (Snowless Leopard), if you made a change to your classes, IBActions or IBOutlets, you needed to manually instruct Interface Builder to re-read the relevant header files before these changes were visible in Interface Builder.

With Xcode 3 and Interface Builder this has changed. If you add a new class to a project in Xcode, it is immediately available in class lists when you switch to Interface Builder. Similarly, change an IBAction or IBOutlet in Xcode, save the header file and switch to Interface Builder: your changes are immediately visible.

And yet, if you quit Xcode and edit the same files in an external editor, Interface Builder doesn't automatically detect the changes. Clearly, Interface Builder is using something other than basic file monitoring to detect the changes.

NSDistributedNotificationCenter

The simplest way for applications in Mac OS X to communicate is through the distributed notification center, NSDistributedNotificationCenter in Foundation or CFNotificationCenterRef (initialized with CFNotificationCenterGetDistributedCenter) in CoreFoundation. This allows applications to broadcast dictionaries of objects to any application interested in listening.
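
As a quick illustration of the mechanism (the notification name, selector and userInfo contents here are invented for the example):

// In the broadcasting application:
[[NSDistributedNotificationCenter defaultCenter]
    postNotificationName:@"com.example.SomethingChangedNotification"
    object:nil
    userInfo:[NSDictionary dictionaryWithObject:@"/some/path" forKey:@"path"]];

// In any interested listening application:
[[NSDistributedNotificationCenter defaultCenter]
    addObserver:self
    selector:@selector(somethingDidChange:)
    name:@"com.example.SomethingChangedNotification"
    object:nil];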

To see if Xcode is communicating to Interface Builder using a NSDistributedNotificationCenter, I needed to configure the process "distnoted" (the daemon which manages the NSDistributedNotificationCenter) to log notifications so I could see if anything relevant is broadcast.

Apple's Technical Note TN2124: Mac OS X Debugging Magic explains some of what's needed. On the command line:

sudo touch /var/log/do_dnserver_log

Apple's documentation leaves out the next (required) steps:

sudo touch /var/log/dnserver.log
sudo chown daemon /var/log/dnserver.log

and then restart.

Sadly for my investigation, this isn't how Xcode communicates with Interface Builder. Looking at the log when Xcode and Interface Builder are started reveals nothing relevant from either program.

NSPortNameServer

I observed the next clue to how Xcode and Interface Builder communicate by starting two copies of each. In this setup, a new class created in the first instance of Xcode will be visible in both instances of Interface Builder but a new class created in the second instance of Xcode will not be detected by either copy of Interface Builder.

This type of behavior is typical of named port connections.

A quick Mac OS X lesson: almost all inter-process communication on the Mac is built on Mach Ports underneath. Mach Ports are a way to pass messages (blocks of data) between processes.

The easiest way for a process to advertise a Mach Port so that other processes may connect to it is to give it a name. Network names are registered using NSNetService (Bonjour) but on a single host (as is far more likely in this case) names are registered through NSPortNameServer.

NSPortNameServer is, in turn, handled by "launchd" (the daemon that Apple created to replace init, cron, inetd and others). So to see if the NSPortNameServer supposition was correct, I needed to get "launchd" to output log information about Port Names.

Again, I followed Technical Note TN2124: Mac OS X Debugging Magic to learn how to do this:

sudo launchctl log level debug

And once again, the information contained in the Tech Note turned out to be insufficient. Apple really need to update this Tech Note.

I wrote a program to send thousands of port name lookups on random names but no logging was output. However, I could see that the "syslog" process (the process that records logging information) was very busy but it wasn't recording any information anywhere.

This is a typical "syslog.conf" problem: you must direct "syslog" information for the relevant "facility" and "level" to a destination. Unfortunately, the "facility" name that Apple gives for "launchd" in Tech Note TN2124 is wrong. Instead, I appended a "*.debug" line to my "syslog.conf" file and sent the output to /var/log/debug.log instead.
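
The appended line was something like this (the standard syslog.conf format of a facility.level selector, whitespace, then the destination):

*.debug                         /var/log/debug.log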

This finally worked: "launchd" information (and debug information from other processes) found its way to the debug log file.

Finding the correct piece of information in this haystack looked like an impossible task until I started Interface Builder without Xcode running:

Mach service lookup failed: PBXProjectWatcherServerConnection-3.1.2

It doesn't follow Apple's own naming policy for Port Names (which would be something more like "com.apple.Xcode.projectwatcherserverconnection.3.1.2") but at least it is clear about its function.

NSConnection

At this point, the data sent over the Mach Port could be anything. How do you work out the format? I hoped that Apple used an NSConnection since the Cocoa libraries will handle this automatically for me.

Fortunately, running the following line of Cocoa Objective-C:

id rootObject =
    [NSConnection rootProxyForConnectionWithRegisteredName:
        @"PBXProjectWatcherServerConnection-3.1.2" host:nil];

cleanly returned an object of type PBXProjectWatcherManager, revealing that yes, this is a regular NSConnection served over the Mach Port.

Recreating PBXProjectWatcherManager

The final step in replacing Xcode's role in keeping Interface Builder up-to-date with changes was to recreate PBXProjectWatcherManager. The easiest way to get this working is to use class-dump to tell us what methods PBXProjectWatcherManager implements.

Running class-dump directly on Xcode.app was no real help (Xcode.app doesn't directly contain most of Xcode's functionality — it's in shared libraries). Using Activity Monitor (or lsof on the command-line) to inspect the Open Files of Xcode reveals all the shared libraries that Xcode uses. Most of these libraries live in /Developer/Library/PrivateFrameworks, so I ran the following script in that directory:

foreach f (*.framework)
  class-dump $f > ~/Desktop/`basename $f .framework`.h
end

to generate header file descriptions of each framework on the desktop. "DevToolsInterface.h" turned out to contain the description of the PBXProjectWatcherManager class.

Then it was a matter of working out which methods of PBXProjectWatcherManager are invoked and what they needed to return. So I created two projects: one to advertise its own PBXProjectWatcherManager under the name "PBXProjectWatcherServerConnection-3.1.2" and listen as Interface Builder connected, and one to connect to Xcode's version of the same and work out the correct responses.
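
Vending an object under that name takes only a few lines with NSConnection (a minimal sketch; FakeProjectWatcher is a hypothetical stand-in class that would implement the methods listed in the next section):

// Advertise a stand-in root object under the name Interface Builder looks up.
id fakeWatcher = [[[FakeProjectWatcher alloc] init] autorelease];
NSConnection *connection = [NSConnection defaultConnection];
[connection setRootObject:fakeWatcher];
if (![connection registerName:@"PBXProjectWatcherServerConnection-3.1.2"])
{
    NSLog(@"Could not register the connection name (is Xcode already running?)");
}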

Final solution

The result is that Interface Builder invokes the following:

  • openProjectsContainingFile: must return an NSString containing a URI to any Projects that contain a file with the specified path.
  • pathOfProject: must return an NSString containing the file path for the specified Project URI.
  • nameOfProject: must return an NSString containing the project name for the specified Project URI.
  • targetsInProject: must return an NSArray of NSStrings containing the UUIDs for all targets in the specified Project URI.
  • nameOfTarget:inProject: must return an NSString containing the target name for the specified target UUID.
  • filesOfTypes:inTarget:ofProject: receives nil for the types argument in practice, so the response is an NSArray of NSStrings containing file paths to all files contained in the target.

Apparently, Interface Builder gets all the header file paths for all targets that use a given XIB file and monitors these files itself for changes. Experimental testing indicates that all that is required to replace Xcode's role in this case is to return these Project, Target and File values and Interface Builder will handle the rest (parsing the header files for information it requires).

Interface Builder re-requests these values every time it is brought to the front, so it polls the "PBXProjectWatcherServerConnection-3.1.2" server, rather than implementing any automated form of observation (despite observation methods on PBXProjectWatcherManager). My guess is that this is more robust if either end of the NSConnection crashes.

Conclusion

I hope I've been informative about interprocess communication on the Mac and techniques for monitoring and intercepting these communications.

The techniques presented in this post are only useful for intercepting standard Cocoa communication mechanisms.

Obviously, implementing your own "PBXProjectWatcherServerConnection-3.1.2" server would require care. It isn't sending complicated information but you would need to update it every time Xcode is updated and there are dozens of other methods in the PBXProjectWatcherManager that may play a role I didn't uncover in this brief investigation.
