Every few weeks, some tech company is under fire over changes to their rules. This week it’s Instagram, but who knows who it’ll be next week. They put out some change to their terms of service that claims new or changed rights over what they can do, someone notices, bloggers and headline-hungry tech reporters find it, and suddenly we have ourselves a news cycle. In 2012, the truth is not the actual truth, but that which is tweetable. People circulate headlines speculating on what the new terms mean, a few rounds of telephone go by at the speed of light, and pretty soon the company in question is the most evil entity on earth for the next two or three days.

This nuclear chain reaction cascades, and eventually people get mad; so mad, they decide to pull off a move that could never have existed in the pre-Internet era: the ragequit. A ragequit consists of three parts – backing up your account data (usually), deleting your account, and then talking very loudly about it on social media. Usually this decision is made within hours of the change going viral. Its intent is to send a message: these changes are not OK, and if you’re going to make them, I’ll just take my ball and go home, so you should fix them.

In a way, the ragequit is a fascinating bit of human nature to observe. In just a few hours, someone can go from ignorance to apathy to fear to anger, and let that rush of emotions dictate a permanent decision. We’ve now reached a point where software is so disposable that we will spend months and years putting our life into it and throw it away at the first sign of perceived injustice against us. It’s equally curious how people think a few scattered deleted accounts will end up persuading the company to see the error of their ways, as opposed to all the monstrous bad press being simultaneously thrown at them.

One of the most infamous incidents of the ragequit happened in 2010 when Facebook announced a number of changes to their privacy options and policies. As with all things Facebook and privacy (hence Instagram and privacy), people got mad and deleted their accounts en masse. Did it work? Well, no. Facebook didn’t even bother to dignify the effort with a response. They likely picked up more new users that day than they lost from ragequitters. That was two and a half years ago, and it’s not like Facebook’s privacy controls have gotten any better. The whole thing was a futile effort that made some people feel good, and effected no change.

Nobody has ever been called noble or admirable for knee-jerk deleting part of their online presence. Those who do it are never celebrated for it beyond the moment, and many times end up crawling back, tail between their legs, and resuming their use of the service. So remember, if you’re thinking of pulling off the ragequit, it probably won’t do anything but make you feel better in the moment. Even if the company ends up backpedaling and the story ends, suddenly you’re the one looking for a new photo-sharing app.

And yes, I am entirely guilty of the ragequit in the past.

In the last few years, we’ve seen a pretty significant shift in how we use computers. We’ve gone from primarily using one Internet-enabled device (the PC) to using two (PC + phone) to using three (PC + phone + tablet), and who knows what else we’ll add in the next couple of years. Not only are we looking up our data and documents on all these devices, we’re creating data and documents on them, and the share of that work happening on the PC keeps getting smaller. Effortless and ubiquitous access to data is increasingly important to people.

If your app deals with users’ data, cloud sync should not be a feature you bolt on to the app – it is the feature. It’s why you will beat competitors or lose hard to them. It’s what will make your app feel effortless, thoughtless, and magical. It’s what will gain a user’s trust, and once you have that, they will sing your app’s praises and never give it up. But to earn that trust, you have to account for sync at every step of the design and engineering of your app.

Developers have a number of choices as to how to build an app around sync. You can use iCloud, you can use a hosted service like Parse, or you can build a custom sync service for your app. Each solution has trade-offs. So what should you optimize for?
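Whichever backend you pick, one way to keep that choice swappable is to hide it behind a small protocol, so the rest of the app never talks to iCloud or Parse directly. This is just a rough sketch of my own (the protocol and method names are hypothetical), not something any of these services prescribe:

#import <Foundation/Foundation.h>

// A hypothetical abstraction over a sync backend.
@protocol SyncService <NSObject>

// Push a locally-changed document up to the backend.
- (void)pushDocumentData:(NSData *)data
                  withID:(NSString *)documentID
              completion:(void (^)(NSError *error))completion;

// Pull the latest version of a document down from the backend.
- (void)pullDocumentWithID:(NSString *)documentID
                completion:(void (^)(NSData *data, NSError *error))completion;

@end

An iCloud-backed, Parse-backed, or custom implementation would each adopt the protocol, so changing your mind later is a change to your setup code rather than a rewrite of your model layer.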

When I got my first-gen iPad, I stopped using it regularly within a few weeks. It was just too heavy, too big, and too thick to really consider as a replacement for a laptop, or to bring with me places, and too heavy to hold for a sustained period of time. In many ways, the iPad mini is what I really wanted the iPad itself to be, and how I want to use it. Smaller, thinner, and lighter than a laptop. Easy to carry everywhere. More immersive than an iPhone. It’s much better suited for the couch, bed, hammock, bus, or car. It’s the size of a book but the weight of a pad of paper.

Today, most iPhone apps are meant to be used in portrait (if not exclusively, then at least primarily). The OS goes out of its way to enforce this; the home screen is in portrait, and locking the orientation restricts you to portrait (even in cases like video and the camera where it makes no sense). On iPad, you can orient the device any way you like, including for the homescreen and orientation lock, but I’d wager that most people use it primarily in landscape. The narrower edge design of the iPad mini seems to encourage more portrait use, which means there may be an awkward early adopter period of apps that aren’t as useful on the mini because they are optimized for landscape over portrait. One possible benefit of the smaller size and the portrait emphasis is that maybe, just maybe, scaled-up iPhone apps won’t look as comically bad on the mini (and don’t scoff, as there are hundreds of thousands of apps that aren’t optimized for iPad). Who knows.

Last week I said I wasn’t going to buy one until I tried it out and felt the size. Oops. I guess we’ll see how it feels when I get mine on Friday.

Dealing with dependencies in Objective-C has always been a tedious process. You typically do some git submodule stuff, import their Xcode project into yours, add a dependency, add a linker target, set some compiler flags, etc., or you include the project’s .h and .m files manually. Then you end up running into problems because the header paths are wrong, or you forgot to add some linker flags that include categories, or some other problem. If that project requires ARC or iOS 6, you have to figure that out and set it up to be consistent with your project. Then, when you need to upgrade the library, you need to make sure all these steps still work, and hope nothing new got added that might break. It’s a very error-prone process. Now, being a stubborn developer who’s always done it this way, I’ve been wary of any tools to automate this process, as I usually think I can handle it myself, and I’m usually wrong. Other languages have had package managers to solve this problem, so why not Objective-C?

CocoaPods tries to solve this problem by automating the process of fetching dependencies (and recursively fetching their subdependencies), adding them to an Xcode project, managing paths for everything, adding any extra compiler or linker flags, copying in any resources (images, nibs, sounds, or whatever else), and building it into your project. The end result is a very simple process of defining your dependencies in a file (called a Podfile), running a command line process, and then just building your app and referencing those dependencies. If you need to update dependencies or add new ones, just add them to the Podfile and run the command line process again. It’s very simple, and a far cry from managing all this stuff yourself. And, as of this writing, there are over 600 projects you can include in your app.

Under the hood, CocoaPods creates an Xcode project which builds a static library, libPods.a, consisting of all your dependencies. It adds this project to an Xcode workspace and makes your project dependent on libPods.a using an Xcode config file. It then rewrites your Xcode project to link libPods.a, copy resources, and set some paths to variables included from the config file. It even detects whether your project uses ARC, and sets flags appropriately. The result is that the changes to your own project are minimal; most of the machinery lives in a project under CocoaPods’ control, which can change while rarely affecting yours. It’s a well thought out system.

To get started, you need to install the CocoaPods gem with a gem install cocoapods at the command line. Then, in the root of your Xcode project, add a Podfile that lists your dependencies and your deployment target. For this example, we’ll target an iOS 6 app that depends on the AFNetworking and FormatterKit projects. You can search for more projects on CocoaPods.org.

platform :ios, '6.0'
pod 'AFNetworking', '~> 1.0'
pod 'FormatterKit', '1.0.1'

Note: CocoaPods uses semantic versioning to determine how to handle version numbers. The version string can either be a specific version, or can include an operator that tells CocoaPods to pick a version for you. The ~> operator means “use at least this version, but stay within the same release line”: ~> 1.0 matches any 1.x release, and ~> 1.0.1 matches any 1.0.x release at or above 1.0.1. You can also use >, >=, <, or <=, which do what you expect.

Once you have this in place, run pod install. This command will:

  • download the podspec (a manifest describing the project’s requirements and how to build it) for each dependency you list, and those for any subdependencies
  • check the requirements of each podspec to ensure your project meets them (so a Mac-only pod won’t be added to an iOS app, and a pod that requires iOS 6 won’t be added to an iOS 5 project)
  • set up a new xcodeproj with a static library target for all the source files in the dependency tree
  • set up an xcworkspace if you don’t already have one
  • add the Pods xcodeproj to this new xcworkspace
  • create an xcconfig file that includes header paths for all dependencies
  • change your xcodeproj to use the xcconfig file for header and linker paths
  • add the libPods.a library to the Link Binary With Libraries phase of your xcodeproj
  • add a new Copy Pods Resources script phase to copy any resources to your bundle

Once this is in place, you can build and run. Unless there are any problems with the dependencies, Xcode will compile all the dependencies and link them into your app. It’s very important that you open and build the xcworkspace, so Xcode knows how to build the Pods project correctly. You can then #import <AFNetworking/AFNetworking.h> to begin using the code. That’s it!
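For a sense of what using one of those pods looks like afterward, here’s a quick sketch assuming the AFJSONRequestOperation API from the AFNetworking 1.x line pinned in the Podfile above (the URL is a placeholder):

#import <AFNetworking/AFNetworking.h>

// Fetch and parse some JSON using the dependency CocoaPods pulled in.
NSURL *url = [NSURL URLWithString:@"https://example.com/data.json"];
NSURLRequest *request = [NSURLRequest requestWithURL:url];
AFJSONRequestOperation *operation =
    [AFJSONRequestOperation JSONRequestOperationWithRequest:request
        success:^(NSURLRequest *request, NSHTTPURLResponse *response, id JSON) {
            NSLog(@"Got JSON: %@", JSON);
        }
        failure:^(NSURLRequest *request, NSHTTPURLResponse *response, NSError *error, id JSON) {
            NSLog(@"Request failed: %@", error);
        }];
[operation start];

There are no header search paths, linker flags, or submodules to wrangle here; the xcconfig CocoaPods generated takes care of all of that.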

I’ve started using CocoaPods on a project and have been enjoying it over managing dependencies myself. I haven’t seen any reason to believe this would be more problematic than doing it all myself, but there are plenty of benefits. Dependencies can be kept up to date much more easily, and their inclusion process is much more strictly defined (and automated). For most projects, CocoaPods is far more likely to get the setup right than I am, and it’s faster. I recommend checking it out for your projects.

The iPad mini is basically a small iPad 2. It has an upgraded camera, improved wireless, and a 15% higher-density screen. But the screen is only as sharp as the original iPhone’s, and it’s running the same 19-month-old A5 processor (which is no slouch, but is hardly state-of-the-art). It’s the same chip used in the latest iPod touch, but here it has more pixels to drive. I wouldn’t be surprised if, even with the non-Retina display, this device feels a little sluggish compared to an iPhone 5, or even a 4S.

The mini certainly fills a need; the current iPad is too large to be truly portable, but still smaller than every notebook you can buy. The iPad has definitely been the dominant player in the 10-inch tablet market, but the 7-inch tablet market has been growing. The leading competitor in the 7-inch space is the Nexus 7 (which is a very capable tablet), which will probably finish 2012 in a respectable #2 spot, somewhere in the range of several million units. It makes sense that Apple would want to try to hold on to the top seat.

The $329 base price point, however, is a strange and awkward place to start the lineup. Not only is it $130 more expensive than the Nexus 7, it misses the psychological barrier of getting under $300. This propagates through the upgraded models as well, causing a weird staggering effect. In fact, adding in the iPad 2’s and the iPad 4’s price points, we get this pricing chart of 13 prices spread out over 14 models:

Price   Model       Storage   Cell Data
$329    iPad mini   16 GB     None
$399    iPad 2      16 GB     None
$429    iPad mini   32 GB     None
$459    iPad mini   16 GB     4G
$499    iPad 4      16 GB     None
$529    iPad mini   64 GB     None
$529    iPad 2      16 GB     3G
$559    iPad mini   32 GB     4G
$599    iPad 4      32 GB     None
$629    iPad 4      16 GB     4G
$659    iPad mini   64 GB     4G
$699    iPad 4      64 GB     None
$729    iPad 4      32 GB     4G
$829    iPad 4      64 GB     4G

While there are some overarching rules (e.g. if you want more space, or you want 4G data, you’re paying more), there’s no consistency when you move up or down by one price point. If you were thinking of spending an extra $30, you suddenly have a lot more variables to consider. Perhaps Apple did this to squeeze a few extra dollars out of the customer, but my hunch is that it’ll have the opposite effect. Say you walk into the Apple Store to buy a base model iPad 4 at $499. If you wanted to spend a little more, you could get a slower iPad with 3G, or a smaller iPad with a lot of space you don’t know if you need. On the other hand, you could get the iPad mini with the exact same storage, a smaller screen, and 4G data, all while walking out of the store with $40 in your pocket. It’s not a hard conclusion to draw.

In the end, Apple will sell a zillion of them, and they’ll work fine. In a year, Apple will announce the next iPad mini, which will probably include a Retina display, a more modern chipset, and probably a price drop to $299 as well. It just feels like they’re holding some of that stuff back from this version, and it doesn’t seem like price is the motivating factor.

Personally I’m waiting to get one until I actually hold it and try to fit it into my large-but-not-iPad-large jacket pocket. The true test of a device like the iPad mini is its portability. The Nexus 7 fits my jacket, but barely. Hopefully the iPad mini fits as well.

When Twitter’s mobile apps were still Tweetie, they had a screen which let you change the API root. So if an API method is named 1/statuses/update.json, you add that to the end of the API root, giving you a URL that looks like https://api.twitter.com/1/statuses/update.json. If you change the root to http://foo.com/bar/, then the API’s URL becomes http://foo.com/bar/1/statuses/update.json. So if you were on a network where Twitter’s API was blocked but a proxy server wasn’t, you could point the app at the proxy and still connect. Soon after, WordPress and Tumblr built versions of their APIs which supported the Twitter API, so you could use those services from within Tweetie. Then Twitter bought Tweetie and moved everyone to OAuth.

A couple weeks ago, I noticed that this screen was still present in Twitter’s official apps. I’ve been a big fan of App.net since it came out; its API is different from Twitter’s, but not terribly so. I thought it might be interesting to try to build an “API translator” which pulled App.net streams and posts into the Twitter app. The team behind App.net had a hackathon this weekend, and I had my project.

Today I shipped the first alpha of Apparchy, which turns Twitter’s official iOS apps into App.net clients. You sign up for a free account on apparchy.net, add your app.net account, and then log into the Twitter app with your Apparchy username and password. Then, the Twitter app will start loading data from app.net through the Apparchy API. You can view your stream, your mentions, your profile, your followers, and your friends, as well as post, reply, star, and repost. It’s not entirely complete, and some parts of the app will have no data or return nothing, but the core experience is pretty good.

Apparchy implements Twitter’s OAuth security, and sends all data over HTTPS, so the process is as secure as any other call through Twitter. Apparchy doesn’t touch the Twitter API at all, and so it’s not bound by any of Twitter’s terms of service, should they be applicable. The only way this will get shut down is if Twitter removes the ability to change the API root in an update to their app (which doesn’t sound likely, from what I’ve heard).

Apparchy is what is possible when you have open APIs like App.net’s and standards for how to handle server communication. It took a lot of research and trial/error to get it to work, but it works very well. I had a blast building this, and hope that it can be used for a long time. If you have an App.net account, give it a try for free at Apparchy.net.

The Wii U includes an unusual controller, the GamePad, that looks and acts like a small tablet with physical controls (or a large PlayStation Vita). Besides the conventional array of game controls like two analog sticks and a bunch of hardware buttons, the controller includes a microphone, speakers, a headphone port, a screen, and a front-facing camera. There is also a more conventional, Xbox-like controller called the Pro controller which has none of those inputs.

Kyle Orland of Ars Technica wrote this piece on Nintendo’s “solution” for in-game voice chat in their upcoming Wii U console. Nintendo decided to add in-game chat to the Wii U, which is something you’ve been able to do for almost a decade on other gaming platforms. But here’s the catch: those ports on the GamePad won’t work with it. You have to buy a standalone headset and plug it in to your GamePad to get it to work. Even stranger, the Pro controller doesn’t have the port you need to even use it. Furthermore, unlike Xbox and PlayStation, this support is not baked into the system as a whole, but will be opt-in for whatever games choose to spend the time, money, and energy to support it.

When you design a feature into anything, some percentage of people will use it, and some won’t. The more barriers you place between the person and what they’re trying to do, the more of them will give up. Design involves removing the barriers between the person and the solution to their problem. I reach for my iPhone over my Vita because my iPhone is usually closer. I reach for my Vita over my Xbox because the Vita is self-contained and doesn’t make me change my TV’s inputs. I reach for my Xbox over my Mac laptop with Windows on it because my Xbox doesn’t make me log out of everything I’m doing and restart. These barriers may be small and subtle, but people choose the path of least resistance to solve their problems, and barriers act as resistance.

Frankly, this kind of half-assed solution for a voice chat feature – voice chat, mind you, being an integral part of multiplayer gaming for many – just increases our concern that Nintendo is still struggling to get online functionality right this time around.

Inexplicably, Nintendo chose to add all kinds of barriers to this one solution – how to talk to your friends while playing games. I can’t tell if this was done intentionally or was just a design gaffe. Either way, what does it say about the rest of the Wii U? And what other features are going to suffer as a result of focusing on something they don’t care so much about?

This afternoon, Matthew Panzarino and Ken Yeung of The Next Web posted about a potential acquisition of Color, the poster child of startup excess. A gut reaction of stunned disbelief is not unreasonable here, after a string of flopped products and tales of the CEO splitting for Maui. But after the shock comes intrigue. The Next Web rarely posts rumors of acquisitions unless they’ve triple-checked everything. So if we assume it’s true, the question remains: why?

$41 million means you can make extremely compelling offers to the best engineers. Daniel Jalkut found that some of that cash went to paying for a number of tech patents as well. These patents relate to Color’s technology for grouping people together by their location and sharing content between them. Color’s engineers spent the last year and a half tuning these algorithms, even if few people ever used them. Part of the reason Color was such a flop was that everyone had to use it for you to want to use it. That’s not an easy sell, especially in an environment like iOS where a person has to actively be using your app in order for it to provide value to others.

But let’s imagine a world where this stuff is built into the heart of iOS. There may not have been a lot of people who used Color, but there are a lot of people who use iOS devices, and suddenly Apple has solved the chicken-and-egg problem of availability. The solution Color offered becomes much more useful when offered by Apple, who can break any of the rules they impose on third-party developers. If you go to a barbecue or a concert, and everyone’s taking pictures and video, your iPhone will know that all these photos relate to the same event, and can group things together. It can tie in data from your address book to determine who the people around you are, and whether you know them. You can create Photo Streams of events with everyone’s (or just your friends’) photos. Maybe this would integrate with the calendar, or even Facebook, to automatically associate photos and videos with events. (It’s worth noting that Google recently introduced a very similar feature to Google+ and Android.) And who knows, maybe in some weird way, this could become an aid to Apple’s troubled Maps, providing some kind of functionality like Street View or Microsoft’s Photosynth.

So maybe it makes sense that Apple might acquire this company for their expertise. Sure, they could do it all themselves, but Apple tends to buy companies with expertise in areas Apple wants to do better in. And buying the company outright gets you the engineers and saves you from the patent lawsuits. But if Color managed to “succeed”, it did so for many of the wrong reasons. Turning a ton of money, a few unused apps, a pile of patents built on stale prior art, and a pool of developers focused on a niche set of knowledge into an exit is the role of a research department within a company like Apple, not a startup. If this were a model for the industry, we’d be looking at apps that have no real utility to people, built by companies that focus on compartmentalizing knowledge and locking it away from others, all in the hopes that a cell phone maker has a bunch of cash to throw at you for the next new feature. That’s not a bright future. So while this may make sense for Apple, Color, and iOS users, it sets an uncomfortable precedent. Hopefully it won’t change the idea of the overfunded startup into a model to be emulated.

Blocks in Objective-C are super useful for making your object-oriented code a bit more functional. But as blocks are an extension to the C language, they have to play by the rules of C, so the syntax is a little obscure, and the documentation can be a little hard to find. So here’s a guide on how to declare blocks so you can use them in various scenarios.
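As a quick taste, here’s a sketch of the most common declaration forms (the names are just placeholders):

// As a local variable: a block named square that takes one int and returns an int.
int (^square)(int) = ^(int value) {
    return value * value;
};

// As a typedef, which keeps later declarations readable.
typedef void (^CompletionHandler)(NSData *data, NSError *error);

// As a property (declared copy, so the block is moved off the stack when set).
@property (nonatomic, copy) CompletionHandler completionHandler;

// As a method parameter, written out longhand.
- (void)fetchDataWithCompletion:(void (^)(NSData *data, NSError *error))completion;

The property and method forms live inside an @interface; the local variable and typedef can go anywhere you’d write plain C.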

Update: It didn’t work out, but all the content from that blog has been merged back here.

Starting over is often a difficult, but necessary, way to revitalize yourself. When I was in middle school, I wrote a column about video games in my school’s student newspaper. I’ve been writing since before I was coding, but lately the coding has superseded the writing in importance in my life. I haven’t been content with this for a long time, and have been trying a variety of strategies to make myself write more. My personal blog, SteveStreza.com, has acted somewhat as the outlet for this, and it has succeeded in getting me to write long-form articles, but it has largely failed at producing shorter and more frequent content. The flip side of that is a lack of focus and the burden of ever more long-form content. I love writing, but the hole in what I have been writing has been bothering me for a while.

So today, I’m beginning a new experiment, Informal Protocol. This new blog is focused around the topics of development, design, tech, and culture. The goal is to keep most articles at 5 paragraphs or fewer, and to have at least one new post a day. But as the name implies, this is an informal protocol, and won’t always be followed. Quantity and quality will have peaks and valleys, and the focus may skew one way or another. It’s wholly possible the direction may drift and this becomes something else entirely. But hey, sometimes you just have to give it a shot.

Informal Protocol is an experiment. Like all experiments, it may fail. But sometimes you have to just jump face first into a new adventure and start over. If you would like to join me on this adventure, you can follow new posts at Informal Protocol via RSS, App.net, or Twitter.