In September 2018, the Verge posted a video designed to show people how to build a PC, and it was riddled with errors. Some were inconsequential or merely bad practice, like sloppy cable management, which might impede airflow but wouldn’t necessarily hurt performance. Some would cause performance problems but not damage, like putting the GPU in the wrong PCIe slot. And some could cause irreversible damage, like using the wrong screws on the radiator, which could puncture the radiator tubing and cause coolant leaks. The internet quickly began criticizing the video for its flaws, making parodies and reaction videos, and the Verge disabled comments on the video before ultimately taking it down and amending the accompanying article to note that the video wasn’t up to their standards. Paul’s Hardware did a very good summary of the video and the reaction to it. The internet made fun of it for a while, and everyone largely moved on. Until this week.

On Tuesday evening, Kyle from the YouTube channel Bitwit tweeted that the Verge had used YouTube’s copyright strike system to take down his reaction video. The Verge issued no statement or public comment in response, but about a day later, the claim was reversed after being disputed. According to Bitwit, he disputed the strike on the grounds that his video was fair use as a transformative work (a characterization the Verge would later contest). The Verge also took down a video from the channel ReviewTechUSA, which broke the original video down and added a lot of commentary. Before the strikes were reversed, several large tech YouTube channels posted videos about the Verge’s actions, which to outsiders looked like the Verge trying to censor criticism, as the targeted videos were transformative, critical, and highly viewed.

This morning, editor-in-chief Nilay Patel finally issued a statement on behalf of the Verge. In it he says that the legal team at Vox Media (the Verge’s parent company) found these videos, decided they were not fair use, and issued the copyright strikes to YouTube on their own initiative. Later, when he was notified of the strikes, he had them rescinded, despite believing the legal team was correct that the videos did not fall under fair use. He then spent the morning responding to tweets about the issue, including my own, which were almost entirely negative.

Now, I’ve generally liked the Verge and Nilay Patel’s work, and have defended his positions strongly when I agreed with them. And after thinking about it, in some ways I can understand where they’re coming from. If we take their public statement at face value, they saw some videos, felt they were not fair use, and tried to take them down. But their process failed in a few fundamental ways.


The New York Times has written a great dive into mobile apps that harvest data from your device, such as location data. Many of these companies treat your granting location access as consent to harvest and store that data, and are in the business of selling it to advertisers.

The book ‘1984,’ we’re kind of living it in a lot of ways.

Bill Kakis, a managing partner at Tell All

I’ve been removing a lot of the native apps I’ve relied on recently in favor of mobile web apps. I won’t let Facebook run code natively on any device I own, precisely because I know they go out of their way to capture every scrap of data they can. Running Instagram in a mobile web browser provides a much stronger sandbox, dramatically limiting the amount of data they can harvest.

Apple and Google have largely destroyed any real marketplace for paid apps that don’t need to rely on selling data, and their app review mechanisms have been unwilling or unable to protect customers from these practices. They deserve a huge share of the blame for the status quo being what it is.


The new iPad Pro is out and the reviews are pretty consistent. The hardware is amazing but held back by the limitations of the software, which keep certain workflows from being viable. Every year that list seems to get a little shorter: Dom Esposito was able to produce his iPad Pro review for YouTube on it (in 4K, no less), Shawn Blanc is doing production photography work with an iPad and a Leica camera, and of course there’s Federico Viticci’s ever-evolving list of workflows to get the most out of the iPad’s multitasking capabilities. With Apple’s silicon team doing some of the best work in the industry, and with Geekbench scores rivaling laptops in bursts, it’s not hard to see why people want to replace desktops with these things; I’ve argued for three years (to the day, apparently) that the iPad Pro needs Xcode.

But there is one type of workflow that, for 8 years, has remained out of reach on an iPad. Building software.

You don’t want to be limited by the availability of pre-programmed cartridges. You’ll want a computer, like Apple, that you can also program yourself.

Apple print ad, 1978

In many ways, this is a foundational part of the definition of a computer. Apple’s said as much in their ads. The Macintosh has always been an open, developer-friendly platform, and Apple has an excellent, standards-compatible web engine in WebKit that developers can build web apps on. Apple has a history of helping small and large companies build Macintosh software, and with Cocoa it helped many new developers (including me) build amazing apps for its general-purpose computers. But in 2018, building software is an unsolved problem on iPads, one that competitors have solved on tablets like Microsoft’s Surface line and Google’s Pixel tablets. What’s holding it back?



Yesterday Abby and I picked up this beaut, the 2018 Honda Clarity in the Touring trim, in crimson red. It’s a plug-in hybrid electric vehicle (PHEV), which means it can run on traditional gasoline as well as on electric power. But where most hybrids only recover energy from the engine and the car’s own kinetic energy, this one can be plugged into the wall and charged directly from the electrical grid. Many owner reports say it’s common to drive on electric power alone without the gas engine kicking on at all, which has been true for us so far. And where we live, our energy utility gets most of its power from green sources like nuclear and hydroelectric, which makes it pretty clean to drive when the gas engine is off.

So why go with a plug-in hybrid rather than all-in on electric? Primarily because of the limitations of electric, both in infrastructure and in speed. We go on a fair number of road trips, and while electric charging stations are becoming more common every day, coverage is still far from complete; gas stations are ubiquitous even in the most secluded places. But crucially, a gas tank can be filled in minutes, versus the hour or so even the fastest Tesla Supercharger needs to recharge a battery. That’s a big delay in a trip, and it assumes there will be a Supercharger where you need one. (Interestingly, there is a Clarity model available with a hydrogen fuel cell, which has many of the environmental benefits of an EV with the filling time of a gas tank, but hydrogen stations are exceptionally hard to find, and California is the only place in the US you can get one.) As a one-car home, having a single car optimized for pure electric driving day-to-day, plus a gas engine for 10+ hour road trips, was a perfect fit for our needs.

This car is a delight to drive, with an emphasis on the comfort of the driver and passengers. The interior is spacious and open, with huge windows and pretty small pillars. The back seat has plenty of room to fit three adults, even with the front seats all the way back, and even has some handy phone pockets in the seatbacks. At over 4,000 pounds empty, it’s heavy, with a low center of gravity that makes it grip the road really well, holding you in your seat without jostling you over small bumps. While it’s not supposed to be a sports car, it has a little kick when you really hit the accelerator. And it’s remarkably quiet and free of engine and wind noise, even on the highway.

Technology-wise, this isn’t the most sophisticated car for a gadget guy like me, but it does have plenty of toys. The gauge cluster is all screens, but even on a super sunny afternoon there were no problems with visibility. The center console entertainment system is definitely a little slow and light on functionality, but you can fix that by plugging in your phone and using Android Auto or CarPlay. It has a back-up camera and a blind-spot camera on the right side, but none on the left or in front, both of which I would have paid to add if they were available.

But the tech in this car includes some smart driving features that I love beyond anything else. The first is a simple thing called Brake Hold. When it’s on and you come to a stop at an intersection, it holds the brakes until you accelerate again, so you can take your foot off the brake pedal. Small thing, but it can really reduce driving fatigue (especially when you use regenerative braking to slow down before intersections). The second is lane-keeping assistance, which automatically keeps you in your lane on highways, even around some curvy stretches. The third is adaptive cruise control, which lets you set a target speed and the distance you want to keep behind the car in front of you, slowing all the way to a complete stop if needed. When we were driving home from the dealership in rush hour traffic, the car was doing literally all the work of managing stop-and-go traffic with people merging in front of us. It was delightful, and made the usually-stressful experience of a traffic jam almost relaxing.

This car has been great so far, and I’m looking forward to pushing it further. If you’re a one-car house and need both day-to-day electric performance and road trip capability without stressing about charging infrastructure or hours-long charging stops, a plug-in hybrid is a pretty great choice. The Clarity is probably the best PHEV implementation on the road today, and you should consider checking it out.

He takes the MacBook to Apple for repairs. They immediately claim it’s water damaged and that the entire logic board (and, for some reason, the top case) needs to be replaced at a cost of $1,200. A board repair expert, by contrast, spends a few minutes nudging a pin back into place and fixes the issue. This kind of practice from Apple has been an open secret for years, but it’s good to see a news organization putting pressure on it.

Apple’s been charging people huge repair bills for years, which usually ends up pushing someone into just replacing the machine outright. At the same time, they’ve been fighting hard against our right to repair our own machines, leveraging law enforcement and copyright-law loopholes to interfere with the repair shops that fix the machines Apple refuses to. Apple has massive leverage and needs to be checked through legislation. Luckily there are many Right to Repair bills being proposed in state legislatures, and if even one or two pass, they would force companies like Apple to provide resources to these shops. And while Apple is a huge offender, they’re not alone, as more companies emulate their model and lock down their devices in order to sell you new products when yours become prematurely obsolete.

Consider supporting one of these Right to Repair bills in your state or country.

Owen Williams:

Microsoft, it seems, has removed all of the barriers to remaining in your ‘flow.’ Surface is designed to adapt to the mode you want to be in, and just let you do it well. Getting shit done doesn’t require switching device or changing mode, you can just pull off the keyboard, or grab your pen and the very same machine adapts to you.

It took years to get here, but Microsoft has nailed it. By comparison, the competition is flailing around arguing about whether or not touchscreens have a place on laptops. The answer? Just let people choose.

This coherency is what I had come to expect from Apple, but iPad and MacBook look messier than ever. Sure, you can get an iPad Pro and Apple Pencil, but you can’t use either of them in a meaningful way in tandem with your desktop workflow. It requires switching modes entirely, to a completely different operating system and interaction model, then back again.

The Surface lineup is super compelling now, and Windows continues to get better and better through minor feature updates every few months. Microsoft under its new CEO is cleaning up its act and actually conveying and executing a vision for how the personal computer fits into a modern lifestyle in 2018. At a time when Apple is struggling to remember that its creator audience exists, Microsoft is capitalizing on it and giving people what they want.

That said, it’s really silly that the Surface Studio 2, their iMac equivalent, is using a 7th generation CPU when Intel’s 8th generation has been out for months, and some of these are missing USB-C and Thunderbolt 3. There is definitely more work to do to bring these machines to peak performance.


If you’re reading this, then that means I’ve finished upgrading my website to Gatsby 2. Gatsby is a static site generator that uses React and GraphQL to build the entire site as a set of static HTML files, and it’s what this version of my website is built on. Version 2 has a number of really promising improvements, like a component for querying GraphQL from anywhere and improved Webpack and Babel support (which will hopefully let me start trickling in some TypeScript).

The well-documented migration process was not as smooth as I’d hoped, but that was to be expected. Gatsby requires some comfort with debugging Webpack and React apps before you can really use it well, and this migration was no different. The biggest problem was a .babelrc file at the root of my project, which was causing some difficult-to-debug errors (ones with no search results). Ultimately the most important thing I did was to throw that file out and replace it with the default, a step that is not emphasized enough in the docs. It certainly could have gone much worse, and once I discovered the source of my problems, it was much smoother sailing.
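For reference, the stock Gatsby 2 .babelrc is essentially just the official preset; something like this (a sketch of the default, not my exact file, and yours may differ if you add plugins):

```json
{
  "presets": ["babel-preset-gatsby"]
}
```

If you need custom Babel behavior, the Gatsby 2 approach is to extend this preset rather than replace it wholesale, which avoids the kind of silent Webpack breakage I hit.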

Overall the migration took about 8 hours. This included time spent following the migration guide and debugging problems, as well as adopting some of Gatsby 2’s new tools, namely the new StaticQuery API for inlining queries. An example is that lovely little photo of me on every page. In Gatsby 1, each page had a single GraphQL query that could be run, so everything had to be shoved in there, including the query for images like that photo. That meant each page duplicated the logic for fetching that image. Now that’s rolled into a component that uses StaticQuery to fetch the image itself, which simplifies the page-level queries quite a bit. There are a few places I use that kind of pattern to clean up the site. You shouldn’t notice anything, but it makes working on the site much simpler, especially if I want to add something that relies on a query.
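A minimal sketch of that pattern (the file name, query shape, and sizes here are illustrative, not the site’s actual code): the component wraps its own query in StaticQuery, so pages that render it no longer need to fetch the image themselves.

```jsx
import React from "react"
import { StaticQuery, graphql } from "gatsby"
import Img from "gatsby-image"

// Self-contained avatar component: the image query lives here,
// not duplicated in every page-level query that renders it.
const Avatar = () => (
  <StaticQuery
    query={graphql`
      query {
        file(relativePath: { eq: "avatar.jpg" }) {
          childImageSharp {
            fixed(width: 80, height: 80) {
              ...GatsbyImageSharpFixed
            }
          }
        }
      }
    `}
    render={data => <Img fixed={data.file.childImageSharp.fixed} alt="Me" />}
  />
)

export default Avatar
```

Any page can then just render `<Avatar />` with no query boilerplate of its own.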

I’ve been avoiding ripping this bandage off for a while, but now it’s done and I can start using the new features. Gatsby’s got a rich plugin library, including some powerful service worker integrations. I had that enabled originally but shut it off because it couldn’t detect cache changes very well, but Gatsby 2’s updated version of Webpack should make it more viable. There are also some new query tracing tools which will help me get build times down; right now it takes about 3 minutes to build on the (admittedly very slow) server I run it on, and I’d like to get that under a minute. And I am dying to start moving stuff to TypeScript, which Babel 7 now supports.

Congratulations to the Gatsby team on shipping such a huge release!

Separating Apple Watch from iPhone as a Public Health Good

At their September event, Apple announced the annual upgrades to their iPhone and Apple Watch lines. While the iPhone update was mostly limited to the processor and camera, the Apple Watch got more significant improvements, notably the capability to capture an electrocardiogram, atrial fibrillation detection, fall detection, and an emergency SOS feature.

While the original pitch for the Apple Watch included health features, it was more concerned with being a workout accessory and general activity tracker. Over the years, it has grown more sophisticated, becoming not just an accessory but a true guardian of the wearer’s health. Tools like ResearchKit are making it possible to conduct medical research on participants as they go about their daily lives. It’s clear Apple is going to continue moving the Apple Watch in this direction.

This has the potential to be transformative for public health, but there’s a problem: the device is limited to people who have iPhones, which make up about 2 in 5 US phones and 1 in 5 phones worldwide. That means these features are unavailable to the vast majority of smartphone users, a market currently starved for a comparable product. And while there are finally some signs of life for alternative smartwatch platforms, Apple is actively working with the FDA on some of these features; they’re simply better positioned to deliver accurate results.

When the iPhone launched, it was tethered to a computer running iTunes, but with its fifth release the iPhone went PC-free and became fully self-sufficient. While it’s unlikely the Apple Watch could ever run entirely without a connection to some other device, surely many features don’t need to depend on an iPhone. The health care features alone would be transformative for many people; there are absolutely people who would buy an Apple Watch just to have a modern health-guardian device. Is Apple obligated to make the Watch work without the iPhone? Of course not. But it would dramatically expand the market for the device, and provide a marked improvement to the health and lives of people who can’t get one today.