Steve Streza

Software developer, comic book writer, video creator, hobbyist located in Seattle, WA.

He takes the MacBook to Apple for repairs. They immediately claim it’s water damaged and that the entire logic board (and, for some reason, the top of the case) needs to be replaced, at a cost of $1200. When the machine is taken to a board repair expert instead, he spends a few minutes nudging a pin back into place, fixing the issue. This kind of practice from Apple has been an open secret for years, but it’s good to see a news organization putting pressure on it.

Apple has been charging people huge repair bills for years, which often pushes them into replacing the machine outright. At the same time, the company has fought hard against our right to repair our own machines, exploiting law enforcement and loopholes in copyright law to interfere with the repair shops that fix the machines Apple refuses to. Apple has massive leverage and needs to be checked through legislation. Luckily, there are many Right-to-Repair bills being proposed in state legislatures, and if even one or two pass, they would force companies like Apple to provide resources to these shops. And while Apple is a huge offender, it’s not alone; more companies are emulating its model, locking down their devices in order to sell you new products when yours become prematurely obsolete.

Consider supporting one of these Right to Repair bills in your state or country.

Owen Williams:

Microsoft, it seems, has removed all of the barriers to remaining in your ‘flow.’ Surface is designed to adapt to the mode you want to be in, and just let you do it well. Getting shit done doesn’t require switching devices or changing modes; you can just pull off the keyboard, or grab your pen, and the very same machine adapts to you.

It took years to get here, but Microsoft has nailed it. By comparison, the competition is flailing around arguing about whether or not touchscreens have a place on laptops. The answer? Just let people choose.

This coherency is what I had come to expect from Apple, but iPad and MacBook look messier than ever. Sure, you can get an iPad Pro and Apple Pencil, but you can’t use either of them in a meaningful way in tandem with your desktop workflow. It requires switching modes entirely, to a completely different operating system and interaction model, then back again.

The Surface lineup is super compelling now, and Windows continues to get better and better through minor feature updates every few months. Microsoft under its new CEO is cleaning up its act and actually conveying and executing a vision for how the personal computer fits into a modern lifestyle in 2018. At a time when Apple is struggling to remember that its creator audience exists, Microsoft is capitalizing on it and giving people what they want.

That said, it’s really silly that the Surface Studio 2, their iMac equivalent, uses a 7th-generation CPU when Intel’s 8th generation has been out for months, and that some of these machines are missing USB-C and Thunderbolt 3. There is definitely more work to do to bring this hardware up to peak performance.


If you’re reading this, it means I’ve finished upgrading my website to Gatsby 2. Gatsby is a static site generator that uses React and GraphQL to build an entire website as a set of static HTML files, and it’s what this version of my site is built on. Version 2 has a number of really promising improvements, like a component for querying GraphQL from anywhere and improved Webpack and Babel support (which will hopefully let me start trickling in some TypeScript).

The well-documented migration process was not as smooth as I’d hoped, but that was to be expected. Gatsby requires some comfort with debugging Webpack and React apps before you can really use it well, and this was no different. The biggest problem was a .babelrc file at the root of my project, which was causing some difficult-to-debug errors (and ones with no search results). Ultimately, the most important thing I did was throw that file out and replace it with the default, a step that isn’t emphasized enough in the docs. It certainly could have gone much worse, and once I discovered this source of my problems, it was much smoother sailing.
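For anyone hitting the same wall, here’s a minimal sketch of what that replacement looks like. Gatsby 2 ships its Babel defaults in the babel-preset-gatsby package, so a custom .babelrc should start from that preset; everything beyond this one line is up to your project:

```json
{
  "presets": ["babel-preset-gatsby"]
}
```

And if you don’t actually need custom Babel behavior, you can skip the file entirely and let Gatsby use its internal config.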

Overall the migration took about 8 hours. That included time spent following the migration guide and debugging problems, as well as adopting some of the new tools in Gatsby 2, namely the new StaticQuery API for inlining queries. An example is that lovely little photo of me on every page. In Gatsby 1, each page had a single GraphQL query that could be run, so everything had to be shoved in there, including images like that photo. That meant every page duplicated the query logic for fetching the same image. Now that logic has been rolled into a component that uses StaticQuery to fetch the image, which simplifies the page-level queries quite a bit. There are a few places where I use that kind of pattern to clean up the site. You shouldn’t notice anything, but it makes working on the site much simpler, especially when I want to add something that relies on a query.
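Here’s a rough sketch of what that kind of component looks like. The Avatar name, file path, and dimensions are made up for illustration; StaticQuery, graphql, and gatsby-image are the real Gatsby 2 APIs:

```jsx
import React from "react"
import { StaticQuery, graphql } from "gatsby"
import Img from "gatsby-image"

// Wraps the photo query in one place, so pages no longer need to
// include the image in their own page-level queries.
const Avatar = () => (
  <StaticQuery
    query={graphql`
      {
        file(relativePath: { eq: "avatar.jpg" }) {
          childImageSharp {
            fixed(width: 80, height: 80) {
              ...GatsbyImageSharpFixed
            }
          }
        }
      }
    `}
    render={data => (
      <Img fixed={data.file.childImageSharp.fixed} alt="Photo of me" />
    )}
  />
)

export default Avatar
```

Any page can now just render <Avatar /> without carrying that query itself.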

I’ve been avoiding ripping this bandage off for a while, but now it’s done and I can start using the new features. Gatsby has a rich plugin library, including some powerful integrations with service workers. I had that enabled originally but shut it off because it couldn’t detect cache changes very well; Gatsby 2’s updated version of Webpack should make it more viable. There are also some new query tracing tools that should help me get build times down; right now it takes about 3 minutes to build on the (admittedly very slow) server I run it on, and I’d like to get that under a minute. And I am dying to start moving stuff to TypeScript, which Babel 7 now supports.
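If I do turn service workers back on, it should be a one-line change. This sketch assumes the stock gatsby-plugin-offline plugin with no options, alongside whatever else is already in the config:

```js
// gatsby-config.js
module.exports = {
  plugins: [
    `gatsby-plugin-offline`,
  ],
}
```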

Congratulations to the Gatsby team on shipping such a huge release!

Separating Apple Watch from iPhone as a Public Health Good


At its September event, Apple announced the annual upgrades to its iPhone and Apple Watch lines. While the iPhone update was mostly limited to the processor and camera, the Apple Watch gained some more significant improvements, notably the ability to capture an electrocardiogram, atrial fibrillation detection, fall detection, and an emergency SOS feature.

While the original pitch for the Apple Watch included health features, it was more concerned with being a workout accessory and general activity tracker. Over the years, it has grown more sophisticated, becoming not just an accessory but a true guardian of the wearer’s health. Frameworks like ResearchKit make it possible to conduct medical research on participants on a daily basis. It’s clear Apple is going to continue moving the Apple Watch in this direction.

This has the potential to be transformative for public health, but there’s a problem: the device is limited to people who have iPhones, which make up about 2 in 5 US phones and 1 in 5 phones worldwide. That means these features aren’t available to the vast majority of smartphone users, a market currently starved for a comparable product. And while there are finally some signs of life for an alternative smartwatch platform, Apple is actively working with the FDA on some of these features; they’re simply better positioned to deliver accurate results.

When the iPhone launched, it was tethered to a computer running iTunes, but with its fifth release the iPhone went PC-free and became fully self-sufficient. While it’s unlikely the Apple Watch could ever run completely without a connection to some other device, surely many of its features don’t need to depend on an iPhone. The health care features alone would be transformative for many people; there are absolutely people who would buy an Apple Watch just to have a modern health guardian device. Is Apple obligated to make the Watch work without the iPhone? Of course not. But doing so would dramatically expand the market for the device, and provide a marked improvement to the health and lives of people who can’t get one today.

Firefox is going to start being more aggressive about blocking slow and invasive trackers by default. This is a great move to speed up the web and make things more secure and private by default. And there’s a way to enable it today.
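As a sketch of how to try it now (my best guess at the relevant preferences, not an official list), these two prefs can be toggled in about:config, or set via a user.js file in your profile directory:

```js
// user.js — or flip the same prefs by hand in about:config
user_pref("privacy.trackingprotection.enabled", true); // tracking protection in all windows
user_pref("network.cookie.cookieBehavior", 4);         // reject cookies from known trackers
```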

Long page load times are detrimental to every user’s experience on the web. For that reason, we’ve added a new feature in Firefox Nightly that blocks trackers that slow down page loads. We will be testing this feature using a shield study in September. If we find that our approach performs well, we will start blocking slow-loading trackers by default in Firefox 63.

In the physical world, users wouldn’t expect hundreds of vendors to follow them from store to store, spying on the products they look at or purchase. Users have the same expectations of privacy on the web, and yet in reality, they are tracked wherever they go. Most web browsers fail to help users get the level of privacy they expect and deserve.

In order to help give users the private web browsing experience they expect and deserve, Firefox will strip cookies and block storage access from third-party tracking content. We’ve already made this available for our Firefox Nightly users to try out, and will be running a shield study to test the experience with some of our beta users in September. We aim to bring this protection to all users in Firefox 65, and will continue to refine our approach to provide the strongest possible protection while preserving a smooth user experience.

Deceptive practices that invisibly collect identifiable user information or degrade user experience are becoming more common. For example, some trackers fingerprint users — a technique that allows them to invisibly identify users by their device properties, and which users are unable to control. Other sites have deployed cryptomining scripts that silently mine cryptocurrencies on the user’s device. Practices like these make the web a more hostile place to be. Future versions of Firefox will block these practices by default.

Firefox got really good last year and you should be using it.


In my last website post I talked about my plans for setting up website notifications on AWS Lambda and DynamoDB. The idea is that a function on AWS Lambda gets called when the site has an update; it fetches all the site data, diffs it against the previous state, and determines which pages actually changed. Those changes get saved to DynamoDB, whose streaming feature can trigger other AWS Lambda functions for each event. Multiple Lambda functions (one for each service) receive those updates and fire off whatever integration is necessary for their service.
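As a sketch of the fan-out half, here’s roughly what one of those per-service functions could look like in Node.js. The url and title attributes, the webhook endpoint, and the notifyService helper are all hypothetical; the event shape is what DynamoDB Streams actually delivers to Lambda, assuming the stream is configured to include new images:

```js
const https = require("https");

// Hypothetical notifier: POSTs one change record to a service's webhook.
// Host and path are placeholders, not a real integration.
function notifyService(change) {
  return new Promise((resolve, reject) => {
    const body = JSON.stringify(change);
    const req = https.request(
      {
        host: "example.com",
        path: "/webhook",
        method: "POST",
        headers: { "Content-Type": "application/json" },
      },
      (res) => {
        res.resume();
        res.on("end", resolve);
      }
    );
    req.on("error", reject);
    req.end(body);
  });
}

// Lambda entry point, triggered by the DynamoDB stream. Each record in
// the batch describes one page change written by the diffing function.
exports.handler = async (event) => {
  for (const record of event.Records) {
    if (record.eventName !== "INSERT") continue;
    const image = record.dynamodb.NewImage; // attributes in DynamoDB JSON
    await notifyService({ url: image.url.S, title: image.title.S });
  }
};
```

The nice part of this shape is that adding a new integration is just another function subscribed to the same stream; the diffing code never has to change.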

This would shift the burden of running the service and hosting the data to Amazon’s ops crew, which is undoubtedly better than anything I would have set up myself. And as long as I stayed within the limits of the AWS free tier, which looked pretty decent, I would be able to run this in perpetuity, right?
