Finally got a chance to install the macOS 26 Tahoe Beta 1 on my laptop… which finished just as Beta 2 was released. Whee!

iPadOS 26 and Developers

A while ago, I suggested that the best iPad for a developer was the MacBook Air. At WWDC 2025, Apple revealed all sorts of power features for the new iPadOS 26, including a whole new windowing mode and support for long-running processes like Final Cut exports. While these are fantastic improvements for power users, they still don’t give developers what they need to make the iPad a great tool for writing code. Long story short – keep that MacBook Air around for on-the-go coding a while longer.

What’s still missing that a dev needs? The knee-jerk answer is obviously Xcode. But why can’t they port Xcode? It’s just a glorified text editor, surely less complex than Final Cut editing ProRes 4K HDR video. The real answer is fork(), the system library function that creates a new process. Folks tend to think of Xcode as a thing that builds apps, but it’s really just a graphical frontend that calls command line tools that build apps. Most development IDEs work in a similar way: VS Code, the JetBrains IDEs, and others are just coordinators that invoke command line tools (via fork()) to do the actual work. This sort of architecture has a long history in computing, but it’s antithetical to the iPadOS security model, which bans fork(). Without the ability to spawn a command line tool, IDEs on iPad remain dead in the water.
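
To make that concrete, here’s a minimal sketch of the coordinator pattern using Foundation’s Process API, which forks and execs under the hood. The swiftc invocation is illustrative, not Xcode’s actual build pipeline.

```swift
import Foundation

// A minimal sketch of the "graphical frontend" pattern: the IDE does no
// compiling itself, it just spawns the command line toolchain and reads
// the output. Process forks and execs under the hood, which is exactly
// what iPadOS forbids. The swiftc invocation here is illustrative.
func build(sourceFile: String) throws -> Int32 {
    let process = Process()
    process.executableURL = URL(fileURLWithPath: "/usr/bin/swiftc")
    process.arguments = [sourceFile, "-o", "app"]

    let pipe = Pipe()
    process.standardOutput = pipe
    process.standardError = pipe

    try process.run()          // the fork()/exec() happens here
    process.waitUntilExit()    // the "IDE" just waits and reports back

    let output = pipe.fileHandleForReading.readDataToEndOfFile()
    print(String(data: output, encoding: .utf8) ?? "")
    return process.terminationStatus
}
```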

At WWDC 25, Apple introduced a new technology called Containerization, which works like Docker and lets you run Linux containers on your Mac. Apple’s take on running containers focuses on security, performance, and resource management. That’s great and all, but Containerization is a foundational technology – not something that’s immediately useful for everyone, but one that provides the tools to build things that are. What if Xcode’s toolchain ran in a container instead of directly on your Mac? It’s relatively straightforward to encapsulate the compiler chain in a container and run it against your source code. And what if containers could run on iPad? Now you’ve got the potential for an Xcode UI on iPad that talks to a sandboxed container running Linux, where processes can be forked freely while securely sealed off inside the container. Problem solved!
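
Here’s a hedged sketch of what that indirection might look like from the frontend’s side. Apple ships a container CLI built on Containerization, but the image name and arguments below are my own assumptions for illustration.

```swift
import Foundation

// Hypothetical: instead of spawning swiftc directly, the frontend asks a
// Linux container to run the toolchain. The "container" CLI invocation
// and the "swift-toolchain" image are illustrative assumptions, not a
// documented interface.
func buildInContainer(packagePath: String) throws {
    let process = Process()
    process.executableURL = URL(fileURLWithPath: "/usr/local/bin/container")
    process.arguments = ["run", "swift-toolchain",   // imagined image name
                         "swift", "build",           // real SwiftPM command
                         "--package-path", packagePath]
    try process.run()
    process.waitUntilExit()
}
```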

Apple is really good at surprising us with fantastic new features built on foundational technologies deployed over the course of years and many OS releases. It’s fun to think about how some of the new low-level tech might come together in the future to finally make iPad a viable dev platform. In the meantime, that M1 MacBook Air is on sale for just $650! Or $600 for an M4 Mac mini!

Back in 2007, some weird stuff happened and I ended up with a VIP badge for WWDC, so I got to see Steve do a keynote up close!

My 2007 WWDC badge with a VIP mini badge.

California Dreamin’ – WWDC 2023

It’s about a week before WWDC 2023 kicks off in sunny California, so here’s a list of things I’m hoping to see come out of Cupertino. I’m skipping the AR/VR stuff, since it’s been speculated about ad nauseam. I’m sure it’ll be cool and awesome and weird and probably not cheap. And maybe it’ll make me dizzy.

TL;DR: Better Xcode, Better Siri, Better Mac Catalyst

Xcode

Mostly, I just want Xcode to be better. Don’t crash. Don’t be slow. Is that too much to ask?

SPM Handling

We’ve all seen it. Switch branches, hit build, and be faced with:

Build operations are disabled: Package loading in progress. Please try again later.

I spend a lot of time working on Callisto, which is kind of a big app. We have dependencies split out into frameworks and it can take SPM a while to resolve all the packages when switching branches. So I hit this quite a lot. Usually, Xcode does a great job of queueing up actions when it’s busy. Like if you do a clean build and then run unit tests, it will finish the clean and then do the testing. That’s all I want for SPM resolution.

Better SwiftUI Tooling

I haven’t written a lot of SwiftUI. We toyed with it early on when building Callisto, but it wasn’t ready for a big project like that. A couple of months ago, we thought about trying to build a screen here or there with SwiftUI, but ran into roadblocks. One of the key features of SwiftUI is the live preview. To make those work, Xcode compiles bits of your code behind the scenes and shows them in the preview pane. Callisto is a Catalyst app but has an AppKit plugin for doing Mac system things. Xcode couldn’t handle that when trying to make a SwiftUI preview: it would try to compile the AppKit code with UIKit and become very upset when it didn’t work. That left our foray into SwiftUI dead in the water.
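
For context, this is the kind of thing a live preview renders. The view below is a made-up example; to display it, Xcode has to compile the surrounding target behind the scenes, which is exactly the step our AppKit plugin tripped up.

```swift
import SwiftUI

// A made-up view with a live preview. To render the preview, Xcode
// compiles the enclosing target in the background; in our case it tried
// to compile the AppKit plugin as if it were UIKit code, which failed.
struct StatusBadge: View {
    var body: some View {
        Label("Connected", systemImage: "checkmark.circle")
            .padding(8)
    }
}

struct StatusBadge_Previews: PreviewProvider {
    static var previews: some View {
        StatusBadge()
    }
}
```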

Siri for Xcode

I’ve dabbled a bit with ChatGPT as a coding assistant. It’s great for small tasks with fiddly parameters. For instance, I needed to get the timestamp of some files. That’s the kind of thing that’s straightforward, but you don’t do it often, so you have to look up the specific API to stat the file and which attributes correspond to the creation date / modification date, etc. ChatGPT spit that code right out and I could move on to other things. But there’s friction there and the opportunity for a bespoke Xcode experience. I’m cautiously optimistic Apple will do something in this space, but I’m afraid it won’t be groundbreaking. Because, you know… Siri.
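
For the curious, the timestamp lookup boils down to knowing which FileAttributeKeys to ask for. A sketch, with an example path:

```swift
import Foundation

// The fiddly, look-it-up-every-time bit: which FileAttributeKeys map to
// the creation and modification dates. The path is just an example.
do {
    let attributes = try FileManager.default
        .attributesOfItem(atPath: "/tmp/example.txt")

    if let created = attributes[.creationDate] as? Date,
       let modified = attributes[.modificationDate] as? Date {
        print("Created: \(created), modified: \(modified)")
    }
} catch {
    print("Couldn't stat the file: \(error)")
}
```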

Catalyst and iOS Updates

Since Callisto is built with Mac Catalyst, these are the sorts of updates we’d love to see as developers.

Dynamic Type on Mac

Dynamic Type on iOS has been around for 10+ years. That’s the bit in iOS Settings that lets you make the text on your device a little bigger, or a lot bigger, than standard. All the apps that support Dynamic Type pick up the change, and text across the whole device changes size. That’s great! A boon for aging eyes everywhere.
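
Supporting it on iOS is just a couple of lines; here’s a minimal sketch:

```swift
import UIKit

// Opting in to Dynamic Type: use a semantic text style instead of a fixed
// point size, and let the label track the user's setting automatically.
// There's no AppKit equivalent of this today.
let label = UILabel()
label.font = UIFont.preferredFont(forTextStyle: .body)
label.adjustsFontForContentSizeCategory = true
```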

But it isn’t supported at all on macOS. This recent announcement about new accessibility features coming in iOS 17 mentions:

For users with low vision, Text Size is now easier to adjust across Mac apps such as Finder, Messages, Mail, Calendar, and Notes.

That sounds a lot like Dynamic Type on the Mac, so fingers crossed.

Auxiliary Window Replacement

Back in the day, lots of Mac apps used auxiliary windows for things like inspectors and floating tool panels. Apps like Photoshop had a bunch of these – paintbrushes, color panels, etc. You can still see them in AppKit apps like Preview and QuickTime Player, where the inspector is an auxiliary window. When you switch apps, it disappears to cut down on clutter. Right now, there’s nothing like that for Catalyst. The UIScene API for managing windows doesn’t have that level of distinction. Hopefully, at WWDC we’ll see a more mature Stage Manager implementation and at least some way of distinguishing main content windows from helper windows.
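
To illustrate: requesting a new window under UIScene looks like this, and there’s nowhere to say “this one is a floating inspector.” The activity type is my own example.

```swift
import UIKit

// Requesting a new window via the UIScene API. Every scene is a peer,
// full-fledged window; there's no role that marks one as an inspector or
// tool panel. The activity type below is an example, not a real API value.
let activity = NSUserActivity(activityType: "com.example.inspector")
UIApplication.shared.requestSceneSessionActivation(
    nil,                     // nil session means "create a new scene"
    userActivity: activity,  // tells the app which content to show
    options: nil,
    errorHandler: nil
)
```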

Multiple Processes on iOS

Callisto includes an embedded Python distribution for running the Jupyter Python kernel. On the Mac, we just spawn a separate process and run Python normally. On iPad, we use a heavily modified Python that runs in the same process space as the app, to comply with App Store requirements. In practice, that hamstrings Callisto on the iPad a great deal. With a renewed interest in Pro apps on the iPad, it’d be incredible if we were allowed to run multiple processes and level the playing field between an M1 iPad and an M1 MacBook.

Yeah, never going to happen.

Other Stuff

Passwords

A lot of folks have called for a dedicated Passwords app instead of burying password management in Settings. I’m all for that, but I’d also love to see password sharing via your Apple Family. That’s really the last feature I’d need to leave 1Password behind. 1Password has been fantastic, but they’ve really pivoted to a corporate focus, so they’re less of a good fit when I just want to share the Netflix password with my family. (Only for persons in my immediate family, residing in the same household. Netflix, if you’re reading this, I promise.)

HKSV Streaming API

I’m pretty heavily invested in the HomeKit ecosystem and I’m mostly happy with it. (I’m looking at you, Siri.) I’ve got several third-party apps that will stream video from my cameras, but they can only show live feeds. HKSV (HomeKit Secure Video) records events from these cameras and saves them to iCloud, but scrolling back through time is only available via Apple’s Home app. Third-party apps can’t access the history of clips because there’s just no API for it. With HomeKit (hopefully) maturing a little more this year, Apple should make that available to developers.
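
Just to illustrate the gap, here’s a purely imaginary sketch of what a clip-history API could look like. None of these types exist in HomeKit today; only HMAccessory is real.

```swift
import Foundation
import HomeKit

// Entirely hypothetical: HomeKit has no clip-history API. This sketch just
// shows the shape of the thing I'm wishing for. Only HMAccessory is a real
// HomeKit type; everything else here is imagined.
struct HKSVClip {
    let date: Date
    let duration: TimeInterval
    let assetURL: URL   // the iCloud-backed video asset
}

protocol HKSVClipStore {
    // Fetch recorded event clips for a camera within a date range.
    func clips(for camera: HMAccessory,
               from start: Date,
               to end: Date) async throws -> [HKSVClip]
}
```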

Better Siri for Mac

Yes, macOS has Siri. But as underwhelming as Siri is on iOS, it’s worse on Mac.

In my notes, I wrote down “Not Brain Dead”. Siri just fails at the most basic things sometimes. When I try to open an app in my Applications folder with the phrase “Open AppName”, it fails maybe 2 out of 3 times. I’m not sure why it’s so bad and I don’t know of any way to debug it.

I usually use my laptop with the lid closed, attached to an external monitor, keyboard, webcam, etc. That means I can’t use “Hey Siri”, for security reasons. I agree that being able to turn off an always-on microphone is important, but if Siri is a serious feature, why can’t I explicitly grant permission for Siri to listen through an external microphone? It’s interesting to note that “Hey Siri” does work with the ($1,600) Apple Studio Display. That’s attributed to the onboard A13 chip.

Another bit of low-hanging fruit for Siri is menu commands. At least it seems low-hanging to me. If I’m using an app like Xcode, which has a menu command called ‘Build’, I would expect Siri to understand if I said “Hey Siri, Build”. But like the double meat burger, Siri is strictly off the menu.
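
App Intents gets part of the way there today, but only if every app wires up each command by hand. A sketch, where BuildIntent is my own illustrative name:

```swift
import AppIntents

// A per-app workaround, not the automatic "any menu item" support I'm
// wishing for: expose a command to Siri and Shortcuts via App Intents.
// BuildIntent is an illustrative name, not a real Xcode intent.
struct BuildIntent: AppIntent {
    static var title: LocalizedStringResource = "Build"

    func perform() async throws -> some IntentResult {
        // Kick off the app's build action here.
        return .result()
    }
}
```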

But the real dream is a conversational Siri. We’ve all seen Iron Man. We’ve seen Tony talking to Jarvis like a person as he works. Could Siri ever do that in the context of Xcode? Playing with ChatGPT has teased this kind of reality. “Write a method to delete all the files in a given directory.” That’s a thing ChatGPT can do in a web browser window. Why can’t Siri do that in Xcode? Imagine an Apple-trained LLM that knows Swift, UIKit, and all the Apple technologies, with the added context of the project you’re currently working on. Talk about a developer accelerant. That’s a big leap, so it’s doubtful we’ll see it this year, but maybe a first step?

Things We Won’t See at WWDC

Apple Silicon Mac Pro

Apple announced Apple Silicon for the Mac at WWDC 2020, three years ago, and the first commercial M1 Macs shipped in November of that year. At the time, Apple estimated that the transition to Apple Silicon would take about two years. It’s been two and a half years since that first M1 MacBook Air shipped, but the Mac Pro still sports an Intel chip. Something has clearly gone awry.

But the rumor mill is pretty quiet on the Mac Pro front. Usually at this point, there would be at least some mention of the debut of a high-profile, flagship Mac. It seems like we’ll be waiting until later in the year to see what kind of behemoth you can build out of lots of iPhone chips.

HomePod + Apple TV

Years ago, HomePod ran its own fork of iOS called audioOS. Apple merged audioOS into tvOS several revisions back, so now both of these home devices run tvOS. Since they do similar things – playing media and powering HomeKit – that makes a lot of sense. But the two devices remain separate at the hardware level – a smart speaker and a TV streamer. I’ve been waiting for hybrid devices for years now. There’s the HomePod with a screen, like an Amazon Echo Show, that does HomePod things but augmented with a display. And there’s the Apple TV with a mic and speaker. The mic lets you talk to the TV without holding the button on the remote, and the speaker acts as the center channel in a multi-HomePod sound stage.

Those are both consumer products, so not the kind of thing they’d squeeze into the WWDC Keynote, especially with the AR/VR announcement, but maybe this fall, just in time for Christmas.

Will we see any of this at WWDC? I certainly hope so. But there’s only one way to find out.

No sleep ‘til DubDub!