It’s about a week before WWDC 2023 kicks off in sunny California, so here’s a list of things I’m hoping to see come out of Cupertino. I’m skipping the AR/VR stuff, since it’s been speculated about ad nauseam. I’m sure it’ll be cool and awesome and weird and probably not cheap. And maybe it’ll make me dizzy.

TL;DR: Better Xcode, Better Siri, Better Mac Catalyst

Xcode

Mostly, I just want Xcode to be better. Don’t crash. Don’t be slow. Is that too much to ask?

SPM Handling

We’ve all seen it. Switch branches, hit Build, and be faced with:

Build operations are disabled: Package loading in progress. Please try again later.

I spend a lot of time working on Callisto, which is kind of a big app. We have dependencies split out into frameworks and it can take SPM a while to resolve all the packages when switching branches. So I hit this quite a lot. Usually, Xcode does a great job of queueing up actions when it’s busy. Like if you do a clean build and then run unit tests, it will finish the clean and then do the testing. That’s all I want for SPM resolution.

Better SwiftUI Tooling

I haven’t written a lot of SwiftUI. We toyed with it early on while building Callisto, but it wasn’t ready for a project that big. A couple of months ago, we thought about building a screen here or there with SwiftUI, but ran into roadblocks. One of the key features of SwiftUI is the live preview. To make previews work, Xcode compiles bits of your code behind the scenes and renders them in the preview pane. Callisto is a Catalyst app, but it has an AppKit plugin for doing Mac system things. Xcode couldn’t handle that when generating a SwiftUI preview: it would try to compile the AppKit code against UIKit and become very upset when that didn’t work. That left our foray into SwiftUI dead in the water.
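
For anyone who hasn’t used them, here’s the shape of the feature: a view plus a preview declaration that Xcode compiles behind the scenes and renders in the canvas. (SettingsView is a hypothetical stand-in, not an actual Callisto screen.)

```swift
import SwiftUI

// A hypothetical view; real screens are more involved.
struct SettingsView: View {
    var body: some View {
        Text("Hello, Callisto")
            .padding()
    }
}

// Xcode compiles this declaration behind the scenes and renders
// the result live in the preview canvas as you edit the code.
struct SettingsView_Previews: PreviewProvider {
    static var previews: some View {
        SettingsView()
    }
}
```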

Siri for Xcode

I’ve dabbled a bit with ChatGPT as a coding assistant. It’s great for small tasks with fiddly parameters. For instance, I needed to get the timestamps of some files. That’s the kind of thing that’s straightforward but that you don’t do often, so you have to look up the specific API to stat a file and which attributes correspond to the creation date, modification date, etc. ChatGPT spit that code right out and I could move on to other things. But there’s friction there, and an opportunity for a bespoke Xcode experience. I’m cautiously optimistic Apple will do something in this space, but I’m afraid it won’t be groundbreaking. Because, you know… Siri.
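
Something along these lines, in that case. A sketch of the kind of snippet ChatGPT hands back, using Foundation’s file attribute APIs:

```swift
import Foundation

// Fetch the creation and modification dates for a file at a path.
// Exactly the fiddly-but-standard lookup you'd otherwise go search for.
func timestamps(forFileAt path: String) throws -> (created: Date?, modified: Date?) {
    let attributes = try FileManager.default.attributesOfItem(atPath: path)
    return (attributes[.creationDate] as? Date,
            attributes[.modificationDate] as? Date)
}
```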

Catalyst and iOS Updates

Since Callisto is built with Mac Catalyst, these are the sorts of updates we’d love to see as developers.

Dynamic Type on Mac

Dynamic Type on iOS has been around for 10+ years. That’s the bit in iOS Settings that lets you make the text on your device a little bigger or a lot bigger than standard. All the apps that support Dynamic Type will pick up the change and text across the whole device changes size. That’s great! A boon for aging eyes everywhere.
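
Adopting it in an iOS app takes only a couple of lines of UIKit. A minimal sketch:

```swift
import UIKit

// Opt a label into Dynamic Type: use a semantic text style instead
// of a fixed point size, and let the font track the user's setting.
let label = UILabel()
label.font = UIFont.preferredFont(forTextStyle: .body)
label.adjustsFontForContentSizeCategory = true
```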

But it isn’t supported at all on macOS. This recent announcement about new accessibility features coming in iOS 17 mentions:

For users with low vision, Text Size is now easier to adjust across Mac apps such as Finder, Messages, Mail, Calendar, and Notes.

That sounds a lot like Dynamic Type on the Mac, so fingers crossed.

Auxiliary Window Replacement

Back in the day, lots of Mac apps used auxiliary windows that served as things like inspectors and floating tool panels. Apps like Photoshop had a bunch of these – paintbrushes, color panels, etc. You can still see them in AppKit apps like Preview and QuickTime Player, where the inspector is an auxiliary window. When you switch apps, it disappears to cut down on clutter. Right now, there’s nothing like that for Catalyst. The UIScene API for managing windows doesn’t have that level of distinction. Hopefully, at WWDC we’ll see a more mature Stage Manager implementation and at least some way of distinguishing between main content windows and helper windows.
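
To illustrate, here’s the status quo: every window a Catalyst app opens goes through the same scene-request machinery, with nowhere to say “this one is a floating panel.” (A sketch; the activity type is hypothetical.)

```swift
import UIKit

// Requesting a new scene session for an inspector in Catalyst today.
// Note there's no parameter to mark it as an auxiliary panel that
// should float above the main window and hide when the app is
// inactive; it comes up as a full peer window like any other.
let activity = NSUserActivity(activityType: "com.example.inspector") // hypothetical type
UIApplication.shared.requestSceneSessionActivation(nil,
                                                   userActivity: activity,
                                                   options: nil,
                                                   errorHandler: nil)
```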

Multiple Processes on iOS

Callisto includes an embedded Python distribution for running the Jupyter Python Kernel. On the Mac, we just spawn a separate process and run Python normally. On iPad, we use a heavily modified Python to run in the same process space as the app to comply with App Store requirements. In practice, that hamstrings Callisto on the iPad a great deal. With a renewed interest in Pro apps on the iPad, it’d be incredible if we were allowed to run multiple processes and level the playing field between an M1 iPad and an M1 MacBook.
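
On the Mac side, it really is as simple as it sounds. A rough sketch using Foundation’s Process; the path and arguments are placeholders, not Callisto’s actual launch code:

```swift
import Foundation

// Launch an embedded Python as a separate process, the way any
// Mac app can. The path and arguments here are placeholders.
let python = Process()
python.executableURL = URL(fileURLWithPath: "/path/to/embedded/python3")
python.arguments = ["-m", "ipykernel_launcher", "-f", "connection.json"]
try python.run()
```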

Yeah, never going to happen.

Other Stuff

Passwords

A lot of folks have called for a dedicated Passwords app instead of burying password management in Settings. I’m all for that, but I’d also love to see password sharing via your Apple Family. That’s really the last feature I’d need to leave 1Password behind. 1Password has been fantastic, but they’ve really pivoted to a corporate focus, so they’re less of a good fit when I just want to share the Netflix password with my family. (Only with persons in my immediate family, residing in the same household. Netflix, if you’re reading this, I promise.)

HKSV Streaming API

I’m pretty heavily invested in the HomeKit ecosystem and I’m mostly happy with it. (I’m looking at you, Siri.) I’ve got several third-party apps that will stream video from my cameras, but they can only show live feeds. HKSV (HomeKit Secure Video) records events from these cameras and saves the clips to iCloud, but scrolling back through time is only possible in Apple’s Home app. Third-party apps can’t access the clip history because there’s simply no API for it. With HomeKit (hopefully) maturing a little more this year, Apple should make that available to developers.
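
For contrast, here’s roughly what third-party apps can do today: attach to a camera’s live stream through HMCameraStreamControl. A sketch, with error handling elided; there’s simply no recorded-clips counterpart to any of this.

```swift
import HomeKit

// What third-party apps can do today: show a camera's *live* stream.
// There is no equivalent API for browsing recorded HKSV clips.
class LiveStreamViewer: NSObject, HMCameraStreamControlDelegate {
    let cameraView = HMCameraView()

    func startLiveStream(for accessory: HMAccessory) {
        guard let control = accessory.cameraProfiles?.first?.streamControl else { return }
        control.delegate = self
        control.startStream()
    }

    func cameraStreamControlDidStartStream(_ cameraStreamControl: HMCameraStreamControl) {
        // Hand the live stream to the view once it's up and running.
        cameraView.cameraSource = cameraStreamControl.cameraStream
    }
}
```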

Better Siri for Mac

Yes, macOS has Siri. But as underwhelming as Siri is on iOS, it’s worse on Mac.

In my notes, I wrote down “Not Brain Dead”. Siri just fails at the most basic things sometimes. When I try to open an app in my Applications folder with the phrase “Open AppName”, it fails maybe 2 out of 3 times. I’m not sure why it’s so bad and I don’t know of any way to debug it.

I usually use my laptop with the lid closed, hooked up to an external monitor, keyboard, webcam, etc. That means I can’t use “Hey Siri”, for security reasons. I agree that being able to turn off an always-on microphone is important, but if Siri is a serious feature, why can’t I explicitly grant permission for Siri to listen through an external microphone? Interestingly, “Hey Siri” does work with Apple’s ($1,600) Studio Display. That’s attributed to its onboard A13 chip.

Another bit of low-hanging fruit for Siri is menu commands. At least it seems low-hanging to me. If I’m using an app like Xcode, which has a menu command called ‘Build’, I would expect Siri to understand when I say “Hey Siri, Build”. But like the double meat burger, Siri is strictly off the menu.
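
Part of why it seems low-hanging: the menu bar is already structured data the system knows about. In a Catalyst app, commands are declared like this (the Build command and its action are hypothetical):

```swift
import UIKit

class AppDelegate: UIResponder, UIApplicationDelegate {
    // Menu commands in Catalyst are declared as structured data the
    // system can enumerate, so in principle Siri could match a spoken
    // phrase against them. "Build" and its action are hypothetical.
    override func buildMenu(with builder: UIMenuBuilder) {
        super.buildMenu(with: builder)
        guard builder.system == .main else { return }

        let build = UIKeyCommand(title: "Build",
                                 action: #selector(buildProject),
                                 input: "b",
                                 modifierFlags: .command)
        builder.insertChild(UIMenu(title: "", options: .displayInline, children: [build]),
                            atStartOfMenu: .file)
    }

    @objc func buildProject() { /* hypothetical action */ }
}
```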

But the real dream is a conversational Siri. We’ve all seen Iron Man. We’ve seen Tony talking to Jarvis like a person as he works. Could Siri ever do that in the context of Xcode? Playing with ChatGPT has teased this kind of reality. “Write a method to delete all the files in a given directory.” That’s a thing ChatGPT can do in a web browser window. Why can’t Siri do that in Xcode? Imagine an Apple-trained LLM that knows Swift, UIKit, and all the Apple technologies especially well. Add in the context of the project you’re currently working on. Talk about a developer accelerant. That’s a big leap, so it’s doubtful we’ll see it this year. But maybe a first step?
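
For the record, that prompt already gets you something like this sketch today, no conversation required:

```swift
import Foundation

// The sort of answer that prompt produces. Removes everything at the
// top level of the directory; note that removeItem(at:) deletes
// folders and their contents too, and there's no undo.
func deleteAllFiles(in directory: URL) throws {
    let contents = try FileManager.default.contentsOfDirectory(at: directory,
                                                               includingPropertiesForKeys: nil)
    for url in contents {
        try FileManager.default.removeItem(at: url)
    }
}
```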

Things We Won’t See at WWDC

Apple Silicon Mac Pro

Apple announced Apple Silicon for the Mac at WWDC 2020, three years ago, and the first commercial M1 Macs shipped in November of 2020. At the time, Apple estimated that the transition to Apple Silicon would take about two years. It’s been two and a half years since that first M1 MacBook Air shipped, but the Mac Pro still sports an Intel chip. Something has clearly gone awry.

But the rumor mill is pretty quiet on the Mac Pro front. Usually at this point, there would be at least some mention of the debut of a high-profile, flagship Mac. It seems like we’ll be waiting until later in the year to see what kind of behemoth you can build out of lots of iPhone chips.

HomePod + Apple TV

Years ago, HomePod ran its own fork of iOS called audioOS. Apple merged audioOS with tvOS several revisions back, so now both of these home devices run tvOS. Because they do similar things – playing media and powering HomeKit – that makes a lot of sense. But the two devices have remained separate at the hardware level – a smart speaker and a TV streamer. I’ve been waiting for hybrid devices for years now. There’s the HomePod with a screen, like an Amazon Echo Show, that does HomePod things but is augmented with a display. And there’s the Apple TV with a mic and speaker. The mic lets you talk to the TV without holding the button on the remote, and the speaker acts as a center channel in a multi-HomePod sound stage.

Those are both consumer products, so not the kind of thing they’d squeeze into the WWDC Keynote, especially with the AR/VR announcement, but maybe this fall, just in time for Christmas.

Will we see any of this at WWDC? I certainly hope so. But there’s only one way to find out.

No sleep ‘til DubDub!