Jonathan Vanian and Julia Boorstin, CNBC:

TikTok CEO Shou Zi Chew told employees on Thursday that the company’s U.S. operations will be housed in a new joint venture.

[…]

The U.S. joint venture will be 50% held by a consortium of new investors, including Oracle, Silver Lake and MGX, with 15% each. Just over 30% will be held by affiliates of certain existing investors of ByteDance, and almost 20% will be retained by ByteDance, the memo said.

Oracle is among the companies that have been illegally supporting TikTok for the past year, along with Apple and Google. Instead of facing stiff legal penalties, Oracle will get to own a 15% piece of TikTok. It probably helps that co-founder Larry Ellison is a friend of and donor to Donald Trump.

MGX will also get a 15% share. It is a state-run investment fund in the United Arab Emirates, even though I thought the whole point of this deal was a collective panic over foreign government interference. It probably helps that MGX used Trump’s family cryptocurrency to invest in Binance.

The deal is structured so these firms — and Silver Lake — actually have control. But, after all this, it seems like the single biggest shareholder in this new entity will be ByteDance, with 19.9%.

CNBC:

The new TikTok entity will also be tasked with retraining the video app’s core content recommendation algorithm “on U.S. user data to ensure the content feed is free from outside manipulation,” the memo said.

I do not think it is worth reading too much into TikTok’s CEO writing that its new suggestions will be “free from outside manipulation”, or into what that statement implies about TikTok’s operations elsewhere.

Bobby Allyn, NPR:

Yet the underlying algorithm will still be owned by Beijing-based ByteDance, with the blessing of American auditors, according to an internal TikTok memo reviewed by NPR and two sources familiar with the deal who were not authorized to speak publicly.

So if the underlying recommendations system still has connections to China, but it is retrained by a company run by a mix of far-right U.S. investors and a different foreign government, are those the ingredients for a social network with less state influence? Does this satisfy those who believe, without evidence, that TikTok is brainwashing people in the U.S. at the behest of the Chinese government?

Graham Cluley, writing on Bitdefender’s blog:

If you’re planning a cruise for your holidays, and cannot bear the idea of being parted from your Ray-Ban Meta smart glasses, you may want to avoid sailing with MSC Cruises.

The cruise line has updated its list of prohibited items, specifically banning smart glasses and similar wearable devices from public areas.

MSC Cruises is prohibiting “devices capable of covertly or discreetly recording or transmitting data” which, as written, is pretty vague without the subsequent “(e.g. smart glasses)”. Any wireless device is arguably “discreetly … transmitting data” all the time. I appreciate the idea. However, I fear this is the kind of rule that will be remembered as a relic of a transitional period, rather than an honest commitment to guest privacy.

I truly love end-of-year lists, and Stephen Hackett sure has a good one: what are the highs and lows from Apple’s 2025? This is not what you would call comprehensive — look out for the Six Colors report card early next year, I am sure — but it is better for being concise.

Congratulations to Hackett for the final high point on his list.

In August 2022, Kashmir Hill reported for the New York Times on two fathers who had, in separate cases, captured photos of their toddlers’ genitals for medical documentation on Android phones, and subsequently had their Google accounts locked. Both accounts were erroneously flagged for containing child sexual abuse materials, a heinous accusation that both fought — unsuccessfully, as of the article’s publication.

I wrote about what I learned from that article and a different incident affecting a Gmail account belonging to Talking Points Memo. But I never linked to a followup article from December of the same year, which I stumbled across earlier today as I was looking into Paris Buttfield-Addison’s Apple account woes, now apparently resolved.

Hill:

In recent months The Times, reporting on the power that technology companies wield over the most intimate parts of their users’ lives, brought to Google’s attention several instances when its previous review process appeared to have gone awry.

In two separate cases, fathers took photos of their naked toddlers to facilitate medical treatment. An algorithm automatically flagged the images, and then human moderators deemed them in violation of Google’s rules. The police determined that the fathers had committed no crime, but the company still deleted their accounts.

I do not know if either of these accounts were restored. I have asked Hill on Bluesky and I hope to hear back. (Update: Hill says neither parent recovered their account, though one was able to retrieve some account data that was turned over to police.)

Hill:

It took four months for the mother in Colorado, who asked that her name not be used to protect her son’s privacy, to get her account back. Google reinstated it after The Times brought the case to the company’s attention.

This is well after Google says all the account data should have been deleted, which raises more questions.

The ridiculous and maddening situation in which Paris Buttfield-Addison finds himself continues to rattle around my brain. The idea that any one of us could be locked out from our Apple devices because some presumably automated system flagged the wrong thing is alarming.

Greg Morris:

The scale of dependency is what makes this different from older tech problems. Losing your email account twenty years ago was bad. Losing your iCloud account now means losing your photos, your passwords, your ability to access anything else. We’ve built these single points of failure into our lives and handed them to corporations who can cut us off for reasons they won’t explain. That’s not a sustainable system.

Morris is correct, and there is an equally worrisome question looming in the distance: when does Apple permanently delete the user data it holds? Apple does not say how long it retains data after an account is closed but, for comparison, Google says it takes about two months. Not only can one of these corporations independently decide to close an account, there is no way to know if it can be restored, and there is little help for users.

Adam Engst, TidBITS:

I’d like to see Apple appoint an independent ombudsperson to advocate for customers. That’s a fantasy, of course, because it would require Apple to admit that its systems, engineers, and support techs sometimes produce grave injustices. But Apple is no worse in this regard than Google, Meta, Amazon, and numerous other tech companies — they all rely on automated fraud-detection systems that can mistakenly lock innocent users out of critical accounts, with little recourse.

This is a very good idea. Better consumer protection laws would obviously help, too, but Apple could do this tomorrow.

There is one way the Apple community could exert some leverage over Apple. Since innocently redeeming a compromised Apple Gift Card can have serious negative consequences, we should all avoid buying Apple Gift Cards and spread the word as widely as possible that they could essentially be malware. Sure, most Apple Gift Cards are probably safe, but do you really want to be the person who gives a close friend or beloved grandchild a compromised card that locks their Apple Account? And if someone gives you one, would you risk redeeming it? It’s digital Russian roulette.

I cannot tell you what to do, but I would not buy an Apple gift card for someone else, and I would not redeem one myself, until Apple clearly explains what happened here and what it will do to prevent something similar happening in the future. And, without implying anything untoward, it should restore Buttfield-Addison’s account unless there is a compelling reason why it should not.

When I bought my iMac through Apple’s refurbished store in 2019, the only credit card I had was one where I kept a deliberately low limit. The iMac was over $3,700. To get around my limit, I bought a $2,000 gift card and paid it off immediately, then put the remaining $1,700 and change on my credit card.

I did not think twice about the potential consequences if this had tripped some kind of fraud detection system. I cannot imagine doing something similar today given everything Buttfield-Addison has gone through.

Update: Buttfield-Addison:

Update 18 December 2025: We’re back! A lovely man from Singapore, working for Apple Executive Relations, who has been calling me every so often for a couple of days, has let me know it’s all fixed. It looks like the gift card I tried to redeem, which did not work for me, and did not credit my account, was already redeemed in some way (sounds like classic gift card tampering), and my account was caught by that. […]

This is good news. It also answers my trying-not-to-be-clickbait headline question: yes, a gift card can, in some circumstances and possibly without your foreknowledge, compromise your account. That is not okay. Also not okay is that we are unlikely to see a sufficient explanation of this problem. You are just supposed to trust that it will all be okay. I am not sure I can.

I keep meaning to link to Screen Sizes, a wonderful utility by Trevor Kay and Christopher Muller. It is a resource for developers and designers alike to reference the screen sizes, pixel dimensions, and various other attributes of Apple’s post-P.C. device lineup.

Something I need to do at my day job on a semi-regular basis is compositing a screenshot on a photo of someone holding or using an iPhone or an iPad. One of my pet peeves is when there is little attempt at realism — like when a screenshot is pasted over a notch, or the screen corners have an obviously incorrect radius. This is not out of protection for the integrity of Apple’s hardware design, per se; it just looks careless. I constantly refer to Screen Sizes to avoid these mistakes. I did so earlier today, which is why I was reminded to link to it.

It is a great free web app with even more resources than its name suggests.

Jeff Horwitz and Engen Tham, Reuters:

Though China’s authoritarian government bans use of Meta social media by its citizens, Beijing lets Chinese companies advertise to foreign consumers on the globe-spanning platforms. As a result, Meta’s advertising business was thriving in China, ultimately reaching over $18 billion in annual sales in 2024, more than a tenth of the company’s global revenue.

But Meta calculated that about 19% of that money – more than $3 billion – was coming from ads for scams, illegal gambling, pornography and other banned content, according to internal Meta documents reviewed by Reuters.

According to Reuters’ interpretation of these documents, Meta’s internal efforts to reduce this fraud were hampered after Mark Zuckerberg intervened. Andy Stone disputes that characterization.

While “Beijing lets Chinese companies advertise to foreign consumers” on Meta’s platforms, it should be noted that Meta also chooses to accept that advertising. The company is officially opposed to operating in China on free speech grounds — a stance it holds now, though it was previously comfortable compromising on it until it got spooked. It is not okay with the requirements imposed upon it to permit no-cost user participation, yet it is okay with accepting advertising dollars from companies subject to the same speech compromises.

Meta continues to prove it has no principles. It never had any. At its heart, it runs on the same vibe of indifference to its broader impact that defined the earliest years of Facebook.

Paris Buttfield-Addison:

My Apple ID, which I have held for around 25 years (it was originally a username, before they had to be email addresses; it’s from the iTools era), has been permanently disabled. This isn’t just an email address; it is my core digital identity. It holds terabytes of family photos, my entire message history, and is the key to syncing my work across the ecosystem.

[…]

The only recent activity on my account was a recent attempt to redeem a $500 Apple Gift Card to pay for my 6TB iCloud+ storage plan. The code failed. The vendor suggested that the card number was likely compromised and agreed to reissue it. Shortly after, my account was locked.

This post has been circulating and, since publishing, Buttfield-Addison says he has been contacted by someone at Apple’s “Executive Relations”, but still does not have access to his account. I hope his situation is corrected promptly.

What I am stunned by is the breadth of impact this lockout has, and what a similar problem would mean for me, personally. I do not blame Buttfield-Addison or anyone else for having so much of their digital life ensconced in an Apple Account. Apple has effectively made it a requirement for using the features of its devices and, thanks to its policy of only trusting itself, limits the use of third-party alternatives. You cannot automatically back up an iPhone or iPad to a third-party service, for example, in the same way you can to iCloud. Given this tight control, the bar for locking a user out of their Apple Account and, to some extent, out of their devices should be unbelievably high. Like, it should require the equivalent of a court order internally.

At the very least, software and services need a warranty. Customers need a level of protection from any corporation with which they are required to have an ongoing relationship. This single high-profile incident should raise alarm bells within Apple about its presumably-automatic account security mechanisms and its support procedures.

Blake Scholl, CEO of Boom Supersonic:

Today, we’re announcing Superpower, our new 42‑megawatt natural gas turbine, along with a $300M funding round and Crusoe as our launch customer. And most importantly: this marks a turning point. Boom is now on a self-funded path to both Superpower and the Overture supersonic airliner.

As David Gerard points out, Boom’s proprietary engines are so far hypothetical, though I think Gerard gets a little over his skis in writing “[t]here’s no plan for the Overture plane”. The company’s demonstrator plane broke the sound barrier earlier this year; clearly, the company is not operating entirely in fiction.

The promotional video for this Superpower generator promises “42 megawatts of clean electricity”. This is, I will remind you, powered by a jet engine. I think even an advertising standards body used to hyperbole would question that definition of “clean”.

Rajat Saini, the Mac Observer:

Apple has started rolling out macOS Tahoe 26.2 to everyone. Apple seeded macOS 26.2 RC (build 25C56) earlier this month, and that same update is now available publicly through Software Update.

I have found the version of Safari in this build of MacOS 26.2 is noticeably buggy. It sometimes stops letting me scroll a webpage and, in rare cases, I have found the browser wholly crashes when closing tabs. I am not saying you will have the same experience but, if you are dependent on Safari and you are comfortable navigating now-public security problems, now you know to proceed with some caution. I have not seen these same bugs on my iPhone running iOS 26.2.

My thanks to Magic Lasso Adblock for sponsoring Pixel Envy this week.

With over 5,000 five star reviews, Magic Lasso Adblock is simply the best ad blocker for your iPhone, iPad, and Mac.

Designed from the ground up to protect your privacy, Magic Lasso blocks all intrusive ads, trackers, and annoyances. It stops you from being followed by ads around the web and, with App Ad Blocking, it stops your app usage being harvested by ad networks.

So, join over 350,000 users and download Magic Lasso Adblock today.

I really like Manuel Moreale’s “People and Blogs” series where different writers pull back the curtain and reflect on their process and goals. So I was a little surprised when Moreale asked if I would like to contribute, too.

I feel like I am a terrible interview subject; maybe I should have added a few more jokes. But I like this series so much I felt compelled to add my brick to the wall.

The European Commission:

The European Commission acknowledges Meta’s undertaking to offer users in the EU an alternative choice of Facebook and Instagram services that would show them less personalised ads, to comply with the Digital Markets Act (DMA). This is the first time that such a choice is offered on Meta’s social networks. Meta will give users the effective choice between: consenting to share all their data and seeing fully personalised advertising, and opting to share less personal data for an experience with more limited personalised advertising. Meta will present these new options to users in the EU in January 2026.

Good. Meta should have the option to charge users if it wants to compensate for a revenue difference between surveillance-powered ads and less creepy ads, but users should not be forced to choose between paying or sacrificing their right to privacy. If Meta’s business cannot be sufficiently profitable without conning a bunch of people, it should have a different business model.

Adam Satariano, New York Times:

Meta downplayed Monday’s announcement, saying it was making changes to the wording, design and transparency of existing policy.

“We acknowledge the European Commission’s statement,” the company said in a statement, adding that “personalized ads are vital for Europe’s economy.”

Meta might be trying to save face, but a year ago, it was so distraught as to file a legal complaint to retain its “pay or consent” model.

Josh Aas, of Let’s Encrypt:

On September 14, 2015, our first publicly-trusted certificate went live. We were proud that we had issued a certificate that a significant majority of clients could accept, and had done it using automated software. Of course, in retrospect this was just the first of billions of certificates. Today, Let’s Encrypt is the largest certificate authority in the world in terms of certificates issued, the ACME protocol we helped create and standardize is integrated throughout the server ecosystem, and we’ve become a household name among system administrators. We’re closing in on protecting one billion web sites.

Via Ben Werdmuller:

A decade ago, only organizations with money, patience, and technical support could reliably encrypt their sites. Everyone else — small nonprofits, bloggers, community groups, activists — were effectively told that their work wasn’t important enough to deserve confidentiality. Let’s Encrypt leveled that playing field.

It truly changed the web, ushering in an era where most browsers effectively assume connections will be made over HTTPS and treat plain HTTP as an anomaly. The push for security has its critics, most notably Dave Winer, who promises HTTP forever. On the whole, though, it is difficult not to see Let’s Encrypt as revolutionary. This very website has a certificate issued by it.

An ironic side effect of the popularity of Let’s Encrypt is that its Certificate Transparency logs are a fruitful resource for bots and bad actors looking for new domains to exploit. A 2023 paper by Stijn Pletinckx, et al. (PDF) describes how automated traffic began hitting test servers “just seconds after publishing the [certificate log] entry”, compared to no attempts against domains without a certificate. This traffic typically looks like attempts to find unpatched vulnerabilities, like basic SQL injection strings and bugs in common WordPress plugins. This abuse of C.T. logs is not unique to Let’s Encrypt. But it is popular and free, and that makes its logs a target-rich environment. Neither is this a reason to avoid using Let’s Encrypt. It just means you need to be cautious about what is on your server from the moment you decide to install an HTTPS certificate.
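To make that scanning mechanism concrete, here is a minimal Python sketch of how a bot might watch C.T. log entries for hostnames it has not probed yet. The entry shape loosely mirrors the JSON returned by public C.T. search services, but the field name and all of the domains below are hypothetical examples, not a real feed.

```python
# Minimal sketch: pull previously-unseen hostnames out of a batch of
# Certificate Transparency entries. A single certificate can cover
# several names (SANs), which many C.T. search APIs join with newlines.

def new_hostnames(entries, seen):
    """Return hostnames from CT entries not already in `seen`, and
    record them in `seen` so the next batch skips them."""
    found = set()
    for entry in entries:
        for name in entry.get("name_value", "").splitlines():
            name = name.strip().lstrip("*.")  # collapse wildcard entries
            if name and name not in seen:
                found.add(name)
    seen.update(found)
    return sorted(found)

# Hypothetical batch of log entries; one name was already scanned.
entries = [
    {"name_value": "example.com\nwww.example.com"},
    {"name_value": "*.staging.example.net"},
]
seen = {"www.example.com"}
print(new_hostnames(entries, seen))  # → ['example.com', 'staging.example.net']
```

A real scanner would poll a log continuously and probe each returned hostname immediately, which is why the paper observed traffic within seconds of an entry being published.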

The surprise departure of Alan Dye announced a week ago today provoked an outpouring of reactions both thoughtful and puerile. The general consensus seemed to be oh, hell yeah, with seemingly few lining up to defend Dye’s overseeing of Apple’s software design efforts. But something has been gnawing at me all week reading take after take, and I think it was captured perfectly by Jason Snell, of Six Colors, last week:

So. In the spirit of not making it personal, I think it’s hard to pile all of Apple’s software design missteps over the last few years at the feet of Alan Dye. He had support from other executives. He led a whole team of designers. Corporate initiatives and priorities can lead even the most well-meaning of people into places they end up regretting.

That said, Alan Dye has represented Apple’s design team in the same way that Jony Ive did ever since Jony took over software design. He was the public face of Liquid Glass. He has been a frequent target of criticism, some of it quite personal, all coming from the perspective that Apple’s design output, especially on the software side, has been seriously lacking for a while now.

This nuanced and careful reaction, published shortly after Dye’s departure was announced, holds up and is the thing I keep coming back to. Snell expanded on these comments on the latest episode of Upgrade with Myke Hurley. I think it is a good discussion and well worth your time. (Thanks to Jack Wellborn for suggesting I listen.)

Cast your mind back to two days earlier, when Apple said John Giannandrea was retiring. Giannandrea, coming from running search and A.I. at Google, signalled to many that Apple was taking the future of Siri seriously. For whatever reason — insufficient support from Apple, conflicting goals, reassignments to questionable projects, or any number of other things — that did not pan out. Siri today works similarly to Siri eight years ago, before he joined the company, the launch of Apple Intelligence was fumbled, and the features rolled out so far do not feel like Apple products. Maybe none of this was the fault of Giannandrea, yet all of it was his responsibility.

It is difficult to know from the outside what impact Giannandrea’s retirement will have for the future of Siri or Apple Intelligence. Similarly, two days after that was announced, Dye said he was leaving, too, and Apple promoted Stephen Lemay to replace him, at least temporarily. From everything I have seen, people within Apple seem to love this promotion. However, it would be wrong to think Lemay is swooping in to save the day, both because that is an immense amount of pressure to put on someone who is probably already feeling it, and because the conditions that resulted in my least favourite design choices surely had agreement from plenty of other people at Apple.

While I am excited for the potential of a change in direction, I do not think this singlehandedly validates the perception of declining competence in Apple’s software design. It was Dye’s responsibility, to be sure, but it was not necessarily his fault. I do not mean that as an excuse, though I wish I did. The taste of those in charge undoubtedly shapes what is produced across the company. And, despite a tumultuous week at the top of Apple’s org chart, many of those people remain in charge. To Snell’s point of not personalizing things, and in the absence of a single mention of “design” on its leadership page, the current direction of Apple’s software should be thought of as a team effort. Whether one person should be granted the authority to transform the taste of the company’s leadership into a coherent, delightful, and usable visual language is a good question. Regardless, it will be their responsibility even if it is not their fault.

Sean Hollister, the Verge:

I read a lot of my bedtime news via Google Discover, aka “swipe right on your Samsung Galaxy or Google Pixel homescreen until you see a news feed appear,” and that’s where these new AI headlines are beginning to show up.

[…]

But in the seeming attempt to boil down every story to four words or less, Google’s new headline experiment is attaching plenty of misleading and inane headlines to journalists’ work, and with little disclosure that Google’s AI is rewriting them.

Rewriting headlines may be new to Google Discover, but it is not new to Google. In fact, some research indicates page titles and descriptions are automatically rewritten more often than not in search results, and I am of two minds about this practice. It robs publishers and website owners of agency over how they present themselves through an important source of referral traffic. Google’s automatic rewrites are — as I have experienced in search results and as documented by Hollister in Discover — sometimes wrong, and have the effect of putting words in authors’ mouths.

Still, the titles and descriptions supplied by webpages are sometimes inaccurate, too — often deliberately. Plenty of people write clickbait headlines on purpose; it is common practice in search engine optimization circles. For search results, Google tends to generate headlines that are less clickbait-y than those in the original publication. However, Hollister shows examples from Discover where Google’s version is entirely misleading. And Google is not the only company doing automatic clickbait nonsense.

Emanuel Maiberg, 404 Media:

Instagram is generating headlines for users’ Instagram posts without their knowledge, seemingly in an attempt to get those posts to rank higher in Google Search results.

[…]

Google told me that it is not generating the headlines, and that it’s pulling the text directly from Instagram. Meta acknowledged my request for comment but did not respond in time for publication. I’ll update this story if I hear back.

Meta’s Andy Stone, once again not on Threads but instead on Bluesky, quoted Joseph Cox’s link to the story writing:

Reports the outlet that definitely does not, ever, write clickbait-y, SEO-optimized headlines

This, obviously, does not meaningfully challenge Maiberg’s reporting, as it is Instagram generating these page titles specifically for Google whether users like it or not. This is just distracting nonsense. I wonder if being a dishonest asshole is a job description for Meta’s communications department.

Apple, in the 2020 edition of its Human Interface Guidelines:

Sometimes, icons can be used to help people recognize menu items—not menus—and associate them with content. For example, Safari uses the icons displayed by some webpages (known as favicons) to produce a visual connection between the webpage and the menu item for that webpage.

Minimize the use of icons. Use icons in menus only when they add significant value. A menu that includes too many icons may appear cluttered and be difficult to read.

Apple, in the latest version of its Human Interface Guidelines:

Represent menu item actions with familiar icons. Icons help people recognize common actions throughout your app. Use the same icons as the system to represent actions such as Copy, Share, and Delete, wherever they appear. […]

Jim Nielsen:

It’s extra noise to me. It’s not that I think menu items should never have icons. I think they can be incredibly useful (more on that below). It’s more that I don’t like the idea of “give each menu item an icon” being the default approach.

This posture lends itself to a practice where designers have an attitude of “I need an icon to fill up this space” instead of an attitude of “Does the addition of a icon here, and the cognitive load of parsing and understanding it, help or hurt how someone would use this menu system?”

Nielsen explores the different menus in Safari on MacOS Tahoe — I assume version 26.0 or 26.1. I am running 26.2, which has a more complete set of icons in each menu, though not to the user’s benefit. For example, in Nielsen’s screenshot, the Safari menu has a gear icon beside the “Settings…” menu item, but not beside “Settings for pxlnv.com…”, or whatever the current domain is. In 26.2, the latter has gained an icon — another gear. But it is a gear different from the one beside the “Settings…” menu item just above it, which makes sense, and also from the icon beside the “Website Settings…” menu item accessible from the menu in the address bar, which does not make sense because it does exactly the same thing.

Also, the context menu for a tab has three “×” icons, one after another, for each of the “Close Tab” menu items. This is not clarifying and is something the HIG says is not permitted.

Nikita Prokopov:

The original Windows 95 interface is _functional_. It has a function and it executes it very well. It works for you, without trying to be clever or sophisticated. Also, it follows system conventions, which also helps you, the user.

I’m not sure whom the bottom interface [from Windows 11] helps. It’s a puzzle, an art object, but it doesn’t work for you. It’s not here to make your life easier.

As someone who uses a Windows computer for my day job, I can confidently say this allergy to contrast affects both platforms alike, and Prokopov’s comparison offers just one example. Why this trend persists, I have no idea. I find it uncomfortable to look at over long periods of work — the kind of time I imagine those who build these operating systems also spend with them.

Karina Zapata, CBC News:

It’s [14 this year] the highest number of pedestrian deaths on Calgary Police Service records, which date back to 1996. According to police, it’s a death toll only seen once before, in 2005.

[…]

Here in Canada, Toronto has significantly reduced its pedestrian deaths over the past decade.

According to the City of Toronto, it has seen 16 pedestrian deaths so far this year. While that number is slightly higher than Calgary’s, it’s a far cry from the 41 pedestrian deaths in 2018.

For the record, the reduction of pedestrian deaths in Toronto this year is not because a whole bunch of people went out and bought autonomous cars. I am not saying these technologies cannot help. But the ways in which Toronto — with a metro area four times as populous as Calgary’s — cut deaths so drastically are entirely boring: enforcing existing rules and better planning of lane closures, according to this article. Neither of those things will get a breathless New York Times op-ed, but they are doable in any city tomorrow.

CBC News:

Edmonton police are testing out artificial intelligence facial-recognition bodycams without approval from Alberta’s information and privacy commissioner Diane McLeod.

Police say they don’t legally require what they describe as “feedback” from the commissioner during the trial or proof of concept stage.

But in an interview Wednesday on CBC’s Edmonton AM, McLeod said they do.

Liam Newbigging, Edmonton Journal:

Police at the Tuesday event to unveil the pilot said the assessment was sent to Alberta’s privacy commissioner Diane McLeod to ensure a “proof of concept test” for body-worn video cameras with new facial recognition technology is fair and respects people’s privacy.

But the office of the information and privacy commissioner told Postmedia in an email that the assessment didn’t reach it until Tuesday afternoon and that it’s possible that the review of the assessment might not be finished until the police pilot project is already over.

This looks shady, and I do not understand the rush. Rick Smith — the CEO of Axon, which markets body cameras and Tasers — points out the company has not supported facial recognition in its cameras since it rejected it on privacy grounds in 2019. Surely, Edmonton Police could have waited a couple of months for the privacy commissioner’s office to examine the plan for compliance.

Smith (emphasis mine):

The reality is that facial recognition is already here. It unlocks our phones, organizes our photos, and scans for threats in airports and stadiums. The question is not whether public safety will encounter the technology—it is how to ensure it delivers better community safety while minimizing mistakes that could undermine trust or overuse that encroaches on privacy unnecessarily. For Axon, utility and responsibility must move in lockstep: solutions must be accurate enough to meaningfully help public safety, and constrained enough to avoid misuse.

Those three examples are not at all similar to each other; only one of them is similar to Axon’s body cameras, and I do not mean that as a compliment.

We opt into using facial recognition to unlock our phones, and the facial recognition technology organizing our photo libraries is limited to saved media. The use of facial recognition in stadiums and airports is the closest thing to Axon’s technology, in that it is used specifically for security screening.

This is a disconcerting step toward a more surveilled public space. It is not like the Edmonton Police are a particularly trusted institution. Between 2009 and 2016 (PDF), roughly 90% of people in Edmonton strongly agreed or somewhat agreed with the statement “I have a lot of confidence in the EPS [Edmonton Police Service]”. This year, that number has dropped to around 54% (PDF) — though the newer survey also allows for a “neither confident nor unconfident” response, which 22% of people chose. Among Indigenous, 2SLGBTQI+, and unhoused populations, the level of distrust in the EPS rises dramatically.

Public trust does not reflect the reality of crime in Edmonton, which has declined somewhat over the same period even as the city grew by half a million people. However, institutional trust is a requirement for such an invasive practice. A good step toward earning that trust would have been to ensure the trial had clearance from the privacy commissioner’s office before it began.