Sean Hollister, the Verge:

I read a lot of my bedtime news via Google Discover, aka “swipe right on your Samsung Galaxy or Google Pixel homescreen until you see a news feed appear,” and that’s where these new AI headlines are beginning to show up.

[…]

But in the seeming attempt to boil down every story to four words or less, Google’s new headline experiment is attaching plenty of misleading and inane headlines to journalists’ work, and with little disclosure that Google’s AI is rewriting them.

Rewriting headlines may be new to Google Discover, but it is not new to Google. In fact, some research indicates page titles and descriptions are automatically rewritten more often than not in search results, and I am of two minds about this practice. It robs publishers and website owners of agency over how they present themselves through an important source of referral traffic. Google’s automatic rewrites are — as I have experienced in search results and as documented by Hollister in Discover — sometimes wrong, and have the effect of putting words in authors’ mouths.

Still, the titles and descriptions supplied by webpages are sometimes inaccurate, too — often deliberately, since writing clickbait headlines is common practice in search engine optimization circles. For search results, Google tends to generate headlines that are less clickbait-y than those in the original publication. However, Hollister shows examples from Discover where Google’s version is entirely misleading. And Google is not the only company doing automatic clickbait nonsense.

Emanuel Maiberg, 404 Media:

Instagram is generating headlines for users’ Instagram posts without their knowledge, seemingly in an attempt to get those posts to rank higher in Google Search results.

[…]

Google told me that it is not generating the headlines, and that it’s pulling the text directly from Instagram. Meta acknowledged my request for comment but did not respond in time for publication. I’ll update this story if I hear back.

Meta’s Andy Stone, once again not on Threads but instead on Bluesky, quoted Joseph Cox’s link to the story, writing:

Reports the outlet that definitely does not, ever, write clickbait-y, SEO-optimized headlines

This, obviously, does not meaningfully challenge Maiberg’s reporting, as it is Instagram generating these page titles specifically for Google whether users like it or not. This is just distracting nonsense. I wonder if being a dishonest asshole is in the job description for Meta’s communications department.

Apple, in the 2020 edition of its Human Interface Guidelines:

Sometimes, icons can be used to help people recognize menu items—not menus—and associate them with content. For example, Safari uses the icons displayed by some webpages (known as favicons) to produce a visual connection between the webpage and the menu item for that webpage.

Minimize the use of icons. Use icons in menus only when they add significant value. A menu that includes too many icons may appear cluttered and be difficult to read.

Apple, in the latest version of its Human Interface Guidelines:

Represent menu item actions with familiar icons. Icons help people recognize common actions throughout your app. Use the same icons as the system to represent actions such as Copy, Share, and Delete, wherever they appear. […]

Jim Nielsen:

It’s extra noise to me. It’s not that I think menu items should never have icons. I think they can be incredibly useful (more on that below). It’s more that I don’t like the idea of “give each menu item an icon” being the default approach.

This posture lends itself to a practice where designers have an attitude of “I need an icon to fill up this space” instead of an attitude of “Does the addition of an icon here, and the cognitive load of parsing and understanding it, help or hurt how someone would use this menu system?”

Nielsen explores the different menus in Safari on MacOS Tahoe — I assume version 26.0 or 26.1. I am running 26.2, with a more complete set of icons in each menu, though not to the user’s benefit. For example, in Nielsen’s screenshot, the Safari menu has a gear icon beside the “Settings…” menu item, but not beside the “Settings for pxlnv.com…”, or whatever the current domain is. In 26.2, the latter has gained an icon — another gear. But it is a gear that is different from the one beside the “Settings…” menu item just above it, which makes sense, and also from the icon beside the “Website Settings…” menu item accessible from the menu in the address bar, which does not make sense because that item does exactly the same thing.

Also, the context menu for a tab has three “×” icons, one after another, for each of the “Close Tab” menu items. This is not clarifying and is something the HIG says is not permitted.

Nikita Prokopov:

The original Windows 95 interface is _functional_. It has a function and it executes it very well. It works for you, without trying to be clever or sophisticated. Also, it follows system conventions, which also helps you, the user.

I’m not sure whom the bottom interface [from Windows 11] helps. It’s a puzzle, an art object, but it doesn’t work for you. It’s not here to make your life easier.

As someone who uses a Windows computer for my day job, I can confidently say this allergy to contrast affects both platforms alike, and Prokopov’s comparison offers just one example. Why this trend persists, I have no idea. I find it uncomfortable to look at for long periods of work — the kind of time I imagine is comparable to that spent by the people who build these operating systems.

Karina Zapata, CBC News:

It’s [14 this year] the highest number of pedestrian deaths on Calgary Police Service records, which date back to 1996. According to police, it’s a death toll only seen once before, in 2005.

[…]

Here in Canada, Toronto has significantly reduced its pedestrian deaths over the past decade.

According to the City of Toronto, it has seen 16 pedestrian deaths so far this year. While that number is slightly higher than Calgary’s, it’s a far cry from the 41 pedestrian deaths in 2018.

For the record, the reduction of pedestrian deaths in Toronto this year is not because a whole bunch of people went out and bought autonomous cars. I am not saying these technologies cannot help. But the ways in which Toronto — with a metro area four times as populous as Calgary’s — cut deaths so drastically are entirely boring: according to this article, enforcing existing rules and planning lane closures better. Neither of those things will get a breathless New York Times op-ed, but they are doable in any city tomorrow.

CBC News:

Edmonton police are testing out artificial intelligence facial-recognition bodycams without approval from Alberta’s information and privacy commissioner Diane McLeod.

Police say they don’t legally require what they describe as “feedback” from the commissioner during the trial or proof of concept stage.

But in an interview Wednesday on CBC’s Edmonton AM, McLeod said they do.

Liam Newbigging, Edmonton Journal:

Police at the Tuesday event to unveil the pilot said the assessment was sent to Alberta’s privacy commissioner Diane McLeod to ensure a “proof of concept test” for body-worn video cameras with new facial recognition technology is fair and respects people’s privacy.

But the office of the information and privacy commissioner told Postmedia in an email that the assessment didn’t reach it until Tuesday afternoon and that it’s possible that the review of the assessment might not be finished until the police pilot project is already over.

This looks shady, and I do not understand the rush. Rick Smith — the CEO of Axon, which markets body cameras and Tasers — points out the company has not supported facial recognition in its cameras since it rejected it on privacy grounds in 2019. Surely, Edmonton Police could have waited a couple of months for the privacy commissioner’s office to examine the plan for compliance.

Smith (emphasis mine):

The reality is that facial recognition is already here. It unlocks our phones, organizes our photos, and scans for threats in airports and stadiums. The question is not whether public safety will encounter the technology—it is how to ensure it delivers better community safety while minimizing mistakes that could undermine trust or overuse that encroaches on privacy unnecessarily. For Axon, utility and responsibility must move in lockstep: solutions must be accurate enough to meaningfully help public safety, and constrained enough to avoid misuse.

Those three examples are not at all similar to each other; only one of them is similar to Axon’s body cameras, and I do not mean that as a compliment.

We opt into using facial recognition to unlock our phones, and the facial recognition technology organizing our photo libraries is limited to saved media. The use of facial recognition in stadiums and airports is the closest thing to Axon’s technology, in that it is used specifically for security screening.

This is a disconcerting step toward a more surveilled public space. It is not like the Edmonton Police are a particularly trusted institution. Between 2009 and 2016 (PDF), roughly 90% of people in Edmonton strongly agreed or somewhat agreed with the statement “I have a lot of confidence in the EPS [Edmonton Police Service]”. This year, that number has dropped to around 54% (PDF) — though the newer survey also allows for a “neither confident nor unconfident” response, which 22% of people chose. Among Indigenous, 2SLGBTQI+, and unhoused populations, the level of distrust in the EPS rises dramatically.

Public trust is not reflective of the reality of crime in Edmonton, which has declined somewhat in the same time period even as the city has grown by half a million people. However, institutional trust is a requirement for such an invasive practice. A good step toward gaining that trust would have been ensuring the plan had clearance from the privacy commissioner’s office before beginning a trial.

Do you want to block ads and trackers across all apps on your iPhone, iPad, or Mac — not just in Safari?

Then download Magic Lasso Adblock — the ad blocker designed for you.

Magic Lasso: No ads, No trackers, No annoyances, No worries

The new App Ad Blocking feature in Magic Lasso Adblock v5.0 builds upon our powerful Safari and YouTube ad blocking, extending protection to:

  • News apps

  • Social media

  • Games

  • Other browsers like Chrome and Firefox

All ad blocking is done directly on your device, using a fast, efficient Swift-based architecture that follows our strict zero data collection policy.

With over 5,000 five star reviews, it’s simply the best ad blocker for your iPhone, iPad, and Mac.

And unlike some other ad blockers, Magic Lasso Adblock respects your privacy, doesn’t accept payment from advertisers, and is 100% supported by its community of users.

So, join over 350,000 users and download Magic Lasso Adblock today.

Bjarke Smith-Meyer, Politico:

The European Commission has lost access to its control panel for buying and tracking ads on Elon Musk’s X — after fining the social media platform €120 million for violating EU transparency rules.

“Your ad account has been terminated,” X’s head of product, Nikita Bier, wrote on the platform early Sunday.

Bier accused the EU executive of trying to amplify its own social media post about the fine on X by trying “to take advantage of an exploit in our Ad Composer — to post a link that deceives users into thinking it’s a video and to artificially increase its reach.”

The first thing to know about Bier’s explanation is that it is not true. The E.U. did, in fact, post a video in its tweet. You can verify that for yourself by viewing the tweet on the web, and it is visible on X mirror websites. However, tapping on the video thumbnail from the iOS app does not begin playback; instead, it takes you to the news release. The Commission provided a statement to TechCrunch saying it has not paid for advertising since October 2023 — good! — and that it used the platform’s own tools for this post.

Also, why would this “artificially increase its reach”? I thought “links are not deboosted”, and that it was “[b]est to post a text/image/video summary of what’s at the link for people to view and then decide if they want to click the link”. It is so hard to keep track of the policies of a platform run by liars and frauds.

Which, by the way, is why the European Commission should not be doing anything on X in the first place. Bier has stumbled into doing them a favour. The world’s richest man does not need anyone else’s advertising money for his incendiary website, and the Commission should not be rewarding it with attention. Let it rot.

Update: I updated the title and text of this post after reading the statement made to TechCrunch noting the E.C. has not paid for advertising on X in two years.

Kurt Wagner, Bloomberg:

Meta Platforms Inc.’s Mark Zuckerberg is expected to meaningfully cut resources for building the so-called metaverse, an effort that he once framed as the future of the company and the reason for changing its name from Facebook Inc.

Executives are considering potential budget cuts as high as 30% for the metaverse group next year, which includes the virtual worlds product Meta Horizon Worlds and its Quest virtual reality unit, according to people familiar with the talks, who asked not to be named while discussing private company plans. Cuts that high would most likely include layoffs as early as January, according to the people, though a final decision has not yet been made.

Wagner’s reporting was independently confirmed by Mike Isaac, of the New York Times, and Meghan Bobrowsky and Georgia Wells, of the Wall Street Journal, albeit in slightly different ways. While Wagner wrote it “would most likely include layoffs as early as January”, Isaac apparently confirmed the budget cuts are likely large-scale personnel cuts, which makes sense:

The cuts could come as soon as next month and amount to 10 to 30 percent of employees in the Metaverse unit, which works on virtual reality headsets and a V.R.-based social network, the people said. The numbers of potential layoffs are still in flux, they said. Other parts of the Reality Labs division develop smart glasses, wristbands and other wearable devices. The total number of employees in Reality Labs could not be learned.

Alan Dye is just about to join Reality Labs. I wonder if this news comes as a fun surprise for him.

At Meta Connect a few months ago, the company spent basically the entire time on augmented reality glasses, but it swore up and down it was all related to its metaverse initiatives:

We’re hard at work advancing the state of the art in augmented and virtual reality, too, and where those technologies meet AI — that’s where you’ll find the metaverse.

The metaverse is whatever Meta needs it to be in order to justify its 2021 rebrand.

Our vision for the future is a world where anyone anywhere can imagine a character, a scene, or an entire world and create it from scratch. There’s still a lot of work to do, but we’re making progress. In fact, we’re not far off from being able to create compelling 3D content as easily as you can ask Meta AI a question today. And that stands to transform not just the imagery and videos we see on platforms like Instagram and Facebook, but also the possibilities of VR and AR, too.

You know, whenever I am unwinding and chatting with friends after a long day at work, I always get this sudden urge to create compelling 3D content.

Apple:

Apple today announced that Jennifer Newstead will become Apple’s general counsel on March 1, 2026, following a transition of duties from Kate Adams, who has served as Apple’s general counsel since 2017. She will join Apple as senior vice president in January, reporting to CEO Tim Cook and serving on Apple’s executive team.

In addition, Lisa Jackson, vice president for Environment, Policy, and Social Initiatives, will retire in late January 2026. The Government Affairs organization will transition to Adams, who will oversee the team until her retirement late next year, after which it will be led by Newstead. Newstead’s title will become senior vice president, General Counsel and Government Affairs, reflecting the combining of the two organizations. The Environment and Social Initiatives teams will report to Apple chief operating officer Sabih Khan.

What will tomorrow bring, I wonder?

Newstead has spent the past year working closely with Joel Kaplan and fighting the FTC’s case against Meta — successfully, I should add. Before that, she was a Trump appointee at the U.S. State Department. Well positioned, then, to fight the U.S. antitrust case against Apple under a second-term Trump government that has successfully solicited Apple’s money.

John Voorhees, MacStories:

Although Apple doesn’t say so in its press release, it’s pretty clear that a few things are playing out among its executive ranks. First, a large number of them are approaching retirement age, and Apple is transitioning and changing roles internally to account for those who are retiring. Second, the company is dealing with departures like Alan Dye’s and what appears to be the less-than-voluntary retirement of John Giannandrea. Finally, the company is reducing the number of Tim Cook’s direct reports, which is undoubtedly to simplify the transition to a new CEO in the relatively near future.

A careful reader will notice Apple’s newsroom page currently has press releases for these departures and, from earlier this week, John Giannandrea’s, but there is nothing about Alan Dye’s. In fact, even in the statement quoted by Bloomberg, Dye is not mentioned. In fairness, Adams, Giannandrea, and Jackson all have bios on Apple’s leadership page. Dye’s was removed between 2017 and 2018.

Starting to think Mark Gurman might be wrong about that FT report.

Jonathan Slotkin, a surgeon and venture capital investor, wrote for the New York Times about data released by Waymo indicating impressive safety improvements over human drivers through June 2025:

If Waymo’s results are indicative of the broader future of autonomous vehicles, we may be on the path to eliminating traffic deaths as a leading cause of mortality in the United States. While many see this as a tech story, I view it as a public health breakthrough.

[…]

There’s a public health imperative to quickly expand the adoption of autonomous vehicles. […]

We should be skeptical of all self-reported stats, but these figures look downright impressive.

Slotkin responsibly notes several caveats, though he neglects to mention the specific cities in which Waymo operates: Austin, Los Angeles, Phoenix, and San Francisco. These are warm cities with relatively low annual precipitation, almost none of which is ever snow. Slotkin’s enthusiasm for widespread adoption should be tempered somewhat by the narrow range of climates represented in the data. Still, Waymo’s data is compelling. These cars seem to crash less often than those driven by people in the same cities and, in particular, avoid causing serious injuries at an impressive rate.

It is therefore baffling to me that Waymo appears to be treating this as a cushion for experimentation.

Katherine Bindley, in a Wall Street Journal article published the very same day as Slotkin’s Times piece:

The training wheels are off. Like the rule-following nice guy who’s tired of being taken advantage of, Waymos are putting their own needs first. They’re bending traffic laws, getting impatient with pedestrians and embracing the idea that when it comes to city driving, politeness doesn’t pay: It’s every car for itself.

[…]

Waymo has been trying to make its cars “confidently assertive,” says Chris Ludwick, a senior director of product management with Waymo, which is owned by Google parent Alphabet. “That was really necessary for us to actually scale this up in San Francisco, especially because of how busy it gets.”

A couple years ago, Tesla’s erroneously named “Full Self-Driving” feature began cruising through crosswalks if it judged it could pass a crossing pedestrian in time, and I wrote:

Advocates of autonomous vehicles often say increased safety is one of its biggest advantages over human drivers. Compliance with the law may not be the most accurate proxy for what constitutes safe driving, but not to a disqualifying extent. Right now, it is the best framework we have, and autonomous vehicles should follow the law. That should not be a controversial statement.

I stand by that. A likely reason for Waymo’s impressive data is that its cars behave with caution and deference. Substituting that with “confidently assertive” driving is a move in entirely the wrong direction. It should not roll through stop signs, even if its systems understand nobody is around. It should not mess up the order of an all-way stop intersection. I have problems with the way traffic laws are written, but it is not up to one company in California to develop a proprietary interpretation. Just follow the law.

Slotkin:

This is not a call to replace every vehicle tomorrow. For one thing, self-driving technology is still expensive. Each car’s equipment costs $100,000 beyond the base price, and Waymo doesn’t yet sell cars for personal use. Even once that changes, many Americans love driving; some will resist any change that seems to alter that freedom.

[…]

There is likely to be some initial public trepidation. We do not need everyone to use self-driving cars to realize profound safety gains, however. If 30 percent of cars were fully automated, it might prevent 40 percent of crashes, as autonomous vehicles both avoid causing crashes and respond better when human drivers err. Insurance markets will accelerate this transition, as premiums start to favor autonomous vehicles.

Slotkin is entirely correct in writing that “Americans love driving” — the U.S. National Household Travel Survey, last conducted in 2022, found 90.5% of commuters said they primarily used a car of some kind (table 7-2, page 50). 4.1% said they used public transit, 2.9% said they walked, and just 2.5% said they chose another mode of transportation, a category in which taxicabs are grouped along with bikes and motorcycles. Those figures are about the same as in 2017, though with an unfortunate decline in the number of transit commuters. Commuting is not the only reason for travelling, of course, but this suggests to me that even if every taxicab ride were in an autonomous Waymo, there would still be a massive gap to reach the 30% adoption rate Slotkin wants. And, if insurance companies begin incentivizing autonomous vehicles, the reward will really go to rich people who can afford to buy a new car.

Any argument about road safety has to be more comprehensive than what Slotkin is presenting in this article. Regardless of how impressive Waymo’s stats are, autonomous driving is a vision of the future that is an individualized solution to a systemic problem. I have no specialized knowledge in this area, but I am fascinated by it. I read about this stuff obsessively. The things I want to see are things everyone can benefit from: improvements to street design that encourage drivers to travel at lower speeds, wider sidewalks making walking more comfortable, and generous wheeling infrastructure for bicycles, wheelchairs, and scooters. We can encourage the adoption of technological solutions, too; if this data holds up, they would seem welcome. But we can do so much better for everyone, and on a more predictable timeline.

This is, as Slotkin writes, a public health matter. Where I live, record numbers of people are dying, in part because more people than ever are driving bigger and heavier vehicles with taller fronts while they are distracted. Many of those vehicles will still be on the road in twenty years’ time, even if we accelerate the adoption pace of more autonomous vehicles. We do not need to wait for a headline-friendly technological upgrade. There are boring things cities can start doing tomorrow that would save lives.

Mark Gurman, Bloomberg:

Meta Platforms Inc. has poached Apple Inc.’s most prominent design executive in a major coup that underscores a push by the social networking giant into AI-equipped consumer devices.

The company is hiring Alan Dye, who has served as the head of Apple’s user interface design team since 2015, according to people with knowledge of the matter. Apple is replacing Dye with longtime designer Stephen Lemay, according to the people, who asked not to be identified because the personnel changes haven’t been announced.

Big week for changes in Apple leadership.

I am sure more will trickle out about this, but one thing notable to me is that Lemay has been a software designer for over 25 years at Apple. Dye, on the other hand, came from marketing and print design. I do not want to put too much weight on that — someone can be a sufficiently talented multidisciplinary designer — but I am curious to see what Lemay might do in a more senior role.

Admittedly I also have some (perhaps morbid) curiosity about what Dye will do at Meta.

One more note from Gurman’s report:

Dye had taken on a more significant role at Apple after Ive left, helping define how the company’s latest operating systems, apps and devices look and feel. The executive informed Apple this week that he’d decided to leave, though top management had already been bracing for his departure, the people said. Dye will join Meta as chief design officer on Dec. 31.

Let me get this straight: Dye personally launches an overhaul of Apple’s entire visual interface language, then leaves. Is that a good sign for its reception, either internally or externally?

Benj Edwards, Ars Technica:

Microsoft has lowered sales growth targets for its AI agent products after many salespeople missed their quotas in the fiscal year ending in June, according to a report Wednesday from The Information. The adjustment is reportedly unusual for Microsoft, and it comes after the company missed a number of ambitious sales goals for its AI offerings.

Based on Edwards’ summary — I still have no interest in paying for the Information — it sounds like this mostly affects sales of A.I. “agents”, a riskier technology proposition for businesses. This sounds to me like more concrete evidence of a plateau in corporate interest than the surveys reported on by the Economist.

Todd Vaziri:

As far as I can tell, Paul Haine was the first to notice something weird going on with HBO Max’ presentation. In one of season one’s most memorable moments, Roger Sterling barfs in front of clients after climbing many flights of stairs. As a surprise to Paul, you can clearly see the pretend puke hose (that is ultimately strapped to the back side of John Slattery’s face) in the background, along with two techs who are modulating the flow. Yeah, you’re not supposed to see that.

It appears as though this represents the original photography, unaltered before digital visual effects got involved. Somehow, this episode (along with many others) do not include all the digital visual effects that were in the original broadcasts and home video releases. It’s a bizarro mistake for Lionsgate and HBO Max to make and not discover until after the show was streaming to customers.

Eric Vilas-Boas, Vulture:

How did this happen? Apparently, this wasn’t actually HBO Max’s fault — the streamer received incorrect files from Lionsgate Television, a source familiar with the exchange tells Vulture. Lionsgate is now in the process of getting HBO Max the correct files, and the episodes will be updated as soon as possible.

It just feels clumsy and silly for Lionsgate to supply the wrong files in the first place, and for nobody at HBO to verify they were the correct versions. An amateur mistake, frankly, for an ostensibly premium service costing U.S. $11–$23 per month. If I were king for a day, it would be illegal to sell or stream a remastered version of something — a show, an album, whatever — without the original being available alongside it.

Apple:

Apple today announced John Giannandrea, Apple’s senior vice president for Machine Learning and AI Strategy, is stepping down from his position and will serve as an advisor to the company before retiring in the spring of 2026. Apple also announced that renowned AI researcher Amar Subramanya has joined Apple as vice president of AI, reporting to Craig Federighi. Subramanya will be leading critical areas, including Apple Foundation Models, ML research, and AI Safety and Evaluation. The balance of Giannandrea’s organization will shift to Sabih Khan and Eddy Cue to align closer with similar organizations.

When Apple hired Giannandrea from Google in 2018, the New York Times called it a “major coup”, given that Siri was “less effective than its counterparts at Google and Amazon”. The world has changed a lot since then, though: Siri is now also worse than a bunch of A.I. products. Of course, Giannandrea’s role at Apple was not limited to Siri. He spent time on the Project Titan autonomous car, which was cancelled early last year, before moving to generative A.I. projects. The first results of that effort were shown at WWDC last year; the most impressive features have yet to ship.

I feel embarrassed and dumb for hoping Giannandrea would help shake the company out of its bizarre Siri stupor. Alas, he is now on the Graceful Executive Exit Express, where he gets to spend a few more months at Apple in a kind of transitional capacity — you know the drill. Maybe Subramanya will help move the needle. Maybe this ex-Googler will make it so. Maybe I, Charlie Brown, will get to kick that football.

The Economist:

On November 20th American statisticians released the results of a survey. Buried in the data is a trend with implications for trillions of dollars of spending. Researchers at the Census Bureau ask firms if they have used artificial intelligence “in producing goods and services” in the past two weeks. Recently, we estimate, the employment-weighted share of Americans using AI at work has fallen by a percentage point, and now sits at 11% (see chart 1). Adoption has fallen sharply at the largest businesses, those employing over 250 people. Three years into the generative-AI wave, demand for the technology looks surprisingly flimsy.

[…]

Even unofficial surveys point to stagnating corporate adoption. Jon Hartley of Stanford University and colleagues found that in September 37% of Americans used generative AI at work, down from 46% in June. A tracker by Alex Bick of the Federal Reserve Bank of St Louis and colleagues revealed that, in August 2024, 12.1% of working-age adults used generative AI every day at work. A year later 12.6% did. Ramp, a fintech firm, finds that in early 2025 AI use soared at American firms to 40%, before levelling off. The growth in adoption really does seem to be slowing.

I am skeptical of the metrics used by the Economist to produce this summary, in part because they are all over the place, and also because they are mostly surveys. I am not sure people always know they are using a generative A.I. product, especially when those features are increasingly just part of the modern office software stack.

While the Economist has an unfortunate allergy to linking to its sources, I wanted to track them down because a fuller context is sometimes more revealing. I believe the U.S. Census data is the Business Trends and Outlook Survey, though I am not certain because the Economist’s charts are just plain, non-interactive images. In any case, it is the Economist’s own estimate of falling — not stalling — adoption by workers, not an estimate produced by the Census Bureau, which is curious given two of its other sources indicate more of a plateau instead of a decline.

The Hartley, et al. survey is available here and contains some fascinating results other than the specific figures highlighted by the Economist — in particular, that the construction industry has the fourth-highest adoption of generative A.I., that Gemini is shown in Figure 9 as more popular than ChatGPT even though the text on page 7 indicates the opposite, and that the word “Microsoft” does not appear once in the entire document. I have some admittedly uninformed and amateur questions about its validity. At any rate, this is the only source the Economist cites which indicates a decline.

The data point attributed to the tracker operated by the Federal Reserve Bank of St. Louis is curious. The Economist notes “in August 2024, 12.1% of working-age adults used generative A.I. every day at work. A year later 12.6% did”, but I am looking at the dashboard right now, and it says the share using generative A.I. daily at work is 13.8%, not 12.6%. In the same time period, the share of people using it “at least once last week” jumped from 36.1% to 46.9%. I have no idea where that 12.6% number came from.

Finally, Ramp’s data is easy enough to find. Again, I have to wonder about the Economist’s selective presentation. If you switch the chart from an overall view to a sector-based view, you can see adoption of paid subscriptions has more than doubled in many industries compared to October last year. This is true even in “accommodation and food services”, where I have to imagine use cases are few and far between.

Tracking down the actual sources of the Economist’s data has left me skeptical of the premise of this article. However, plateauing interest — at least for now — makes sense to me on a gut level. There is a ceiling to the work one can entrust to interns or entry-level employees, and a similar ceiling applies to many of today’s A.I. tools. There are also sector-level limits. Consider Ramp’s data showing high adoption in the tech and finance industries, with considerably less in sectors like healthcare and food services. (Curiously, Ramp says only 29% of the U.S. construction industry has a subscription to generative A.I. products, while Hartley, et al. says over 40% of the construction industry is using it.)

I commend any attempt to figure out how useful generative A.I. is in the real world. One of the problems with this industry right now is that its biggest purveyors are not public companies and, therefore, have fewer disclosure requirements. Like any company, they are incentivized to inflate their importance, but we have little understanding of how much they are exaggerating. If you want to hear some corporate gibberish, OpenAI interviewed executives at companies like Philips and Scania about their use of ChatGPT, but I do not know what I gleaned from either interview — something about experimentation and vague stuff about people being excited to use it, I suppose. It is not very compelling to me. I am not in the C-suite, though.

The biggest public A.I. firm is arguably Microsoft. It has rolled out Copilot to Windows and Office users around the world. Again, however, its press releases leave much to be desired. Levi Strauss employees, Microsoft says, “report the devices and operating system have led to significant improvements in speed, reliability and data handling, with features like the Copilot key helping reduce the time employees spend searching and free up more time for creating”. Sure. In another case study, Microsoft and Pantone brag about the integration of a colour palette generator that you can use with words instead of your eyes.

Microsoft has every incentive to pretend Copilot is a revolutionary technology. For people actually doing the work, however, its ever-nagging presence might be one of many nuisances getting in the way of the job that person actually knows how to do. A few months ago, the company replaced the familiar Office portal with a Copilot prompt box. It is still little more than a thing I need to bypass to get to my work.

All the stats and apparent enthusiasm about A.I. in the workplace are, as far as I can tell, a giant mess. A problem with this technology is that the ways in which it is revolutionary are often not very useful, its practical application in a work context is a mixed bag that depends on industry and role, and its hype encourages otherwise respectable organizations to suggest their proximity to its promised future.

The Economist being what it is, much of this article revolves around the insufficiently realized efficiency and productivity gains, and that is certainly something for business-minded people to think about. But there are more fundamental issues with generative A.I. to struggle with. It is a technology built on a shaky foundation. It shrinks the already-scant field of entry-level jobs. Its results are unpredictable and can validate harm. The list goes on, yet it is being loudly inserted into our SaaS-dominated world as a top-down mandate.

It turns out A.I. is not magic dust you can sprinkle on a workforce to double their productivity. CEOs might be thrilled by having all their email summarized, but the rest of us do not need that. We need things like better balance of work and real life, good benefits, and adequate compensation. Those are things a team leader cannot buy with a $25-per-month-per-seat ChatGPT business license.

Tyler Hall:

Maybe it’s because my eyes are getting old or maybe it’s because the contrast between windows on macOS keeps getting worse. Either way, I built a tiny Mac app last night that draws a border around the active window. I named it “Alan”.

A good, cheeky name. The results are not what I would call beautiful, but that is not the point, is it? It works well. I wish an app that draws a big border around the currently active window did not feel necessary. That should be something made sufficiently obvious by the system.

Unfortunately, this is a problem plaguing the latest versions of MacOS and Windows alike, which is baffling to me. The bar for what constitutes acceptable user interface design seems to have fallen low enough that it is tripping everyone at the two major desktop operating system vendors.

Hank Green was not getting a lot of traction on a promotional post on Threads about a sale on his store. He got just over thirty likes, which does not sound awful, until you learn that was over the span of seven hours and across Green’s following of 806,000 accounts on Threads.

So he tried replying to rage bait with basically the same post, and that was far more successful. But, also, it has some pretty crappy implications:

That’s the signal that Threads is taking from this: Threads is like oh, there’s a discussion going on.

It’s 2025! Meta knows that “lots of discussion” is not a surrogate for “good things happening”!

I assume the home feed ranking systems are similar for Threads and Instagram — though they might not be — and I cannot tell you how many times my feed is packed with posts from many days to a week prior. So many businesses I frequent use it as a promotional tool for time-bound things I learn about only afterward. The same thing is true of Stories, since they are sorted based on how frequently you interact with an account.

Everyone is allowed one conspiracy theory, right? Mine is that a primary reason Meta is hostile to reverse-chronological feeds is that ranked feeds effectively require businesses to buy advertising. I have no proof to support this, but it seems entirely plausible.

You have seen Moraine Lake. Maybe it was on a postcard or in a travel brochure, or it was on Reddit, or in Windows Vista, or as part of a “Best of California” demo on Apple’s website. Perhaps you were doing laundry in Lucerne. But I am sure you have seen it somewhere.

Moraine Lake is not in California — or Switzerland, for that matter. It is right here in Alberta, between Banff and Lake Louise, and I have been lucky enough to visit many times. One time I was particularly lucky, in a way I only knew in hindsight. I am not sure the confluence of events occurring in October 2019 is likely to be repeated for me.

In 2019, the road up to the lake would be open to the public from May until about mid-October, though the closing day would depend on when it was safe to travel. This is one reason why so many pictures of it have only the faintest hint of snow capping the mountains behind — it is only really accessible in summer.

I am not sure why we decided to head up to Lake Louise and Moraine Lake that Saturday. Perhaps it was just an excuse to get out of the house. It was just a few days before the road was shut for the season.

We visited Lake Louise first and it was, you know, just fine. Then we headed to Moraine.

[Photo: Moraine Lake, Alberta, frozen, with chunks of ice and rocks on its surface.]

I posted a higher-quality version of this on my Glass profile.

Walking from the car to the lakeshore, we could see its surface was that familiar blue-turquoise, but it was entirely frozen. I took a few images from the shore. Then we realized we could just walk on it, as did the handful of other people who were there. This is one of several photos I took from the surface of the lake, the glassy ice reflecting that famous mountain range in the background.

I am not sure I would be able to capture a similar image today. Banff and Lake Louise have received more visitors than ever in recent years, to the extent private vehicles are no longer allowed to travel up to Moraine Lake. A shuttle bus is now required. The lake also does not reliably freeze at an accessible time and, when it does, it can be covered in snow or the water line may have receded. I am not arguing this is an impossible image to create going forward. I just do not think I am likely to see it this way again.

I am very glad I remembered to bring my camera.

Winston Cho, the Hollywood Reporter:

To rewind, authors and publishers have gained access to Slack messages between OpenAI’s employees discussing the erasure of the datasets, named “books 1 and books 2.” But the court held off on whether plaintiffs should get other communications that the company argued were protected by attorney-client privilege.

In a controversial decision that was appealed by OpenAI on Wednesday, U.S. District Judge Ona Wang found that OpenAI must hand over documents revealing the company’s motivations for deleting the datasets. OpenAI’s in-house legal team will be deposed.

Wang’s decision (PDF), to the extent I can read it as a layperson, examines OpenAI’s shifting story about why it erased the books 1 and books 2 data sets — apparently, the only time possible training materials were deleted.

I am not sure it has yet been proven OpenAI trained its models on pirated books. Anthropic settled a similar suit in September, and Meta and Apple are facing similar accusations. For practical purposes, however, it is trivial to demonstrate it used pirated data in general: if you have access to its Sora app, enter any prompt followed by the word “camrip”.

“What is a camrip?”, a strictly law-abiding person might ask. It is a label added to a movie pirated in the old-fashioned way: by pointing a video camera at the screen in a theatre. As a result, these videos have a distinctive look and sound which is reproduced perfectly by Sora. It is very difficult for me to see a way in which OpenAI could have trained this model to understand what a camrip is without feeding it a bunch of them, and I do not know of a legitimate source for such videos.