Todd Vaziri:

As far as I can tell, Paul Haine was the first to notice something weird going on with HBO Max’s presentation. In one of season one’s most memorable moments, Roger Sterling barfs in front of clients after climbing many flights of stairs. To Paul’s surprise, you can clearly see the pretend puke hose (that is ultimately strapped to the back side of John Slattery’s face) in the background, along with two techs who are modulating the flow. Yeah, you’re not supposed to see that.

It appears as though this represents the original photography, unaltered before digital visual effects got involved. Somehow, this episode (along with many others) does not include all the digital visual effects that were in the original broadcasts and home video releases. It’s a bizarro mistake for Lionsgate and HBO Max to make and not discover until after the show was streaming to customers.

Eric Vilas-Boas, Vulture:

How did this happen? Apparently, this wasn’t actually HBO Max’s fault — the streamer received incorrect files from Lionsgate Television, a source familiar with the exchange tells Vulture. Lionsgate is now in the process of getting HBO Max the correct files, and the episodes will be updated as soon as possible.

It just feels clumsy and silly for Lionsgate to supply the wrong files in the first place, and for nobody at HBO to verify the files were correct. An amateur mistake, frankly, for an ostensibly premium service costing U.S. $11–$23 per month. If I were king for a day, it would be illegal to sell or stream a remastered version of something — a show, an album, whatever — without the original being available alongside it.

Apple:

Apple today announced John Giannandrea, Apple’s senior vice president for Machine Learning and AI Strategy, is stepping down from his position and will serve as an advisor to the company before retiring in the spring of 2026. Apple also announced that renowned AI researcher Amar Subramanya has joined Apple as vice president of AI, reporting to Craig Federighi. Subramanya will be leading critical areas, including Apple Foundation Models, ML research, and AI Safety and Evaluation. The balance of Giannandrea’s organization will shift to Sabih Khan and Eddy Cue to align closer with similar organizations.

When Apple hired Giannandrea from Google in 2018, the New York Times called it a “major coup”, given that Siri was “less effective than its counterparts at Google and Amazon”. The world changed a lot in the past six-and-a-half years, though: Siri is now also worse than a bunch of A.I. products. Of course, Giannandrea’s role at Apple was not limited to Siri. He spent time on the Project Titan autonomous car, which was cancelled early last year, before moving to generative A.I. projects. The first results of that effort were shown at WWDC last year; the most impressive features have yet to ship.

I feel embarrassed and dumb for hoping Giannandrea would help shake the company out of its bizarre Siri stupor. Alas, he is now on the Graceful Executive Exit Express, where he gets to spend a few more months at Apple in a kind of transitional capacity — you know the drill. Maybe Subramanya will help move the needle. Maybe this ex-Googler will make it so. Maybe I, Charlie Brown, will get to kick that football.

The Economist:

On November 20th American statisticians released the results of a survey. Buried in the data is a trend with implications for trillions of dollars of spending. Researchers at the Census Bureau ask firms if they have used artificial intelligence “in producing goods and services” in the past two weeks. Recently, we estimate, the employment-weighted share of Americans using AI at work has fallen by a percentage point, and now sits at 11% (see chart 1). Adoption has fallen sharply at the largest businesses, those employing over 250 people. Three years into the generative-AI wave, demand for the technology looks surprisingly flimsy.

[…]

Even unofficial surveys point to stagnating corporate adoption. Jon Hartley of Stanford University and colleagues found that in September 37% of Americans used generative AI at work, down from 46% in June. A tracker by Alex Bick of the Federal Reserve Bank of St Louis and colleagues revealed that, in August 2024, 12.1% of working-age adults used generative AI every day at work. A year later 12.6% did. Ramp, a fintech firm, finds that in early 2025 AI use soared at American firms to 40%, before levelling off. The growth in adoption really does seem to be slowing.

I am skeptical of the metrics used by the Economist to produce this summary, in part because they are all over the place, and also because they are most often surveys. I am not sure people always know they are using a generative A.I. product, especially when those features are increasingly just part of the modern office software stack.

The Economist has an unfortunate allergy to linking to its sources, but I wanted to track them down because a fuller context is sometimes more revealing. I believe the U.S. Census data is the Business Trends and Outlook Survey, though I am not certain, because the Economist’s charts are just plain, non-interactive images. In any case, it is the Economist’s own estimate of falling — not stalling — adoption by workers, which is curious given two of its other sources indicate a plateau, not a decline.

The Hartley, et al. survey is available here and contains some fascinating results other than the specific figures highlighted by the Economist — in particular, that the construction industry has the fourth-highest adoption of generative A.I., that Gemini is shown in Figure 9 as more popular than ChatGPT even though the text on page 7 indicates the opposite, and that the word “Microsoft” does not appear once in the entire document. I have some admittedly uninformed and amateur questions about its validity. At any rate, this is the only source the Economist cites which indicates a decline.

The data point attributed to the tracker operated by the Federal Reserve Bank of St. Louis is curious. The Economist notes “in August 2024, 12.1% of working-age adults used generative A.I. every day at work. A year later 12.6% did”, but I am looking at the dashboard right now, and it says the share using generative A.I. daily at work is 13.8%, not 12.6%. In the same time period, the share of people using it “at least once last week” jumped from 36.1% to 46.9%. I have no idea where that 12.6% number came from.

Finally, Ramp’s data is easy enough to find. Again, I have to wonder about the Economist’s selective presentation. If you switch the chart from an overall view to a sector-based view, you can see adoption of paid subscriptions has more than doubled in many industries compared to October last year. This is true even in “accommodation and food services”, where I have to imagine use cases are few and far between.

Tracking down the actual sources of the Economist’s data has left me skeptical of the premise of this article. However, plateauing interest — at least for now — makes sense to me on a gut level. There is a ceiling to the work one can entrust to interns or entry-level employees, and that ceiling is approximately the same for many of today’s A.I. tools. There are also sector-level limits. Consider Ramp’s data showing high adoption in the tech and finance industries, with considerably less in sectors like healthcare and food services. (Curiously, Ramp says only 29% of the U.S. construction industry has a subscription to generative A.I. products, while Hartley, et al. say over 40% of the construction industry is using it.)

I commend any attempt to figure out how useful generative A.I. is in the real world. One of the problems with this industry right now is that its biggest purveyors are not public companies and, therefore, have fewer disclosure requirements. Like any company, they are incentivized to inflate their importance, but we have little understanding of how much they are exaggerating. If you want to hear some corporate gibberish, OpenAI interviewed executives at companies like Philips and Scania about their use of ChatGPT, but I do not know what I gleaned from either interview — something about experimentation and vague stuff about people being excited to use it, I suppose. It is not very compelling to me. I am not in the C-suite, though.

The biggest public A.I. firm is arguably Microsoft. It has rolled out Copilot to Windows and Office users around the world. Again, however, its press releases leave much to be desired. Levi Strauss employees, Microsoft says, “report the devices and operating system have led to significant improvements in speed, reliability and data handling, with features like the Copilot key helping reduce the time employees spend searching and free up more time for creating”. Sure. In another case study, Microsoft and Pantone brag about the integration of a colour palette generator that you can use with words instead of your eyes.

Microsoft has every incentive to pretend Copilot is a revolutionary technology. For people actually doing the work, however, its ever-nagging presence might be one of many nuisances getting in the way of the job that person actually knows how to do. A few months ago, the company replaced the familiar Office portal with a Copilot prompt box. It is still little more than a thing I need to bypass to get to my work.

All the stats and apparent enthusiasm about A.I. in the workplace are, as far as I can tell, a giant mess. A problem with this technology is that the ways in which it is revolutionary are often not very useful, its practical application in a work context is a mixed bag that depends on industry and role, and its hype encourages otherwise respectable organizations to suggest their proximity to its promised future.

The Economist being what it is, much of this article revolves around the insufficiently realized efficiency and productivity gains, and that is certainly something for business-minded people to think about. But there are more fundamental issues with generative A.I. to struggle with. It is a technology built on a shaky foundation. It shrinks the already-scant field of entry-level jobs. Its results are unpredictable and can validate harm. The list goes on, yet it is being loudly inserted into our SaaS-dominated world as a top-down mandate.

It turns out A.I. is not magic dust you can sprinkle on a workforce to double their productivity. CEOs might be thrilled by having all their email summarized, but the rest of us do not need that. We need things like better balance of work and real life, good benefits, and adequate compensation. Those are things a team leader cannot buy with a $25-per-month-per-seat ChatGPT business license.

Tyler Hall:

Maybe it’s because my eyes are getting old or maybe it’s because the contrast between windows on macOS keeps getting worse. Either way, I built a tiny Mac app last night that draws a border around the active window. I named it “Alan”.

A good, cheeky name. The results are not what I would call beautiful, but that is not the point, is it? It works well. I wish it did not feel understandable for there to be an app that draws a big border around the currently active window. That should be something made sufficiently obvious by the system.

Unfortunately, this is a problem plaguing the latest versions of macOS and Windows alike, which is baffling to me. The bar for what constitutes acceptable user interface design seems to have fallen low enough that it is tripping everyone at the two major desktop operating system vendors.

Hank Green was not getting a lot of traction on a promotional post on Threads about a sale on his store. He got just over thirty likes, which does not sound awful, until you learn that was over the span of seven hours and across Green’s following of 806,000 accounts on Threads.

So he tried replying to rage bait with basically the same post, and that was far more successful. But, also, it has some pretty crappy implications:

That’s the signal that Threads is taking from this: Threads is like oh, there’s a discussion going on.

It’s 2025! Meta knows that “lots of discussion” is not a surrogate for “good things happening”!

I assume the home feed ranking systems are similar for Threads and Instagram — though they might not be — and I cannot tell you how many times my feed is packed with posts from many days to a week prior. So many businesses I frequent use it as a promotional tool for time-bound things I learn about only afterward. The same thing is true of Stories, since they are sorted based on how frequently you interact with an account.

Everyone is allowed one conspiracy theory, right? Mine is that a primary reason Meta is hostile to reverse-chronological feeds is that their absence pushes businesses to buy advertising for reliable reach. I have no proof to support this, but it seems entirely plausible.

You have seen Moraine Lake. Maybe it was on a postcard or in a travel brochure, or it was on Reddit, or in Windows Vista, or as part of a “Best of California” demo on Apple’s website. Perhaps you were doing laundry in Lucerne. But I am sure you have seen it somewhere.

Moraine Lake is not in California — or Switzerland, for that matter. It is right here in Alberta, between Banff and Lake Louise, and I have been lucky enough to visit many times. One time I was particularly lucky, in a way I only knew in hindsight. I am not sure the confluence of events occurring in October 2019 is likely to be repeated for me.

In 2019, the road up to the lake would be open to the public from May until about mid-October, though the closing day would depend on when it was safe to travel. This is one reason why so many pictures of it have only the faintest hint of snow capping the mountains behind — it is only really accessible in summer.

I am not sure why we decided to head up to Lake Louise and Moraine Lake that Saturday. Perhaps it was just an excuse to get out of the house. It was just a few days before the road was shut for the season.

We visited Lake Louise first and it was, you know, just fine. Then we headed to Moraine.

I posted a higher-quality version of this on my Glass profile.
[Image: A photo of Moraine Lake, Alberta, frozen with chunks of ice and rocks on its surface.]

Walking from the car to the lakeshore, we could see its surface was that familiar blue-turquoise, but it was entirely frozen. I took a few images from the shore. Then we realized we could just walk on it, as did the handful of other people who were there. This is one of several photos I took from the surface of the lake, the glassy ice reflecting that famous mountain range in the background.

I am not sure I would be able to capture a similar image today. Banff and Lake Louise have received more visitors than ever in recent years, to the extent private vehicles are no longer allowed to travel up to Moraine Lake. A shuttle bus is now required. The lake also does not reliably freeze at an accessible time and, when it does, it can be covered in snow or the water line may have receded. I am not arguing this is an impossible image to create going forward. I just do not think I am likely to see it this way again.

I am very glad I remembered to bring my camera.

Winston Cho, the Hollywood Reporter:

To rewind, authors and publishers have gained access to Slack messages between OpenAI’s employees discussing the erasure of the datasets, named “books 1 and books 2.” But the court held off on whether plaintiffs should get other communications that the company argued were protected by attorney-client privilege.

In a controversial decision that was appealed by OpenAI on Wednesday, U.S. District Judge Ona Wang found that OpenAI must hand over documents revealing the company’s motivations for deleting the datasets. OpenAI’s in-house legal team will be deposed.

Wang’s decision (PDF), to the extent I can read it as a layperson, examines OpenAI’s shifting story about why it erased the books 1 and books 2 datasets — apparently, the only time possible training materials were deleted.

I am not sure it has yet been proven OpenAI trained its models on pirated books. Anthropic settled a similar suit in September, and Meta and Apple are facing similar accusations. For practical purposes, however, it is trivial to show it did use pirated data in general: if you have access to its Sora app, enter any prompt followed by the word “camrip”.

What is a camrip?, a strictly law-abiding person might ask. It is a label added to a movie pirated in the old-fashioned way: by pointing a video camera at the screen in a theatre. As a result, these videos have a distinctive look and sound which is reproduced perfectly by Sora. It is very difficult for me to see a way in which OpenAI could have trained this model to understand what a camrip is without feeding it a bunch of them, and I do not know of a legitimate source for such videos.

The Internet Archive released a WordPress plugin not too long ago:

Internet Archive Wayback Machine Link Fixer is a WordPress plugin designed to combat link rot—the gradual decay of web links as pages are moved, changed, or taken down. It automatically scans your post content — on save and across existing posts — to detect outbound links. For each one, it checks the Internet Archive’s Wayback Machine for an archived version and creates a snapshot if one isn’t available.

Via Michael Tsai:

The part where it replaces broken links with archive links is implemented in JavaScript. I like that it doesn’t modify the post content in your database. It seems safe to install the plug-in without worrying about it messing anything up. However, I had kind of hoped that it would fix the links as part of the PHP rendering process. Doing it in JavaScript means that the fixed links are not available in the actual HTML tags on the page. And the data that the JavaScript uses is stored in an invisible <div> under the attribute data-iawmlf-post-links, which makes the page fail validation.

I love the idea of this plugin, but I do not love this implementation. I think I understand why it works this way: for the nondestructive property mentioned by Tsai, and also to account for its dependence on a third-party service of varying reliability. I would love to see a demo of this plugin in action.
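To make the client-side approach Tsai describes a little more concrete, here is a minimal sketch of the kind of rewrite involved. To be clear, all function and parameter names here are illustrative, not the plugin’s actual API; the only firm ground is the Wayback Machine’s public snapshot URL format (a fourteen-digit timestamp followed by the original URL).

```javascript
// Hypothetical sketch of a client-side link fixer; names are
// illustrative, not the plugin's actual API.

// Build a Wayback Machine snapshot URL from an original URL and a
// YYYYMMDDhhmmss timestamp.
function toWaybackUrl(url, timestamp) {
  return `https://web.archive.org/web/${timestamp}/${url}`;
}

// Given a list of outbound hrefs and a map of original URL to archived
// timestamp (as the plugin might serialize into its data attribute),
// rewrite only the links with a known snapshot, leaving the rest alone.
function rewriteLinks(hrefs, snapshots) {
  return hrefs.map((href) =>
    href in snapshots ? toWaybackUrl(href, snapshots[href]) : href
  );
}
```

In the plugin itself, something like `rewriteLinks` would run after page load against the serialized link data, which is what keeps the database copy of the post untouched — and why the archived URLs never appear in the server-rendered HTML.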

Nicholas Hune-Brown, the Local:

Every media era gets the fabulists it deserves. If Stephen Glass, Jayson Blair and the other late 20th century fakers were looking for the prestige and power that came with journalism in that moment, then this generation’s internet scammers are scavenging in the wreckage of a degraded media environment. They’re taking advantage of an ecosystem uniquely susceptible to fraud—where publications with prestigious names publish rickety journalism under their brands, where fact-checkers have been axed and editors are overworked, where technology has made falsifying pitches and entire articles trivially easy, and where decades of devaluing journalism as simply more “content” have blurred the lines so much it can be difficult to remember where they were to begin with.

This is likely not the first story you have read about a freelancer managing to land bylines in prestigious publications thanks to dependency on A.I. tools, but it is one told very well.

Good tip from Jeff Johnson:

My business website has a number of “Download on the App Store” links for my App Store apps. Here’s an example of what that looks like:

[…]

The problem is that Live Text, “Select text in images to copy or take action,” is enabled by default on iOS devices (Settings → General → Language & Region), which can interfere with the contextual menu in Safari. Pressing down on the above link may select the text inside the image instead of selecting the link URL.

I love the Live Text feature, but it often conflicts with graphics like these. There is a good, simple, two-line CSS trick for web developers that should cover most situations. Also, if you rock a user stylesheet — and I think you should — it seems to work fine as a universal solution. Any issues I have found have been minor and not worth noting. I say give it a shot.

Update: Adding Johnson’s CSS to a user stylesheet mucks up the layout of Techmeme a little bit. You can exclude it by prefixing the selector with div:not(.ii) >, so the full rule reads div:not(.ii) > a:has(> img) { display: inline-block; }.

Quinn Nelson:

[…] at a moment when the Mac has roared back to the centre of Apple’s universe, the iPad feels closer than ever to fulfilling its original promise. Except it doesn’t, not really, because while the iPad has gained windowing and external display support, pro apps, all the trappings of a “real computer”, underneath it all, iPadOS is still a fundamentally mobile operating system with mobile constraints baked into its very DNA.

Meanwhile, the Mac is rumoured to be getting everything the iPad does best: touchscreens, OLED displays, thinner designs.

There are things I quibble with in Nelson’s video, including the above-quoted comparison to mere rumours about the Mac. The rest of the video is more compelling as it presents comparisons with the same or similar software on each platform in real-world head-to-head matches.

Via Federico Viticci, MacStories:

I’m so happy that Apple seems to be taking iPadOS more seriously than ever this year. But now I can’t help but wonder if the iPad’s problems run deeper than windowing when it comes to getting serious work done on it.

Apple’s post-iPhone platforms are only as good as Apple will allow them to be. I am not saying it needs to be possible to swap out Bluetooth drivers or monkey around with low-level code, but without more flexibility, platforms like the iPad and Vision Pro are destined to progress only at the rate Apple says is acceptable, and with the third-party apps it says are permissible. These are apparently the operating systems for the future of computers. They are not required to have similar limitations to the iPhone, but they do anyway. Those restrictions are holding back the potential of these platforms.

Marina Dunbar, the Guardian:

Many of the most influential personalities in the “Make America great again” (Maga) movement on X are based outside of the US, including Russia, Nigeria and India, a new transparency feature on the social media site has revealed.

The new tool, called “about this account”, became available on Friday to users of the Elon Musk-owned platform. It allows anyone to see where an account is located, when it joined the platform, how often its username has been changed, and how the X app was downloaded.

This is a similar approach to adding labels or notes to tweets containing misinformation in that it is adding more speech and context. It is more automatic, but the function and intent are comparable, which means Musk’s hobbyist P.R. team must be all worked up. But I checked, and none seem particularly bothered. Maybe they actually care about trust and safety now, or maybe they are lying hacks.

Mike Masnick, Techdirt:

For years, Matt Taibbi, Michael Shellenberger, and their allies have insisted that anyone working on these [trust and safety] problems was part of a “censorship industrial complex” designed to silence political speech. Politicians like Ted Cruz and Jim Jordan repeated these lies. They treated trust & safety work as a threat to democracy itself.

Then Musk rolled out one basic feature, and within hours proved exactly why trust & safety work existed in the first place.

Jason Koebler, 404 Media, has been covering the monetization of social media:

This has created an ecosystem of side hustlers trying to gain access to these programs and YouTube and Instagram creators teaching people how to gain access to them. It is possible to find these guide videos easily if you search for things like “monetized X account” on YouTube. Translating that phrase and searching in other languages (such as Hindi, Portuguese, Vietnamese, etc) will bring up guides in those languages. Within seconds, I was able to find a handful of YouTubers explaining in Hindi how to create monetized X accounts; other videos on the creators’ pages explain how to fill these accounts with AI-generated content. These guides also exist in English, and it is increasingly popular to sell guides to make “AI influencers,” and AI newsletters, Reels accounts, and TikTok accounts regardless of the country that you’re from.

[…]

Americans are being targeted because advertisers pay higher ad rates to reach American internet users, who are among the wealthiest in the world. In turn, social media companies pay more money if the people engaging with the content are American. This has created a system where it makes financial sense for people from the entire world to specifically target Americans with highly engaging, divisive content. It pays more.

The U.S. market is a larger audience, too. But those of us in rich countries outside the U.S. should not get too comfortable; I found plenty of guides similar to the ones shown by Koebler for targeting Australia, Canada, Germany, New Zealand, and more. Worrisome — especially if you, say, are somewhere with an electorate trying to drive the place you live off a cliff.

Update: Several X accounts purporting to be Albertans supporting separatism appear to be from outside Canada, including a “Concerned 🍁 Mum”, “Samantha”, “Canada the Illusion”, and this “Albertan” all from the United States, and a smaller account from Laos. I tried to check more, but X’s fragile servers are aggressively rate-limited.

I do not think people from outside a country are forbidden from offering an opinion on what is happening within it. I would be a pretty staggering hypocrite if I thought that. Nor do I think we should automatically assume people who are stoking hostile politics on social media are necessarily external or bots. It is more like a reflection of who we are now, and how easily that can be exploited.

Jonathan Weil, Wall Street Journal:

It seems like a marvel of financial engineering: Meta Platforms is building a $27 billion data center in Louisiana, financed with debt, and neither the data center nor the debt will be on its own balance sheet.

That outcome looks too good to be true, and it probably is.

The phrase “marvel of financial engineering” does not seem like a compliment. In addition to the evidence from Weil’s article, Meta is taking advantage of a tax exemption created by Louisiana’s state legislature. But Meta argues it is merely a user of this data centre.

Also, colour me skeptical this data centre will truly be “the size of Manhattan” before the bubble bursts, despite the disruption to life in the area.

Update: Paris Martineau points to Weil’s bio noting he was “the first reporter to challenge Enron’s accounting practices”.

Fred Vogelstein, Crazy Stupid Tech — which, again, is a compliment:

We’re not only in a bubble but one that is arguably the biggest technology mania any of us have ever witnessed. We’re even back reinventing time. Back in 1999 we talked about internet time, where every year in the new economy was like a dog year – equivalent to seven years in the old.

Now VCs, investors and executives are talking about AI dog years – let’s just call them mouse years – which is internet time divided by five? Or is it by 11? Or 12? Sure, things move way faster than they did a generation ago. But by that math one year today now equals 35 years in 1995. Really?

A sobering piece that, unfortunately, is somewhat undercut since it lacks a single mention of layoffs, jobs, employment, or any other indication that this bubble will wreck the lives of people far outside its immediate orbit. In fairness, few of the related articles linked at the bottom mention that, either. Articles in Stratechery, the Brookings Institution, and the New York Times want you to think a bubble is just a sign of building something new and wonderful. A Bloomberg newsletter mentions layoffs only in the context of changing odds in prediction markets — I chuckled — while M.G. Siegler notes all the people who are being laid off while new A.I. hires get multimillion-dollar employment packages. Maybe all the pain and suffering that is likely to result from the implosion of this massive sector is too obvious to mention for the MBA and finance types. I think it is worth stating, though, not least because it acknowledges other people are worth caring about at least as much as innovation and growth and all that stuff.

Rohan Grover and Josh Widera, Techdirt:

Taking data disaffection into consideration, digital privacy is a cultural issue – not an individual responsibility – and one that cannot be addressed with personal choice and consent. To be clear, comprehensive data privacy law and changing behavior are both important. But storytelling can also play a powerful role in shaping how people think and feel about the world around them.

The correct answer to corporate and government contempt for our privacy must be in legislation. A systemic problem is not solved by each of us individually fiddling with confusing settings. But we do not get to adequate laws by treating this as a lost argument.

Jennifer Rankin, the Guardian:

The European Commission has been accused of “a massive rollback” of the EU’s digital rules after announcing proposals to delay central parts of the Artificial Intelligence Act and water down its landmark data protection regulation.

If agreed, the changes would make it easier for tech firms to use personal data to train AI models without asking for consent, and try to end “cookie banner fatigue” by reducing the number of times internet users have to give their permission to being tracked on the internet.

If you are annoyed about cookie banners, get ready to have that dialled back — maybe, a bit. The proposed changes will allow users to set their cookie preference in their web browser. But media companies will be free to ignore those automatic signals and ask for your permission to set cookies anyway. Also, the circumstances under which consent is not required will be broadened, but websites will still need to ask before using cookies for targeted advertising. Oh, and consent is still required by laws elsewhere and, until policies are harmonized around the world, consent banners are here to stay. Even if everyone copies the proposed changes for the E.U., you will still see a lot of these banners if you spend a lot of time reading news.

I think relying on individual consent is ridiculous. If that is the best we can do, instead of outlawing creepy and privacy-hostile behaviour in its entirety, then a browser preference seems fine. It is too bad the Do Not Track standard, originally proposed by the U.S. FTC, was not mandatory for advertisers to follow, and that its replacement is not well supported either. Maybe this is the legislative push it needs.

My knee-jerk reaction to the weakening of A.I. regulation is that it is yet more evidence of a corporate-influenced race to the bottom. This is overly simplistic, however. It is true that A.I. companies in the U.S. love the country’s relatively lax regulatory environment, though it is apparently not lax enough. But the other country leading the charge on A.I. is heavily-regulated China which is, perhaps, a special case.

The E.U.’s proposal seems to be a compromise position for an industry that does not want to compromise. It just wants to ingest everything, explore what it can generate without constraint, and be completely insulated from the consequences. So I am skeptical these changes will move the needle on whether the E.U. can become an A.I. powerhouse any more than its current policies. That is not a knock against the E.U. specifically; all of the non-U.S. countries, including mine, are struggling to get their sweet piece of the trillion-dollar pie. I suspect the reason the money cannon has not been pointed at us has less to do with regulation, or culture, or geography. I bet it is more likely the same reason as why investment banking lives in places like New York and London and not, say, in small towns scattered across the Canadian Shield.

Leah Nylen, Bloomberg:

Google is certainly unlikely to be passive now that a judge has given it the green light to continue using the money derived from its monopolies to pay for the development and dominance of its AI tools.

[…]

The Acquired podcast hosts honed in on this point about complementarity in discussing why they believe Google will likely do better than its AI rivals in the long run. Only Google has the advantage of up-to-date information from its search monopoly and YouTube. It has massive computing resources from its main business and its cloud computing arm. It has the ability to personalize models thanks to its massive collection of information about users. And it has lots and lots of money.

There was a brief moment in early 2023 when some commentators were certain Google would face serious competition from the then-recent arrival of A.I. results in Bing. Those were quaint times. Microsoft still insists Bing is growing its market share, which might be true, but only barely. For most people, Google’s monopoly is fairly durable, and it will probably continue as it fights ChatGPT, specifically, not Copilot in Bing.

Casey Newton and Nilay Patel, the Verge, in July 2020:

It’s a combination of neutralizing a competitor and improving Facebook, Zuckerberg said in a reply. “There are network effects around social products and a finite number of different social mechanics to invent. Once someone wins at a specific mechanic, it’s difficult for others to supplant them without doing something different.”

[…]

Forty-five minutes later, Zuckerberg sent a carefully worded clarification to his earlier, looser remarks.

You have read these emails before, I am sure, but I think it is worth a reminder post-trial.

The latter email was written for the very circumstance of this thread being found. You have to imagine that, in the forty-five minute break after which Zuckerberg replied to himself to clarify he did not actually intend to write the illegal thing, he chatted with the CFO — to whom he was emailing his plans to do illegal things — and maybe some lawyers, and they advised him it might look bad if regulators ever saw it.

Barbara Ortutay, the Associated Press, earlier this week following the results of the trial:

During his April testimony, Zuckerberg pushed back against claims that Facebook bought Instagram to neutralize a threat. In his line of questioning, FTC attorney Daniel Matheson repeatedly brought up emails — many of them more than a decade old — written by Zuckerberg and his associates before and after the acquisition of Instagram.

While acknowledging the documents, Zuckerberg has often sought to downplay the contents, saying he wrote the emails early in the acquisition process and that the notes did not fully capture the scope of his interest in the company. But the case was not about the acquisitions of Instagram and WhatsApp more than a decade ago, which the FTC approved at the time, but about whether Meta holds a monopoly now. Prosecutors, Boasberg wrote in the ruling, could only win if they proved “current or imminent legal violation.”

To describe his in-trial response as “push[ing] back” and “downplay[ing]” is, I think, charitable. Zuckerberg acknowledged the company struggled to build competitive apps independently.

The FTC could have reviewed these frank and incriminating emails when it approved the acquisition in 2012. Yet, to repeat myself, it approved the acquisition anyhow. The United States has, since the mid-1970s, exercised pretty weak enforcement of its antitrust laws compared to the way it policed corporate size before. It allowed this kind of stuff to happen in the first place, where one goal of the acquisition was explicitly to eliminate competition. Whether Instagram would exist today as an independent company is a great hypothetical question, and the FTC could have laid the groundwork for answering it in 2012.

Freddie Harrison of Sketch:

Our latest update — Copenhagen — features a major redesign of Sketch’s UI. Redesigns like this don’t happen often. In fact, our last one was in 2020, when Apple launched macOS Big Sur.

Just like Big Sur, macOS Tahoe has brought about a whole new design language and — for teams like ours making pro tools — a whole new approach to consider for our UI.

This probably will not convert the kind of person who finds Liquid Glass revolting in its entirety, but I think this implementation is thoughtful and well-considered. Note, too, that Apple itself has not shipped any of its own Mac pro apps with Liquid Glass changes. The choices made by the Sketch team are instructive.

Jonathan Vanian, CNBC:

Meta won its high-profile antitrust case against the Federal Trade Commission, which had accused the company of holding a monopoly in social networking.

In a memorandum opinion released Tuesday, Judge James Boasberg of the U.S. District Court in Washington, D.C., said the FTC failed to prove its argument. The case, initially filed by the FTC five years ago, centered on Meta’s acquisitions of Instagram and WhatsApp.

“Whether or not Meta enjoyed monopoly power in the past, though, the agency must show that it continues to hold such power now,” Boasberg said in the filing. “The Court’s verdict today determines that the FTC has not done so. A judgment so stating shall issue this day.”

Briefly, I think the personal jabs at former FTC chair Lina Khan by Adam Kovacevich, CEO of the lobbying group Chamber of Progress, are worth addressing:

A decisive, but not remotely surprising, loss for one of Lina Khan’s most prominent anti-big tech cases.

[…]

Brutal loss for Khan.

Kovacevich has a real axe to grind. In his first X thread about the suit — originally filed under the first Trump administration — Kovacevich does not take such a personal tone. In fact, he never mentioned then-FTC chair Joseph Simons on X during Simons’ entire tenure. But he tweeted about Khan by name incessantly during and after her time running the Commission. Strange guy.

As for the case itself, there are two things I think are true: the FTC’s argument was improbable at best, and Boasberg’s decision (PDF) is kind of bananas. (In case that link disappears, I have also put it on Dropbox.) You can read it for yourself; it is not a particularly dense text. While the judge accepts loads of evidence from Meta’s side with little skepticism, he undermines the credibility of C. Scott Hemphill, an expert witness who offered testimony, on the basis that he advocated for this very investigation of Meta’s market power. It is pretty clear throughout he barely believes the FTC’s argument is valid. And, based on the way it was presented, I find it difficult to disagree.

What unlocked this opinion for me is that the FTC created a market definition in which few platforms other than classic Facebook and Instagram lived, which, almost by definition, meant that Meta monopolized the market. (What about WhatsApp? you might ask; the judge argues the FTC’s own definition of the market excludes WhatsApp from consideration.) Meta’s argument, though, is that the company no longer exists in that market at all. Its competitors are not Snapchat and MeWe; they are TikTok and YouTube. Meta is now fully an entertainment company whether users like it or not:

Nor is it clear that users want more friend posts. True, they report on surveys that they do. […] But their actions tell a different story.

What follows on page 33 is an almost entirely redacted paragraph, aside from the following lines:

[…] Instead, what users really seem to want is Reels. Meta measured the effect of Reels […] An equivalent experiment on Facebook found […]

My single ellipses are for readability but do not tell the story of how much text is redacted. Just imagine several lines of black bars in their place each time. The paragraph ends:

So whatever users might say they desire, what seems to draw them to Meta’s apps is not marginal posts from marginal friends, but unconnected videos picked just for them. Meta’s shift to the latter does not reveal monopoly power so much as a profit-maximizing corporation giving its customers what they want.

We have no idea if Meta’s experiments measured qualitative or quantitative data. I suspect it is the latter since that is what Meta focuses on; it has reported more time spent (PDF) in its apps thanks largely to Reels. What Meta has built with the results of these experiments is personalized television. Facebook and Instagram began as utilities, and they are now fully entertainment, thereby justifying their legal claim. And, yeah, absolutely — that is the choice Meta made. The judge writes of the “small fortune in dollars and resources” (page 54) it cost Meta to change strategy, spending “around $4 billion on Reels last year and is on track to spend about $4.5 billion this year” without accounting for its reduced ad load.

What the FTC unsuccessfully tried to argue is, more or less, that Meta could only have mounted this competitive defence because it purchased Instagram in 2012, even as it maintains a monopoly in the market segment the FTC created. Boasberg did not buy this argument in part because competition for users’ time is finite, and the data shows users would rather scroll through videos than briefly check friends’ updates and then go do something else (pages 64–65):

True enough, but TikTok has a social graph, too. It lets users follow people they know and has tried to make mapping those offline connections a bigger part of the app. It prompts users to import their list of Facebook and Instagram friends as well as their phone contacts […] TikTok has also added a Friends tab, which contains only posts created or reshared by accounts that the user follows and that follow the user back.

To be sure, TikTok’s social graph has not achieved great success. A TikTok executive estimated that fewer than 10% of users import their contacts. Meanwhile, users spend only about 1% of time on the app watching videos in the Friends tab.

Then again, these features are now also ancillary on Facebook and Instagram. […]

To be sure, TikTok is not used in remotely the same way as Facebook and Instagram were, but I will still use it as a retort to the FTC because Facebook and Instagram are now TikTok clones anyway. It is a bad argument, but it is more compelling than the one the FTC presented.