One last link to complete the good indie apps trifecta for today — frequent site sponsor Magic Lasso Adblock now has support for tvOS:

Magic Lasso Adblock for tvOS processes everything directly on your local Apple TV. That means ads are blocked without any data ever leaving your device — offering stronger privacy than VPN or DNS-based ad blockers that rely on external servers.

We also follow a strict zero data collection policy. We don’t collect, track, or store your viewing history, app activity, or any personal data — period.

This post is not part of a sponsorship deal. I have allowed Magic Lasso to sponsor my website several times because it is an app I actually use and like; it is the only ad blocker I run. I clearly think there is room for tasteful advertising and I do not block ads everywhere. But I also think there are good reasons for users to take control of the exhaustive amount of ads and trackers they — we — must endure.

It is frustrating to see paid streaming services double-dip by also inserting a heavy ad load within every show. It is almost like they are aching to re-create the worst parts of the cable television experience. I think it is perfectly fair for users to respond by mimicking DVRs that let you skip ads.

Among my friends, I am notoriously bad at keeping up to date with shows, but I am actually caught up on “The Pitt”. I watched the latest episode last week with ad blocking turned on and it was a wonderful experience. Magic Lasso Pro is $40 per year in Canada.

Skyscraper is a suite of tools created by Cameron Banga for Bluesky: feed monitoring for marketing people, a terminal client for MacOS — wild — and an iOS app. I do not have the same visceral dislike of the first-party Bluesky app as I do for, say, Instagram or Twitter/X, but Skyscraper on iOS feels like a nice change of pace.

A cool thing about open standards is how I can use Skyscraper on my iPhone, Aeronaut on my Mac, and the website anywhere I do not have access to those two native clients. As far as I know, there is no way to sync read position across different clients, but that is okay with me. I do not need to read everything.

Skyscraper has a generous free tier; using it with multiple accounts, part of a more expansive feature set, costs $4 per month or $25 per year in Canada.

Terry Godier introducing Current, his new RSS reader:

Each article has a velocity, a measure of how quickly it ages. Breaking news burns bright for three hours. A daily article stays relevant for eighteen. An essay lingers for three days. An evergreen tutorial might sit in your river for a week.

As items age, they dim. Eventually they’re gone, carried downstream. You don’t mark them as read. You don’t file them. They simply pass, the way water passes under a bridge.

I have been using Current for a couple of days now. I am a longtime NetNewsWire user, so I have needed to shake a little bias about how an RSS reader “should” work. And, ultimately, it is an RSS reader, so it is on some level a different presentation of familiar elements.

Sometimes, though, a modest rethinking is all that is needed for something to feel entirely different, and Current does. I quite like what Godier has come up with here. In the spirit of water metaphors, my main feed almost feels like an expansive pool of things to read, just floating there. No pressure. Some of the feeds I read are updated frequently; most less so. Godier’s design means the more rapid feeds do not overwhelm more careful personal essays. It all seems to work pretty well, I think.

A cool thing about open standards is that it means I can use Current on my phone and NetNewsWire on my Mac. Each feels right to me for its individual context. On my Mac, I might want to churn through a bunch of recent stories; on my phone, though, I might just want to find one great thing to read while I am taking a coffee break.

Current is a one-time $13 Canadian charge in the App Store, or $10 in the U.S., which feels like a throwback. Turns out it is still possible to sell apps without subscription pricing.

Marcin Wichary:

Half of my education in URLs as user interface came from Flickr in the late 2000s. Its URLs looked like this:

flickr.com/photos/mwichary/favorites

flickr.com/photos/mwichary/sets

flickr.com/photos/mwichary/sets/72177720330077904

flickr.com/photos/mwichary/54896695834

flickr.com/photos/mwichary/54896695834/in/set-72177720330077904

The people responsible for this URL scheme thought well enough ahead for the complexity and scope it could encapsulate, yet kept it remarkably simple.
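The scheme is compact enough to sketch as a handful of route patterns. Here is a minimal illustration in Python; the patterns mirror the URLs above, but the parameter names and matching code are my own, not Flickr's:

```python
import re

# Illustrative route patterns mirroring the Flickr URLs above.
# Parameter names are hypothetical, not Flickr's actual routing code.
ROUTES = [
    (r"^/photos/(?P<user>[^/]+)/favorites$", "favorites"),
    (r"^/photos/(?P<user>[^/]+)/sets$", "set list"),
    (r"^/photos/(?P<user>[^/]+)/sets/(?P<set_id>\d+)$", "single set"),
    (r"^/photos/(?P<user>[^/]+)/(?P<photo_id>\d+)$", "single photo"),
    (r"^/photos/(?P<user>[^/]+)/(?P<photo_id>\d+)/in/set-(?P<set_id>\d+)$",
     "photo in the context of a set"),
]

def describe(path):
    """Return what a path points to, plus its captured parts."""
    for pattern, label in ROUTES:
        match = re.match(pattern, path)
        if match:
            return label, match.groupdict()
    return None, {}

label, parts = describe("/photos/mwichary/54896695834/in/set-72177720330077904")
# label is "photo in the context of a set"; parts carries the user,
# photo, and set identifiers pulled straight from the path.
```

Every path is readable and guessable on its own, and deeper resources simply extend shallower ones.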

It is the kind of logical directory-style scheme I wish I saw in library-based desktop applications, too. I still have copies of my Aperture and iPhoto libraries on the same Mac I use for Photos. Each has a pretty understandable structure within the library package: the most substantial part is a “Masters” folder containing a year, month, day, and then project folder hierarchy; among others, there is also a “Database” folder with nondestructive edit operations, though that structure becomes less intelligible if you drill down to the project level.

For contrast, the library structure for the Photos app has no such logic. The “originals” folder contains sixteen subfolders labelled 0–9, then A–F. Each photo’s filename is a unique alphanumeric code, and there is no discernible logic to which photos end up where. I found an image from 2012 following one from 2023, then another from 2022, all in the same folder. This feels undesigned, yet it is far more recent than any of the structures that preceded it.
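My assumption, and it is only a guess about the mechanism rather than anything Apple documents, is that each file lands in the subfolder named after the first hexadecimal character of its identifier. A sketch of how that kind of bucketing scatters files regardless of date:

```python
import uuid

HEX_BUCKETS = "0123456789ABCDEF"

def bucket_for(identifier: str) -> str:
    """Hypothetical scheme: a file is stored in the subfolder named
    after the first hex character of its identifier, so when a photo
    was taken plays no part in where it ends up."""
    first = identifier[0].upper()
    if first not in HEX_BUCKETS:
        raise ValueError(f"not a hex-prefixed identifier: {identifier}")
    return first

# Identifiers are effectively random, so photos from any year can
# share a bucket purely by chance:
identifiers = [uuid.uuid4().hex.upper() for _ in range(5)]
buckets = [bucket_for(i) for i in identifiers]
```

That design is fine for a database that never expects a human to browse the folders, which is presumably the point, but it makes the library opaque without the app that created it.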

Apple:

Apple today announced a transformative update coming to Apple Podcasts this spring that will bring advanced video podcast capabilities to the app. This enhanced video podcast experience uses Apple’s industry-leading HTTP Live Streaming (HLS) technology to set a new standard that empowers podcast creators with unprecedented control and monetization opportunities while delivering the highest-quality viewing experience for users.

The “advanced” part of this is that switching between the video and audio versions of an episode is basically seamless. When I tried it on both my fast home Wi-Fi and my tepid cell connection, it was as smooth as in Stephen Robles’ demo. That is pretty nice.
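Presumably this builds on how HLS alternate renditions normally work: the multivariant playlist declares an audio rendition alongside the video variants, including an audio-only variant the player can drop to mid-stream. A hand-written illustration (the URIs, bitrates, and codec strings are made up):

```
#EXTM3U
# Shared audio rendition, referenced by every variant below
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aud",NAME="English",DEFAULT=YES,AUTOSELECT=YES,URI="audio/stream.m3u8"

# Audio-only variant: what the player switches to when video is off
#EXT-X-STREAM-INF:BANDWIDTH=160000,CODECS="mp4a.40.2",AUDIO="aud"
audio/stream.m3u8

# Video variants
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720,CODECS="avc1.640020,mp4a.40.2",AUDIO="aud"
video/720p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=6000000,RESOLUTION=1920x1080,CODECS="avc1.64002a,mp4a.40.2",AUDIO="aud"
video/1080p.m3u8
```

Because every variant shares the same timeline and audio group, toggling between video and audio-only is just another rendition switch, the same mechanism players already use to step between quality levels.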

What is not so nice is that it is another proprietary take on the otherwise open standards world of podcasting. It requires a special agreement with Apple, which is why it is limited at launch to four ad-tech podcast hosting providers, and does not support a generic HLS URL. Maybe this is because it is a technical feat but, also, “Apple will charge participating ad networks an impression-based fee for the delivery of dynamic ads in HLS video”.

James Cridland, Podnews:

Up until now, transcripts, chapters, subscriptions – all these features have been available to anyone, in a truly open manner. Other launches have been enhancement: auto-submissions of new shows, for example. But now, there is no access to HLS video if you’re self-hosting, or if you are using a small podcast hosting company. Apple Podcasts can’t talk about being open when this feature is closed to all but four companies.

By keeping HLS video away from the RSS feed, this is a proprietary solution for Apple Podcasts. No other player will see these HLS video feeds (unlike creator-produced transcripts, for example, which are visible everywhere). This is a shame.

Tedious arguments about terminology aside, the infrastructure required for delivering video seems to have finally opened the door to big companies applying a lock-in strategy to a format based on open standards.

On Thursday, Scott Shambaugh published a bizarre story about a rejected pull request from what seems to be an OpenClaw A.I. agent. The agent then generated a blog post accusing Shambaugh of “gatekeeping” contributions, and personally attacking him. After backlash in the pull request, the agent deleted its own post and generated an apology.

Allegedly.

The tale here is so extraordinary that it is irresponsible to take it at face value, as the Wall Street Journal seems to have done. It seems plausible to me that this is an elaborate construction by a person desperate to make waves. We should leave room for the — likely, I think — revelation that this is a mix of generated text and human intervention. That ambiguity is why I did not link to the original post.

A part of the subsequent reporting, however, has become a story just as interesting. Shambaugh, in a follow-up article:

I’ve talked to several reporters, and quite a few news outlets have covered the story. Ars Technica wasn’t one of the ones that reached out to me, but I especially thought this piece from them was interesting (since taken down – here’s the archive link). They had some nice quotes from my blog post explaining what was going on. The problem is that these quotes were not written by me, never existed, and appear to be AI hallucinations themselves.

This was disheartening news to learn. I like Ars Technica’s reporting; in the twenty-plus years I have read the site, I have found its articles generally careful and non-alarmist without pulling deserved punches. I cite it frequently here because I respect my readers, and I assume it does the same.

This revelation was upsetting, and the editor’s note issued by Ken Fisher perhaps even more so:

That this happened at Ars is especially distressing. We have covered the risks of overreliance on AI tools for years, and our written policy reflects those concerns. In this case, fabricated quotations were published in a manner inconsistent with that policy. We have reviewed recent work and have not identified additional issues. At this time, this appears to be an isolated incident.

Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here.

Fisher provides no additional detail about how fake quotes ended up in a published article. Multiple parts of the reporting process must have failed for these statements not only to be invented, but also to escape any sort of fact-checking. Nor is there any description of what steps will be taken to prevent this from happening in the future.

Fortunately, Benj Edwards, one of the story’s authors, posted a statement to Bluesky acknowledging he was the one who used A.I. tools that falsified these quotes:

Here’s what happened: I was incorporating information from Shambaugh’s new blog post into an existing draft from Thursday.

During the process, I decided to try an experimental Claude Code-based AI tool to help me extract relevant verbatim source material. Not to generate the article but to help list structured references I could put in my outline.

When the tool refused to process the post due to content policy restrictions (Shambaugh’s post described harassment), I pasted the text into ChatGPT to understand why.

This is a more specific explanation than the one offered by Fisher, but it opens questions of its own. Why would Edwards need a Claude tool to summarize a not-particularly-long blog post? Why would he then jump to ChatGPT? Is this the first time Edwards used this tool, or is it an example of over-reliance that went horribly awry? And, again, is there no proofreading process at Ars Technica to confirm that quotations from source material are accurate and in context?

This looks bad for Edwards, of course, though he seems deeply remorseful. As bad a screw-up as this is, I do not think it is worth piling on him personally. What I want from Ars Technica is an explanation of how this kind of thing will be prevented in the future. The most obvious answer is to prohibit its reporters from using any tools based on large language models or generative A.I. However, as technologies like these begin to power things as seemingly simple as spelling and grammar checkers, that policy will be difficult to maintain in the real world. Publications need better processes for confirming that, regardless of the tools used to create an article, the reporting is accurate.

Apple has updated its own iOS usage figures (Internet Archive link for posterity). These figures measure “devices that transacted on the App Store on February 12, 2026” — but what “transacted” means is not entirely clear.

Even so, of transacting iPhones introduced in the last four years, 74% are running iOS 26; overall, 66% of iPhones measured are on iOS 26. This compares to year-ago figures of 76% and 68%, respectively, on iOS 18 — except it is not exactly a perfect comparison.

Joe Rossignol, MacRumors:

At first glance, the iOS 26 and iOS 18 adoption figures appear to be similar, but this is only because Apple released the iOS 26 statistics later than usual. iOS 26’s statistics are based on devices that transacted with the App Store approximately 150 days after the update was released to the public, compared to 127 days for iOS 18. In other words, iOS 26 was available for around three weeks longer by comparison.

As was suspected, this means that iOS 26 adoption has officially been slower than iOS 18 adoption, but not to the extent that some earlier, unofficial estimates had claimed. There is no way of knowing exactly why iOS 26 adoption has been slower, but some users have opted to avoid the new Liquid Glass design for now.
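Rossignol's elapsed-day figures are easy to sanity-check. A quick sketch, assuming the commonly reported release dates of September 15, 2025 for iOS 26 and September 16, 2024 for iOS 18:

```python
from datetime import date, timedelta

# Release dates as commonly reported; these are my assumptions,
# not figures from Apple's adoption page.
ios26_release = date(2025, 9, 15)
ios18_release = date(2024, 9, 16)

# Apple's iOS 26 snapshot measured devices on February 12, 2026:
snapshot_26 = date(2026, 2, 12)
days_26 = (snapshot_26 - ios26_release).days  # 150 days after release

# Rossignol says the iOS 18 snapshot came 127 days after release,
# which would place it in late January 2025:
snapshot_18 = ios18_release + timedelta(days=127)

extra_days = days_26 - 127  # the "around three weeks longer" gap
```

The 23-day difference is indeed roughly the three extra weeks Rossignol describes.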

The most likely explanation is that Apple began pushing users to update to iOS 26 later than it did for iOS 18. What this does not indicate is a mass or even medium-scale rejection of Liquid Glass, and I question whether a large number of users are actively avoiding the iOS 26 update. I am sure some are but, given the scale at which Apple operates and defaulting to automatic updates, I cannot imagine this has as much of an effect as Apple’s decision of when to aggressively push an update.

I thought there was a 20-point gap between adoption last year and this year, and I thought that might be user-motivated. I got that wrong. I still have a mindset of someone who grew up with elective software updates, when we are now in a software-as-a-service model. What I think I got right, though, is my comparison with iOS 7, which achieved rapid uptake in just a few months. If iOS 26’s design or features were exciting to enough people, they would clamour to manually update. Instead, they seem perfectly happy for Apple to make that choice for them.

Karl Bode, referencing a dumb Fortune article published last month:

Nothing that headline says is true. It doesn’t seem to matter. “CEO said a thing!” journalism involves, again, no actual journalism. Sam Altman, Mark Cuban, and Mark Zuckerberg are frequent beneficiaries of the U.S. corporate press’ absolute dedication to propping up extraction class mythologies via clickbait.

Nobody has benefited more from this style of journalism than Elon Musk. His fake supergenius engineer persona was propped up by a lazy press for the better part of two decades before the public even started to realize Musk’s primary skillset was opportunistically using familial wealth and the Paypal money he lucked into to saddle up to actual innovators and take singular credit for their work.

There is a symbiotic relationship these CEOs have with modern and traditional media alike. Musk goes on a three-hour “deeply researched” podcast and says some bullshit about how space will “be by far the cheapest place to put A.I. It will be space in 36 months or less. Maybe 30 months”. And then the host replies “36 months?” and Musk says “less than 36 months”, and then they are off for ten minutes discussing this as though it is a real thing that will really happen. Then real publications cover it like it is serious and real and, when asked for comment, Musk’s companies do not engage.

All these articles and videos bring in the views despite lacking the substance implied by either their publisher or, in the case of these video interviews, their length and serious tone. These CEOs know they can just say stuff. There is no reason to take them at their word, nor to publish a raft of articles based on whatever they say in some friendly and loose interview. Or a tweet, for that matter.

Do you want to block all YouTube ads in Safari on your iPhone, iPad, and Mac?

Then download Magic Lasso Adblock – the ad blocker designed for you.

As an efficient, high performance and native Safari ad blocker, Magic Lasso blocks all intrusive ads, trackers, and annoyances – delivering a faster, cleaner, and more secure web browsing experience.

Best in class YouTube ad blocking

Magic Lasso Adblock is easy to setup, doubles the speed at which Safari loads, and also blocks all YouTube ads — including all:

  • video ads

  • pop up banner ads

  • search ads

  • plus many more

With over 5,000 five star reviews, it’s simply the best ad blocker for your iPhone, iPad, and Mac.

And unlike some other ad blockers, Magic Lasso Adblock respects your privacy, doesn’t accept payment from advertisers, and is 100% supported by its community of users.

So, join over 350,000 users and download Magic Lasso Adblock today.

Kashmir Hill, Kalley Huang, and Mike Isaac report, in the New York Times, that Meta has been planning to bring facial recognition features to its smart glasses. There is a money quote in this article you may have seen on social media already, but I want to give it greater context (the facial recognition feature is called “Name Tag”, at least internally):

[…] The document, from May, described plans to first release Name Tag to attendees of a conference for the blind, which the company did not do last year, before making it available to the general public.

Meta’s internal memo said the political tumult in the United States was good timing for the feature’s release.

“We will launch during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns,” according to the document from Meta’s Reality Labs, which works on hardware including smart glasses.

The second part of this is a cynical view of public relations that would be surprising from almost any company, yet seems pretty typical for Meta. This memo is apparently from May, a few months before a Customs and Border Protection agent wore Meta’s Ray-Bans to a raid, so I am not sure civil rights organizations would ignore the feature today. The first part of the quote seems just as cynical, though: releasing it as an accessibility feature first.

Facial recognition may be useful to people with disabilities, assuming it works well, and I do not want to sweep that aside in the abstract. But this is Meta. It is a company with a notoriously terrible record on privacy, to the extent it is bound by a 20-year consent order (PDF) with the U.S. Federal Trade Commission, which it violated in multiple ways, one of which concerned facial recognition features. Perhaps there is a way for technology to help people recognize faces that is safe and respectful but, despite positioning itself as a privacy-focused company — coincidentally, at the same time as the FTC said it violated its consent decree — Meta will not be delivering that future.

For one, it is still considering the scope of which faces its glasses ought to recognize:

Meta is exploring who should be recognizable through the technology, two of the people said. Possible options include recognizing people a user knows because they are connected on a Meta platform, and identifying people whom the user may not know but who have a public account on a Meta site like Instagram.

The feature would not give people the ability to look up anyone they encountered as a universal facial recognition tool, two people familiar with the plans said.

Instagram has over three billion monthly users and, while that does not translate perfectly to three billion public personal accounts, it seems to me like a large proportion of the people any of us randomly meet would be identifiable. Why should that suggestion even make it past its very first mention in some meeting long ago? Some ideas are obviously bad and should be quashed immediately.

Dell Cameron, Wired:

United States Customs and Border Protection plans to spend $225,000 for a year of access to Clearview AI, a face recognition tool that compares photos against billions of images scraped from the internet.

The deal extends access to Clearview tools to Border Patrol’s headquarters intelligence division (INTEL) and the National Targeting Center, units that collect and analyze data as part of what CBP calls a coordinated effort to “disrupt, degrade, and dismantle” people and networks viewed as security threats.

Lindsey Wilkinson, FedScoop:

In the last year, CBP has deployed several AI technologies, such as NexisXplore, to aid in open-source research of potential threats and to identify travelers. The Homeland Security organization last year began using Mobile Fortify as a facial comparison and fingerprint matching tool to quickly verify persons of interest. CBP Link is another AI use case that cropped up in the past year, streamlining facial recognition and real-time identity verification.

CBP began piloting Clearview AI’s technology in 2025, too, according to DHS’s AI inventory. The technology needed to be — and was — tuned to produce better results and limit misidentification. Guardrails have been identified to some degree.

A reminder that the way Clearview works is by scraping images it associates with specific individuals, including from sources like Facebook, across the web at massive scale — over sixty billion, according to Cameron. This is not facial recognition of criminals or even people suspected of wrongdoing. It is recognition of anyone who has a face that has been photographed and shared even semi-publicly.

This contract is likely part of the technologies for identifying incoming travellers, and not just in the U.S. — a 2022 article on the CBP website says other countries are using the CBP software that will likely have Clearview integration.

Howard Oakley:

What Apple doesn’t reveal is that it has improved, if not fixed, the shortcomings in Accessibility’s Reduced Transparency setting. When that’s enabled, at least some of the visual mess resulting from Liquid Glass, for example in the Search box in System Settings, is now cleaned up, as the sidebar header is now opaque. It’s a small step, but does address one of the most glaring faults in 26.2.

In apps like Messages and Preview, the toolbar finally has a solid background when Reduce Transparency is turned on instead of the translucent gradient previously. The toolbar itself and the buttons within it remain ill-defined, however, unless you also turn on Increase Contrast, which Apple clearly does not want you to do because it makes the system look ridiculous. Also, when Reduce Transparency is turned on, Siri looks like this:

Siri on MacOS Tahoe with illegible white text against a pastel background

One would assume this is the kind of thing someone at Apple would notice if there were people working there who used Siri, tested Accessibility features, and cared about contrast.

Adam Engst, TidBITS:

Two other Liquid Glass-related peccadillos fared less well. First, although Apple fixed a macOS 26.2 problem that caused the column divider handles to be overwritten by scroll bars (first screenshot below), if you hide both the path bar and status bar, an unseemly gap appears between the scroll bar and the handles (fourth screenshot below). Additionally, while toggling the path and status bars, I managed to get the filenames to overwrite the status bar (third screenshot below). Worse, all of these were taken with Reduce Transparency on, so why are filenames ever visible under the scroll bar?

The problem with a cross-platform top-to-bottom redesign that puts translucency at the forefront is that it requires addressing an ever-increasing number of control conditions. And then you are still stuck with Liquid Glass’ reflective quality. Even with Reduce Transparency turned on, the Dock will brighten — in light mode — when an application window is dragged near it, because it is reflecting the large white expanse. Technically, the opacity of the Dock has not changed, but it still carries the perception of translucency, along with the impact that has on contrast. Apple has written itself a whole new set of bugs to fix.

Logan McMillen, of the New Republic, is very worried about TikTok’s new ownership in the United States — so worried, in fact, that it deserves a conspiratorial touch:

The Americanization of TikTok has also introduced a more visible form of suppression through the algorithmic throttling of dissent. In the wake of recent ICE shootings in Minneapolis, users and high-profile creators alike reported that anti-ICE videos were instantly met with “zero views” or flagged as “ineligible for recommendation,” effectively purging them from the platform’s influential “For You” feed.

The new TikTok USDS Joint Venture LLC attributed these irregularities to a convenient data center power outage at its Oracle-hosted facilities. While the public attention this episode garnered will make it more conspicuous if user content gets throttled on TikTok again, the tools are there: By leveraging shadow bans and aggressive content moderation, TikTok can, if it wanted to, ensure that any visual evidence of ICE’s overreach is silenced before it reaches the masses.

If these claims sound familiar to you, it is probably because the same angle was used to argue for the divestiture of TikTok’s U.S. assets in the first place. The same implications and the same shadowy tones were invoked to make the case that TikTok was censoring users’ posts on explicit or implied instruction from Chinese authorities — and it was not convincing then, either.

These paragraphs appear near the bottom of the piece, where readers will find the following note:

This article originally misidentified TikTok’s privacy policy. It also misidentified the extant privacy policy as an updated one.

This article has been updated throughout for clarity.

That got me wondering, so I put the original article and the latest version into Diffchecker. The revisions are striking. Not only did the original version of the piece repeat the misleading claim that TikTok’s U.S. privacy policy changes were an effort to collect citizenship status, it suggested TikTok was directly “feed[ing] its data” to the Department of Homeland Security. On the power outage, quoted above, McMillen was more explicitly conspiratorial, originally writing that “the timing suggests a more deliberate intention”.
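The same comparison can be done locally; Python's difflib module produces a unified diff of two saved copies of a page. A minimal sketch, with short stand-in snippets in place of the articles' full text:

```python
import difflib

# Stand-in snippets; in practice these would be the archived and
# current copies of the article, saved as lists of lines.
original = [
    "the timing suggests a more deliberate intention\n",
]
revised = [
    "the tools are there to throttle content again\n",
]

diff = difflib.unified_diff(original, revised,
                            fromfile="original", tofile="revised")
print("".join(diff))
```

Lines prefixed with `-` and `+` mark the removed and added text, which makes quietly softened claims easy to spot.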

404 Media reporter Jason Koebler, in a Bluesky thread [sic]:

it’s important to keep our guard up and it can be very useful to speculate about where things may go. that is not the same as saying without evidence that these things are already happening, or seeing different capabilities and assuming they are all being mashed together on a super spy platform

The revised version of the article is more responsible than the original. That is not a high bar. There are discrete things McMillen gets right, but the sum of these parts is considerably less informative and less useful.

For example, McMillen points out how advertising identifiers can be used for surveillance, which is true but is not specific to TikTok either before or after the U.S. assets divestiture. It is a function of this massive industry in which we all participate to some extent. If we want to have a discussion — or, better yet, action — regarding better privacy laws, we can do that without the snowballing effect of mashing together several possible actions and determining it is definitely a coordinated effort between ICE, TikTok, Palantir, Amazon’s Ring, and the Supreme Court of the United States.

Speaking of infinite scrolling, Kyle Hughes just updated Information Superhighway, his app for endlessly reading randomized Wikipedia articles. It has a new icon and a Liquid Glass button — and that is the update, as far as I can tell.

The only kind of brain rot this will give you is the slow ageing and fermenting that results from scrolling from an article about a Japanese erotic black comedy-horror film, to another about an archaeological site in Belize, and then to one about the British Pirate Party.

A free app, with no strings attached. Probably the best way to infinitely scroll your day away.

The European Commission:

The Commission’s investigation preliminarily indicates that TikTok did not adequately assess how these addictive features could harm the physical and mental wellbeing of its users, including minors and vulnerable adults.

For example, by constantly ‘rewarding’ users with new content, certain design features of TikTok fuel the urge to keep scrolling and shift the brain of users into ‘autopilot mode’. Scientific research shows that this may lead to compulsive behaviour and reduce users’ self-control.

Additionally, in its assessment, TikTok disregarded important indicators of compulsive use of the app, such as the time that minors spend on TikTok at night, the frequency with which users open the app, and other potential indicators.

It is fair for regulators to question the efficacy of measures claiming to “promote healthier sleep habits”. This wishy-washy verbiage is just as irritating as when it is employed by supplement companies and it should be more strictly regulated.

Trying to isolate infinite scrolling as a key factor in encouraging unhealthy habits is, I think, oversimplifying the issue. Contrary to the conclusions drawn by some people, I am unsure if that is what the Commission is suggesting. The Commission appears to have found this is one part of a constellation of features that are intended to increase the time users spend in the app, regardless of the impact it may have on users. In an article published last year in Perspectives on Public Health, two psychologists sought to distinguish this kind of compulsive use from other internet-driven phenomena, arguing that short-form video “has been particularly effective at triggering psychological patterns that keep users in a continuous scrolling loop”, pointing to a 2023 article in Proceedings of the ACM on Human-Computer Interaction. It is a mix of the engaging quality of video with the unknown of what comes next — like flipping through television channels, only entirely tailored to what each user has previously become enamoured with.

Casey Newton reported on the Commission’s investigation and a similar U.S. lawsuit. Here is the lede:

The old way of thinking about how to make social platforms safer was that you had to make them do more content moderation. Hire more people, take down more posts, put warning labels on others. Suspend people who posted hate speech, and incitements to violence, or who led insurrections against their own governments.

At the insistence of lawmakers around the world, social platforms did all of this and more. But in the end they had satisfied almost no one. To the left, these new measures hadn’t gone nearly far enough. To the right, they represented an intolerable infringement of their freedom of expression.

I find the left–right framing of the outcomes of this entirely unproductive and, frankly, dumb. Even as a broad generalization, it makes little sense: there are plenty of groups across the political spectrum arguing their speech is being suppressed. I am not arguing these individual complaints are necessarily invalid. I just think Newton’s argument is silly.

Adequate moderation is an effective tool for limiting the spread of potentially harmful posts for users of all ages. While Substack is totally cool with Nazis, that stance rarely makes for a healthy community. Better behaviour, even from pseudonymous users, is encouraged by marginalizing harmful speech and setting relatively strict boundaries for what is permissible. Moderation is difficult to do well, impossible to do right, and insufficient on its own — of course — but it is not an old, outdated way of thinking, regardless of what Mark Zuckerberg argues.

Newton:

Of course, Instagram Reels and YouTube Shorts work in similar ways. And so, whether on the stand or before the commission, I hope platform executives are called to answer: if you did want to make your products addictive, how different would they really look from the ones we have now?

This is a very good argument. All of these platforms are deliberately designed to maximize user time. They are not magic, nor are they casting some kind of spell on users, but we are increasingly aware they have risks for people of all ages. Is it so unreasonable for regulators to have a role?

When I watched a bunch of A.I. company ads earlier this year, I noted Anthropic’s spot was boring and vague. Well, that did not last, as the company began running a series of excellent ads mocking the concept of ads appearing in a chatbot. They are sharp and well-written. No wonder Anthropic aired them during the Super Bowl.

Anthropic also published a commitment to keep Claude ad-free. I doubt this will age well. Call me cynical, but my assumption is that Anthropic will one day have ads in its products, but perhaps not “Claude” specifically.

The reason anyone is discussing this is because ads are coming to OpenAI’s ChatGPT:

ChatGPT is used by hundreds of millions of people for learning, work, and everyday decisions. Keeping the Free and Go tiers fast and reliable requires significant infrastructure and ongoing investment. Ads help fund that work, supporting broader access to AI through higher quality free and low cost options, and enabling us to keep improving the intelligence and capabilities we offer over time. If you prefer not to see ads, you can upgrade to our Plus or Pro plans, or opt out of ads in the Free tier in exchange for fewer daily free messages.

Ads do not influence the answers ChatGPT gives you. Answers are optimized based on what’s most helpful to you. When you see an ad, they are always clearly labeled as sponsored and visually separated from the organic answer.

It is incredible how far we have come for these barely-distinguished placements to be called “visually separated”. Google’s ads, for example, used to have a coloured background, eventually fading to white. The “sponsored link” text turned into a small yellow “Ad” badge, eventually becoming today’s bold “Ad” text. Apple, too, has made its App Store ads blend into normal results. OpenAI, for its part, has opted to delineate ads with a grey background and a “Sponsored” label.

Now OpenAI has something different to optimize for. We can all pretend that free market forces will punish the company if it does not move carefully, if it inserts too many ads, or if organic results start to feel influenced by ad buyers. But we have already seen how this works with Google search, on Instagram, on YouTube, and elsewhere. These platforms are ad-heavy to the detriment and frustration of users, yet they remain successful and growing. Whatever you already think of OpenAI’s goals, ads are going to fundamentally change ChatGPT and the company as a whole.

Do you want to block ads and trackers across all apps on your iPhone, iPad, or Mac — not just in Safari?

Then download Magic Lasso Adblock — the ad blocker designed for you.

Magic Lasso: No ads, No trackers, No annoyances, No worries

The new App Ad Blocking feature in Magic Lasso Adblock v5.0 builds upon our powerful Safari and YouTube ad blocking, extending protection to:

  • News apps

  • Social media

  • Games

  • Other browsers like Chrome and Firefox

All ad blocking is done directly on your device, using a fast, efficient Swift-based architecture that follows our strict zero data collection policy.

With over 5,000 five-star reviews, it’s simply the best ad blocker for your iPhone, iPad, and Mac.

And unlike some other ad blockers, Magic Lasso Adblock respects your privacy, doesn’t accept payment from advertisers, and is 100% supported by its community of users.

So, join over 350,000 users and download Magic Lasso Adblock today.

Geraldine McKelvie, the Guardian:

The global publishing platform Substack is generating revenue from newsletters that promote virulent Nazi ideology, white supremacy and antisemitism, a Guardian investigation has found.

I appreciate the intent of yet another article drawing attention to Substack’s willingness to host unambiguously Nazi publications, but I wish McKelvie and the Guardian had given more credit to all the similar reporting that came before theirs. For example:

Among them are newsletters that openly promote racist ideology. One, called NatSocToday, which has 2,800 subscribers, charges $80 – about £60 – for an annual subscription, though most of its posts are available for free.

This is the very same account which, according to reporting by Taylor Lorenz last year, was promoted in a push notification from Substack. Substack told Lorenz the notification was “a serious error”. In the same article, Lorenz drew attention to NatSocToday’s recommendation of another explicitly Nazi publication hosted on Substack called the White Rabbit. This, too, is included as an example in McKelvie’s more recent report. Lorenz’s prior reporting goes unmentioned.

However, because both stories have contemporary screenshots of each Nazi publication’s profile, we can learn something — and this is another reason why I wish Lorenz’s story had been cited. NatSocToday’s 2,800 subscribers as of this week may not sound like many, but when Lorenz published her article at the end of July, it had only 746 subscribers. It has grown by over 2,000 subscribers in just six months. The same appears true of the White Rabbit, which went from “8.6K+” subscribers to “10K+” in the same timeframe.

One thing McKelvie gets wrong is suggesting “subscribers” equates to “paying members”. Scrolling through the subscriber lists of both publications above shows a mix of paid and free members. This is supported by Substack’s documentation, which I wish I had thought to check before visiting either hateful newsletter. That is, while Substack is surely making some money by mainstreaming and recommending the kind of garbage people used to have to deliberately seek out, it is not a ten percent cut of the annual rate multiplied by the subscriber count.

By the way, while I am throwing some stones here, I should point out that Lorenz herself launched her User Magazine newsletter about a year after Jonathan M. Katz’s article “Substack Has a Nazi Problem”. Based on its archive, Lorenz just repurposed her personal Substack newsletter and existing audience to create User Mag. But Substack’s whole premise is that you own your email list and can bring it elsewhere, so Lorenz could have chosen any platform. Substack was never just infrastructure — it is a social media website with longform posts as its substance, and indifferent moderation as a feature.

If you want to understand what goes into a big YouTube production, this behind-the-scenes look from the tenth most popular tech channel seems to be a good place to start. It is remarkable how Marques Brownlee has grown from being just a guy making webcam videos from home to having a dedicated production space full of staff — and it all kind of hinges on YouTube, a singular video hosting platform. That would make me anxious daily, but Brownlee has made it work for about nine years.

Another thing that surprised me about this behind-the-scenes video is just how far some companies will go to accommodate Brownlee. The Google Pixel team brought a bunch of unreleased devices to him for a first impressions video. That video was embargoed until an hour before Google was set to announce those devices. Brownlee, like anyone in the review space, has been accused of bias and favouritism, usually unfairly. If I were him, this kind of closeness would make me uncomfortable, as though Google were using me. I think Brownlee’s videos speak for themselves, however.