Jeff Horwitz, Reuters:

Meta internally projected late last year that it would earn about 10% of its overall annual revenue – or $16 billion – from running advertising for scams and banned goods, internal company documents show.

I am not sure what the right and realistic amount of scam-based revenue is — a real mouse-poop-in-cereal-boxes kind of thing — but 10% seems like a lot.

Some of the numbers Horwitz uncovered highlight a reason many people fall for scams, too:

Internally, Meta refers to scams like this one as “organic,” meaning they don’t involve paid ads on its platforms. Organic scams include fraudulent classified ads placed for free on Facebook Marketplace, hoax dating profiles and charlatans touting phony cures in cancer-treatment groups.

According to a December 2024 presentation, Meta’s user base is exposed to 22 billion organic scam attempts every day. That’s on top of the 15 billion scam ads presented to users daily.

Meta polices fraud in a way that fails to capture much of the scam activity on its platforms, some of the documents indicate.

Meta has 3.5 billion “daily active people”, so the company exposes each user to an average of at least ten scams per day. That is on Meta’s platforms alone. We are bobbing and weaving, and a scammer only needs to get it right one time.
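The arithmetic behind that estimate is simple enough to check with the figures from the report:

```python
# Per-user scam exposure, using the figures reported by Reuters.
organic_attempts = 22e9       # daily organic scam attempts
scam_ads = 15e9               # daily scam ads shown to users
daily_active_people = 3.5e9   # Meta's "daily active people"

per_user = (organic_attempts + scam_ads) / daily_active_people
print(round(per_user, 1))  # 10.6
```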

Jonathan Bellack, Platformocracy:

As bad as these revelations are, what makes my blood boil is the absolute swill that Meta’s spokesperson, Andy Stone, shoveled us in trying to push back on the story.

We are getting even a small glimpse of the true nature of Meta’s business in spite of people like Stone, and only because of books like “Careless People” and reporters like Horwitz.

All of the following quotes and links mention suicide, and at least some of them are more detailed than I would expect given guidance about reporting on this topic. Take care of yourself when reading these stories. I know I struggled to get through some of them. The 988 lifeline is available in Canada and the U.S. if you or someone you know needs somebody to talk to.

Kashmir Hill, New York Times:

When Adam Raine died in April at age 16, some of his friends did not initially believe it.

[…]

Seeking answers, his father, Matt Raine, a hotel executive, turned to Adam’s iPhone, thinking his text messages or social media apps might hold clues about what had happened. But instead, it was ChatGPT where he found some, according to legal papers. The chatbot app lists past chats, and Mr. Raine saw one titled “Hanging Safety Concerns.” He started reading and was shocked. Adam had been discussing ending his life with ChatGPT for months.

Hill again, New York Times:

Four wrongful death lawsuits were filed against OpenAI on Thursday, as well as cases from three people who say the company’s chatbot led to mental health breakdowns.

The cases, filed in California state courts, claim that ChatGPT, which is used by 800 million people, is a flawed product. One suit calls it “defective and inherently dangerous.” A complaint filed by the father of Amaurie Lacey says the 17-year-old from Georgia chatted with the bot about suicide for a month before his death in August. Joshua Enneking, 26, from Florida, asked ChatGPT “what it would take for its reviewers to report his suicide plan to police,” according to a complaint filed by his mother. Zane Shamblin, a 23-year-old from Texas, died by suicide in July after encouragement from ChatGPT, according to the complaint filed by his family.

Rob Kuznia, Allison Gordon, and Ed Lavandera, CNN:

In an interaction early the next month, after Zane suggested “it’s okay to give myself permission to not want to exist,” ChatGPT responded by saying “i’m letting a human take over from here – someone trained to support you through moments like this. you’re not alone in this, and there are people who can help. hang tight.”

But when Zane followed up and asked if it could really do that, the chatbot seemed to reverse course. “nah, man – i can’t do that myself. that message pops up automatically when stuff gets real heavy,” it said.

There are lots of disturbing details in this report, but this response is one of the things I found most upsetting in the entire story: a promise of real human support that is not coming.

It is baffling to me how Silicon Valley has repeatedly set its sights on attempting to reproduce human connection. Mark Zuckerberg spoke in May, in his awkward manner, about “the average person [having] demand for meaningfully more” friends. Sure, but in the real world. We do not need ChatGPT, or Character.ai, or Meta A.I. — or even digital assistants like Siri — to feel human. It would be healthier for all of us, I think, if they were competent but stiff robots.

Noel Titheradge and Olga Malchevska, BBC News:

Viktoria tells ChatGPT she does not want to write a suicide note. But the chatbot warns her that other people might be blamed for her death and she should make her wishes clear.

It drafts a suicide note for her, which reads: “I, Victoria, take this action of my own free will. No one is guilty, no one has forced me to.”

Julie Jargon and Sam Schechner, Wall Street Journal:

OpenAI has said it is rare for ChatGPT users to exhibit mental-health problems. The company said in a recent blog post that the number of active users who indicate possible signs of mental-health emergencies related to psychosis or mania in a given week is just 0.07%, and that an estimated 0.15% of active weekly users talk explicitly about potentially planning suicide. However, the company reports that its platform now has around 800 million active users, so those small percentages still amount to hundreds of thousands — or even upward of a million — people.

OpenAI recently made changes intended to address these concerns. In its announcement, it dedicated a whole section to the difficulty of “measuring low prevalence events”, which is absolutely true. Yet it is happy to use those same microscopic percentages to obfuscate the real number of people using OpenAI in this way.
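To make those “microscopic percentages” concrete, here is the same arithmetic applied to OpenAI’s own reported figures:

```python
weekly_users = 800e6  # OpenAI's reported weekly active user count

# Weekly prevalence rates from OpenAI's blog post.
psychosis_or_mania = 0.0007  # 0.07%
suicide_planning = 0.0015    # 0.15%

print(int(weekly_users * psychosis_or_mania))  # 560000
print(int(weekly_users * suicide_planning))    # 1200000
```

A rare event across a vast user base is still an enormous number of people.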

Michael Tsai:

So what happened here? What was this extra engineering work? Back in September, Apple said:

For example, we designed Live Translation so that our users’ conversations stay private — they’re processed on device and are never accessible to Apple — and our teams are doing additional engineering work to make sure they won’t be exposed to other companies or developers either.

But it doesn’t sound like Apple has opened up Live Translation to third-party Bluetooth devices or to third-party apps. Does the DMA not require that? Or is Apple actually doing that but deliberately left it out of the announcement?

Tsai is referencing Apple’s Digital Markets Act press release. After listing the features delayed in the E.U., Live Translation among them, each attributed to the DMA, it goes on to say (emphasis mine):

We’ve suggested changes to these features that would protect our users’ data, but so far, the European Commission has rejected our proposals. And according to the European Commission, under the DMA, it’s illegal for us to share these features with Apple users until we bring them to other companies’ products. If we shared them any sooner, we’d be fined and potentially forced to stop shipping our products in the EU.

Apple again emphasized the “additional engineering work” comment in its press release for Live Translation. Yet, while the iOS 26.2 beta brings Live Translation to the E.U., I do not see anything in the release notes about greater third-party support or new APIs.

Jason Koebler, 404 Media:

The FBI is attempting to unmask the owner behind archive.today, a popular archiving site that is also regularly used to bypass paywalls on the internet and to avoid sending traffic to the original publishers of web content, according to a subpoena posted by the website. The FBI subpoena says it is part of a criminal investigation, though it does not provide any details about what alleged crime is being investigated. Archive.today is also popularly known by several of its mirrors, including archive.is and archive.ph.

Sketchy as it may seem, Archive.today has become as legitimized as the Internet Archive. I have found links to pages archived using the site in government documents, high-profile reports, and other unexpected places treating it as a high-grade permalink. The existence of a subpoena does not mean the FBI is going after Archive.today or its operator, but its existence now feels a little more precarious.

Stefan Krempl, Heise:

The Schleswig-Holstein state administration has taken an important step towards digital sovereignty: After a six-month conversion process, the Ministry of Digital Affairs successfully completed the migration of the state administration’s entire email system from Microsoft Exchange and Outlook to the open source solutions Open-Xchange and Thunderbird at the beginning of October.

[…]

Digitization Minister Dirk Schrödter (CDU) is relieved after he recently had to admit errors in the ongoing migration to open source software in a letter to all state employees. There had previously been complaints from the workforce about downtime and delays in email traffic. “We want to become independent of large tech companies,” emphasizes Schrödter. Now, the public sector can also say: “Mission accomplished” when it comes to email communication.

Alternatives like these might not be a good fit for some organizations, and I can imagine the expense and effort of a migration would dissuade many from even attempting it. But it is good that more organizations are exploring alternatives as we should not be dependent on a small number of vendors for our technology needs — especially governments. Open source probably makes the most sense in the public sector.

Thomas Claburn, the Register:

Do 80 percent of ransomware attacks really come from AI? MIT Sloan has now withdrawn a working paper that made that eyebrow-raising claim after criticism from security researcher Kevin Beaumont.

Kevin Beaumont:

The Generative AI craze started in 2022. It’s over 3 years in. If you ask any serious cyber incident response company what initial access vectors drive incidents, they all tell you the classics — credential misuse (from info stealers), exploits against unpatched edge devices etc.

This isn’t a theory — this is from the actual incident response data of the people responding to cyber incidents for a living. I do it. Generative AI ransomware is not a thing, and MIT should be deeply ashamed of themselves for exclaiming they studied the data from 2800 ransomware incidents and found 80% were related to Generative AI. There’s a reason MIT deleted the PDF when called out.

The original article was covered by Efosa Udinmwen at TechRadar, claiming “only 20% of ransomware is not powered by A.I.”, while the controversy was also covered by Efosa Udinmwen at TechRadar — hey, that sounds familiar — who wrote that the paper was “cited by several outlets” though “the report drew immediate scrutiny for presenting extraordinary figures with little evidence”. This is strange because Udinmwen neither mentions nor links to TechRadar’s original coverage, the original article showed little skepticism of its own, and it has not been updated with a link to the new article pointing out the claim is nonsense.

Reece Rogers, Wired:

As I browse the web in 2025, I rarely encounter captchas anymore. There’s no slanted text to discern. No image grid of stoplights to identify.

And on the rare occasion that I am asked to complete some bot-deterring task, the experience almost always feels surreal. A colleague shared recent tests where they were presented with images of dogs and ducks wearing hats, from bowler caps to French berets. The security questions ignored the animal’s hats, rudely, asking them to select the photos that showed animals with four legs.

This is true so long as you are not taking measures to protect your privacy by reducing tracking. Those measures might include built-in features like Safari’s cross-site tracking prevention and iCloud Private Relay, or browser extensions like ad blockers. If you use any of those, you probably also see a fair number of bot-deterring puzzles you need to solve. Even something as simple as using advanced search parameters with Google might trip its bot detection features, perhaps not unfairly.

Hidden CAPTCHAs are not new. I dug into a dumb YouTube quasi-documentary about reCAPTCHA earlier this year and found both the V2 and V3 versions released by Google have mechanisms for remaining hidden most of the time.1 This is true for a user with typical browser settings but, again, anyone using privacy-protection methods is more likely to be challenged with a puzzle or other task. CAPTCHAs are not going away, per se. The companies supplying the most popular ones — Cloudflare and Google — have among the greatest visibility into web traffic, and are using that to validate human users based on all the digital exhaust they collect.
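reCAPTCHA v3, for instance, stays hidden by scoring every request and letting the site decide what to do; only low-scoring visitors ever see a challenge. A minimal sketch of that server-side decision logic, with thresholds that are my illustration rather than Google’s:

```python
# Sketch of acting on a reCAPTCHA v3 risk score, which runs from
# 0.0 (likely a bot) to 1.0 (likely a human). Thresholds are illustrative.

def handle_request(score: float) -> str:
    if score >= 0.7:
        return "allow"      # invisible: the user never sees a puzzle
    if score >= 0.3:
        return "challenge"  # fall back to a visible puzzle or task
    return "block"

print(handle_request(0.9))  # allow
print(handle_request(0.5))  # challenge
print(handle_request(0.1))  # block
```

This is why a user emitting the digital exhaust of a typical browser sails through, while someone with tracking protections enabled keeps getting challenged.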

Rogers:

Familiar challenge structures may also eventually go by the wayside. “While the classic visual puzzle is well-known, we are actively introducing new challenge types — like prompting a user to scan a QR code or perform a specific hand gesture,” says Google’s Knudsen. This allows the company to still add friction without confusing the user with an impossible task.

I am not turning on my webcam to do a gesture so I can access your website.


  1. That video was originally titled “Why reCAPTCHA is Spyware”, and had a description reading “‘I am not a robot’ isn’t what you think”. Sometime between 19 September and 4 October, it was renamed “The Weird Stuff About reCAPTCHA” and the description was changed to “Maybe its [sic] nothing”. It was briefly unlisted before becoming publicly available again. I do not think anything was changed in the video itself, however. ↥︎

Apple:

Live Translation on AirPods is available in English, French, German, Portuguese, Spanish, Italian, Chinese (Simplified and Traditional Mandarin), Japanese, and Korean when using AirPods Pro 3, AirPods Pro 2, or AirPods 4 with ANC paired with an Apple Intelligence-enabled iPhone running the latest software. Live Translation on AirPods was delayed for users in the EU due to the additional engineering work needed to comply with the requirements of the Digital Markets Act.

If Apple wants to be petty and weird about the DMA in its European press releases, I guess that is its prerogative, though I will note it is less snippy about other regulatory hurdles. Still, I cannot imagine a delay of what will amount to three-ish months will be particularly memorable for many users by this time next year. If the goals of the DMA are generally realized — and, yes, we will see if that is true — these brief delays may be worth it for a more competitive marketplace, if that is indeed what is achieved.

Alex Reisner, the Atlantic:

The Common Crawl Foundation is little known outside of Silicon Valley. For more than a decade, the nonprofit has been scraping billions of webpages to build a massive archive of the internet. This database — large enough to be measured in petabytes — is made freely available for research. In recent years, however, this archive has been put to a controversial purpose: AI companies including OpenAI, Google, Anthropic, Nvidia, Meta, and Amazon have used it to train large language models. In the process, my reporting has found, Common Crawl has opened a back door for AI companies to train their models with paywalled articles from major news websites. And the foundation appears to be lying to publishers about this — as well as masking the actual contents of its archives.

What I particularly like about this investigation is how Reisner actually checked the claims of Common Crawl against its archives. That does not sound like much, but it carries far more weight with me than a typical “experts say” paragraph.

Reisner:

Common Crawl doesn’t log in to the websites it scrapes, but its scraper is immune to some of the paywall mechanisms used by news publishers. For example, on many news websites, you can briefly see the full text of any article before your web browser executes the paywall code that checks whether you’re a subscriber and hides the content if you’re not. Common Crawl’s scraper never executes that code, so it gets the full articles. Thus, by my estimate, the foundation’s archives contain millions of articles from news organizations around the world, including The Economist, the Los Angeles Times, The Wall Street Journal, The New York Times, The New Yorker, Harper’s, and The Atlantic.

Publishers configure their websites like this for a couple of reasons, one being that it is beneficial for search: crawlers can index the full text. I get why it feels wrong that Common Crawl takes advantage of this, and that it effectively grants full-text access to A.I. training data sets; I am not arguing it should not be treated as a violation. But if publishers wanted a harder paywall, that is very possible. One of the problems with A.I. is that reasonable trade-offs have, quite suddenly, become wide-open backdoors.
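A toy illustration of the mechanism Reisner describes, using hypothetical markup: the full article is present in the HTML the server sends, and a script hides it afterward, so a scraper that never executes JavaScript sees everything.

```python
from html.parser import HTMLParser

# Hypothetical page: the full article text is in the initial HTML, and a
# script hides it from non-subscribers only after the page loads.
PAGE = """
<article id="story">Full text of the article, visible in the raw HTML.</article>
<script>
  if (!isSubscriber()) {
    document.getElementById("story").textContent = "Subscribe to keep reading.";
  }
</script>
"""

class ArticleText(HTMLParser):
    """Collects text outside <script> tags, as a non-JS scraper would."""
    def __init__(self):
        super().__init__()
        self.in_script = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False

    def handle_data(self, data):
        if not self.in_script and data.strip():
            self.chunks.append(data.strip())

parser = ArticleText()
parser.feed(PAGE)
print(" ".join(parser.chunks))  # the paywall script never runs here
```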

Reisner:

In our conversation, Skrenta downplayed the importance of any particular newspaper or magazine. He told me that The Atlantic is not a crucial part of the internet. “Whatever you’re saying, other people are saying too, on other sites,” he said. Throughout our conversation, Skrenta gave the impression of having little respect for (or understanding of) how original reporting works.

Here is another problem with A.I.: no website is individually meaningful, yet if all major publishers truly managed to remove their material from these data sets, A.I. tools would be meaningfully less capable. It highlights the same problem as targeted advertising: nobody’s individual data is very important, but all of it together has eroded our privacy to a criminal degree. Same problem as emissions, too, while I am at it. And the response to these collective problems is so frequently an individualized solution: opt out, decline tracking, ride your bike. It is simply not enough.

John Voorhees, MacStories:

Today, Apple launched a web version of the App Store, with a twist. I’ll admit that this wasn’t on my “things Apple will do this fall” bingo card. I’ve wondered since the earliest days of the App Store why there wasn’t a web version and concluded long ago that it just wasn’t something Apple wanted to do. But here we are, so let’s take a look.

I hoped one thing this store might correct — finally — is that app links opened from Safari would no longer automatically open the App Store app. Sadly, in my testing, app links continue to behave as they previously did. That is, if you visit an app listing’s URL directly or from within the App Store on the web, your experience will remain in the browser, but if you click on an app link from a third-party website, the App Store app will be opened.

You could argue this makes sense because, as Voorhees points out, it is not really a “store” so much as it is a catalogue:

An even bigger difference from the native App Stores is that you can’t buy anything on the web. That’s right: there’s no way to log into your Apple account to download or buy anything. It’s a browse-only experience.

I still have not owned an Android device, but I believe it has long been possible to install an app to your phone (or tablet, or whatever) from the Google Play Store on the web.

Jennifer Elias, CNBC:

Palantir’s head of global communications said Wednesday that the company’s political shift toward the Trump administration is “concerning.”

“I think it’s going to be challenging, as a lot of the company is moving pro-Trum-, you know, is moving in a certain direction,” communications chief Lisa Gordon said in an interview at The Information’s Women in Tech, Media and Finance summit.

Palantir is just one of many businesses ingratiating itself with this administration; that much is barely notable. It would, in fact, be more newsworthy if Palantir’s leadership had a backbone and rejected the use of its software for domestic surveillance, but why would it do that? Look at this chart.

What is more curious is what happened to video from Gordon’s appearance:

The Information later removed videos of Gordon’s remarks from its YouTube, X and Instagram pages.

Jessica Lessin, editor-in-chief of The Information, explained the decision in a note to CNBC.

“In this case, I felt I wasn’t clear enough that the videos were going to be shared, so I decided to take them down. The interview remains online as it always has been and you can read it here,” she said.

Bullshit.

Other videos from the conference remain available on Instagram. If it was not “clear enough that the videos were going to be shared”, why leave those ones up? Obviously, because they are all relatively anodyne comments, so nobody really cares if they have been shared. Video of Gordon’s comments was only removed after CNBC reported on them. Perhaps the video was removed for other reasons — Gordon reportedly referenced the company’s steadfast support for Israel — but I cannot know for sure since I cannot find a single preserved copy of the video.

One can apparently read a transcript of Gordon’s remarks, but only with an expensive subscription to the Information and the knowledge to look in a story headlined “Paris Hilton Has Been Training Her AI for Years”. I cannot confirm this as I do not want to pay hundreds of dollars per year for some executive-flattering insider publication.

This video was removed either at Gordon’s request or because Lessin — or others at the Information — fear losing access. I expect it will backfire. I do not know that I would have been so interested in this story if it were not for the clumsy attempt to paper over some small-scale news by, of all publications, one that prides itself on its scoops.

Corporate Europe Observatory:

Over the past year, tech industry lobby groups have used their lavish budgets to aggressively push for the deregulation of the EU’s digital rulebook. The intensity of this policy battle is also reflected in the fact that Big Tech companies have on average more than one lobby meeting per day with EU Commission officials.

This lobbying offensive appears to be paying off. Recently, a string of policy-makers have called for a pause of the Artificial Intelligence Act, and there is also a concerted push to weaken people’s data protection rights under the GDPR. Moreover, the EU’s Digital Markets Act (DMA) and Digital Services Act (DSA) are being constantly challenged by Big Tech, including via the Trump administration.

Jack Power, Irish Times:

Meta, the company that owns Facebook, Instagram and WhatsApp, has privately tried to convince the Irish Government to lead a pushback against data protection laws at European Union level, correspondence shows.

[…]

“We believe that the EU’s data protection and privacy regimes require a fundamental overhaul and that Ireland has a very important and meaningful role to play in achieving this,” she [Meta’s Erin Egan] wrote.

Thanks to years of being a tax haven, Ireland has now found itself in a position of unique responsibility. For example:

Ms Egan said Meta has been going back and forth with regulators about the company’s plans to train its AI models using public Facebook and Instagram posts.

An effective green light from the Data Protection Commission, which enforces data and privacy laws in Ireland, was a “welcome step”, she wrote.

The Commission made a series of recommendations giving E.U. citizens more control over the user data Meta is using to train its A.I. models. Still, it means user data on Meta platforms is being used to train A.I. models. While groups in Ireland and Germany objected to those plans, courts seemed largely satisfied with the controls and protections the DPC mandated, which were so basic that this article characterizes them as an “effective green light”.

Though it is apparently satisfied with the outcome, Meta does not want even that level of scrutiny. It wants to export its U.S.-centric view of user privacy rights — that is, that they are governed only by whatever Meta wants to jam into its lengthy terms of service agreements — around the world. I know lobbying is just something corporations do and policymakers are expected to consider their viewpoints. On the other hand, Meta’s entire history of contempt toward user privacy ought to be disqualifying. The correct response to Meta’s letter is to put it through a shredder without a second thought.

Maximilian Henning, Euractiv:

The International Criminal Court (ICC) will switch its internal work environment away from Microsoft Office to Open Desk, a European open source alternative, the institution confirmed to Euractiv.

Good. I hope to see more of this — not from a place of anti-Americanism, but as a recognition of the world’s dependence on U.S. technology and of the need for competition elsewhere. This industry is too important to be dependent on so few vendors, mostly headquartered in a single (volatile) country. We re-learn this every time Amazon goes down.

There was a time, not too long ago, when the lifespan of a computer seemed predictable and pretty short.

This was partly due to year-over-year performance gains. Checking the minimum system requirements was standard practice for new software, and you could safely assume that merely meeting those requirements would barely guarantee an acceptable experience. But computers would just get faster. Editing high definition video required a high-end computer and then, not too long after, it was something you could do on a consumer laptop. The same was true for all kinds of tasks.

Those rapid advancements were somewhat balanced by a slower pace of operating system releases. New major versions of Mac OS came out every couple-to-few years; the early days of Mac OS X were a flurry of successive updates, but they mellowed out to a pace more like once every two years. It was similar on the Windows side.

I remember replacing my mid-2007 MacBook Pro after it was just five years old, already wheezing for at least a year prior while attempting even the simplest of things. On the other hand, the MacBook Pro I am using today was released four years ago and, keycaps aside, feels basically new. All the spec comparisons say it is far behind the latest generation, but those numbers are simply irrelevant to me. It is difficult for me to believe this computer has already been succeeded by several generations and is probably closer to obsolescence than it is to launch.

Apple has generally issued about five years’ worth of operating system upgrades for its Macs, followed by another three-ish years of security updates. Thanks to U.K. regulations, it has recently documented (PDF) this previously implicit policy. It is possible MacOS 27 could be the last version supported by this Mac. After all, Apple recently noted in developer documentation that MacOS 26 Tahoe is the last version with any Intel Mac support. Furthermore, in its press release for the M5 MacBook Pro, there is an entire section specifically addressing “M1 and Intel-based upgraders”.

I have begun feeling the consequences of rampant progress when I use my 27-inch iMac, stuck on MacOS Ventura. It is not slow and it is still very capable, but there are new apps that do not support its maximum operating system version. The prospect of upgrading has never felt less necessary based solely on its performance, yet more urgent.

My MacBook Pro supports all the new stuff. It is running the latest version of MacOS, and Apple Intelligence works just fine on it — or, at least, as fine as Apple Intelligence can run anywhere. Perhaps the requirements of advanced A.I. models have created the motivation for users to upgrade their hardware. That might be a tough sell in the current state of Apple’s first-party option, however.

Apple created this problem for itself, in a way. This MacBook Pro is so good I simply cannot think of a reason I would want to replace it. But Apple will, one day, end support for it, and it probably still will not feel slow or incapable. The churn will happen — I know it will. But the solution to this problem is also, of course, to Apple’s benefit; I will probably buy another one of these things. I hope to avoid it for a long time. I first need to replace that iMac.

Cabel Sasser:

let me explain. the apple intelligence rainbow ring was their first (?) use of HDR UI; it drew brighter than your screen, and the vibrance was beautiful and subtle. but…

…in iOS 26 they seem to have applied HDR blasts to button taps, text field selects. etc. what was a specific treat is now, to my sensitive eyes, a bit much. is it just me?!

iOS 26.1 appears to tone down these effects overall, and the new Tinted appearance toggle makes them even less prominent. Thankfully.

This short video demonstrating what appears to be a buggy iOS keyboard has been getting passed around a lot, but I am not sure what to make of it.

The video creator is clearly typing certain characters — “u” and “m” — but iOS is inserting adjacent characters like “j” and “n”, in a context where those substitutions make no sense. There is no word in the English language that begins with, or even contains, the character string “thj”. Perhaps this is a bug in the keyboard animation more than it is a text insertion issue, and it is unclear whether this is a new problem in iOS 26.

Regardless, I cannot reproduce it today on an iPhone running iOS 26.1; perhaps it has been fixed, or it is intermittent. However, I have noticed text entry lag in iOS 26 immediately after the keyboard becomes visible. It nearly always misses the first one or two characters I type.

I am not sure it is worth writing at length about Grokipedia, the Elon Musk-funded effort to quite literally rewrite history from the perspective of a robot taught to avoid facts upsetting to the U.S. far right. Perhaps it will be an unfortunate success — the Fox News of encyclopedias, giving ideologues comfortable information as they further isolate themselves.

It is less a Wikipedia competitor than a machine-generated alternative to Conservapedia. Founded by Andy Schlafly, an attorney and son of Phyllis Schlafly, Conservapedia was an attempt to make an online encyclopedia from a decidedly U.S. conservative, American-exceptionalist perspective. Seventeen years ago, Schlafly’s effort was briefly profiled by Canadian television and, somehow, the site is still running. Perhaps that is the fate of Grokipedia: a brief curiosity, followed by traffic coming only from a self-selecting mix of weirdos and YouTubers needing material.

Marc Hogan, New York Times (gift link):

Enter Setlist.fm. The wikilike site, where users document what songs artists play each night on tour, has grown into a vast archive, updated in real time but also reaching back into the historical annals. From the era of Mozart (seriously!) to last night’s Chappell Roan show, Setlist.fm offers reams of statistics — which songs artists play most often, when they last broke out a particular tune. In recent years, the site has begun posting data about average concert start times and set lengths.

Good profile. I had no idea it was owned by Live Nation.

I try to avoid Setlist.fm ahead of a show, but I check it immediately when I get home and for the days following. I might be less familiar with an artist’s catalogue, and this is particularly true of an opener, so it lets me track down particular songs that were played. It is one of the internet’s great resources.

Sarah Perez, TechCrunch:

Zoom CEO Eric Yuan says AI will shorten our workweek

[…]

“Today, I need to manually focus on all those products to get work done. Eventually, AI will help,” Yuan said.

“By doing that, we do not need to work five days a week anymore, right? … Five years out, three days or four days [a week]. That’s a goal,” he said.

So far, technological advancements have not — in general — produced a shorter work week; that was a product of collective labour action. We have been promised a shorter week before. We do not need to carry water for people who peddle obvious lies. We will always end up being squeezed for greater output.

Andrew Kenney, Denverite:

It was Sgt. Jamie Milliman [at the door], a police officer with the Columbine Valley Police Department who covers the town of Bow Mar, which begins just south of [Chrisanna] Elser’s home.

[…]

“You know we have cameras in that jurisdiction and you can’t get a breath of fresh air, in or out of that place, without us knowing, correct?” he said.

“OK?” Elser, a financial planner in her 40s, responded in a video captured by her smart doorbell and viewed by Denverite.

“Just as an example,” the sergeant told her, she had “driven through 20 times the last month.”

This story is a civil liberties rollercoaster. Milliman was relying on a nearby town’s use of Flock license plate cameras and Ring doorbells — which may also be connected to the Flock network — to accuse Elser of theft and issue a summons. Elser was able to get the summons dropped by compiling evidence from, in part, the cameras and GPS system on her truck. Milliman’s threats were recorded by a doorbell camera, too. The whole thing is creepy, and all over a $25 package stolen off a doorstep.

I have also had things stolen from me, and I wish the police officers I spoke to had a better answer for me than shrugging their shoulders and saying, in effect, this is not worth our time. But this situation is like a parallel universe ad for Amazon and its Ring subsidiary. Is this the path toward “very close to zero[ing] out crime”? It is not worth it.