Firefox Nightly: A Tab Groups Scoop – These Weeks in Firefox: Issue 179

Highlights

  • The WebExtensions team is fast-tracking support for “tab groups”-related updates to the tabs API (the updates have landed in Nightly 139 and been uplifted to Beta 138)
  • New Picture-in-Picture captions support was added to several sites including iq.com, rte.ie and joyn.de. Thanks to kernp25 and cmhernandezdev for their contributions!
  • The Profiles team is happy to report that the feature is currently in 138 beta with no open blockers from QA!
    • Next up, we plan to do a 0.5% rollout in 138 release. We’re being extremely cautious because profiles are where user data is stored, and we need to get this right.
  • The WebExtensions team has introduced a new pref to allow developers to more easily test the add-on update flow from about:addons. Setting extensions.webextensions.prefer-update-over-install-for-existing-addon to true changes the behavior of the “Install Add-on From File…” menu item to use the update flow rather than the install flow for pre-existing add-ons (Bug 1956540)
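
One way to flip the new pref (Bug 1956540), assuming you keep a user.js file in your development profile; it can also be toggled in about:config:

  // Make “Install Add-on From File…” use the update flow for existing add-ons.
  user_pref("extensions.webextensions.prefer-update-over-install-for-existing-addon", true);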

Friends of the Firefox team

Introductions/Shout-Outs

  • Welcome to Joel Kelly who is joining the New Tab front-end team!

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  • Gautam Panakkal
  • Jason Jones
  • Kernp25
  • Ricardo Delgado Gomez

New contributors (🌟 = first patch)

  • 🌟 Yakub Abdulrahman Alada: Bug 1917908 – Improve layout of the about:translations controls for small-width screens
  • Brian Ouyang: Bug 1948995 – Allow Full-Page Translations on moz-extension URLs
  • Cruz Hernandez: Bug 1958974 – Updated disneyplus PiP wrapper
  • Gautam Panakkal
    • Bug 1640117 – update console warnings to state ‘enhanced tracking protection’ instead of ‘content blocking’
    • Bug 1860037 – Split up and clean up browser/extensions/formautofill/test/browser/creditCard/browser_creditCard_telemetry.js
    • Bug 1955584 – Set right margin equal to top/bottom margin on vertical tab close buttons
  • 🌟 Isaac Briandt: Bug 1944944 – Update code to align with A11y Audit for Full-Page ARIA Attribute Translations
  • 🌟 Jason Jones
    • Bug 1176600 – Remove defunct pref listed in m-c prefs/all.js
    • Bug 1689254 – Lazily initialize zoom UI
    • Bug 1855787 – Persist translation panel intro until first translation is complete
    • Bug 1956009 – Remove browser/base/content/test/zoom/browser_default_zoom_multitab_002.js
  • joel.mozillaosi: Bug 1953387 – [devtools] Display empty string for undefined/NaN in netmonitor time columns
  • keanucuco: Bug 1855839 – [Translations] Refresh offline translation language list via TranslationsView observer
  • Abdelaziz Mokhnache: Bug 1957554 – add sort predicate for path column
  • 🌟 Ricardo Delgado Gomez
    • Bug 1815793 – Display error when failing to load supported languages
    • Bug 1952132 – Add a border-radius to new-tab broken-image tiles, for consistency with other tiles
  • 🌟 Sangie[:sangie50]: Bug 1958324 – Rephrase history clearing to not include search in sanitize dialog
  • Shane Ziegler: Bug 1957495 – Move ToolbarIconColor helper object from `browser.js` into its own module `browser/themes/ToolbarIconColor.sys.mjs`
  • 🌟 Raksha Kumari: Bug 1947278 – Replace div with moz-card in Button Group story for emphasis

Project Updates

Add-ons / Web Extensions

WebExtension APIs
  • Localized strings using the i18n WebExtensions API will now cascade through locale subtags to find translations before falling back to the extension’s default language (Bug 1381580); see the sketch after this list
    • Thanks to Carlos for contributing this enhancement to the i18n API 🎉
  • A new text-to-audio task type has been added to the trialML API to allow extensions to use the Xenova/speecht5_tts model for text to speech tasks (Bug 1959146).
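
A minimal sketch of the locale cascade mentioned above (Bug 1381580), assuming an extension whose manifest sets default_locale "en" and which also ships an "es" locale:

  // With the UI locale set to "es-MX", browser.i18n.getMessage() now tries
  // _locales/es_MX/messages.json first, then _locales/es/messages.json
  // (the subtag cascade), before falling back to _locales/en/messages.json.
  const label = browser.i18n.getMessage("extensionName");
  console.log(label); // resolved from the closest matching locale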

DevTools

WebDriver BiDi

Lint, Docs and Workflow

New Tab Page

Picture-in-Picture

Performance Tools (aka Firefox Profiler)

  • Profiling xpcshell tests locally just became easier: ./mach test <path to xpcshell test> --profiler will open a profile of the test at the end.

Profile Management

  • 139 is a catch-up / blocker uplift / bugfix release. The main focus is making the cross-profile shared database and cross-instance notifier code independent of the profiles feature, to support Nimbus and OMC storing cross-profile data there even if the profiles feature isn’t enabled (metabug 1953861).
  • Recently fixed bugs:
    • Jared fixed bug 1957924, ensuring the profile group ID gets correctly set across a profile group if a user disables, then re-enables data collection
    • Jared fixed bug 1958196, fixing visibility issues in profiles using the System Theme after an OS theme change
    • Niklas fixed bug 1956111, App menu back button hover coloring is incorrect in HCM
    • Teddy fixed bug 1941576, Profile manager: Subcopy missing from “Open” checkbox
    • Teddy fixed bug 1956350, Limit theme chip labels to one truncated line
    • Sammy fixed bug 1957767, When account is disconnected, cannot log back into profile
    • Dave fixed some expiring probes in bugs 1958163 and 1958171

Search and Navigation

  • Dao fixed the toolbar context menu being shown on address bar results @ 1957448
  • Drew fixed several bugs relating to sponsored suggestions @ 1955360 + 1955257 + 1958038 + 1958421
  • James has been fixing TypeScript definitions across multiple files @ 1958104 + 1958102 + 1958640
  • Moritz implemented a patch to reset Search settings when they are corrupt instead of just failing @ 1945178
  • Daisuke fixed the unified search button being persisted incorrectly @ 1957630
  • Moritz fixed quick actions not being shown after tab switch @ 1958878

Storybook/Reusable Components/Acorn Design System

  • Jules created a new color palette and all of the colors are now available in the design tokens (not just the ones that are being used already 🎉)
    • You can check out the Colors section of the Tokens Table in Storybook
    • Grays were not updated in this change; that work is being done in another bug
  • We’re playing with the idea of being called the “Acorn Engineering”/”Design System Engineering” team, so if you see those names, it’s still recomp 🙂

The Mozilla Blog: Ads performance, re-imagined. Now in beta: Anonym Private Audiences.

Together, Mozilla and Anonym are proving that effective advertising doesn’t have to come at the cost of user privacy. It’s possible to deliver both — and we’re building the tools to show the industry how.

Today, we’re unveiling Anonym Private Audiences: a confidential computing solution allowing advertisers to securely build new audiences and boost campaign results.

Powered by advanced privacy-preserving machine learning, Anonym Private Audiences enables advertisers and platforms to work together using first-party data to create targeted audiences without ever handing their users’ information to one another. Brands can discover and engage look-alike communities — reaching new high-value customers — without sending or exposing their customers’ data to ad platforms. As the evolving advertising landscape makes third-party data less viable, Private Audiences supports privacy while enabling the performance advertisers have come to expect.

Private Audiences employs differential privacy and secure computation to minimize the sharing of data commonly passed between advertisers and ad networks. It operates separately from, and is not integrated with, our flagship Firefox browser.

Why advertisers are turning to Private Audiences

Advertisers today are facing a difficult challenge: how to grow their business without breaking the trust of the people they’re trying to reach. Private Audiences was built to meet that moment — helping teams use the data they already have to find new high-value customers, without giving up data control along the way.

Early adopters are already seeing meaningful gains, with campaign performance improving an average of 30% compared to traditional broad targeting. And the reasons why it’s resonating are relevant to any brand looking to grow smarter and more sustainably:

  • Find the right people, not just more people. Predictive machine learning helps advertisers reach new audiences that look and behave like their best customers — improving efficiency without ramping up spend.
  • Keep trust intact. In sectors where privacy expectations are highest, early adopters are showing that it’s possible to respect users’ privacy and still drive results.
  • Use what you already know. Private Audiences works with the tools teams already rely on. Audiences show up in platform-native interfaces, so there’s nothing new to learn or configure.
  • Stay ahead of shifting standards. Private Audiences is built on privacy-first architecture — helping brands keep pace with evolving norms, expectations, and technical requirements.

How Private Audiences protects user privacy

In most audience-building workflows today, advertisers integrate directly with ad platforms to share customer data, whether through raw file uploads or automated server-to-server transfers. The platform then uses that data to build ‘look-alike’ audiences or, in some cases, retarget those same individuals directly. Anonym’s approach enables businesses to retain full control over their user data and employ gold-standard protections, which are particularly important in privacy-sensitive industries and regions.

Private Audiences takes a fundamentally different approach

Instead of sharing data directly with platforms, brands securely upload a list of high-value customers using a simple drag-and-drop interface. That data is encrypted and processed inside Anonym’s Trusted Execution Environment (TEE), where audience modeling happens in isolation. No data is exposed — not to Anonym, and not to the platform. Anonym trains the model, ranks eligible audiences based on likely performance, and returns a ready-to-use audience segment. Anonym’s ad platform partners only learn which of their existing users to include in the audience – they receive no new personal information or audience attributes. When the process is finished, the TEE is wiped clean.

The result: strong performance, without giving up data control or compromising on privacy.

Diagram illustrating how Anonym's machine learning identifies users similar to an advertiser's high-value customers based on shared attributes.

Breakthrough performance and privacy capabilities with Private Audiences, and more

Private Audiences joins the ranks of Anonym’s other solutions: Private Attribution, which enables accurate view-through attribution without user tracking, and Private Lift, which helps advertisers understand incrementality without exposing identities. Together, Anonym’s tools represent a new foundation for digital advertising trust — a solution portfolio built on transparency, accountability, and respect for the people it reaches. 

Because trust isn’t optional — it’s foundational

Mozilla has always believed privacy is a fundamental human right, and we will continue our relentless focus on designing and delivering products and services to protect it. Advertising performance — as much as privacy — is a foundational part of this journey. 

Anonym Private Audiences is currently in closed beta, supporting early use cases where privacy matters most. We’re excited to partner with advertisers seeking a better way to build high-performing audiences without compromising their customers’ trust.

For a deeper dive or beta participation details, get in touch with us here.



This Week In Rust: This Week in Rust 596

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Foundation
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs

Crate of the Week

This week's crate is Maycoon, an experimental vello/wgpu-based UI framework.

Thanks to DraftedDev for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Let us know if you would like your feature to be tracked as a part of this list.

RFCs
Rust
Rustup

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

465 pull requests were merged in the last week

Compiler
Miri
Library
Cargo
Rustdoc
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

Mostly positive week. Most of the improvements come from a revert of a regression from a few weeks ago, but we also get nice wins from re-using the Sized fast-path, coming from the Sized hierarchy implementation work.

Triage done by @panstromek. Revision range: 15f58c46..8f2819b0

Summary:

(instructions:u)             mean     range              count
Regressions ❌ (primary)      1.3%     [0.4%, 2.1%]       7
Regressions ❌ (secondary)    -        -                  0
Improvements ✅ (primary)     -1.0%    [-12.9%, -0.1%]    144
Improvements ✅ (secondary)   -2.2%    [-12.3%, -0.2%]    111
All ❌✅ (primary)            -0.9%    [-12.9%, 2.1%]     151

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs
Rust
Other Areas
Cargo
Language Reference

No items entered Final Comment Period this week for Rust RFCs, Language Team, or Unsafe Code Guidelines.

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

Upcoming Events

Rusty Events between 2025-04-23 - 2025-05-21 🦀

Virtual
Europe
North America
South America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

I don’t think about rust either. That’s a compiler’s job

Steve Klabnik on Bluesky

Thanks to Matt Wismer for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, U007D, joelmarcey, mariannegoldin, bennyvasquez, bdillo

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

SpiderMonkey Development Blog: 5 Things You Might Not Know about Developing Self-Hosted Code

Self-hosted code is JavaScript code that SpiderMonkey uses to implement some of its intrinsic functions for JavaScript. Because it is written in JavaScript, it gets all the benefits of our JITs, like inlining and inline caches.

Even if you are just getting started with self-hosted code, you probably already know that it isn’t quite the same as your typical, day-to-day JavaScript. You’ve probably already been pointed at the SMDOC, but here are a couple of tips to make developing self-hosted code a little easier.

1. When you change self-hosted code, you need to build

When you make changes to SpiderMonkey’s self-hosted JavaScript code, you will not automatically see your changes take effect in Firefox or the JS Shell.

SpiderMonkey’s self-hosted code is split up into multiple files and functions to make it easier for developers to understand, but at runtime, SpiderMonkey loads it all from a single, compressed data stream. This means that all those files are gathered together into a single script file and compressed at build time.

To see your changes take effect, you must remember to build!

2. dbg()

Self-hosted JavaScript code is hidden from the JS Debugger, and it can be challenging to debug JS using a C++ debugger. You might be tempted to log messages with console.log() to help you debug your code, but that is not available in self-hosted code!

In debug builds, you can print out messages and objects using dbg(), which takes a single argument to print to stderr.
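
For instance, a minimal sketch (the message text here is made up):

  // In a debug build, dbg() prints its single argument to stderr.
  var itemCount = 3;
  dbg("about to process " + itemCount + " items");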

3. Specification step comments

If you are stuck trying to figure out how to implement a step in the JS specification or a proposal, you can see if SpiderMonkey has implemented a similar step elsewhere and base your implementation off that. We try to diligently comment our implementations with references to the specification, so there’s a good chance you can find what you are looking for.

For example, if you need to use the specification function CreateDataPropertyOrThrow(), you can search for it (SearchFox is a great tool for this) and discover that it is implemented in self-hosted code using DefineDataProperty().
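
As a hedged illustration of that pattern (the helper name is mine, for illustration, not from the tree):

  // The spec step “Perform ? CreateDataPropertyOrThrow(target, key, value)”
  // is written in self-hosted code as a call to the DefineDataProperty intrinsic.
  function CopyEntryOrThrow(source, target, key) {
      var value = source[key];
      DefineDataProperty(target, key, value);
  }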

4. getSelfHostedValue()

If you want to explore how a self-hosted function works directly, you can use the JS Shell helper function getSelfHostedValue().

We use this method to write many of our tests. For example, unicode-extension-sequences.js checks the implementation of the self-hosted functions startOfUnicodeExtensions() and endOfUnicodeExtensions().

You can also use getSelfHostedValue() to get C++ intrinsic functions, like how toLength.js tests ToLength().
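
In a JS Shell session, that looks something like this (the locale string and result shape are illustrative, not verified):

  // Pull the self-hosted function into the global scope, then call it.
  var startOfUnicodeExtensions = getSelfHostedValue("startOfUnicodeExtensions");
  print(startOfUnicodeExtensions("de-u-co-phonebk")); // index where "-u-..." begins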

5. You can define your own self-hosted functions

You can write your own self-hosted functions and make them available in the JS Shell and XPC shell. For example, you could write a self-hosted function to print a formatted error message:

  function report(msg) {
      dbg("|ERROR| " + msg + "|");
  }

Then, while you are setting up globals for your JS runtime, call JS_DefineFunctions(cx, obj, funcs):

  static const JSFunctionSpec funcs[] = {
      JS_SELF_HOSTED_FN("report", "report", 1, 0),
      JS_FS_END,
  };

  if (!JS_DefineFunctions(cx, globalObject, funcs)) {
      return false;
  }

The JS_SELF_HOSTED_FN() macro takes the following parameters:

  1. name - The name you want your function to have in JS.
  2. selfHostedName - The name of the self-hosted function.
  3. nargs - Number of formal JS arguments to the self-hosted function.
  4. flags - This is almost always 0, but could be any combination of JSPROP_*.

Now, when you build the JS Shell or XPC Shell, you can call your function:

js> report("BOOM!");
Iterator.js#6: |ERROR| BOOM!|

Mitchell Baker: Global AI Summit on Africa: my experience

The Mozilla Blog: Exploring on-device AI link previews in Firefox

Ever opened a bunch of tabs only to realize none of them have what you need? Or felt like you’re missing something valuable in a maze of hyperlinks? In Firefox Labs 138, we introduced an optional experimental feature to enhance your browsing experience by showing a quick snapshot of what’s behind a link before you open it. This post shares some technical details of this early exploration so the community can help shape the feature, and sets the stage for deeper discussions of specific areas like AI models.

Interaction

To activate a Link Preview, hover over a link and press Shift (⇧) plus Alt (Option ⌥ on macOS); a card appears including the title, description, image, reading time, and 3 key points generated by an on-device language model. This is built on top of the existing Firefox behavior of showing the URL when hovering over a link, so it also works when links are focused with the keyboard. We picked this keyboard shortcut to try to avoid conflicts with common shortcuts, e.g., opening tabs or Windows menus. Let us know: do you prefer some keyboard shortcut or potentially other triggers like long press, context menu, or maybe hover with delay?

animation showing shift+alt keyboard presses triggering link preview
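
A rough sketch of that trigger, with hypothetical names (this is not Firefox’s internal code):

  // Preview only when a link is hovered or focused and both Shift and Alt
  // (Option on macOS) are held.
  function shouldShowLinkPreview(event, link) {
      return Boolean(link) && event.shiftKey && event.altKey;
  }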

The card appears in a panel separate from the page, allowing it to extend past the edges of the window. This helps us position the link within the card near your mouse cursor, making it convenient to visit the previewed page, while also reinforcing that the card comes from Firefox and not the page. We’re also exploring the possibility of making the preview part of the page, allowing the two to scroll together or stay more separate, such as a persistent space to gather multiple previews for cross-referencing or subsequent actions. Let us know: which approaches better support your browsing workflows?

Page fetching and extraction

This initial implementation uses credentialless HTTPS requests to retrieve a page’s HTML and parses it without actually loading the page or executing scripts. While we don’t currently send cookies, we do send a custom x-firefox-ai header allowing website authors to potentially decide what content can be previewed. Let us know: would you want previews of content requiring login, perhaps with a risk of accidentally changing related logged-in state?
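
A minimal sketch of that kind of fetch, with hypothetical names; the header value "1" is an assumption, since the post only names the header itself:

  // Fetch the page without credentials (no cookies) and parse it without
  // running scripts: DOMParser builds a detached document and never executes
  // the scripts it contains.
  async function fetchPreviewDoc(url) {
      const response = await fetch(url, {
          credentials: "omit",
          headers: { "x-firefox-ai": "1" },
      });
      const html = await response.text();
      return new DOMParser().parseFromString(html, "text/html");
  }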

With the parsed page, we look for metadata, such as Open Graph tags, which are commonly used for social media link sharing, to display the title, description, and image. We also reuse Firefox’s Reader View capabilities to extract the reading time and the main article content used to generate key points. Improvements to page parsing capabilities can enhance both Reader View and Link Previews. Let us know: on which sites you find the feature useful, and on which it pulls the wrong information.
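
A sketch of that metadata lookup over a parsed Document; the fallbacks beyond the Open Graph tags named above are assumptions:

  // Prefer Open Graph tags, then fall back to generic page metadata.
  function getCardMetadata(doc) {
      const og = (prop) =>
          doc.querySelector(`meta[property="og:${prop}"]`)?.content;
      return {
          title: og("title") ?? doc.title,
          description:
              og("description") ??
              doc.querySelector('meta[name="description"]')?.content,
          image: og("image"),
      };
  }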

Key points, locally generated 

To ensure user privacy, we run inference on-device with Reader View’s content. This is currently powered by wllama (WebAssembly llama.cpp) with SmolLM2-360M from HuggingFace, chosen based on our evaluation of performance, relevance, consistency, etc. Testing so far shows most people can see the first key point within 4 seconds and each additional point within a second, so let us know how that feels for you and whether you’d want it faster or smarter.

There are various optimizations to speed things up, such as downloading the AI model (369MB) when you first enable the feature in Firefox Labs, as well as limiting how much content is provided to the model to match the intent of a preview. We also use pre-processing and post-processing heuristics that are English-focused, but some in the community have already changed the language-limiting pref from “en” and provided helpful feedback that this model can work for other languages too.

Next steps

We’re actively working on improving support for multiple languages, the quality and length of key points, and general polish to the feature’s capabilities and user experience, as well as exploring how to bring this to Android. We invite you to try Link Preview and look forward to your feedback on enhancing how Firefox helps users accomplish more on the web. You can also chat with us on the AI@Mozilla Discord in #firefox-ai.


Don Marti: Google Ads Shitshow Report 2024

Google ads are full of crime and most web users should block them. If you don’t believe the FBI, or Malwarebytes, believe Google. Their 2024 Ads Safety Report is out (Search Engine Land covered it) and things do not look good. The report is an excellent example of some of the techniques that big companies use to misrepresent an ongoing disaster as somehow improving, so I might as well list them. If I had to do a corporate misinformation training session, I’d save this PDF for a reading assignment.

release bad news when other news is happening This was a big news week for Google, which made it the best time to release this embarrassing report. Editors aren’t going to put their Google reporter to work on an ad safety story when there’s big news from the Federal courthouse.

counting meaningless numbers Somehow our culture teaches us to love to count, so Google gives us a meaningless number when the meaningful numbers would look crappy.

Last year, we continued to invest heavily in making our LLMs more advanced than ever, launching over 50 enhancements to our models which enabled more efficient and precise enforcement at scale.

The claim is that Google continued to invest heavily and that’s the kind of statement that’s relatively easy to back up with a number that has meaningful units attached. Currency units, head count, time units, even lines of code. Instead, the count is enhancements which could be almost anything. Rebuild an existing package with different compiler optimizations? Feed an additional data file to some ML system? What this looks like from the outside is that the meaningful numbers are going in the wrong direction (maybe some of the people who would have made them go up aren’t there any more?) so they decided to put out a meaningless number instead.

control the denominator to juice the ratio Only takes elementary school math to spot this, but might be easy to miss if you’re skimming.

Our AI-powered models contributed to the detection and enforcement of 97% of the pages we took action on last year.

Wow, 97%, that’s a big number. But it’s out of pages we took action on, which is totally under Google’s control. There are a bunch of possible meaningful ratios to report here, like

  • (AI-flagged ads)/(total ads)

  • (ads removed)/(AI-flagged ads)

  • (bad ad impressions)/(total ad impressions)

and those could have been reported as a percentage, but it looks like they wanted to go for the big number.
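
To make the denominator trick concrete, here are toy numbers (mine, not Google’s):

  // “97% of pages we took action on” can coexist with the AI flagging only a
  // sliver of all ads; the headline ratio is the one whose denominator Google
  // controls.
  const totalAds = 1_000_000;   // hypothetical ads reviewed
  const aiFlagged = 100_000;    // hypothetical ads the AI flagged
  const actioned = 50_000;      // hypothetical pages/ads acted on
  const aiContributed = 48_500; // hypothetical actions the AI “contributed” to
  console.log(aiContributed / actioned); // 0.97 -> the headline number
  console.log(aiFlagged / totalAds);     // 0.10 -> a much less flashy ratio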

pretend something that’s not working is working The AI models contributed to 97% of the actions, but contributed isn’t defined. Does it count as contributed if, say, human reviewers flagged 1,000 ads, the AI flagged 100,000 ads, and 970 ads were flagged by both? If AI were flagging ads that had been missed by other methods, this would have been the place to put it. The newsworthy claim that’s missing is the count of bad ads first detected by AI before getting caught by a human reviewer. Contributed to the detection could be a lot of things. (If this were a report on a free trial of an AI-based abuse detection service, contributed wouldn’t get me to upgrade to the paid plan.)

report the number caught, not the number that gets through The number of abusers caught is always the easiest number to juice. The simplest version is to go home at lunch hour, code up the world’s weakest bot, start it running from a non-work IP address, then go back to work and report some impressive numbers.

To put this into perspective: we suspended over 39.2 million accounts in total, the vast majority of which were suspended before they ever served an ad.

Are any employees given target numbers of suspensions to issue? Can anyone nail their OKRs by raising the number of accounts suspended? If this number is unreliable enough that a company wouldn’t use it for management, it’s not reliable enough to pay attention to. They’re also reporting the number of accounts, not individuals or companies. If some noob wannabe scammer writes a script to POST the new account form a million times, do they count for a million?

don’t compare to last year Here’s the graph of bad ads caught by Google in 2024.

5.1 billion bad ads were stopped in 2024

And here’s the same graph from the 2023 report.

5.5 billion bad ads were stopped in 2023

The total number isn’t as interesting as the individual, really problematic categories. The number caught for enabling dishonest behavior went down from about 20 million in 2023 to under 9 million in 2024.

Did the number of attempts at dishonest behavior with Google ads really go down by more than half in a single year? Or did Google catch fewer of them? From the outside, it’s fairly easy to tell that Google Ads is understaffed and the remaining employees are in the weeds, but it’s hard to quantify the problem. What’s really compelling about this report is that the staffing situation has gotten bad enough that it’s even showing up in Google’s own hand-picked numbers. In general when a report doesn’t include how a number has changed since the last report, the number went in the wrong direction and there’s no good explanation for why. And the number of ads blocked or removed for misinformation went from 30 million in 2023 to (checks notes) zero in 2024. Yes, misinformation has friends in high places now, but did all of the sites worldwide that run Google ads just go from not wanting to run misinformation to being fine with it?

report detection, not consequences Those numbers on bad ads are interesting, but pay attention to the text. These are numbers for ads blocked or removed, and repeat offenders drive the bulk of tech support scams via Google Ads. Does an advertiser caught doing misrepresentation in one ad get to keep going with different ads?

don’t compare to last year, part 2 The previous two graphs showed Google’s bad ads/good site problem, so here’s how they’re doing on their good ad/bad site problem. Here’s 2024:

1.3 billion pages taken action against in 2024

And 2023:

2.1 billion pages taken action against in 2023

Ad-supported AI slop is on the way up everywhere, making problem pages easier to create at scale, but Google somehow caught 800 million fewer pages than in 2023. How many pages they took action against isn’t even a good metric (and I would be surprised if anyone is incentivized based on it). Some more useful numbers would be stuff like

  • What percentage of advertisers had their ad run on a page that later had action taken against it?

  • How much money was paid out to sites that were later removed for violating the law or Google policy?

But as in the previous graph, the big problem is in one of the categories. Google caught fewer pages for malicious or unwanted software in 2024 than in 2023. Is there a good explanation for why Google is taking less action on malicious or unwanted software in 2024 than in 2023? As far as I know, nobody is claiming that developers are writing less of this kind of software, or promoting it less. (icymi: Researcher uncovers dozens of sketchy Chrome extensions with 4 million installs) Google management is just so mad about the union situation that they’re willing to make the users suffer in order to keep threatening workers with more layoffs. And did the amount of dangerous or derogatory content really go down from 104 million pages to 24.8 million pages in a year? Or did something happen on the Google side? (icymi: A trillion-dollar problem: how a broken digital ad industry is fracturing society – and how we can fix it)

A real Ad Safety Report would help an advertiser answer questions about how likely they are to sponsor illegal content when they buy Google ads. And it would help a publisher understand how likely they are to have an ad for malware show up on their pages. No help from this report. Even though from the outside we can see that Google runs a bunch of ads on copyright-infringing sites, not only does Google not report the most meaningful numbers, they’re doing worse than before on the less meaningful numbers they do choose to report.

Google employees, (yes, both FTEs and TVCs) are doing a lot of good work trying to do the right thing on the whole ads/crime problem, but management just isn’t staffing and funding the ad safety stuff at the level it needs. A company with real competition would have had to straighten this situation out by now, but that’s not the case for Google. Google’s services like Search are both free and overpriced—users don’t pay in money, but in over-exposure to fraud and malware risks that would be lower in a competitive market. If a future Google breakup works, one of the best indicators of success will be more meaningful, and more improved, metrics in future ad safety reports.

more: Click this to buy better stuff and be happier, fix Google Search

just in case anyone wants to release better numbers: How to leak to a journalist by Laura Hazard Owen

Bonus links

Flaming Fame. by George Tannenbaum. We don’t see shitty work and say that’s shitty. It’s worse than that. We simply don’t see it at all.

LG TVs’ integrated ads get more personal with tech that analyzes viewer emotions by Scharon Harding. The company plans to incorporate a partner company’s AI tech into its TV software in order to interpret psychological factors impacting a viewer, such as personal interests, personality traits, and lifestyle choices. (What happens when you do a Right to Know for the family TV?)

Former Substack creators say they’re earning more on new platforms that offer larger shares of subscription revenue by Alexander Lee. Since leaving Substack, some writers’ subscriber counts have plateaued over the past year, while others have risen — but in both cases, creators said that their share of revenue has increased because Ghost and Beehiiv charge creators flat monthly rates that scale based on their subscriber counts, rather than Substack’s 10 percent cut of all transaction fees.

The Mediocrity of Modern Google by Om Malik. What’s particularly ironic is that today’s Google has become exactly what its founders warned against in their 1998 paper: an advertising company whose business model fundamentally conflicts with serving users’ needs.

With Support of Check My Ads Institute’s Advocacy, Senator Warner Urges FTC and DOJ to Investigate Ad Fraud Affecting U.S. Government Agencies Senator Warner’s letters to FTC Chairman Andrew Ferguson and DOJ Attorney General Pam Bondi cite new research by cybersecurity and digital forensics firm Adalytics, exposing how major adtech vendors have failed to deliver the “real-time bot detection” that they promised. As a result, advertisements intended for human audiences instead were shown, for at least five years, to easily-identifiable bots operated from data centers, including bots on industry group bot lists. (Adalytics: The Ad Industry’s Bot Problem Is Worse Than We Thought)

Git turns 20: A Q&A with Linus Torvalds by Taylor Blau. So I was like, okay, I’ll do something that works for me, and I won’t care about anybody else. And really that showed in the first few months and years—people were complaining that it was kind of hard to use, not intuitive enough. And then something happened, like there was a switch that was thrown.

I’m not an expert on electric cars, so I don’t know enough to criticize some of the hard parts of the design of a Tesla. But when they get obvious stuff like getting out without power wrong, that’s a pretty good sign to stay away.

How the U.S. Became A Science Superpower by Steve Blank. Post war, it meant Britain’s early lead was ephemeral while the U.S. built the foundation for a science and technology innovation ecosystem that led the world—until now.

Mitchell Baker: Expanding Mozilla’s Boards in 2020

Mozilla is a global community that is building an open and healthy internet. We do so by building products that improve internet life, giving people more privacy, security and control over the experiences they have online. We are also helping to grow the movement of people and organizations around the world committed to making the digital world healthier.

As we grow our ambitions for this work, we are seeking new members for the Mozilla Foundation Board of Directors. The Foundation’s programs focus on the movement building side of our work and complement the products and technology developed by Mozilla Corporation.

What is the role of a Mozilla board member?

I’ve written in the past about the role of the Board of Directors at Mozilla.

At Mozilla, our board members join more than just a board, they join the greater team and the whole movement for internet health. We invite our board members to build relationships with management, employees and volunteers. The conventional thinking is that these types of relationships make it hard for the Executive Director to do his or her job. I wrote in my previous post that “We feel differently”. This is still true today. We have open flows of information in multiple channels. Part of building the world we want is to have built transparency and shared understandings.

It’s worth noting that Mozilla is an unusual organization. We’re a technology powerhouse with broad internet openness and empowerment at its core. We feel like a product organization to those from the nonprofit world; we feel like a non-profit organization to those from the technology industry.

It’s important that our board members understand the full breadth of Mozilla’s mission. It’s important that Mozilla Foundation Board members understand why we build consumer products, why it happens in the subsidiary and why they cannot micro-manage this work. It is equally important that Mozilla Corporation Board members understand why we engage in the open internet activities of the Mozilla Foundation and why we seek to develop complementary programs and shared goals.

What are we looking for?

Last time we opened our call for board members, we created a visual role description. Below is an updated version reflecting the current needs for our Mozilla Foundation Board.

Here is the full job description: https://mzl.la/MoFoBoardJD

Here is a short explanation of how to read this visual:

  • In the vertical columns, we have the particular skills and expertise that we are looking for right now. We expect new board members to have at least one of these skills.
  • The horizontal lines speak to things that every board member should have. For instance, to be a board member, you should have some cultural sense of Mozilla. They are a set of things that are important for every candidate. In addition, there is a set of things that are important for the board as a whole. For instance, international experience. The board makeup overall should cover these areas.
  • The horizontal lines will not change too much over time, whereas the vertical lines will change, depending on who joins the Board and who leaves.

Finding the right people who match these criteria and who have the skills we need takes time. We hope to have extensive discussions with a wide range of people. Board candidates will meet the existing board members, members of the management team, individual contributors and volunteers. We see this as a good way to get to know how someone thinks and works within the framework of the Mozilla mission. It also helps us feel comfortable including someone at this senior level of stewardship.

We want your suggestions

We are hoping to add three new members to the Mozilla Foundation Board of Directors over the next 18 months. If you have candidates that you believe would be good board members, send them to [email protected]. We will use real discretion with the names you send us.

Chris H-C: Perfect is the Enemy of Good Enough

My Papa, my mother’s father, C. J. Mortimer died in Saint John, New Brunswick in 2020. Flying through the Toronto and Montreal airports in September to his funeral was one of the surreal experiences of my life, with misting tunnels of aerosolized alcohol to kill any microbe on your skin, hair, clothes, and luggage; airport terminals with more rodent traps than people; and a hypersensitivity to everyone’s cough and sniffle that I haven’t been able to shake.

I was angry, then. I’m still angry. Angry that I couldn’t hug my grandmother. Angry that weeping itself was complicated and contagious. Angry that I couldn’t be together or near or held. Angry that I was putting my family at home at risk by even going. Angry that we didn’t hold the line on the lockdowns long enough to manage the disease properly. Angry at the whiners.

This isn’t a pandemic post, though. Well, no more than any post I’ve made since 2020. No more than any post I will make for the foreseeable.

This is a post about what my grandfather gave to me.

Y’see, I’m not the first computer nerd in the family. My Grampy, my father’s father, was and my father is a computer nerd. Grampy’s memoirs were typed into a Commodore 64. Dad is still fighting with Enterprise Java, of all things, to help his medical practice run smoothly.

And Papa? In the 60s he was offered lucrative computer positions at Irving Oil in Saint John and IBM in the US. Getting employment in the tech industry was different in those days, not leastwise because the tech industry didn’t really exist yet. You didn’t get jobs because you studied it in school, because there weren’t classes in it. You didn’t get jobs because of your experience in the field, because the most experienced you could be was the handful of years they’d been at it. You didn’t get jobs because of your knowledge of a programming language, because there were so few of them and they were all so new (and proprietary).

So what was a giant like International Business Machines to do? How could it identify in far-flung, blue-collar Atlantic Canada a candidate computer programmer? Because though the tech industry didn’t exist in a way we’d necessarily recognize, it was already hungrier for men to work in it than the local markets could supply.

In my Papa’s case, they tested his aptitude with the IBM Aptitude Test for Programmer Personnel (copyright 1964):

Logo and explanation from the front cover of a "IBM Aptitude Test for Programmer Personnel" with directions that read "1. Do not make any marks in this booklet. 2. On the separate answer sheet print your name, date, and other requested information in the proper spaces. 3. Then wait for further instructions.". The design is geometric and bland but not unpleasant.

Again, though, how do you evaluate programmer aptitude without a common programming language? Without common idioms? Without even a common vocabulary of what “code” could mean or be?

IBM used pattern-matching questions with letters:

Instructions reading "In Part I you will be given some problems like those on this page. The letters in each series follow a certain rule. For each series of letters you are to find the correct rule and complete the series. One of the letters at the right side of the page is the correct answer. Look at the example below." The provided example, marked "W." reads "a b a b a b a b" followed by five possible numbered answers "a b c d e".

And pattern-matching questions with pictures:

Instructions reading "In Part II you will be given some problems like those on this page. Each row is a problem. Each row consists of four figures on the left-hand side of the page and five figures on the right-hand side of the page. The four figures on the left make a series. You are to find out which one of the figures on the right-hand side would be the next or the fifth one in the series. Now look at example X". Example X is four squares, each with a single quadrant shaded. In order, top-right, bottom-right, bottom-left, top-left. The five possible answers labeled A through E are squares with one quadrant shaded (A bottom-right, B top-right, C bottom-left, D top-left), and a square with no quadrant shaded (E).

And arithmetic reasoning questions:

Instructions reading "In Part III you will be given some problems in arithmetical reasoning. After each problem there are five answers, but only one of them is the correct answer. You are to solve each problem and indicate the correct answer on the answer sheet. The following problems have been done correctly. Study them carefully." followed by "Example X: How many apples can you buy for 80 cents at the rate of 3 for 10 cents? (a) 6 (b) 12 (c) 18 (d) 24 (e) 30"

And that was it. For the standardized portion of the process, at least.

Papa delivered this test to my siblings and me when I was, I think, in Grade 9, so about 15 years of age. Even my 2- and 4-year-younger siblings performed well, and my 2-year-older sibling and I did nearly perfectly. Apparently the public education system had adapted to turning out programming personnel of high aptitude in the forty years or so since the test had been printed.

I was gifted Papa’s aptitude test booklet, some IBM flowcharting and diagramming worksheets, and a couple example punchcards before his death. I was thrilled to be entrusted with them. I had great plans for high-quality preservation digitization. If my Brother multi-function’s flatbed scanner wouldn’t do the trick, I’d ask the local University’s library for help. Or the Internet Archive itself!

The test booklet sat on my desk for years. And then Papa died. I placed the bulletin from the funeral service next to it on my desk. They both sat on my desk for further years.

I couldn’t bring myself to start the project of digitizing and preserving these things. I just couldn’t.

Part of it was how my brain works. But I didn’t need a diagnosis to develop coping mechanisms for projects that were impossible to start. I bragged about having it to my then-coworker Mike Hoye, the sort who cared about things like this. Being uncharacteristically prideful in front of a peer, a mentor, that’d surely force me to start.

They sat on my desk for years more.

We renovated the old spare room into an office for my wife and moved her desk and all her stuff out so I could have the luxury of an office to myself. We repainted and reorganized my office.

I looked at the test booklet.

I filed it away. I forgot where. I gave up.

But then, today, I read an essay that changed things. I read Dr. Cat Hicks’ Why I Cannot Be Technical. Not only does she reference Papa’s booklet (“Am I the only person who is extremely passionate about getting their hands on a copy of things like the IBM programmer aptitude tests from the 60s?”) but what she writes and how she writes reminds me of what drew me to blogging. What I wanted to contribute to and to change in this industry of ours. The feeling of maybe being a part of a movement, not a part of a machine.

I searched and searched and found the booklet. I looked at the flatbed scanner and remembered my ideas of finding the ideal digitization. The perfect preservation.

I said “Fuck it” and put it on the ground and started taking pictures with my phone.

To hell with perfect, I needed good enough.

I don’t remember what else was involved in IBM’s test of my Papa. I don’t even know if they conducted it in Canada or flew him to the States. He probably told me. I’m sorry I don’t remember.

I don’t know why he never kept up with programming. I don’t remember him ever working, just singing beautifully in the church choir, stuttering when speaking on the telephone, playing piano in the living room. He did like tech gadgets, though. He converted all our old home movies to DVD without touching a mouse or keyboard. I should’ve asked him why he never owned a minicomputer.

I do know why he didn’t choose the IBM job, though. Sure, yes, he could stay closer to his family in Nova Scotia. Sure, he wouldn’t have to wear quite as many suits. But the real crux was the offer that Irving gave him. IBM wanted him as a cog in their machine. Another programming person to feed into their maw and… well, who knows what next. But Irving? Well, Irving _also_ wanted that, true. They needed someone to operate their business machines for payroll and accounts and stuff.

But when the day’s work was done? And all the data entry girls (because of course they were all women) were still on the clock? And there were CPU cycles available?

Irving offered to let my Papa input and organize his record collection.1

My recollection of my grandfather isn’t perfect. But perhaps it’s good enough.

:chutten

  1. Another thing I have in my good enough memory is that, to have the mainframe index his 78s, Papa needed to know the longest title of all the sides in his collection. It’s a war song. And prepare your historical appreciation goggles because it’s sexist as hell in 2025. But I may never forget 1919’s “Would You Rather Be A Colonel With an Eagle on Your Shoulder or a Private With a Chicken on Your Knee?” ↩

The Mozilla Blog: Mozilla’s CEO weighs in on U.S. v. Google

The CEO of Mozilla, Laura Chambers, provided insights on the U.S. v. Google LLC search competition case ahead of the trial slated to begin April 21, 2025.

“As the CEO of Mozilla, I often have conversations about the future of this company. Most times these conversations are tied to how we build a better Internet that is accessible to everyone. It’s never about maximizing profits because we aren’t owned by billionaires and our lone shareholder is a non-profit whose mission is to build an open and secure internet ecosystem that places privacy, security and the rights of individuals over the bottom line.

At this moment our mission is particularly top of mind. The Court presiding over the Department of Justice’s search monopolization case against Google will soon convene a long-awaited remedies hearing that has the potential to significantly alter the industry and the open web. 

Some of the remedies proposed in the case risk the future of our Firefox browser and Gecko browser engine—the last remaining non-Big Tech browser engine.

In the coming weeks, we hope to see a shift to focus on remedies that can improve search competition without harming the pro-competitive role that Firefox and other independent browsers play in the ecosystem. 

I speak for many small and independent companies like Mozilla when I say that the benefits we deliver to consumers and competition can’t be measured by our market share because we regularly punch above our weight.

We fully support the Department of Justice’s efforts to improve competition in various digital markets, but we’re concerned that the proposed remedies in the search case will do much more harm than good and unnecessarily seek to promote search competition at the expense of browser and browser engine competition. If the Department of Justice truly wants to fix competition, they can’t solve one problem by creating another.

The outcome of this case isn’t just about one company, it’s about the future of the internet and the stakes couldn’t be higher. 

There are only three main browser engines left and only one engine—Mozilla’s Gecko—is not owned by a Big Tech company. Browser engines shape how the web works. Gecko powers Firefox (and other independent browsers) and puts privacy and people first.

If it disappears, so does the open web.

Independent browsers like Firefox drive privacy innovation, security advancements, and offer people real choice. For over 25 years, Mozilla has fought for an open, competitive landscape where businesses can thrive, and consumers have real alternatives. We hope the remedies adopted by the Court enable us to continue this fight for many more years to come.”


This Week In Rust: This Week in Rust 595

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is wgpu, a cross-platform graphics and compute library based on WebGPU.

Despite a lack of suggestions, llogiq is pleased with his choice.

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Let us know if you would like your feature to be tracked as a part of this list.

RFCs
Rust
Rustup

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

480 pull requests were merged in the last week

Compiler
Library
Cargo
Rustdoc
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

Relatively small changes this week, nothing terribly impactful (positive or negative).

Triage done by @simulacrum. Revision range: e643f59f..15f58c46

1 Regression, 3 Improvements, 3 Mixed; 2 of them in rollups. 35 artifact comparisons were made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs
Rust
Other Areas

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs
  • No New or Updated RFCs were created this week.

Upcoming Events

Rusty Events between 2025-04-16 - 2025-05-14 🦀

Virtual
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

IEEE 754 floating point, proudly providing counterexamples since 1985!

Johannes Dahlström on rust-internals

Thanks to Ralf Jung for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, U007D, joelmarcey, mariannegoldin, bennyvasquez, bdillo

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Mitchell Baker: The Ethos of Open Source

A couple of months ago I started posting about how I want to build a better world through technology and how I’ll be doing that outside of Mozilla going forward. The original post has many references to “open” and “open source.” It’s easy to think that we all understand open source and just need to apply it to new settings. I feel differently: we need to shape our collective understanding of the ethos of open source.

Open source has become mainstream as a part of the software development process. We can rightly say that the open source movement “won.” However, this isn’t enough for the future.

The open source movement was about more than convenience and avoiding payment. For many of us, open source was both a tool and an end in itself. Open source software allows people to participate in creating the software that has such great impact on our lives. The “right to fork” allows participants to try to correct wrongs in the system; it provides a mechanism for alternatives to emerge. This isn’t a perfect system of course, and we’ve seen how businesses can wrap open source and the right to fork with other systems that diminish the impact of this right. So the past is not “The Perfect Era” that we should aim to replicate. The history of open source gives us valuable lessons about what works and what doesn’t, so we can iterate towards what we need in this era.

The practical utility of open source software has become mainstream. The time is ripe to reinforce the deeper ethos of participation, opportunity, security and choice that drove the open source movement.

I’m looking for a good conversation about these topics. If you know of a venue where such conversations are happening in a thoughtful, respectful way, please do let me know.

Don Marti: picking up cheap shoes in front of a steamroller

Here’s another privacy paradox for people who collect them.

  • On the web, the average personalized ad is probably better than the average non-personalized ad. (The same ad campaigns that have a decent budget for ad creative also have a budget for targeting data.)

  • But users who block personalized ads, or avoid personalization by using privacy tools and settings, are, on average, better off than users who get personalized ads.

There’s an expression in finance: Picking Up Nickels In Front Of A Steam Roller. For some kinds of investing decisions, the investor is more likely to make a small gain than to lose money in each individual trade. But the total expected return over time is negative, because a large loss is an unlikely outcome of each trade. The decision to accept personalized ads or try to avoid them might be a similar bet.

For example, a typical positive outcome of getting personalized ads might be getting better shoes, cheaper. There’s a company in China that is working the personalized ad system really well. Instead of paying for high production value ads featuring high-profile athletes in the USA, they’re just doing the incremental data-driven marketing thing. Make shoes, experiment with the personalized ad system, watch the numbers, reinvest in both shoe improvements and improvements to the personalized ads. For customers, the shoe company represents the best-case scenario for turning on the personalized ads. You get a pair of shoes from China for $40 that are about as good as the $150 shoes from China that you would get from a big-name brand. (The shoes might even be made by the same people out of the same materials.) I don’t need to link to the company, just turn on personalized ads and if you want the shoes they’ll find you.

That example might be an outlier on the win-win side, though. On average, personalized (behaviorally targeted) ads are likely to be associated with lower quality vendors and higher product prices compared to competing alternatives found among search results (Mustri et al.). But let’s pretend for a minute and say you figured out how to get targeted in the best possible way and come out on the winning side. That’s pretty sweet—personalized ads save you more than a hundred bucks on shoes, right?

Here comes the steamroller, though.

In recent news, Baltimore sues 2 sportsbooks over alleged exploitative practices. Some people are likely to develop a gambling problem, and if you don’t know in advance whether or not you’re one of them, should you have the personalized ads turned on? You stand to lose a lot more than you would have gained by getting the cheap shoes or other miscellaneous stuff. It is possible that machine learning on the advertising or recommended content side could know more about you than you do, and the negative outcomes from falling for an online elder fraud scheme tend to be much larger than the positive outcomes from selecting the best of competing legitimate products.

The personalized advertising system can facilitate both win-win offers, like the good shoes from an unknown brand, and win-lose offers, like those from sports betting apps that use predatory practices. The presence of both win-win and win-lose offers in the market is a fact that keeps getting oversimplified away by personalized advertising’s advocates in academia. In practice, ad personalization gives an advantage to deceptive sellers. Another good example comes from the b2b side: malware in search ads personalized to an employee portal or SaaS application. From the CIO’s point of view, are you better off having employees get better-personalized search ads at work, or better off blocking a security incident before it starts?

People’s reactions to personalization are worth watching, and reflect more widely held understanding of how information works in markets than personalized ad fandom does. The fact that Google may have used this data to conduct focused ad campaigns targeted back to you was disclosed as if it was a security issue, which makes sense. Greg Knauss writes, Blue Shield says that no bad actor was involved, but is that really true? Shouldn’t a product that, apparently by default, takes literally anything it can—privacy be damned—and tosses it into the old ad-o-matic not be considered the output of a bad actor? Many people (but not everybody) consider being targeted for a personalized ad as a threat in itself. More: personalization risks

Bonus links

What If We Made Advertising Illegal? by Kōdō Simone. The traditional argument pro-advertising—that it provides consumers with necessary information—hasn’t been valid for decades. In our information-saturated world, ads manipulate, but they don’t inform. The modern advertising apparatus exists to bypass rational thought and trigger emotional responses that lead to purchasing decisions. A sophisticated machine designed to short-circuit your agency, normalized to the point of invisibility. (Personally I think it would be hard to come up with a law that would squeeze out all incentivized communication intended to cause some person to purchase some good or service, but it would be possible to regulate the information flows in the other direction—surveillance of audience by advertiser and intermediaries—in a way that would mostly eliminate surveillance advertising as we know it: Big Tech platforms: mall, newspaper, or something else?)

Meta secretly helped China advance AI, ex-Facebooker will tell Congress by Ashley Belanger. In her prepared remarks, which will be delivered at a Senate subcommittee on crime and counterterrorism hearing this afternoon, Wynn-Williams accused Meta of working hand in glove with the Chinese Communist Party (CCP). That partnership allegedly included efforts to construct and test custom-built censorship tools that silenced and censored their critics as well as provide the CCP with access to Meta user data—including that of Americans. (And if they’re willing to do that, then the elder fraud ads on Facebook are just business as usual.)

Protecting Privacy, Empowering Small Business: A Path Forward with S.71 (A privacy law with private right of action gets enforced based on what makes sense to normal people in a jury box, not to bureaucrats who think it’s normal to read too many PDFs. Small businesses are a lot better off with this common-sense approach instead of having to feed the compliance monster.)

This startup just hit a big milestone for green steel production by Casey Crownhart. Boston Metal uses electricity in a process called molten oxide electrolysis (MOE). Iron ore gets loaded into a reactor, mixed with other ingredients, and then electricity is run through it, heating the mixture to around 1,600 °C (2,900 °F) and driving the reactions needed to make iron. That iron can then be turned into steel. Crucially for the climate, this process emits oxygen rather than carbon dioxide…

Spidermonkey Development Blog: Shipping Temporal

The Temporal proposal provides a replacement for Date, a long-standing pain point in the JavaScript language. This blog post describes some of the history and motivation behind the proposal. The Temporal API itself is well documented on MDN.

Temporal reached Stage 3 of the TC39 process in March 2021. Reaching Stage 3 means that the specification is considered complete, and that the proposal is ready for implementation.

SpiderMonkey began our implementation that same month, with the initial work tracked in Bug 1519167. Incredibly, our implementation was not developed by Mozilla employees, but was contributed entirely by a single volunteer, André Bargull. That initial bug consisted of 99 patches, but the work did not stop there, as the specification continued to evolve as problems were found during implementation. Beyond contributing to SpiderMonkey, André filed close to 200 issues against the specification. Bug 1840374 is just one example of the massive amount of work required to keep up to date with the specification.

As of Firefox 139, we’ve enabled our Temporal implementation by default, making us the first browser to ship it. Sometimes it can seem like the ideas of open source, community, and volunteer contributors are a thing of the past, but the example of Temporal shows that volunteers can still have a meaningful impact both on Firefox and on the JavaScript language as a whole.

Interested in contributing?

Not every proposal is as large as Temporal, and we welcome contributions of all shapes and sizes. If you’re interested in contributing to SpiderMonkey, please have a look at our mentored bugs. You don’t have to be an expert :). If your interests are more on the specification side, you can also check out how to contribute to TC39.

The Mozilla Blog: Inside the newsletter making layoffs feel less bleak and more like a group chat


Here at Mozilla, we are the first to admit the internet isn’t perfect, but we know the internet is pretty darn magical. The internet opens up doors and opportunities, allows for human connection, and lets everyone find where they belong — their corners of the internet. We all have an internet story worth sharing. In My Corner Of The Internet, we talk with people about the online spaces they can’t get enough of, the sites and forums that shaped them, and how they would design their own corner of the web.

We caught up with Melanie Ehrenkranz, the writer and creative strategist behind Laid Off, a weekly newsletter interviewing people about job loss — and everything wrapped up in it. She talks about building a space that’s “real, not bleak”; the forums and fan theories that keep her online; and how her inbox became one of her favorite places to hang out online.

What is your favorite corner of the internet? 

I run a dedicated Discord for Laid Off – we just hit 800 members and it’s the most supportive community. It’s weird to say “we have fun” when referring to a space for people affected by layoffs, but we really do. There was a tight group of members that used to show up each week in the #severance channel to share live reactions to new episodes on Thursday nights. The rest of the week, people could talk about actual severance, that thing that you might get a few weeks or months of, if you’re lucky.

What is an internet deep dive that you can’t wait to jump back into?

The Yellowjackets subreddit after new episodes drop. The last two seasons have really gone off the rails, but I’m here til the end. I probably enjoy reading people’s theories more than the actual episodes these days.

What is the one tab you always regret closing?

My former editor (and friend) Cooper Fleishman has written an incredible Survivor watch guide. I am extremely late to the fandom, but now I’m locked in.

What can you not stop talking about on the internet right now?

Layoffs. I started my Substack, Laid Off, in August of last year. I interview smart and cool people who have been laid off each week. I also work on ~quarterly trend reports to better understand the layoff landscape.

What was the first online community you engaged with?

It’s a blur of AIM chat rooms, MySpace, and Neopets. A lot of huddling around my hulking computer, setting moody away messages, checking changes in my friends’ top eights, and saving up for new paint brushes. I miss how slow things were, and not just the actual lack of speed, but how unfazed I was by it. Downloading a Strokes song on Kazaa meant adding another 10 to 20 minutes of load time. I’d just throw on an away message and play outside while the page loaded.

If you could create your own corner of the internet, what would it look like?

That’s the hope with Laid Off. I’m trying to build the coolest place on the internet to talk about being laid off. I want it to feel cathartic, not bleak, but no toxic positivity. A space that feels nonjudgmental and inclusive. No assholes. Treat everyone with kindness.

The newsletter and community isn’t just for people who’ve been laid off, though they’re at the heart of it. It’s also meant to illuminate the cracks in our systems and how work, stability and identity are shifting for all of us. To help us understand what’s broken, what’s changing and what we might build next.

What articles and/or videos are you waiting to read/watch right now?

I discover the coolest reads and recs from some of my favorite Substacks – as seen on by Ochuko Akpovbovbo, Feed Me by Emily Sundberg, Read Max by Max Read, Bad Brain by Ashley Reese, Extracurricular by Tembe Denton-Hurst, You’ve Got Lipstick on Your Chin by Arabelle Sicardi, and Gen-Z Gov by Kate Glavan, and the Dirt universe. 


I rarely go to a publication’s homepage anymore – I’m either reading/clicking straight from my inbox or the Substack app. And with all the brands making their way to the platform, I’ll be even more incentivized to hang out in my inbox.

Your Substack has created a space where people can be real about job loss. What’s something surprising you’ve learned from these conversations?

I try to leave my assumptions at the door. What did surprise me was how many conversations are happening. I get notifications all day long of people starting new threads in the Substack Chat and on Discord, covering… everything and anything. People venting about how AI is screwing them over both in the job search process and within their actual careers. People wondering if it’s normal for a hiring manager to ask them to take a personality test. Dissecting insensitive language of rejection emails, or swapping “getting ghosted” stories. Someone even dropped a photo of their redacted severance contract in a thread the other week for legal and negotiation advice.


Melanie Ehrenkranz is a writer and creative strategist focused on community, technology and power. She leads content and community for Sophia Amoruso’s Business Class, a membership-based digital entrepreneurship community and course for founders, freelancers, solopreneurs, creators and side hustlers.

The post Inside the newsletter making layoffs feel less bleak and more like a group chat appeared first on The Mozilla Blog.

Mozilla Thunderbird: VIDEO: The New Account Hub

In this month’s Community Office Hours, we’re chatting with Vineet Deo, a Software Engineer on the Desktop team, who walks us through the new Account Hub on the Desktop app. If you want a sneak peek at this new streamlined experience, you can find it in the Daily channel now and the Beta channel towards the end of April.

Next month, we’ll be chatting with our director Ryan Sipes. We’ll be covering the new Thunderbird Pro and Thundermail announcement and the structure of MZLA compared to the Mozilla Foundation and Corporation. And we’ll talk about how Thunderbird put the fun in fundraising!

March Office Hours: The New Account Hub

Setting up a new email account in Thunderbird is already a solid experience, so why the update? Firstly, this is the first thing new users see in the app, so it’s important that it has the same clean, cohesive look that is becoming the new Thunderbird design standard. It’s also helpful for users coming from other email clients to have a familiar, wizard-like experience. And while the current account setup works well, it’s browser-based, which makes it possible for a user to exit before finishing and get lost before they even started. This is the opposite of what we want for potential users!

Vineet and his team are also working to make the new Account Hub ready for Exchange. They also have plans for a similar hub for setting up new address books and calendars. We’re proud of the collaboration between backend and frontend teams, and between designers and engineers, in making the Account Hub.

Watch, Read, and Get Involved

But don’t take our word for it! Watch Vineet’s Account Hub talk and demo, along with a Q&A session. If you’re comfortable testing Daily, you can test this new feature now. (Go to File > New > Email Account to start the experience.) Otherwise, keep an eye on our Beta release channel at the end of April. And if you’re watching this after Account Hub is part of the regular release, now you know the feature’s story!

VIDEO (Also on Peertube):

Get Involved

The post VIDEO: The New Account Hub appeared first on The Thunderbird Blog.

The Rust Programming Language Blog: crates.io security incident: improperly stored session cookies

Today the crates.io team discovered that the contents of the cargo_session cookie were being persisted to our error monitoring service, Sentry, as part of event payloads sent when an error occurs in the crates.io backend. The value of this cookie is a signed value that identifies the currently logged in user, and therefore these cookie values could be used to impersonate any logged in user.

Sentry access is limited to a trusted subset of the crates.io team, Rust infrastructure team, and the crates.io on-call rotation team, who already have access to the production environment of crates.io. There is no evidence that these values were ever accessed or used.

Nevertheless, out of an abundance of caution, we have taken these actions today:

  1. We have merged and deployed a change to redact all cookie values from all Sentry events (a sketch of this kind of redaction follows this list).
  2. We have invalidated all logged in sessions, thus making the cookies stored in Sentry useless. In effect, this means that every crates.io user has been logged out of their browser session(s).
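
As an illustration of what that first action can look like in code, here is a minimal sketch using the sentry Rust SDK's before_send hook (an illustration of the technique, not the actual crates.io patch; crate version assumed):

```rust
// Cargo.toml: sentry = "0.34" (version assumed; the hook itself is long-standing)
use std::sync::Arc;

fn main() {
    let _guard = sentry::init(sentry::ClientOptions {
        // before_send runs on every event before it leaves the process.
        before_send: Some(Arc::new(|mut event: sentry::protocol::Event<'static>| {
            if let Some(request) = event.request.as_mut() {
                // Drop both the parsed cookie string and the raw header.
                request.cookies = None;
                request.headers.remove("Cookie");
                request.headers.remove("cookie");
            }
            Some(event) // returning None would drop the event entirely
        })),
        ..Default::default()
    });
}
```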

Note that API tokens are not affected by this: they are transmitted using the Authorization HTTP header, and were already properly redacted before events were stored in Sentry. All existing API tokens will continue to work.

We apologise for the inconvenience. If you have any further questions, please contact us on Zulip or GitHub.

Mozilla Thunderbird: Thunderbird for Android March 2025 Progress Report

Hello, everyone, and welcome to the Thunderbird for Android March 2025 Progress Report. We’re keeping our community updated on everything that’s been happening in the Android team, which is quickly becoming a more general mobile team with some recent hires. In addition to team news, we’re talking about our roadmap board on GitHub.

Team Changes

In March we said goodbye to cketti, the K-9 Mail maintainer who joined the team when Thunderbird first announced plans for an Android app. We’re very grateful for everything he’s created, and for his trust that K-9 Mail and Thunderbird for Android are in good hands. But we also said hello to Todd Heasley, our new iOS engineer, who started March 26. We also have just added Ashley Soucar, an Android/iOS engineer, who joined us on April 7. If all continues to go well, we’ll also be adding another Android engineer in the next couple of weeks.

Our Roadmap Board

Our roadmap board is now available! We’re grateful to the Council for their trust and support in approving it. As the board will reflect any changes in our planning, this is the most up-to-date source for our upcoming development. Each epic will show its objective and what’s in scope – and as importantly, what’s out of scope. The project information on the side will tell you if an epic is in the backlog or work in progress.

If you’d like to know what we’re working on right now, check out our sprint board.

Contribute by Triaging GitHub Issues

One way to contribute to Thunderbird for Android is by triaging open GitHub issues. In March, we did a major triage, with over 150 issues closed as duplicates, marked ‘works for me,’ or elevated to the efforts and features described in the roadmap above. Especially since we’re a small team, triage helps us know where to act on incoming issues. This is a great way to get started as a Thunderbird for Android contributor.


To start triaging bugs, have a look at the ‘unconfirmed’ issues. Try to reproduce an issue to help verify that it exists, then add a comment with your results and any other information you found that might help narrow it down. If you see users generally saying “it doesn’t work”, ask them for more details or to enable logs. This way we know when to remove the unconfirmed label. If you have questions along the way or need someone to confirm a thought, feel free to ask in the community support channel.

Account Drawer

Our main engineering focus in March has been the account drawer we shared screenshots of in the January/February update. Given the settings design includes a few non-standard components, we took the opportunity to write a modern settings framework based on Jetpack Compose and make use of it for the new drawer. There will be some opportunities to contribute here in the future, as we’d like to migrate our old settings UI to the new system.

We have a few crashes and rough edges to polish, but are very close to enabling the feature flag in beta. If you aren’t already using it and want to get early access, install our beta today.

I’d also like to call out a pull request by Clément, who contributed support for a folder hierarchy. The amazing thing here—our design folks were working out a proposal because we were interested in this as well, and without knowing, Clément came up with the same idea and came in with a pull request that really hit the spot. Great work!

Community Contributions

In addition to the folder hierarchy mentioned above, here are a few community activities in March:

  • Shamim made sure the Unified Inbox shows up when you add your second account, retained scroll position in the drawer when rotating, removed font size customizations in favor of Android OS controls, flipped the default for being notified about new email and helped out with a few refactorings to make our codebase more modern.
  • Sergio has improved back button navigation when editing drafts.
  • Salkinnoma made our workflow runs more efficient and fixed an issue in the find folders view where a menu item was incorrectly shown.
  • Smatek improved our edge-to-edge support by making the bottom Android navigation bar background transparent.
  • Husain fixed some inconsistencies when toggling “Show Unified Inbox”.
  • Vayun has begun work to update the Thunderbird for Android app widgets to Jetpack Compose (including dark theming).
  • SttApollo has made the logo size more dynamic in the onboarding screen.

This is quite a list, great work! When you think about Thunderbird for Android or K-9 Mail, what was the last major annoyance you stumbled upon? If you are an Android developer, now is a good time to fix it. You’ll see your name up here next time as well 🙂

The post Thunderbird for Android March 2025 Progress Report appeared first on The Thunderbird Blog.

The Mozilla Blog: How UEFA and Mozilla’s Anonym nailed TikTok campaigns without compromising user data

When UEFA’s Men’s Club Competitions Online Store (operated by Event Merchandising Ltd) set out to run a TikTok campaign during the 2024 finals, the goal was clear: engage passionate club fans and drive sales of official gear — without compromising the data privacy of their most loyal supporters and spectators. 

Together with Anonym — a privacy-first data analytics and measurement solution harnessed by TikTok to balance campaign performance with user privacy — UEFA found a way to measure campaign performance while carefully safeguarding their fans’ user data. With no complex integration or data science onboarding required, UEFA used Anonym’s privacy-preserving tools to gain meaningful insights quickly and seamlessly — proving that strong results and privacy standards can go hand-in-hand. 

This case study offers an amazing example of how brands can gain real performance insights from their campaigns, while also keeping user privacy front and center — a winning strategy straight out of Mozilla’s privacy-preserving playbook. 

Read on for the UEFA case study.


Private measurement provides UEFA with first look at performance

UEFA logo next to stats showing +93% lift in conversions, +94% lift in sales, and 99% statistical significance on a green background.

The objective

UEFA’s Men’s Club Competitions Online Store (operated by Event Merchandising Ltd), supporting fans of the massively popular UEFA Champions League and UEFA Europa League, chose TikTok to promote its clothing and accessories website during the 2024 finals. UEFA cares deeply about privacy and needed a solution that allowed it to measure the impact of its advertising on TikTok without sending any user personal information to the entertainment platform directly.

The solution

To accomplish this objective, UEFA and Event Merchandising, Ltd. partnered with Anonym, a sophisticated privacy-first data analytics solution harnessed by TikTok to balance campaign performance with user privacy. UEFA leveraged Anonym Private Lift to measure the incrementality (or causal impact) of its three week campaign on TikTok across Europe. All processing occurred in Europe and results were delivered within days of the campaign end. No integration work was required from UEFA or Event Merchandising Ltd. They simply leveraged Anonym’s drag-and-drop interface, ensuring all data was correctly formatted and encrypted.

 Three vertical UEFA promo posters for 2024 finals: Athens for the Europa Conference League with a marble statue, London for the Champions League with a player in a locker room, and Dublin for the Europa League featuring a woman in a pub wearing team gear.

The results

After the campaign ended, Anonym matched hashed and encrypted sales data with hashed and encrypted impression data from TikTok within a confidential computing environment. The data was processed using a differentially private conversion lift algorithm. Differential privacy is a method that makes individual data points indistinguishable and actionable at the same time — simultaneously enhancing user privacy while allowing effective analysis of ad performance. 

The results were impressive:

  • TikTok drove a 93% increase in conversions during the three-week campaign period and the subsequent week
  • The gross merchandise value of the products purchased by people who saw TikTok ads was 94% higher than those who did not see TikTok ads

With Anonym’s privacy-first measurement solution, UEFA and Event Merchandising, Ltd. unlocked world-class insights into their TikTok campaign performance — delivering winning results off the pitch, while setting a new standard for protecting fan privacy.  


Performance, powered by privacy

Learn more about Anonym

The post How UEFA and Mozilla’s Anonym nailed TikTok campaigns without compromising user data appeared first on The Mozilla Blog.

The Mozilla Blog: Data ethics in action: Zenjob, TikTok and Mozilla’s Anonym show a better way to measure

Let’s face it — measuring ad performance is challenging in a shifting privacy landscape. With tightening regulations and data sharing under a well-deserved microscope, advertisers are under more pressure than ever to prove their campaigns can work without crossing privacy lines. 

That was the exact puzzle Zenjob had to solve. As a platform connecting people to flexible work, they wanted to make the most of a key hiring season in 2024 with a fresh TikTok campaign. They also wanted to do it right by keeping user privacy front and center, a goal that we at Mozilla passionately, operationally and increasingly technologically share. 

In pursuit of a campaign that successfully derived valuable insights without exposing user-level data, Zenjob teamed up with Mozilla’s Anonym, a privacy-first data analytics solution harnessed by TikTok to balance campaign performance with user privacy. Together, Zenjob, TikTok and Anonym found a way to measure what really matters — like campaign impact and attribution — without exposing any user-level data. 

Why is this story worth a read? Because it proves insights and integrity can coexist. Zenjob didn’t just run a high-performing campaign — they saw a serious lift in signups and walked away with a crystal clear view of what worked, all while keeping sensitive user data secure and private. 

If you’re wondering how to balance performance with privacy, this case study is a great place to start.


Private measurement provides Zenjob with proof of incremental performance on iOS 

The objective 

Zenjob, the innovative platform connecting job seekers with flexible work opportunities, chose TikTok to promote its services during a key hiring season in 2024. As a platform, TikTok connects billions of users on a global scale. Zenjob’s aim was to expand the reach of their job-matching marketplace while simultaneously maintaining its deep commitment to protecting user privacy. This required a solution that allowed them to measure the effectiveness of their TikTok app-focused advertising campaigns without sharing any user-level data directly with the platform. 

The solution 

To accomplish this objective, Zenjob, Ltd. partnered with Anonym, a privacy-first data analytics solution harnessed by TikTok to seamlessly integrate advanced privacy-preserving protections with campaign efficiency and performance measurements. Zenjob leveraged Anonym’s Private Lift to measure the incrementality (or causal impact) of its four-week campaign on TikTok across Germany, and Private Attribution to determine what levers to use to optimize its campaigns (e.g. creative, geotargeting, etc.) All processing occurred in Europe and results were delivered within days of the campaign end. No integration work was required from Zenjob — they simply leveraged Anonym’s drag-and-drop interface, ensuring all data was correctly formatted and encrypted. 

Three young women in different indoor settings talk about finding flexible student jobs in Germany; text overlays include phrases like "Jobs in Germany 🇩🇪", "ZENJOB", and "OF A JOB NOW".

The results 

After the campaign ended, Anonym matched hashed and encrypted sales data with hashed and encrypted impression data from TikTok within a confidential computing environment. The data was processed using differentially private algorithms for lift and attribution. Differential privacy is a method that adds noise to data sets, making individual data points indistinguishable to enhance user privacy — while simultaneously allowing effective, actionable analysis of ad performance. 
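
As a rough illustration of the underlying idea, here is the textbook Laplace mechanism for releasing a private count (a generic sketch, not Anonym's actual algorithm; rand 0.8 API assumed):

```rust
// Cargo.toml: rand = "0.8" (rand 0.9 renames these functions)
use rand::Rng;

/// Sample Laplace(0, b) noise via inverse-transform sampling.
fn laplace(b: f64) -> f64 {
    let u: f64 = rand::thread_rng().gen_range(-0.5..0.5);
    -b * u.signum() * (1.0 - 2.0 * u.abs()).ln()
}

/// Release a count with epsilon-differential privacy. A count changes by
/// at most 1 when one person is added or removed (sensitivity 1), so the
/// noise scale is 1/epsilon: smaller epsilon means stronger privacy.
fn dp_count(true_count: u64, epsilon: f64) -> f64 {
    true_count as f64 + laplace(1.0 / epsilon)
}

fn main() {
    // Report a conversion count privately; the numbers here are made up.
    println!("noisy conversions: {:.1}", dp_count(1203, 0.5));
}
```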

The results were impressive: 

  • TikTok drove a 38% increase in signups during the three week campaign period and the subsequent week 
  • The number of conversions Zenjob was able to match to TikTok impressions was significantly higher than what they had seen without Anonym’s technology 

By rolling out Anonym’s privacy-preserving measurement solution, Zenjob boosted visibility into campaign performance — while keeping data safe, user trust intact, and privacy at the heart of it all.


Performance, powered by privacy

Learn more about Anonym

The post Data ethics in action: Zenjob, TikTok and Mozilla’s Anonym show a better way to measure appeared first on The Mozilla Blog.

This Week In Rust: This Week in Rust 594

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Research
Miscellaneous

Crate of the Week

This week's crate is graft, a transactional storage engine optimized for lazy, partial, and strongly consistent replication.

Thanks to Carl Sverre for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Let us know if you would like your feature to be tracked as a part of this list.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No Calls for participation were submitted this week.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

451 pull requests were merged in the last week

Compiler
Library
Cargo
Rustfmt
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

A busy week with lots of performance improvements. The largest performance improvement was from a revert of a previous week's regression, just in time for the beta release. Another large improvement came from small tweaks to the query system, showing that there are still opportunities for small, targeted performance improvements in the compiler.

Triage done by @rylev. Revision range: 2ea33b59..e643f59f

Summary:

(instructions:u)            mean    range            count
Regressions ❌ (primary)     0.8%    [0.2%, 1.9%]     11
Regressions ❌ (secondary)   8.4%    [0.2%, 38.5%]    16
Improvements ✅ (primary)   -1.0%   [-35.1%, -0.2%]  206
Improvements ✅ (secondary) -1.8%   [-8.6%, -0.1%]   155
All ❌✅ (primary)           -0.9%   [-35.1%, 1.9%]   217

2 Regressions, 9 Improvements, 5 Mixed; 4 of them in rollups
48 artifact comparisons made in total

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs
Rust
Other Areas

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs
  • No New or Updated RFCs were created this week.

Upcoming Events

Rusty Events between 2025-04-09 - 2025-05-07 🦀

Virtual
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

The moment I froze Doctest with a loop in a comment.

/u/HaMMeReD describing their first Rust Whoa! moment on /r/rust

Despite a lack of suggestions, llogiq is content with his choice.

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, U007D, joelmarcey, mariannegoldin, bennyvasquez, bdillo

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Mozilla Thunderbird: Thunderbird Monthly Development Digest – March 2025

Hello again Thunderbird Community! It’s been almost a year since I joined the project and I’ve recently been enjoying the most rewarding and exciting work days in recent memory. The team who works on making Thunderbird better each day is so passionate about their work and truly dedicated to solving problems for users and supporting the broader developer community. If you are reading this and wondering how you might be able to get started and help out, please get in touch and we would love to get you off the ground!

Paddling Upstream

As many of you know, Thunderbird relies heavily on the Firefox platform and other lower-level code that we build upon. We benefit immensely from the constant flow of improvements, fixes, and modernizations, many of which happen behind the scenes without requiring our input. 

The flip side is that changes upstream can sometimes catch us off guard – and from time to time we find ourselves firefighting after changes have been made. This past month has been especially busy as we’ve scrambled to adapt to unexpected shifts, with our team hunting down places to adjust Content Security Policy (CSP) handling and finding ways to integrate a new experimental whitespace normalizer. Very much not part of our plan, but critical nonetheless.

Calendar UI Rebuild

The implementation of the new event dialog is moving along steadily with the following pieces of the puzzle recently landing:

  • Title
  • Border
  • Location Row
  • Join Meeting button
  • Time & Recurrence

The focus has now turned to loading data into the various containers so that we can enable this feature later this month and ask our QA team and Daily users to help us catch early problems.

Keep track of feature delivery via the [meta] bug 

Exchange Web Services support in Rust

We’re aiming to get a 0.2 release into the hands of Daily and QA testers by the end of April, so a number of remaining tasks are in the queue – but March saw several features completed and pushed to Daily:

  • Folder copy/move
  • Sync folder – update
  • Complete composition support (reply/forward)
  • Bug fixes!

Keep track of feature delivery here.

Account Hub

This feature was “preffed on” as the default experience for the Daily build, but recent changes to our OAuth process have required some rework of this user experience, so it won’t hit Beta until the end of the month. It’s beautiful, and well worth considering a switch to Daily if you are currently running Beta.

Global Message Database

The New Zealand team completed a successful work week and have since pushed through a significant chunk of the research and refactoring necessary to integrate the new database with existing interfaces.

The patches are pouring in and are enabling data adapters, sorting, testing and message display for the Local Folders Account, with an aim to get all existing tests to pass with the new database enabled. The path to this goal is often meandering and challenging but with our most knowledgeable and experienced team members dedicated to the project, we’re seeing inspiring progress.

The team maintains their documentation in Sourcedocs which are visible here.

In-App Notifications

A few last-minute changes were made and uplifted to our ESR version early this month so if you use the ESR and are in the lucky 2% of users targeted, watch out for an introductory notification!
We’ve also wrapped up work on two significant enhancements which are now on Daily and will make their way to other releases over the course of the month:

  • Granular control of notifications by type via EnterprisePolicy
  • Enhanced triggering mechanism to prevent launch when Thunderbird is in the background

 Meta Bug & progress tracking.

New Features Landing Soon

A number of requested features and important fixes have reached our Daily users this month. We want to give special thanks to the contributors who made the following possible…

As usual, if you want to see and use new features as they land, and help us squash some early bugs, you can try running Daily and checking the pushlog to see what has recently landed. This assistance is immensely helpful for catching problems early.

Toby Pilling

Senior Manager, Desktop Engineering

The post Thunderbird Monthly Development Digest – March 2025 appeared first on The Thunderbird Blog.

The Rust Programming Language Blog: March Project Goals Update

The Rust project is currently working towards a slate of 40 project goals, with 3 of them designated as Flagship Goals. This post provides selected updates on our progress towards these goals (or, in some cases, lack thereof). The full details for any particular goal are available in its associated tracking issue on the rust-project-goals repository.

Flagship goals

Why this goal? This work continues our drive to improve support for async programming in Rust. In 2024H2 we stabilized async closures; explored the generator design space; and began work on the dynosaur crate, an experimental proc-macro to provide dynamic dispatch for async functions in traits. In 2025H1 our plan is to deliver (1) improved support for async-fn-in-traits, completely subsuming the functionality of the async-trait crate; (2) progress towards sync and async generators, simplifying the creation of iterators and async data streams; and (3) improved ergonomics for Pin, making lower-level async coding more approachable. These items together start to unblock the creation of the next generation of async libraries in the wider ecosystem, as progress there has been blocked on a stable solution for async traits and streams.

What has happened? Generators. Initial implementation work has started on an iter! macro experiment in https://github.com/rust-lang/rust/pull/137725. Discussions have centered around whether the macro should accept blocks in addition to closures, whether thunk closures with an empty arguments list should implement IntoIterator, and whether blocks should evaluate to a type that is Iterator as well as IntoIterator. See the design meeting notes for more.
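
For readers unfamiliar with the shape under discussion, here is a rough sketch of the closure form based on the tests in that PR (a nightly-only experiment; gate names and exact semantics are still in flux, so treat this as illustrative):

```rust
#![feature(iter_macro, yield_expr)] // feature gates assumed from the experiment

use std::iter::iter;

fn main() {
    // iter! wraps a closure containing `yield`; calling the closure
    // produces an iterator that lazily yields those values.
    let evens = iter! { || {
        for n in 0..5 {
            yield n * 2;
        }
    }};
    let collected: Vec<_> = evens().collect();
    assert_eq!(collected, [0, 2, 4, 6, 8]);
}
```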

dynosaur. We released dynosaur v0.2.0 with some critical bug fixes and one breaking change. We have several more breaking changes queued up for an 0.3 release line that we also plan to use as a 1.0 candidate.

Pin ergonomics. https://github.com/rust-lang/rust/pull/135733 landed to implement &pin const self and &pin mut self sugars as part of the ongoing pin ergonomics experiment. Another PR is open with an early implementation of applying this syntax to borrowing expressions. There has been some discussion within parts of the lang team on whether to prefer this &pin mut T syntax or &mut pin T, the latter of which applies equally well to Box<pin T> but requires an edition.
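
To make the sugar concrete, here is a small before/after sketch (the sugared form is hypothetical nightly-experiment syntax and may not be final):

```rust
use std::pin::Pin;

struct Stream;

impl Stream {
    // Today: a pinned receiver must be spelled out in full.
    fn poll_next(self: Pin<&mut Self>) { /* ... */ }

    // With the experiment above, this could instead be written as:
    //
    //     fn poll_next(&pin mut self) { /* ... */ }
    //
    // i.e. `&pin mut self` as sugar for `self: Pin<&mut Self>`.
}

fn main() {
    let mut s = Stream;
    Pin::new(&mut s).poll_next();
}
```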


Why this goal? May 15, 2025 marks the 10-year anniversary of Rust's 1.0 release; it also marks 10 years since the creation of the Rust subteams. At the time there were 6 Rust teams with 24 people in total. There are now 57 teams with 166 people. In-person All Hands meetings are an effective way to help these maintainers get to know one another with high-bandwidth discussions. This year, the Rust project will be coming together for RustWeek 2025, a joint event organized with RustNL. Participating project teams will use the time to share knowledge, make plans, or just get to know one another better. One particular goal for the All Hands is reviewing a draft of the Rust Vision Doc, a document that aims to take stock of where Rust is and lay out high-level goals for the next few years.

What has happened?

  • Invite more guests, after deciding on who else to invite. (To be discussed today in the council meeting.)
  • Figure out if we can fund the travel+hotel costs for guests too. (To be discussed today in the council meeting.)

Mara has asked all attendees for suggestions for guests to invite. Based on that, Mara has invited roughly 20 guests so far. Only two of them needed funding for their travel, which we can cover from the same travel budget.

  • Open the call for proposals for talks for the Project Track (on wednesday) as part of the RustWeek conference.

The Rust Project Track at RustWeek has been published: https://rustweek.org/schedule/wednesday/

This track is filled with talks that are relevant to folks attending the all-hands afterwards.

1 detailed update available.

Comment by @m-ou-se posted on 2025-04-01:

  • Invite more guests, after deciding on who else to invite. (To be discussed today in the council meeting.)
  • Figure out if we can fund the travel+hotel costs for guests too. (To be discussed today in the council meeting.)

I've asked all attendees for suggestions for guests to invite. Based on that, I've invited roughly 20 guests so far. Only two of them needed funding for their travel, which we can cover from the same travel budget.


Why this goal? This goal continues our work from 2024H2 supporting experimental Rust development in the Linux kernel. Whereas in 2024H2 we were focused on stabilizing required language features, our focus in 2025H1 is stabilizing compiler flags and tooling options. We will (1) implement RFC #3716, which lays out a design for ABI-modifying flags; (2) take the first step towards stabilizing build-std by creating a stable way to rebuild core with specific compiler options; and (3) extend rustdoc, clippy, and the compiler with features that extract metadata for integration into other build systems (in this case, the kernel's build system).

What has happened? Most of the major items are in an iteration phase. The rustdoc changes for exporting doctests are the furthest along, with a working prototype; the RFL project has been integrating that prototype and providing feedback. Clippy stabilization now has a pre-RFC and there is active iteration towards support for build-std.

Other areas of progress:

  • We have an open PR to stabilize -Zdwarf-version.
  • The lang and types team have been discussing the best path forward to resolve #136702. This is a soundness concern that was raised around certain casts, specifically, casts from a type like *mut dyn Foo + '_ (with some lifetime) to *mut dyn Foo + 'static (with a static lifetime). Rust's defaulting rules mean that the latter is more commonly written with a defaulted lifetime, i.e., just *mut dyn Foo, which makes this an easy footgun. This kind of cast has always been dubious, as it disregards the lifetime in a rather subtle way, but when combined with arbitrary self types it permits users to disregard safety invariants making it hard to enforce soundness (see #136702 for details). The current proposal under discussion in #136776 is to make this sort of cast a hard error at least outside of an unsafe block; we evaluated the feasibility of doing a future-compatibility-warning and found it was infeasible. Crater runs suggest very limited fallout from this soundness fix but discussion continues about the best set of rules to adopt so as to balance minimizing fallout with overall language simplicity.
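
To make the footgun concrete, here is a minimal illustration (the trait name is illustrative; this kind of cast currently compiles, and the proposal in #136776 would reject it outside an unsafe block):

```rust
trait Foo {}

// The source pointer carries some lifetime 'a in its trait object type...
fn widen<'a>(p: *mut (dyn Foo + 'a)) -> *mut dyn Foo {
    // ...but `*mut dyn Foo` defaults to `*mut (dyn Foo + 'static)`, so this
    // cast silently discards 'a. That is exactly the subtle widening
    // described above.
    p as *mut dyn Foo
}

fn main() {
    struct Bar;
    impl Foo for Bar {}
    let mut bar = Bar;
    let short: *mut (dyn Foo + '_) = &mut bar; // unsized coercion
    let _forever = widen(short); // lifetime laundered without `unsafe`
}
```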
2 detailed updates available.

Comment by @nikomatsakis posted on 2025-03-13:

Update from our 2025-03-12 meeting (full minutes):

  • RFL team requests someone to look at #138368 which is needed by kernel, @davidtwco to do so.
  • -Zbinary-dep-info may not be needed; RFL may be able to emulate it.
  • rustdoc changes for exporting doctests are being incorporated. @GuillaumeGomez is working on the kernel side of the feature too. @ojeda thinks it would be a good idea to do it in a way that does not tie both projects too much, so that rustdoc has more flexibility to change the output later on.
  • Pre-RFC authored for clippy stabilization.
  • Active iteration on the build-std design; feedback being provided by cargo team.
  • @wesleywiser sent a PR to stabilize -Zdwarf-version.
  • RfL doesn't use cfg(no_global_oom_handling) anymore. Soon, stable/LTS kernels that support several Rust versions will not use it either. Thus upstream Rust could potentially remove the cfg without breaking Linux, though other users like Windows may be still using it (#t-libs>no_global_oom_handling removal).
  • Some discussion about best way forward for disabling orphan rule to allow experimentation with no firm conclusion.

Comment by @nikomatsakis posted on 2025-03-26:

Updates from today's meeting:

Finalizing 2024h2 goals

ABI-modifying compiler flags

Extract dependency information, configure no-std externally (-Zcrate-attr)

Rustdoc features to extract doc tests

  • No update.

Clippy configuration

  • Pre-RFC was published but hasn't (to our knowledge) made progress. Would be good to sync up on next steps with @flip1995.

Build-std

  • No update. Progress will resume next week when the contributor working on this returns from holiday.

-Zsanitize-kcfi-arity


Goals looking for help

Help wanted: Help test the deadlock code in the issue list and try to reproduce the issues. If you'd like to help, please post in this goal's dedicated zulip topic.

1 detailed update available.

Comment by @SparrowLii posted on 2025-03-18:

  • Key developments: Several deadlock issues that had remained unresolved for more than a year were fixed by #137731. The new test suite for the parallel front end is being improved.
  • Blockers: null
  • Help wanted: Help test the deadlock code in the issue list and try to reproduce the issue

Help wanted: T-compiler people to work on the blocking issues #119428 and #71043. If you'd like to help, please post in this goal's dedicated zulip topic.

1 detailed update available.

Comment by @epage posted on 2025-03-17:

  • Key developments: @tgross35 got rust-lang/rust#135501 merged, which made progress on rust-lang/rust#119428, one of the two main blockers. In rust-lang/rust#119428, we've further discussed designs and trade-offs.
  • Blockers: Further work on rust-lang/rust#119428 and rust-lang/rust#71043
  • Help wanted: T-compiler people to work on those above issues.

Other goal updates

1 detailed update available.

Comment by @BoxyUwU posted on 2025-03-17:

camelid's PR has been merged; we now correctly (to the best of my knowledge) lower const paths under mgca. I have a PR open to ensure that we handle evaluation of paths to consts with generics or inference variables correctly, and that we do not attempt to evaluate constants before they have been checked to be well formed. I'm also currently mentoring someone to implement proper handling of normalization of inherent associated constants under mgca.

1 detailed update available.

Comment by @davidtwco posted on 2025-03-03:

A small update: @adamgemmell shared revisions to the aforementioned document; further feedback is being addressed.

Earlier this month, we completed one checkbox of the goal: #[doc(hidden)] in sealed trait analysis, live in cargo-semver-checks v0.40. We also made significant progress on type system modeling, which is part of two more checkboxes.

  • We shipped method receiver types in our schema, enabling more than a dozen new lints.
  • We have a draft schema for ?Sized bounds, and are putting the finishing touches on 'static and "outlives" bounds. More lints will follow here.
  • We also have a draft schema for the new use<> precise capturing syntax.

Additionally, cargo-semver-checks is participating in Google Summer of Code, so this month we had the privilege of merging many contributions from new contributors who are considering applying for GSoC with us! We're looking forward to this summer, and would like to wish the candidates good luck in the application process!

1 detailed update available.

Comment by @obi1kenobi posted on 2025-03-08:

Key developments:

  • Sealed trait analysis correctly handles #[doc(hidden)] items. This completes one checkbox of this goal!
  • We shipped a series of lints detecting breakage in generic types, lifetimes, and const generics. One of them has already caught accidental breakage in the real world!

cargo-semver-checks v0.40, released today, includes a variety of improvements to sealed trait analysis. They can be summarized as "smarter, faster, more correct," and will have an immediate positive impact on popular crates such as diesel and zerocopy.

While we already shipped a series of lints detecting generics-related breakage, more work is needed to complete that checkbox. This, and the "special cases like 'static and ?Sized", will be the focus of upcoming work.
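
For context, the sealed-trait pattern in question looks like this (a generic textbook example, not code from cargo-semver-checks, diesel, or zerocopy):

```rust
mod private {
    // Hidden from docs; downstream crates are not meant to name it.
    #[doc(hidden)]
    pub trait Sealed {}
}

/// Downstream crates can *use* Backend but cannot implement it, because
/// they cannot reach the private supertrait `private::Sealed`. Adding a
/// required method to Backend is therefore not a major breaking change,
/// which is what the sealed-trait analysis has to detect.
pub trait Backend: private::Sealed {
    fn name(&self) -> &'static str;
}

pub struct Postgres;
impl private::Sealed for Postgres {}
impl Backend for Postgres {
    fn name(&self) -> &'static str { "postgres" }
}

fn main() {
    println!("{}", Postgres.name());
}
```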

1 detailed update available.

Comment by @tmandry posted on 2025-03-25:

Since our last update, there has been talk of dedicating some time at the Rust All Hands for interop discussion; @baumanj and @tmandry are going to work on fleshing out an agenda. @cramertj and @tmandry brainstormed with @oli-obk (who was very helpful) about ways of supporting a more ambitious "template instantiation from Rust" goal, and this may get turned into a prototype at some point.

There is now an early prototype available that allows you to write x.use; if the type of x implements UseCloned, then this is equivalent to x.clone(), else it is equivalent to a move. This is not the desired end semantics in a few ways, just a step along the road. Nothing to see here (yet).

1 detailed update available.

Comment by @nikomatsakis posted on 2025-03-17:

Update: rust-lang/rust#134797 has landed.

Semantics as implemented in the PR:

  • Introduced a trait UseCloned implemented for Rc and Arc types.
  • x.use checks whether x's type X implements the UseCloned trait; if so, then x.use is equivalent to x.clone(), otherwise it is a copy/move of x;
  • use || ...x... closures act like move closures but respect the UseCloned trait, so they will either clone, copy, or move x as appropriate.
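
A tiny sketch of those rules in action (hypothetical usage of the nightly experiment; syntax and feature gating may change):

```rust
// Nightly-only sketch; `.use` and `use ||` are experimental syntax.
use std::sync::Arc;

fn demo(data: Arc<Vec<u8>>) {
    let snapshot = data.use;   // Arc implements UseCloned: acts like data.clone()
    let task = use || {
        // Captured per UseCloned: data is cloned into the closure, not moved.
        println!("{} bytes", data.len());
    };
    task();
    // Both the original and the use-clone are still live here.
    println!("{} == {}", data.len(), snapshot.len());
}

fn main() {
    demo(Arc::new(vec![1, 2, 3]));
}
```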

Next steps:

  • Modify codegen so that we guarantee that x.use will do a copy if X: Copy is true after monomorphization. Right now the desugaring to clone occurs before monomorphization and hence it will call the clone method even for those instances where X is a Copy type.
  • Convert x.use to a move rather than a clone if this is a last-use.
  • Make x equivalent to x.use but with an (allow-by-default) lint to signal that something special is happening.

Notable decisions made and discussions:

  • Opted to name the trait that controls whether x.use does a clone or a move UseCloned rather than Use. This is because the trait does not control whether or not you can use something but rather controls what happens when you do.
  • Question was raised on Zulip as to whether x.use should auto-deref. After thinking it over, reached the conclusion that it should not, because x and x.use should eventually behave the same modulo lints, but that (as ever) a &T -> T coercion would be useful for ergonomic reasons.
1 detailed update available.

Comment by @ZuseZ4 posted on 2025-03-25:

I just noticed that I missed my February update, so I'll keep this update a bit more high-level, to not make it too long.

Key developments:

  1. All key autodiff PRs got merged. So after building rust-lang/rust with the autodiff feature enabled, users can now use it without the need for any custom fork (a short usage sketch follows after this list).
  2. std::autodiff received the first PRs from new contributors who had not previously been involved in rustc development! My plan is to grow a team to maintain this feature, so that's a great start. The PRs are here, here and here. Over time I hope to hand over increasingly larger issues.
  3. I received an offer to join the Rust compiler team, so now I can also officially review and approve PRs! For now I'll focus on reviewing PRs in the fields I'm most comfortable with, so autodiff, batching, and soon GPU offload.
  4. I implemented a standalone batching feature. It was a bit larger (~2k LoC) and needed some (back then unmerged) autodiff PRs, since they both use the same underlying Enzyme infrastructure. I therefore did not push for merging it.
  5. I recently implemented batching as part of the autodiff macro, for people who want to use both together. I subsequently split out a first set of code improvements and refactorings, which already got merged. The remaining autodiff feature PR is only 600 loc, so I'm currently cleaning it up for review.
  6. I spent time preparing an MCP to enable autodiff in CI (and therefore nightly). I also spent a lot of time discussing a potential MLIR backend for rustc. Please reach out if you want to be involved!
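
For a sense of the user-facing surface, here is a minimal reverse-mode sketch modeled on the std::autodiff docs; the exact macro arguments and generated signature should be treated as assumptions, and running it requires a rustc built with the autodiff feature enabled, as described above:

#![feature(autodiff)]

use std::autodiff::autodiff;

// Asks the compiler to generate `d_square`, which returns the primal
// result together with the gradient (scaled by the seed argument).
// The macro arguments and generated signature are assumptions here.
#[autodiff(d_square, Reverse, Active, Active)]
fn square(x: f64) -> f64 {
    x * x
}

fn main() {
    // A seed of 1.0 yields the plain derivative: d/dx (x*x) = 2x.
    let (y, dy) = d_square(3.0, 1.0);
    assert_eq!(y, 9.0);
    assert_eq!(dy, 6.0);
}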

Help wanted: We want to support autodiff in lib builds, instead of only binaries. oli-obk and I recently figured out the underlying bug, and I started with a PR in https://github.com/rust-lang/rust/pull/137570. The problem is that autodiff assumes fat-lto builds, but lib builds compile some of the library code using thin-lto, even if users specify lto=fat in their Cargo.toml. We want to move everything to fat-lto if we enable autodiff as a temporary solution, and later move towards embed-bc as a longer-term solution. If you have some time to help, please reach out! Some of us have already looked into it a little but got side-tracked, so it's better to talk first about which code to re-use, rather than starting from scratch.

I also booked my RustWeek ticket, so I'm happy to talk about all types of Scientific Computing, HPC, ML, or cursed Rust(c) and LLVM internals! Please feel free to dm me if you're also going and want to meet.

1 detailed update available.

Comment by @Eh2406 posted on 2025-03-14:

Progress continues to be stalled by high priority tasks for $DAY_JOB. It continues to be unclear when the demands of work will allow me to return focus to this project.

1 detailed update available.

Comment by @epage posted on 2025-03-17:

  • Key developments:
    • Between tasks on #92, I've started to refresh myself on the libtest-next code base
  • Blockers:
  • Help wanted:

We've started work on implementing #[loop_match] on this branch. For the time being, integer and enum patterns are supported. The benchmarks are extremely encouraging, showing large improvements over the status quo, and significant improvements versus -Cllvm-args=-enable-dfa-jump-thread.

Our next steps can be found in the todo file, and focus mostly on improving the code quality and robustness.

3 detailed updates available.

Comment by @folkertdev posted on 2025-03-18:

@traviscross how would we make progress on that? So far we've mostly been talking to @joshtriplett, under the assumption that a #[loop_match] attribute on loops combined with a #[const_continue] attribute on "jumps to the next iteration" will be acceptable as a language experiment.

Our current implementation handles the following:

#![feature(loop_match)]

enum State {
    A,
    B,
}

fn main() {
    let mut state = State::A;
    #[loop_match]
    'outer: loop {
        state = 'blk: {
            match state {
                State::A =>
                {
                    #[const_continue]
                    break 'blk State::B
                }
                State::B => break 'outer,
            }
        }
    }
}

Crucially, this does not add syntax, only the attributes and internal logic in MIR lowering for statically performing the pattern match to pick the right branch to jump to.

The main challenge is then to implement this in the compiler itself, which we've been working on (I'll post our tl;dr update shortly).

Comment by @folkertdev posted on 2025-03-18:

Some benchmarks (as of March 18th)

A benchmark of https://github.com/bjorn3/comrak/blob/loop_match_attr/autolink_email.rs, basically a big state machine that is a perfect fit for loop match

Benchmark 1: ./autolink_email
  Time (mean ± σ):      1.126 s ±  0.012 s    [User: 1.126 s, System: 0.000 s]
  Range (min … max):    1.105 s …  1.141 s    10 runs
 
Benchmark 2: ./autolink_email_llvm_dfa
  Time (mean ± σ):     583.9 ms ±   6.9 ms    [User: 581.8 ms, System: 2.0 ms]
  Range (min … max):   575.4 ms … 591.3 ms    10 runs
 
Benchmark 3: ./autolink_email_loop_match
  Time (mean ± σ):     411.4 ms ±   8.8 ms    [User: 410.1 ms, System: 1.3 ms]
  Range (min … max):   403.2 ms … 430.4 ms    10 runs
 
Summary
  ./autolink_email_loop_match ran
    1.42 ± 0.03 times faster than ./autolink_email_llvm_dfa
    2.74 ± 0.07 times faster than ./autolink_email

#[loop_match] beats the status quo, but also beats the LLVM flag by a large margin.


A benchmark of zlib decompression with chunks of 16 bytes (this makes the impact of loop_match more visible)

Benchmark 1 (65 runs): target/release/examples/uncompress-baseline rs-chunked 4
  measurement          mean ± σ            min … max           outliers         delta
  wall_time          77.7ms ± 3.04ms    74.6ms … 88.9ms          9 (14%)        0%
  peak_rss           24.1MB ± 64.6KB    24.0MB … 24.2MB          0 ( 0%)        0%
  cpu_cycles          303M  ± 11.8M      293M  …  348M           9 (14%)        0%
  instructions        833M  ±  266       833M  …  833M           0 ( 0%)        0%
  cache_references   3.62M  ±  310K     3.19M  … 4.93M           1 ( 2%)        0%
  cache_misses        209K  ± 34.2K      143K  …  325K           1 ( 2%)        0%
  branch_misses      4.09M  ± 10.0K     4.08M  … 4.13M           5 ( 8%)        0%
Benchmark 2 (68 runs): target/release/examples/uncompress-llvm-dfa rs-chunked 4
  measurement          mean ± σ            min … max           outliers         delta
  wall_time          74.0ms ± 3.24ms    70.6ms … 85.0ms          4 ( 6%)        🚀-  4.8% ±  1.4%
  peak_rss           24.1MB ± 27.1KB    24.0MB … 24.1MB          3 ( 4%)          -  0.1% ±  0.1%
  cpu_cycles          287M  ± 12.7M      277M  …  330M           4 ( 6%)        🚀-  5.4% ±  1.4%
  instructions        797M  ±  235       797M  …  797M           0 ( 0%)        🚀-  4.3% ±  0.0%
  cache_references   3.56M  ±  439K     3.08M  … 5.93M           2 ( 3%)          -  1.8% ±  3.6%
  cache_misses        144K  ± 32.5K     83.7K  …  249K           2 ( 3%)        🚀- 31.2% ±  5.4%
  branch_misses      4.09M  ± 9.62K     4.07M  … 4.12M           1 ( 1%)          -  0.1% ±  0.1%
Benchmark 3 (70 runs): target/release/examples/uncompress-loop-match rs-chunked 4
  measurement          mean ± σ            min … max           outliers         delta
  wall_time          71.6ms ± 2.43ms    69.3ms … 78.8ms          6 ( 9%)        🚀-  7.8% ±  1.2%
  peak_rss           24.1MB ± 72.8KB    23.9MB … 24.2MB         20 (29%)          -  0.0% ±  0.1%
  cpu_cycles          278M  ± 9.59M      270M  …  305M           7 (10%)        🚀-  8.5% ±  1.2%
  instructions        779M  ±  277       779M  …  779M           0 ( 0%)        🚀-  6.6% ±  0.0%
  cache_references   3.49M  ±  270K     3.15M  … 4.17M           4 ( 6%)        🚀-  3.8% ±  2.7%
  cache_misses        142K  ± 25.6K     86.0K  …  197K           0 ( 0%)        🚀- 32.0% ±  4.8%
  branch_misses      4.09M  ± 7.83K     4.08M  … 4.12M           1 ( 1%)          +  0.0% ±  0.1%
Benchmark 4 (69 runs): target/release/examples/uncompress-llvm-dfa-loop-match rs-chunked 4
  measurement          mean ± σ            min … max           outliers         delta
  wall_time          72.8ms ± 2.57ms    69.7ms … 80.0ms          7 (10%)        🚀-  6.3% ±  1.2%
  peak_rss           24.1MB ± 35.1KB    23.9MB … 24.1MB          2 ( 3%)          -  0.1% ±  0.1%
  cpu_cycles          281M  ± 10.1M      269M  …  312M           5 ( 7%)        🚀-  7.5% ±  1.2%
  instructions        778M  ±  243       778M  …  778M           0 ( 0%)        🚀-  6.7% ±  0.0%
  cache_references   3.45M  ±  277K     2.95M  … 4.14M           0 ( 0%)        🚀-  4.7% ±  2.7%
  cache_misses        176K  ± 43.4K      106K  …  301K           0 ( 0%)        🚀- 15.8% ±  6.3%
  branch_misses      4.16M  ± 96.0K     4.08M  … 4.37M           0 ( 0%)        💩+  1.7% ±  0.6%

The important points: loop-match is faster than llvm-dfa, and when combined, performance is worse than when using loop-match on its own.

Comment by @traviscross posted on 2025-03-18:

Thanks for that update. Have reached out separately.

1 detailed update available.

Comment by @celinval posted on 2025-03-17:

We have been able to merge the initial support for contracts in the Rust compiler under the contracts unstable feature. @tautschnig has created the first PR to incorporate contracts in the standard library and uncovered a few limitations that we've been working on.

1 detailed update available.

Comment by @jieyouxu posted on 2025-03-15:

Update (2025-03-15):

  • Doing a survey pass on compiletest to make sure I have the full picture.
1 detailed update available.

Comment by @yaahc posted on 2025-03-03:

After further review I've decided to limit scope initially and not get ahead of myself, so I can make sure the schemas I'm working with can support the kinds of queries and charts we're eventually going to want in the final version of the unstable feature usage metric. I'm hoping that by limiting scope I can finish most of the items currently outlined in this project goal ahead of schedule, so I can move on to building the proper foundations based on the proof of concept and start to design more permanent components. As such I've opted for the following:

  • Make the minimal change I need to the current JSON format, which is including the timestamp
  • Gain clarity on exactly what questions I should be answering with the unstable feature usage metrics, the desired graphs and tables, and how this influences what information I need to gather and how to construct the appropriate queries within Grafana
  • Gather a sample dataset from docs.rs rather than viewing it as the long-term integration, since initial conversations with docs.rs suggest there are definitely some sample-set bias issues in that dataset
    • Figure out the proper hash/id to use in the metrics file names, to avoid collisions between different conditional-compilation variants of the same crate with different features enabled.

For the second item above I need to have more detailed conversations with both @rust-lang/libs-api and @rust-lang/lang.

1 detailed update available.

Comment by @nikomatsakis posted on 2025-03-17:

Update:

@tiif has been working on integrating const-generic effects into a-mir-formality and making good progress.

I have begun exploring integration of the MiniRust definition of MIR. This doesn't directly work towards the goal of modeling coherence but it will be needed for const generic work to be effective.

I am considering some simplification and cleanup work as well.

1 detailed update available.

Comment by @lcnr posted on 2025-03-17:

The two cycle handling PRs mentioned in the previous update have been merged, allowing nalgebra to compile with the new solver enabled. I have now started to work on opaque types in borrowck again. This is a quite involved issue and will likely take a few more weeks until it's fully implemented.

1 detailed update available.

Comment by @veluca93 posted on 2025-03-17:

Key developments: Started investigating how the proposed SIMD multiversioning options might fit into the ongoing efforts to formalize a Rust effect system.

1 detailed update available.

Comment by @blyxyas posted on 2025-03-17:

Monthly update!

  • https://github.com/rust-lang/rust-clippy/issues/13821 has been merged. This has successfully optimized the MSRV extraction from the source code.

With the old MSRV extraction, Symbol::intern use was sky-high, about 3.5 times higher than the rest of the compilation combined. Now it's at normal levels. Note that Symbol::intern is a very expensive and locking function, so this is very notable. Thanks to @Alexendoo for this incredible work!

As a general note on the month, I'd say that we've experimented a lot.

  • Starting efforts on parallelizing the lint system.
  • We started taking a deeper look into our dependence on libLLVM.so and heavy relocation problems: https://github.com/rust-lang/rust-clippy/issues/14423
  • I took a look into heap allocation optimization; it seems that we are fine. For the moment, rust-clippy#14423 is the priority.
1 detailed update available.

Comment by @oli-obk posted on 2025-03-20:

I opened an RFC (https://github.com/rust-lang/rfcs/pull/3762) and we had a lang team meeting about it. Some design exploration and bikeshedding later, we have settled on using (const) instead of ~const, along with some more annotations for explicitness and some fewer annotations in other places. The RFC has been updated accordingly. There are still ongoing discussions about reintroducing the "fewer annotations" for redundancy and easier processing by humans.
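
To illustrate the notation, here is a sketch of the surface syntax only; the feature is still being designed, and this is our own example that does not compile today:

// With the settled-on syntax, a conditionally-const bound is written
// `(const)` where it used to be written `~const`: this const fn is
// callable in const contexts whenever T's Default impl is const.
const fn default_pair<T: (const) Default>() -> (T, T) {
    (T::default(), T::default())
}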

2 detailed updates available.

Comment by @JoelMarcey posted on 2025-03-14:

Key Developments: Working on a public announcement of Ferrous' contribution of the FLS. Goal is to have that released soon. Also working out the technical details of the contribution, particularly around how to initially integrate the FLS into the Project itself.

Blockers: None yet.

Comment by @JoelMarcey posted on 2025-04-01:

Key Developments: Public announcement of the FLS donation to the Rust Project.

Blockers: None

2 detailed updates available.

Comment by @celinval posted on 2025-03-20:

We have proposed a project idea to Google Summer of Code to implement the refactoring and infrastructure improvements needed for this project. I'm working on breaking down the work into smaller tasks so they can be implemented incrementally.

Comment by @celinval posted on 2025-03-20:

I am also happy to share that @makai410 is joining us in this effort! 🥳

2 detailed updates available.

Comment by @nikomatsakis posted on 2025-03-03:

Update: February goal update has been posted. We made significant revisions to the way that goal updates are prepared. If you are a goal owner, it's worth reading the directions for how to report your status, especially the part about help wanted and summary comments.

Comment by @nikomatsakis posted on 2025-03-17:

Update: We sent out the first round of pings for the March update. The plan is to create the document on March 25th, so @rust-lang/goal-owners please get your updates in by then. Note that you can create a TL;DR comment if you want to add 2-3 bullet points that will be embedded directly into the final blog post.

In terms of goal planning:

  • @nandsh is planning to do a detailed retrospective on the goals program in conjunction with her research at CMU. Please reach out to her on Zulip (Nandini) if you are interested in participating.
  • We are planning to overhaul the ping process as described in this hackmd. In short, pings will come on the 2nd/3rd Monday of the month. No pings will be sent if you've posted a comment that month. The blog post will be prepared on the 3rd Friday.
  • We've been discussing how to structure 2025H2 goals and are thinking of making a few changes. We'll break out three categories of goals (Flagship / Core / Stretch), with "Core" goals being those deemed most important. We'll also have a 'pre-read' with team leads before the RFC opens, to look for cross-team collaborative opportunities. At least that's the current plan.
  • We drafted a Rust Vision Doc Action Plan.
  • We expect to publish our announcement blog post by the end of the month, including a survey requesting volunteers to speak with us. We are also creating plans for interviews with company contacts, global community groups, and Rust maintainers.
1 detailed update available.

Comment by @nikomatsakis posted on 2025-03-17:

Update:

I've asked @jackh726 to co-lead the team with me. Together we pulled together a Rust Vision Doc action plan.

The plan begins by posting a blog post (draft available here) announcing the effort. We are coordinating with the Foundation to create a survey which will be linked from the blog post. The survey questions ask about users' experience but also look for volunteers we can speak with.

We are pulling together the team that will perform the interviewing. We've been in touch with UX researchers who will brief us on some of the basics of UX research. We're finalizing team membership now, plus the set of focus areas; we expect to cover at least users/companies, Rust project maintainers, and Rust global communities. See the Rust Vision Doc action plan for more details.

1 detailed update available.

Comment by @davidtwco posted on 2025-03-03:

A small update, @Jamesbarford aligned with @kobzol on a high-level architecture and will begin fleshing out the details and making some small patches to rustc-perf to gain familiarity with the codebase.

1 detailed update available.

Comment by @lqd posted on 2025-03-24:

Here are the key developments for this update.

Amanda has continued on the placeholder removal task, in particular on the remaining issues with rewritten type tests. The in-progress work caused incorrect errors to be emitted under the rewrite scheme, and a new strategy to handle these was discussed. This has been implemented in the PR, and seems to work as hoped. So the PR should now be in a state that is ready for a more in-depth review pass, and should hopefully land soon.

Tage has started his master's thesis with a focus on the earliest parts of the borrow checking process, in order to experiment with graded borrow-checking, incrementalism, avoiding work that's not needed for loans that are not invalidated, and so on. A lot of great progress has been made on these parts already, and more are being discussed even in the later areas (live and active loans).

I have focused on taking care of the remaining diagnostics and test failures of the location-sensitive analysis. For diagnostics in particular, the PRs mentioned in the previous updates have landed, and I've fixed a handful of NLL spans, all the remaining differences under the compare-mode, and blessed differences that were improvements. For the test failures, handling liveness differently in traversal fixed most of the remaining failures, while a couple are due to the friction with the mid-point avoidance scheme. For these, we have a few different paths forward, but with different trade-offs, and we'll be discussing and evaluating these in the very near future. Another two are still left to analyze in depth to see what's going on.

Our near-future focus will be to continue down the path to correctness, while also expanding test coverage in certain very niche areas where it feels lacking and we want to improve it. At the same time, we'll also work on figuring out a better architecture to streamline the entire end-to-end process, to allow early outs, avoid work that is not needed, etc.

1 detailed update available.

Comment by @lqd posted on 2025-03-26:

This project goal was actually carried over from 2024h2, in https://github.com/rust-lang/rust-project-goals/pull/294

2 detailed updates available.

Comment by @davidtwco posted on 2025-03-03:

A small update, we've opened a draft PR for the initial implementation of this - rust-lang/rust#137944. Otherwise, just continued to address feedback on the RFCs.

Comment by @davidtwco posted on 2025-03-18:

  • We've been resolving review feedback on the implementation of the Sized Hierarchy RFC on rust-lang/rust#137944. We're also working on reducing the performance regression in the PR, by avoiding unnecessary elaboration of sizedness supertraits and extending the existing Sized case in the type_op_prove_predicate query's fast path.
  • There haven't been any changes to the RFC; there's minor feedback that has yet to be responded to, but it's otherwise just waiting on t-lang.
  • We've been experimenting with rebasing rust-lang/rust#118917 on top of rust-lang/rust#137944 to confirm that const sizedness allows us to remove the type system exceptions that the SVE implementation previously relied on. We're happy to confirm that it does.
1 detailed update available.

Comment by @Muscraft posted on 2025-03-31:

While my time was limited these past few months, lots of progress was made! I was able to align annotate-snippets internals with rustc's HumanEmitter and get the new API implemented. These changes have not been merged yet, but they can be found here. As part of this work, I started making rustc use annotate-snippets as its only renderer, which turned out to be a huge benefit: I was able to get a feel for the new API while addressing rendering divergences. As of the time of writing, all but ~30 of the roughly 18,000 UI tests are passing.

test result: FAILED. 18432 passed; 29 failed; 193 ignored; 0 measured; 0 filtered out; finished in 102.32s

Most of the failing tests are caused by a few things:

  • annotate-snippets right aligns numbers, whereas rustc left aligns
  • annotate-snippets doesn't handle multiple suggestions for the same span very well
  • Problems with handling FailureNote
  • annotate-snippets doesn't currently support colored labels and titles, i.e., the magenta highlight rustc uses
  • rustc wants to pass titles similar to error: internal compiler error[E0080], but annotate-snippets doesn't support that well
  • differences in how rustc and annotate-snippets handle term width during tests
    • When testing, rustc uses DEFAULT_COLUMN_WIDTH and does not subtract the code offset, while annotate-snippets does
  • Slight differences in how "newline"/end of line highlighting is handled
  • JSON output rendering contains color escapes

Frederik BraunWith Carrots & Sticks - Can the browser handle web security?

NB: This is the blog version of my keynote from Measurements, Attacks, and Defenses for the Web (MADWeb) 2025, earlier this year. It was not recorded.

In my keynote, I examined web security through the browser's perspective. Various browser features have helped fix transport security issues and increase HTTPS adoption …

Firefox NightlyPutting up Wallpaper – These Weeks in Firefox: Issue 178

Highlights

  • Custom Wallpapers for New Tab are undergoing further refinement and bugfixing! Amy just fixed an issue which would cause the custom wallpaper image to flash under certain circumstances.
    • This can be tested in Nightly by visiting Firefox Labs in about:preferences and making sure “Choose a custom wallpaper or colour for New Tab” is checked.
  • Profile Management
    • We are on track to ship our initial feature set to Beta and 0.5% of Release in Firefox 138!
    • We’ve been enabled in Nightly for a while, but to try this out in 138 Beta/Release, flip the browser.profiles.enabled pref to true
  • Nicolas Chevobbe fixed an 11 year old bug by improving the performance of StyleEditor autocomplete for a specific case that would end up freezing/crashing Firefox!

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug
  • Carlos
  • Chris Shiohama
  • cob.bzmoz
  • Harold Camacho
  • Shane Ziegler
New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions
Addon Manager & about:addons
WebExtensions Framework
  • Thanks to Florian, the last remaining WebExtensions telemetry recorded as legacy telemetry scalars and histograms has been migrated to Glean (and mirrored to legacy telemetry through GIFFT) – Bug 1953106
  • Fixed manifest validation error on manifests using an empty background.scripts property – Bug 1954637
DevTools
WebDriver BiDi
Fluent
Lint, Docs and Workflow
  • Julien fixed an issue where the ESLint configuration was defining ContentTaskUtils as a global variable available for all tests (it is only available within certain test functions).
New Tab Page
  • New Tab is now packaged as a built-in addon on the Beta channel! This sets us up to try our first pilot out-of-band update to New Tab sometime in May.
  • Nathan has added ASRouter / OMC plumbing to make it possible to show onboarding messages inline within New Tab. The first use of this capability will be to highlight the new Custom Wallpapers feature
Performance
Places
  • Moritz has fixed cases where bookmarks were sorted wrongly in the view after moving multiple of them at once. Bug 1557853
Profile Management
  • Big picture
    • 100% release timeline planning is well underway with Nimbus, OMC, and DI teams
    • We are starting to look at testing across multiple Firefox instances using Marionette or using background tasks. Reach out if you have suggestions or ideas.
  • Bugs fixed in the past 2 weeks:
    • tschuster fixed bug 1883387, suppressing a telemetry error shown at startup on Linux
    • Jared fixed bug 1933264 and bug 1956105 to locally propagate changes to data policy preferences between profiles in a group
    • Teddy fixed bug 1934921 – Voice over reads the “Edit your profile” title as Article
    • Cieara fixed bug 1949022 – ‘Customize your new profile’ does not have the correct heading level
    • Teddy fixed bug 1950198 – Correct styling details for profile editor
    • Teddy fixed bug 1950199 – Correct styling details for profile toolbar menu
    • Niklas fixed bug 1950250 – <img> used in Theme radios buttons need null alt text
    • Niklas fixed bug 1952985 – Update theme names
    • Jared fixed bug 1955222 – remove profile name input autofocus on about:editprofile and about:newprofile pages to improve screen reader usability with NVDA
    • Dave fixed bug 1926997 – Selectable Profile directory permissions are incorrect
    • Cieara fixed bug 1955036 – Double focus ring on edit button in Profiles submenu of FxA toolbar button menu
    • Niklas fixed bug 1955397 – Avatars and profiles panel and cards display mixed themes colours after switching themes
    • Teddy fixed bug 1956286 – The selected theme and avatar don’t remain focused, as specified in the Figma guidelines
    • Cieara fixed bug 1955244 – Update the SUMO URL in the profiles Learn More links
    • Dave fixed bug 1954832 – [macOS] The Original profile can’t be reached when all the other profiles are Unicode-named
Search and Navigation

Don Marticonverting PDFs for Tesseract

There are two kinds of PDFs. Some have real embedded text that you can select in a PDF reader, and some are just images.

The second kind is what I sometimes get in response to a CCPA/CPRA Right to Know. Some companies, for whatever reason, want to make it harder to do automated processing of multiple RtKs. This should make privacy researchers more likely to look at them, because what are they hiding? They must be up to something.

But the PDF still needs to get run through some kind of OCR. Tesseract OCR has been giving me pretty good results, but it needs to be fed images, not PDFs.

So I have been feeding the PDFs to pdf2image in Python code, and then passing the images to Tesseract. But it turns out that Tesseract works a lot better with higher-resolution images, and the default for pdf2image is 200 DPI. So I'm getting a lot more accurate OCR by making the images oversized with the dpi named parameter:

import pdf2image

pages = pdf2image.convert_from_bytes(blob, dpi=600)

I might tweak this and try 300 DPI, or also try passing grayscale=True to preserve more information. Some other approaches to try next, if I need them.

Anyway, Meta (Facebook) made some of their info easy to parse (in JSON format) and got some of us to do research on them. Some of the other interesting companies, though, are going to be those who put in the time to obfuscate their responses to RtKs.

Related

OCRmyPDF is an all-in-one tool that adds a text layer to the PDF. Uses Tesseract internally. When possible, inserts OCR information as a “lossless” operation without disrupting any other content. Thanks to Gaurav Ujjwal for the link. (I’m doing an OCR step as part of ingesting PDFs into a database, so I don’t need to see the text, but this could be good for PDFs that you actually want to read and not just do aggregated reporting on.)

Example of where GDPR compliance doesn’t get you CCPA compliance: This is the mistake that Honda recently made. CCPA/CPRA is not just a subset of GDPR. GDPR allows a company to verify an objection to processing, but CCPA does not allow a company to verify an opt out of sale. (IMHO the EU should harmonize by adopting the California good-faith, reasonable, and documented belief that a request to opt-out is fraudulent standard for objections to processing.)

New Report: Many Companies May Be Ignoring Opt-Out Requests Under State Privacy Laws - Innovation at Consumer Reports The study examined 40 online retailers and found that many of them appear to be ignoring opt-out requests under state privacy laws. (A lot more companies are required to comply with CCPA/CPRA than there are qualified compliance managers. Even if companies fix some of the obvious problems identified in this new CR report, there are still a bunch of data transfers that are obvious detectable violations if a GPC flag wasn’t correctly set for a user in the CRM system. You can’t just fix the cookie—GPC also has to cover downstream usage such as custom audiences and server-to-server APIs.)

Bonus links

EU may “make an example of X” by issuing $1 billion fine to Musk’s social network by Jon Brodkin at Ars Technica. (A lot of countries don’t need to raise their own tariffs in order to retaliate against the USA’s tariffs. They just need to stop letting US companies slide when they violate laws over there. If they can’t rely on the USA for regional security, there’s no reason not to. Related: US Cloud soon illegal? at noyb.eu)

Big Tech Backed Trump for Acceleration. They Got a Decel President Instead by Emanuel Maiberg and Jason Koebler at 404 Media. Unless Trump folds, the tariffs will make the price of everything go up. Unemployment will go up. People will buy less stuff, and companies will spend less money on advertising that powers tech platforms. The tech industry, which has thrived on the cheap labor, cheap parts, cheap manufacturing, and supply chains enabled by free and cheap international trade, will now have artificial costs and bureaucracy tacked onto all of this. The market knows this, which is why tech stocks are eating shit. (Welcome to the weak men create hard times phase—but last time we had one of these, the dismal Microsoft monopoly days are when we got the web and Linux scenes that evolved into today’s Big Tech. Whatever emerges from the high-unemployment, import-denied generation, it’s going to surprise us.)

Alternative to Starlink: Eutelsat Provides Ukraine With Access to Satellite Internet by Taras Safronov. According to Berneke, Eutelsat has been providing high-speed satellite Internet services in Ukraine through a German distributor for about a year.

The coming pro-smoking discourse by Max Read. (Then: social media is the new smoking. Now: smoking is the new social media?)

Signal sees its downloads double after scandal by Sarah Perez on TechCrunch. Appfigures chalks up the doubling of downloads to the old adage all press is good press, as the scandal increased Signal’s visibility and likely introduced the app to thousands of users for the first time. (Signal is also, according to traders on Manifold Markets, the e2e messaging program least likely to provide message content to US law enforcement. Both Apple, the owner of iMessage, and Meta, the owner of WhatsApp, have other businesses that governments can lean on in order to get cooperation. Signal just has e2e software and reputation, so fewer points of leverage.)

Substack rival Ghost is now connected to the fediverse also by Sarah Perez. Per-byline RSS feeds ftw. Check some reporter pages on there, such as Sarah Perez, Author at TechCrunch and Natasha Lomas, Author at TechCrunch with the RSSPreview extension installed. (I’m cautiously optimistic that ActivityPub might be able to address the comments and pingback problems for blogs and small sites in ways that SaaS comments didn’t and Twitter at its peak almost did.)

YouTube removes ‘gender identity’ from hate speech policy by Taylor Lorenz (In the medium term, a lot of the moderation changes at Big Tech are going to turn into a recruiting challenge for hiring managers in marketing departments. If an expected part of working in marketing is going to be mandatory involvement in sending money to weird, creepy right-wing dudes, that means you’re mostly going to get to hire…weird, creepy right-wing dudes.) Related: slop capitalism and dead internet theory by Adam Aleksic. Our best way of fighting back? Spend as little time on algorithmic media as possible, strengthen our social ties, and gather information from many different sources—remembering that the platforms are the real enemy.

William LachanceElectrification and solar

I did up an evidence dashboard with some (hopefully) data-driven thoughts on the environmental and financial aspects of heat pump and solar technology in the Greater Toronto / Hamilton area:

wlach.github.io/gtha-electrification

Evidence is pretty neat: very close to what I originally had in mind when building Irydium a few years ago at Mozilla and Recurse (see previous entries in this journal).

Mozilla ThunderbirdThundermail and Thunderbird Pro Services

Today we’re pleased to announce what many in our open source contributor community already know. The Thunderbird team is working on an email service called “Thundermail” as well as file sharing, calendar scheduling and other helpful cloud-based services that as a bundle we have been calling “Thunderbird Pro.”

First, a point of clarification: Thunderbird, the email app, is and always will be free. We will never place features that can be delivered through the Thunderbird app behind a paywall. If something can be done directly on your device, it should be. However, there are things that can’t be done on your computer or phone that many people have come to expect from their email suites. This is what we are setting out to solve with our cloud-based services.

All of these new services are (or soon will be) open source software under true open source licenses. That’s how Thunderbird does things and we believe it is our super power. It is also a major reason we exist: to create open source communication and productivity software that respects our users. Because you can see how it works, you can know that it is doing the right thing.

The Why for offering these services is simple. Thunderbird loses users each day to rich ecosystems that are both products and services, such as Gmail and Office365. These ecosystems have both hard vendor lock-ins (through interoperability issues with 3rd-party clients) and soft lock-ins (through convenience and integration between their clients and services). It is our goal to eventually have a similar offering so that a 100% open source, freedom-respecting alternative ecosystem is available for those who want it. We don’t even care if you use our services with Thunderbird apps, go use them with any mail client. No lock-in, no restrictions – all open standards. That is freedom.

What Are The Services?

Thunderbird Appointment

Appointment is a scheduling tool that allows you to send a link to someone, allowing them to pick a time on your calendar to meet. The repository for Appointment has been public for a while and has seen pretty remarkable development so far. It is currently in a closed Beta and we are letting more users in each day.

Appointment has been developed to make meeting with others easier. We weren’t happy with the existing tools as they were either proprietary or too bloated, so we started building Appointment.

Thunderbird Send

Send is an end-to-end encrypted file sharing service that allows you to upload large files to the service and share links to download those files with others. Many Thunderbird users have expressed interest in the ability to share large files in a privacy-respecting way – and it was a problem we were eager to solve.

Thunderbird Send is the rebirth of Firefox Send – well, kind of. At this point, we have a bit of a Ship of Theseus situation – having rebuilt much of the project to allow for a more direct method of sharing files (from user-to-user without the need to share a link). We opened up the repo to the public earlier this week. So we encourage everyone interested to go and check it out.

Thunderbird Send is currently in Alpha testing, and will move to a closed Beta very soon.

Thunderbird Assist

Assist is an experiment, developed in partnership with Flower AI, a flexible open-source framework for scalable, privacy-preserving federated learning, that will enable users to take advantage of AI features. The hope is that processing can be done on devices that can support the models, and for devices that are not powerful enough to run the language models locally, we are making use of Flower Confidential Remote Compute in order to ensure private remote processing (very similar to Apple’s Private Cloud Compute). 

Given some users’ sensitivity to this, these types of features will always be optional and something that users will have to opt into. As a reminder, Thunderbird will never train AI with your data. The repo for Assist is not public yet, but it will be soon.

Thundermail

Thundermail is an email service (with calendars and contacts as well). We want to provide email accounts to those who love Thunderbird, and we believe that we are capable of providing a better service than the other providers out there. Email that aligns with our values of privacy, freedom and respect of our users. No ads, no selling or training AI on your data – just your email and it is your email.

With Thundermail, it is our goal to create a next generation email experience that is completely, 100% open source and built by all of us, our contributors and users. Unlike the other services, there will not be a single repository where this work is done. But we will try and share relevant places to contribute in future posts like this.

The email domain for Thundermail will be Thundermail.com or tb.pro. Additionally, you will be able to bring your own domain on day 1 of the service.

Heading to thundermail.com you will see a sign up page for the beta waitlist. Please join it!

Final Thoughts

Don’t services cost money to run?

You may be thinking: “this all sounds expensive, how will Thunderbird be able to pay for it?” And that’s a great question! Services such as Send are actually quite expensive (storage is costly). So here is the plan: at the beginning, there will be paid subscription plans at a few different tiers. Once we have a sufficiently strong base of paying users to sustainably support our services, we plan to introduce a limited free tier to the public. You see this with other providers: limitations are standard as free email and file sharing are prone to abuse.

It’s also important to highlight again that Thunderbird Pro will be a completely separate offering from the Thunderbird you already use. While Thunderbird and the additional new services may work together and complement each other for those who opt in, they will never replace, compromise, or interfere with the core features or free availability of Thunderbird. Nothing about your current Thunderbird experience will change unless you choose to opt in and sign up with Thunderbird Pro. None of these features will be automatically integrated into Thunderbird desktop or mobile or activated without your knowledge.

The Realization of a Dream

This has been a long time coming. It is my conviction that all of this should have been a part of the Thunderbird universe a decade ago. But it’s better late than never. Just like our Android client has expanded what Thunderbird is (as will our iOS client), so too will these services.

Thunderbird is unique in the world. Our focus on open source, open standards, privacy and respect for our users is something that should be expressed in multiple forms. The absence of Thunderbird web services means that our users must make compromises that are often uncomfortable ones. This is how we correct that.

I hope that all of you will check out this work and share your thoughts and test these things out. What’s exciting is that you can run Send or Appointment today, on your own server. Everything that we do will be out in the open and you can come and help us build it! Together we can create amazing experiences that enhance how we manage our email, calendars, contacts and beyond.

Thank you for being on this journey with us.

Ryan Sipes
Managing Director of Product
Thunderbird

The post Thundermail and Thunderbird Pro Services appeared first on The Thunderbird Blog.

The Rust Programming Language BlogHelp us create a vision for Rust's future

tl;dr: Please take our survey here

Rust turns 10 this year. It's a good time to step back and assess where we are at and to get aligned around where we should be going. Where is Rust succeeding at empowering everyone to build reliable, efficient software (as it says on our webpage)? Where are there opportunities to do better? To that end, we have taken on the goal of authoring a Rust Vision RFC, with the first milestone being to prepare a draft for review at the upcoming Rust All Hands.

Goals and non-goals

The vision RFC has two goals:

  • to build a shared understanding of where we are and
  • to identify where we should be going at a high-level.

The vision RFC also has a non-goal, which is to provide specific designs or feature recommendations. We'll have plenty of time to write detailed RFCs for that. The vision RFC will instead focus more on higher-level recommendations and on understanding what people need and want from Rust in various domains.

We hope that by answering the above questions, we will then be able to evolve Rust with more confidence. It will also help Rust users (and would-be users) to understand what Rust is for and where it is going.

Community and technology are both in scope

The scope of the vision RFC is not limited to the technical design of Rust. It will also cover topics like

  • the experience of open-source maintainers and contributors, both for the Rust project and for Rust crates;
  • integrating global Rust communities across the world;
  • and building momentum and core libraries for particular domains, like embedded, CLI, or gamedev.

Gathering data

To answer the questions we have set, we need to gather data - we want to do our best not to speculate. This is going to come in two main formats:

  1. A survey about peoples' experiences with Rust (see below). Unlike the Annual Rust survey, the questions are open-ended and free-form, and cover somewhat different topics. This also allows us to gather a list of people to potentially interview.
  2. Interviews of people from various backgrounds and domains. In an ideal world, we would interview everyone who wants to be interviewed, but in reality we're going to try to interview as many people as we can to form a diverse and representative set.

While we have some idea of who we want to talk to, we may be missing some! We're hoping that the survey will not only help us connect to the people that we want to talk to, but also potentially help us uncover people we haven't yet thought of. We are currently planning to talk to

  • Rust users, novice to expert;
  • Rust non-users (considering or not);
  • Companies using (or considering) Rust, from startup to enterprise;
  • Global or language-based Rust affinity groups;
  • Domain-specific groups;
  • Crate maintainers, big and small;
  • Project maintainers and contributors, volunteer or professional;
  • Rust Foundation staff.

Our roadmap and timeline

Our current "end goal" is to author and open a vision RFC sometime during the second half of the year, likely in the fall. For this kind of RFC, though, the journey is really more important than the destination. We plan to author several drafts along the way and take feedback, both from Rust community members and from the public at large. The first milestone we are targeting is to prepare an initial report for review at the Rust All Hands in May. To that end, the data gathering process starts now with the survey, but we intend to spend the month of April conducting interviews (and more after that).

How you can help

For starters, fill out our survey here. This survey has three sections:

  1. To put the remaining responses into context, the survey asks a few demographic questions to allow us to ensure we are getting good representation across domains, experience, and backgrounds.
  2. It asks a series of questions about your experiences with Rust. As mentioned before, this survey is quite different from the Annual Rust survey. If you have experiences in the context of a company or organization, please feel free to share those (submitting this separately is best)!
  3. It asks for recommendations as to whom we ought to speak to. Please only recommend yourself or people/companies/groups for which you have a specific contact.

Note: The first part of the survey will only be shared publicly in aggregate, the second may be made public directly, and the third section will not be made public. For interviews, we can be more flexible with what information is shared publicly or not.

Of course, other than taking the survey, you can also share it with people. We really want to reach people that may not otherwise see it through our typical channels. So, even better if you can help us do that!

Finally, if you are active in the Rust maintainer community, feel free to join the #vision-doc-2025 channel on Zulip and say hello.

The Rust Programming Language BlogC ABI Changes for `wasm32-unknown-unknown`

The extern "C" ABI for the wasm32-unknown-unknown target has been using a non-standard definition since the inception of the target in that it does not implement the official C ABI of WebAssembly and it additionally leaks internal compiler implementation details of both the Rust compiler and LLVM. This will change in a future version of the Rust compiler and the official C ABI will be used instead.

This post details some history behind this change and the rationale for why it's being announced here, but you can skip straight to "Am I affected?" as well.

History of wasm32-unknown-unknown's C ABI

When the wasm32-unknown-unknown target was originally added in 2017, not much care was given to the exact definition of the extern "C" ABI at the time. In 2018 an ABI definition was added just for wasm, and the target is still using this definition to this day. This definition has become more and more problematic over time, and while some issues have been fixed, the root cause still remains.

Notably this ABI definition does not match the tool-conventions definition of the C ABI, which is the current standard for how WebAssembly toolchains should talk to one another. Originally this non-standard definition was used for all WebAssembly-based targets except Emscripten, but this changed in 2021 when the WASI targets for Rust switched to a corrected ABI definition. Still, however, the non-standard definition remained in use for wasm32-unknown-unknown.

The time has now come to correct this historical mistake and the Rust compiler will soon be using a correct ABI definition for the wasm32-unknown-unknown target. This means, however, that generated WebAssembly binaries will be different than before.

What is a WebAssembly C ABI?

The definition of an ABI answers questions along the lines of:

  • What registers are arguments passed in?
  • What registers are results passed in?
  • How is a 128-bit integer passed as an argument?
  • How is a union passed as a return value?
  • When are parameters passed through memory instead of registers?
  • What is the size and alignment of a type in memory?

For WebAssembly these answers are a little different than on native platforms. For example, WebAssembly does not have physical registers, and functions must all be annotated with a type. What WebAssembly does have is types such as i32, i64, f32, and f64. This means that for WebAssembly an ABI needs to define how to represent values in these types.

This is where the tool-conventions document comes in. That document provides a definition for how to represent primitives in C in the WebAssembly format, and additionally how function signatures in C are mapped to function signatures in WebAssembly. For example a Rust u32 is represented by a WebAssembly i32 and is passed directly as a parameter as a function argument. If the Rust structure #[repr(C)] struct Pair(f32, f64) is returned from a function then a return pointer is used which must have alignment 8 and size of 16 bytes.

In essence, the WebAssembly C ABI is acting as a bridge between C's type system and the WebAssembly type system. This includes details such as in-memory layouts and translations of a C function signature to a WebAssembly function signature.

How is wasm32-unknown-unknown non-standard?

Despite the ABI definition today being non-standard, many aspects of it are still the same as what tool-conventions specifies. For example, size/alignment of types is the same as it is in C. The main difference is how function signatures are calculated. An example (where you can follow along on godbolt) is:

#[repr(C)]
pub struct Pair {
    x: u32,
    y: u32,
}

#[unsafe(no_mangle)]
pub extern "C" fn pair_add(pair: Pair) -> u32 {
    pair.x + pair.y
}

This will generate the following WebAssembly function:

(func $pair_add (param i32 i32) (result i32)
  local.get 1
  local.get 0
  i32.add
)

Notably you can see here that the struct Pair was "splatted" into its two components, so the actual $pair_add function takes two arguments, the x and y fields. The tool-conventions document, however, specifically says that "other struct[s] or union[s]" are passed indirectly, notably through memory. We can see this by compiling this C code:

struct Pair {
    unsigned x;
    unsigned y;
};

unsigned pair_add(struct Pair pair) {
    return pair.x + pair.y;
}

which yields the generated function:

(func (param i32) (result i32)
  local.get 0
  i32.load offset=4
  local.get 0
  i32.load
  i32.add
)

Here we can see, sure enough, that pair is passed in linear memory and this function only has a single argument, not two. This argument is a pointer into linear memory which stores the x and y fields.

The Diplomat project has compiled a much more comprehensive overview than this and it's recommended to check that out if you're curious for an even deeper dive.

Why hasn't this been fixed long ago already?

For wasm32-unknown-unknown it was well-known at the time in 2021 when WASI's ABI was updated that the ABI was non-standard. Why then has the ABI not been fixed like with WASI? The main reason originally for this was the wasm-bindgen project.

In wasm-bindgen the goal is to make it easy to integrate Rust into a web browser with WebAssembly. JavaScript is used to interact with host APIs and the Rust module itself. Naturally, this communication touches on a lot of ABI details! The problem was that wasm-bindgen relied on the above example, specifically having Pair "splatted" across arguments instead of passed indirectly. The generated JS wouldn't work correctly if the argument was passed in-memory.

At the time this was discovered it was found to be significantly difficult to fix wasm-bindgen to not rely on this splatting behavior. At the time it also wasn't thought to be a widespread issue nor was it costly for the compiler to have a non-standard ABI. Over the years though the pressure has mounted. The Rust compiler is carrying an ever-growing list of hacks to work around the non-standard C ABI on wasm32-unknown-unknown. Additionally more projects have started to rely on this "splatting" behavior and the risk has gotten greater that there are more unknown projects relying on the non-standard behavior.

In late 2023 the wasm-bindgen project fixed bindings generation to be unaffected by the transition to the standard definition of extern "C". In the following months a future-incompat lint was added to rustc to specifically migrate users of old wasm-bindgen versions to a "fixed" version. This was in anticipation of changing the ABI of wasm32-unknown-unknown once enough time had passed. Since early 2025 users of old wasm-bindgen versions will now receive a hard error asking them to upgrade.

Despite all this heroic effort done by contributors, however, it has now come to light that there are more projects than wasm-bindgen relying on this non-standard ABI definition. Consequently this blog post is intended to serve as a notice to other users on wasm32-unknown-unknown that the ABI break is upcoming and projects may need to be changed.

Am I affected?

If you don't use the wasm32-unknown-unknown target, you are not affected by this change. If you don't use extern "C" on the wasm32-unknown-unknown target, you are also not affected. If you use both, however, you may be affected!

To determine the impact to your project there are a few tools at your disposal:

  • A new future-incompat warning has been added to the Rust compiler which will issue a warning if it detects a signature that will change when the ABI is changed.
  • In 2023 a -Zwasm-c-abi=(legacy|spec) flag was added to the Rust compiler. This defaults to -Zwasm-c-abi=legacy, the non-standard definition. A crate can be built with -Zwasm-c-abi=spec to use the standard definition of the C ABI and test whether changes work.

The best way to test your crate is to compile with nightly-2025-03-27 or later, ensure there are no warnings, and then test your project still works with -Zwasm-c-abi=spec. If all that passes then you're good to go and the upcoming change to the C ABI will not affect your project.

I'm affected, now what?

So you're using wasm32-unknown-unknown, you're using extern "C", and the nightly compiler is giving you warnings. Additionally your project is broken when compiled with -Zwasm-c-abi=spec. What now?

At this time this will unfortunately be a somewhat rough transition period for you. There are a few options at your disposal but they all have their downsides:

  1. Pin your Rust compiler version to the current stable, don't update until the ABI has changed. This means that you won't get any compiler warnings (as old compilers don't warn) and additionally you won't get broken when the ABI changes (as you're not changing compilers). Eventually when you update to a stable compiler with -Zwasm-c-abi=spec as the default you'll have to port your JS or bindings to work with the new ABI.

  2. Update to Rust nightly as your compiler and pass -Zwasm-c-abi=spec. This is front-loading the work required in (1) for your target. You can get your project compatible with -Zwasm-c-abi=spec today. The downside of this approach is that your project will only work with a nightly compiler and -Zwasm-c-abi=spec and you won't be able to use stable until the default is switched.

  3. Update your project to not rely on the non-standard behavior of -Zwasm-c-abi=legacy. This involves, for example, not passing structs by value in parameters. You can, for example, pass &Pair from the example above instead of Pair (see the sketch after this list). This is similar to (2) above where the work is done immediately to update a project, but has the benefit of continuing to work on stable Rust. The downside of this, however, is that you may not be able to easily change or update your C ABI in some situations.

  4. Update to Rust nightly as your compiler and pass -Zwasm-c-abi=legacy. This will silence compiler warnings for now but be aware that the ABI will still change in the future and the -Zwasm-c-abi=legacy option will be removed entirely. When the -Zwasm-c-abi=legacy option is removed the only option will be the standard C ABI, what -Zwasm-c-abi=spec today enables.
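
As a minimal sketch of option (3), reusing the Pair type from the example earlier in this post: passing the struct by reference sidesteps the by-value struct ABI entirely, because a reference lowers to a single pointer argument under both the legacy and the spec definitions.

#[repr(C)]
pub struct Pair {
    x: u32,
    y: u32,
}

// Affected by the ABI change: under -Zwasm-c-abi=legacy this is
// "splatted" into two i32 parameters, while under -Zwasm-c-abi=spec
// it becomes a single pointer into linear memory.
#[unsafe(no_mangle)]
pub extern "C" fn pair_add(pair: Pair) -> u32 {
    pair.x + pair.y
}

// Stable across the transition: an explicit reference is a single
// i32 pointer parameter under both ABI definitions.
#[unsafe(no_mangle)]
pub extern "C" fn pair_add_ref(pair: &Pair) -> u32 {
    pair.x + pair.y
}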

If you have uncertainties, questions, or difficulties, feel free to reach out on the tracking issue for the future-incompat warning or on Zulip.

Timeline of ABI changes

At this time there is no exact timeline for changing the default ABI. It's expected to take on the order of 3-6 months, however, and to look roughly like this:

  • 2025 March: (soon) - a future-incompat warning will be added to the compiler to warn projects if they're affected by this ABI change.
  • 2025-05-15: this future-incompat warning will reach the stable Rust channel as 1.87.0.
  • 2025 Summer: (ish) - the -Zwasm-c-abi flag, and with it the legacy ABI option, will be removed from the compiler entirely.

Exactly when -Zwasm-c-abi is removed will depend on feedback from the community and on how often the future-incompat warning triggers. It's hoped, though, that the old legacy ABI behavior can be removed soon after Rust 1.87.0 is stable.

Mozilla Addons BlogRethinking Extension Data Consent: Clarity, Consistency, and Control

Hello, extension developers! I’m Alan, the Product Manager at Mozilla responsible for the Firefox add-ons ecosystem.

I wanted to share news about a project we’re working on that will streamline how extension developers implement user data consent experiences.

Firefox extension data collection policies protect our users

Today, our Add-on policies dictate that any extension that collects or transmits user data must create and display a data consent dialog. This consent dialog must clearly state what type of data is being collected and inform the user about the impact of accepting or declining the data collection.

Whilst the policy is a great example of Firefox’s commitment to transparency and protecting user data, it can add significant overhead for developers who want to build on our platform, and it creates a confusing experience for end users, who often encounter a different data consent experience for every extension they install. These custom data consent experiences also increase the time it takes add-on reviewers to process a new extension version, as they need to verify that this custom code is compliant with our policies.

We’re simplifying how extensions get consent to collect data

In 2025 we will launch a new data consent experience for extensions, built into the Firefox add-on installation flow itself. This will dramatically reduce the:

  1. development effort required to be compliant with Firefox data policies
  2. confusion users face when installing extensions, by providing a more consistent experience that gives them more confidence and control over the data collected or transmitted
  3. effort it takes AMO reviewers to evaluate an extension version to ensure it’s compliant with our data collection policies

Developers will no longer need to create their own custom data consent experiences. Soon, they will simply specify in the manifest what types of data the extension collects or transmits, and this will automatically be reflected in a unified consent experience across all Firefox extensions.

When a user then adds an extension to Firefox, the installation prompt will show what required types of data the extension collects, if any, alongside the list of permissions the extension requests. Users will be able to opt in or out of providing the optional technical and usage data if the add-on requests it, as well as any other optional data collection the developer requests. As always, the user can then continue adding the extension if they agree to the required permissions and data collection, or cancel the installation. We plan to extend the existing WebExtensions permissions APIs to include these data collection options, making it as easy as possible for developers to adopt this new functionality.
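
As a purely illustrative sketch of the direction (the manifest keys below are hypothetical; the final names and format will be announced with the feature):

{
  "manifest_version": 3,
  "name": "My Extension",
  "version": "1.0",
  "data_collection_permissions": {
    "required": ["locationInfo"],
    "optional": ["technicalAndInteraction"]
  }
}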

The data collection information will also be displayed on AMO extension listing pages to help Firefox users make informed download decisions. We’re also exploring ways to let developers provide more context about their data practices, if they wish.

We will eventually accept this standardized approach instead of requiring developers to build custom consent screens, but we acknowledge this will take time as we gather feedback from our community of developers and users. To begin with, we will add this functionality to the Nightly version of Firefox for desktop in an upcoming release so that we can gather feedback on how this approach compares with developers’ existing consent experiences. We’ll announce further technical details about how to use it here on this blog, so stay tuned!

Help us make this better

We would love Firefox extension developers to help us shape the future of this feature, and we encourage you to test it out in Nightly when it’s released and send us your feedback. Finally, if you’re an extension developer, please help us build this feature by completing a survey about how you’re using permissions and data in your own extensions. This will help us make sure we’re not missing anything important during this stage of design!

Complete the extension permissions and data collection survey

The post Rethinking Extension Data Consent: Clarity, Consistency, and Control appeared first on Mozilla Add-ons Community Blog.

The Rust Programming Language BlogAnnouncing Rust 1.86.0

The Rust team is happy to announce a new version of Rust, 1.86.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.86.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.86.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.86.0 stable

Trait upcasting

This release includes a long-awaited feature — the ability to upcast trait objects. If a trait has a supertrait, you can coerce a reference to said trait object to a reference to a trait object of the supertrait:

trait Trait: Supertrait {}
trait Supertrait {}

fn upcast(x: &dyn Trait) -> &dyn Supertrait {
    x
}

The same would work with any other kind of (smart-)pointer, like Arc<dyn Trait> -> Arc<dyn Supertrait> or *const dyn Trait -> *const dyn Supertrait.

Previously this would have required a workaround in the form of an upcast method in the Trait itself, for example fn as_supertrait(&self) -> &dyn Supertrait, and this would work only for one kind of reference/pointer. Such workarounds are not necessary anymore.
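
For reference, this is the kind of workaround that is no longer needed (a minimal sketch):

trait Supertrait {}

trait Trait: Supertrait {
    // Pre-1.86 workaround: a hand-written upcast method, covering
    // only plain `&` references.
    fn as_supertrait(&self) -> &dyn Supertrait
    where
        Self: Sized,
    {
        self
    }
}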

Note that this means that raw pointers to trait objects carry a non-trivial invariant: "leaking" a raw pointer to a trait object with an invalid vtable into safe code may lead to undefined behavior. It is not decided yet whether creating such a raw pointer temporarily in well-controlled circumstances causes immediate undefined behavior, so code should refrain from creating such pointers under any conditions (and Miri enforces that).

Trait upcasting may be especially useful with the Any trait, as it allows upcasting your trait object to dyn Any to call Any's downcast methods, without adding any trait methods or using external crates.

use std::any::Any;

trait MyAny: Any {}

impl dyn MyAny {
    fn downcast_ref<T: Any>(&self) -> Option<&T> {
        (self as &dyn Any).downcast_ref()
    }
}
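
A small usage sketch of the helper above (Value is an illustrative type, not part of the release notes):

struct Value(i32);
impl MyAny for Value {}

// `Box<dyn MyAny>` auto-derefs to `dyn MyAny`, picking up the
// inherent `downcast_ref` defined above.
let boxed: Box<dyn MyAny> = Box::new(Value(7));
assert_eq!(boxed.downcast_ref::<Value>().map(|v| v.0), Some(7));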

You can learn more about trait upcasting in the Rust reference.

HashMaps and slices now support indexing multiple elements mutably

The borrow checker prevents the simultaneous use of references obtained from repeated calls to get_mut methods. To safely support this pattern, the standard library now provides a get_disjoint_mut helper on slices and HashMap to retrieve mutable references to multiple elements simultaneously. See the following example taken from the API docs of slice::get_disjoint_mut:

let v = &mut [1, 2, 3];
if let Ok([a, b]) = v.get_disjoint_mut([0, 2]) {
    *a = 413;
    *b = 612;
}
assert_eq!(v, &[413, 2, 612]);

if let Ok([a, b]) = v.get_disjoint_mut([0..1, 1..3]) {
    a[0] = 8;
    b[0] = 88;
    b[1] = 888;
}
assert_eq!(v, &[8, 88, 888]);

if let Ok([a, b]) = v.get_disjoint_mut([1..=2, 0..=0]) {
    a[0] = 11;
    a[1] = 111;
    b[0] = 1;
}
assert_eq!(v, &[1, 11, 111]);
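
The HashMap counterpart works the same way; a small sketch (it returns an array of Options and panics if the same key is passed twice):

use std::collections::HashMap;

let mut scores = HashMap::from([("a", 1), ("b", 2)]);
if let [Some(a), Some(b)] = scores.get_disjoint_mut(["a", "b"]) {
    *a += 10;
    *b += 20;
}
assert_eq!(scores["a"], 11);
assert_eq!(scores["b"], 22);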

Allow safe functions to be marked with the #[target_feature] attribute.

Previously only unsafe functions could be marked with the #[target_feature] attribute as it is unsound to call such functions without the target feature being enabled. This release stabilizes the target_feature_11 feature, allowing safe functions to be marked with the #[target_feature] attribute.

Safe functions marked with the target feature attribute can only be safely called from other functions marked with the target feature attribute. However, they cannot be passed to functions accepting generics bounded by the Fn* traits, and they can only be coerced to function pointers inside functions that are themselves marked with the target_feature attribute.

Inside functions not marked with the target feature attribute, they can be called inside an unsafe block; however, it is the caller's responsibility to ensure that the target feature is available.

#[target_feature(enable = "avx2")]
fn requires_avx2() {
    // ... snip
}

#[target_feature(enable = "avx2")]
fn safe_callsite() {
    // Calling `requires_avx2` here is safe as `safe_callsite`
    // requires the `avx2` feature itself.
    requires_avx2();
}

fn unsafe_callsite() {
    // Calling `requires_avx2` here is unsafe, as we must
    // ensure that the `avx2` feature is available first.
    if is_x86_feature_detected!("avx2") {
        unsafe { requires_avx2() };
    }
}

You can check the target_feature_11 RFC for more information.

Debug assertions that pointers are non-null when required for soundness

The compiler will now insert debug assertions that a pointer is not null upon non-zero-sized reads and writes, and also when the pointer is reborrowed into a reference. For example, the following code will now produce a non-unwinding panic when debug assertions are enabled:

unsafe {
    // Each of these is undefined behavior; with debug assertions
    // enabled, the inserted checks turn them into a panic instead.
    let _x = *std::ptr::null::<u8>();
    let _x = &*std::ptr::null::<u8>();
}

Trivial examples like this have produced a warning since Rust 1.53.0; the new runtime check will detect these scenarios regardless of complexity.

These assertions only take place when debug assertions are enabled which means that they must not be relied upon for soundness. This also means that dependencies which have been compiled with debug assertions disabled (e.g. the standard library) will not trigger the assertions even when called by code with debug assertions enabled.
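
If you want these checks in optimized builds as well, debug assertions can be enabled per profile; a minimal sketch for a Cargo project (the pre-compiled standard library is still built without them):

[profile.release]
debug-assertions = true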

Make missing_abi lint warn by default

Omitting the ABI in extern blocks and functions (e.g. extern {} and extern fn) will now result in a warning (via the missing_abi lint). Omitting the ABI after the extern keyword has always implicitly resulted in the "C" ABI. It is now recommended to explicitly specify the "C" ABI (e.g. extern "C" {} and extern "C" fn).
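
A minimal before and after of what the lint flags:

// Warns via `missing_abi`; the ABI here has always implicitly been "C".
extern fn implicit_abi() {}

// Recommended: spell out the ABI explicitly.
extern "C" fn explicit_abi() {}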

You can check the Explicit Extern ABIs RFC for more information.

Target deprecation warning for 1.87.0

The tier-2 target i586-pc-windows-msvc will be removed in the next version of Rust, 1.87.0. It differs from the much more popular i686-pc-windows-msvc in that it does not require SSE2 instruction support; however, Windows 10, the minimum required OS version of all Windows targets (except the win7 targets), itself requires SSE2 instructions.

All users currently targeting i586-pc-windows-msvc should migrate to i686-pc-windows-msvc before the 1.87.0 release.

You can check the Major Change Proposal for more information.

Stabilized APIs

These APIs are now stable in const contexts:

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.86.0

Many people came together to create Rust 1.86.0. We couldn't have done it without all of you. Thanks!

The Mozilla BlogHow Firefox’s vertical tabs came to life with a little help from our community

Screenshot of a Firefox browser window showing the Mozilla homepage with the headline “Welcome to Mozilla” and the message “Love the internet again,” layered over a Wikipedia page about Maria Prymachenko.

If you’ve ever had more tabs open than you can count, you know the struggle: tiny, unreadable tab titles, constant scrolling, and that moment of panic when you close the wrong one. Enter vertical tabs, a long-requested Firefox feature designed to make tab management and multitasking easier. 

But this wasn’t just something we built overnight — getting it right took time, iteration, and a lot of feedback from our community. I spoke with Ania Safko, the product manager leading the charge on vertical tabs, about how the idea evolved — and how global community voices helped shape every step of the journey. 

A new way to manage tabs

We’re naturally wired to scan lists vertically — it’s how we read, process information, and navigate menus. Yet, browser tabs have traditionally been horizontal, which works fine — until you have more than 10 tabs open. At that point, finding the right tab isn’t just tedious, it’s downright overwhelming and frustrating. 

With vertical tabs, Firefox offers an alternative: tabs stacked along the side of your browser window, making it easier to organize, switch between tabs, and keep track of what’s open. This feature is especially useful for people who juggle multiple tasks and have lots of tabs open, need to remove distractions for deep focus, or just like a cleaner way to manage their tabs.

Recognizing the need for change

Vertical tabs weren’t just a shot in the dark. We’d had signals of the feature’s value for years — requests on Mozilla Connect, the popularity of third-party vertical tab extensions, and popular CSS customizations that the community was trying out and distributing. Many power users resorted to using multiple windows to compensate for crowded tab bars. The demand was clear, and we knew we’d disappoint a lot of folks if we got this wrong, so we set out to improve tab management for everyone.

It’s more than just turning tabs sideways

While the concept of vertical tabs sounds simple, making it work seamlessly was a big undertaking. Our core team of one product manager, one UX lead, and five engineers (plus multiple internal contributors) had to tackle a number of unique design and functionality challenges:

  • Balancing the need for a clean and minimalistic tab view with the need to see longer tab titles useful for many tasks.
  • Ensuring tabs remained easy to close, mute, and share, even in the minimalistic, collapsed mode. 
  • Balancing smart defaults with customizations: offer a useful experience right away while also allowing users to tweak things to their preference.
  • Ensuring performant and smooth animations for expanding and collapsing across a broad range of devices and operating systems.

We also wanted to build something that contributed to a better, more cohesive Firefox experience in the long run. That meant considering how vertical tabs would work with the existing tab management tools and future features, like tab groups.

How the Firefox community shaped the vertical tabs feature

One of the biggest strengths of Firefox is our user community — and they played a major role in shaping vertical tabs. 

When we released an early version of vertical tabs in Firefox Nightly, feedback started pouring in. Early adopters helped us:

  • Improve stability and accessibility by testing vertical tabs on a variety of devices, operating systems, and pre-existing browser settings.
  • Polish the user experience by using it in real environments, for diverse and complex tasks, over weeks – something no amount of usability testing can replace.

Like many Firefox features, vertical tabs began as a small-scale experiment. The Mozilla team had been using the feature internally from day one, which helped spot bugs early, refine accessibility, and polish the interface in small but meaningful ways. Once external testing began, community feedback became even more crucial – both positive and negative.

We especially appreciated hearing from long-time sidebar users on Mozilla Connect, who pointed out where the experience didn’t quite meet their expectations or disrupted their workflows. 

One key example: Originally, vertical tabs auto-collapsed when users opened a sidebar panel. While usability testing suggested this would save space, real-world users found this behavior cumbersome. We listened — and changed it.

Another experience shift came from a community discussion around expanding tabs on hover. A user suggestion reignited an internal conversation about the best implementation, leading us to refine the existing one. The final version looks more polished and saves users extra space, thanks to community input.

Throughout development, Ania was excited to see strong engagement from Firefox’s international community. She often found herself responding in Ukrainian and Polish – her two native languages – to gather additional feedback, troubleshoot issues, or help users file bugs. She even jogged her memory of French and Spanish while translating notes from users in Mexico, France, and Colombia. For Ania, it was reassuring to see how much communities around the world cared about Firefox and actively contributed to making it better.

Balancing different opinions

As with any new feature, opinions were divided. Some users loved the on-hover close button for collapsed tabs; others found it too easy to accidentally click. Instead of making a snap decision, we let the feature sit in Nightly, gathered more data, and are continuing to fine-tune it based on broader feedback.

Firefox browser window with vertical tabs and sidebar enabled, showing a tab preview for The Guardian and a customizable new tab page with frequently visited sites.

For us, balancing user habits with innovation is always a challenge. People get used to workflows, and even if a change ultimately improves usability, the initial adjustment period can be tough. That’s why we take an iterative approach — rolling out changes gradually, listening to real-world experiences, and making improvements along the way.

For Ania, this process reinforced the joy of building something that people feel passionate about. Seeing how different users engaged with vertical tabs — even when they had concerns — helped the team craft a better experience, ensuring that solutions met real needs of a diverse global community that chooses Firefox as their daily browser.

Thank you Firefox community

We couldn’t have done it without the community. Every bug report, suggestion and discussion helped make vertical tabs a better experience for everyone. Whether it was through Mozilla Connect, social media or direct feedback, users showed us what worked, what didn’t and what could be improved.

To all of you who shared your thoughts — thank you. Your feedback continues to shape Firefox, and we’re excited to keep building alongside you.

What’s next?

The launch of vertical tabs is just the beginning. We’ll continue refining the experience based on real-world usage and feedback, and we’re excited to see how people incorporate it into their browsing workflows.

If you haven’t tried vertical tabs yet, now’s the perfect time to give it a spin: navigate to Firefox Settings > General > Browser layout and switch the radio button to Vertical tabs. And as always, we’re listening — so let us know what you think!

Browser layout settings showing options for horizontal or vertical tabs, with vertical tabs and sidebar currently selected for quick access to bookmarks, synced tabs, AI chatbots, and more.
An illustration shows the Firefox logo, a fox curled up in a circle.

Get the browser that puts your privacy first — and always has

Download Firefox

The post How Firefox’s vertical tabs came to life with a little help from our community appeared first on The Mozilla Blog.

The Mozilla BlogBuilt together: How Firefox fans help shape the browser

Illustration of three hands pointing at a laptop screen displaying the Firefox logo, set against an orange and yellow grid background.

If you’ve ever wished Firefox had vertical tabs or an easier way to share links on your phone — and you left a comment somewhere asking for it — there’s a good chance someone saw it. And not just someone. The actual people building Firefox.

That’s the magic of Mozilla Connect. It launched in 2022 as a place where Firefox users and Firefox builders could actually talk to each other. No middlemen. No black box. Just real conversations, ideas, feedback, and yes — plenty of feature requests.

I spoke with Jon Siddoway, the community manager behind Mozilla Connect, about how it all got started. Before Mozilla Connect, there was a platform called Ideas@Mozilla. People could submit suggestions, but the tool wasn’t set up for real dialogue. “We needed something better,” Jon said. “A place where people could not only share ideas but also get updates, participate in discussions, and feel heard.”

“We wanted a space where people could share ideas and actually get updates on what happened next. It wasn’t just about collecting feedback — it was about building a community around it.”

Jon Siddoway, community manager for Mozilla Connect

So the team spent six months building something new — something built for conversation. Since then, more than 80,000 users have joined, and community input has directly influenced over 125 ideas that have made their way into Firefox and Mozilla products.

Jon explained it best: “We wanted a space where people could share ideas and actually get updates on what happened next. It wasn’t just about collecting feedback — it was about building a community around it.”

From “Wouldn’t it be cool if…” to shipping features

Right after Mozilla Connect launched, users jumped in with some big asks: vertical tabs, tab groups and better ways to manage profiles. Fast forward to now? All three are either being built or already starting to roll out.

Karen Kim, a product manager on Firefox desktop, remembers those early days well. “I loved the idea of incorporating open community feedback earlier into our process of building our browser,” she said. “You have early adopters who are excited to try something even if it’s still rough around the edges. And their feedback? It adds real value. It’s a joyful moment when we can come back to an idea and tell people we now have the resources to build it.”

“I loved the idea of incorporating open community feedback earlier into our process of building our browser.”

Karen Kim, product manager for Firefox desktop

She also shared that it’s not just about one-off requests — it’s about spotting patterns. “When you see the same idea pop up again and again from different people, that’s a strong signal. It helps us prioritize what to build next.”

When feedback flips the script

Andres Furlan, who works on Firefox for iOS, told us about a time when feedback totally changed the direction of a project. His team had redesigned the app’s toolbar, and posted the update on Mozilla Connect to get user reactions. “We were deciding whether to include the share and new tab buttons, and Mozilla Connect helped confirm it was the right move. We cross-checked it with user research and competitive benchmarks. Everything pointed in the same direction.”

And it’s not just about convenience. Sometimes feedback brings up things that weren’t even on the team’s radar. Like when Firefox removed night mode from iOS. “We deprecated night mode without realizing how many people relied on it for accessibility,” Andres recalled. “The posts started rolling in — almost 200 users requested we bring it back. And some shared how it helped with visual strain or low vision. That changed how I think about feature impact. It was a real eye-opener.”

Designing with users, not just for them

For both Karen and Andres, Mozilla Connect isn’t just for post-launch praise (or complaints). It’s become a tool for every phase of the product cycle — from exploring ideas, to testing prototypes, to validating decisions.

Andres shared how he used Mozilla Connect to post about an experimental menu redesign. “We got early feedback that it took too many clicks to access certain actions. That input helped us reevaluate the design before releasing it more widely.”

“On Mozilla Connect, people share constructive feedback. They explain their problem, suggest a solution. It helps me change my approach.”

Andres Furlan, product manager for Firefox iOS

And when changes spark strong reactions, Mozilla Connect becomes a two-way street. “We once made a security-driven change to private tabs,” Andres said. “Users weren’t happy. The Mozilla Connect thread got big. But it gave us a way to talk with them, explain why we did it, and figure out a compromise that worked.”

Yes, the internet can be nice

If you’ve ever read through App Store reviews, you know how intense (and unhelpful) some feedback can be. That’s why Andres values Mozilla Connect so much. “App Store reviews are mostly people yelling into the void. On Mozilla Connect, people share constructive feedback. They explain their problem, suggest a solution. It helps me change my approach.”

“You’ll see people responding to each other’s ideas, offering workarounds, or tagging us when something really needs attention. It’s collaborative in the best way.”

Jon Siddoway, community manager for Mozilla Connect

Mozilla Connect has a different vibe. “The feedback is actually helpful,” Andres said. “People say what’s not working and why — and they often suggest a fix. It’s the kind of conversation that helps us build better stuff.”

Jon shared that beyond feedback, Mozilla Connect is starting to feel like a true community. “You’ll see people responding to each other’s ideas, offering workarounds, or tagging us when something really needs attention. It’s collaborative in the best way.”

Jon uses a “gratitude tracker” to monitor all the ways users express appreciation on Connect – like thank-you messages and upvotes – while keeping an eye on the number of comments that require moderation. It helps the team strike a balance and lead with gratitude for the time and insights that users share. “It’s always overwhelmingly positive,” he said. “We’re seeing real conversations, not just complaints.”

Karen sees that positivity too. “There’s something really special about showing users we’re listening. Like when someone requested custom wallpapers for new tabs — and we could tell them, ‘Hey, we’re building that!’ You can feel the excitement on both sides.”

What’s on the horizon for Mozilla Connect

Mozilla Connect has grown a lot in the last few years, with more people joining and more teams across Mozilla using it to shape what’s next. Jon wants to expand it with community events, more product sneak peeks, even language-specific spaces. (We recently held an Ask Me Anything (AMA) with Firefox leaders that sparked incredible engagement – and it’s clear people want to be part of the conversation.) Karen envisions more early-access opportunities for users to play, test and help shape features. And Andres? He’s using it to shape his entire roadmap.

“At the start of the year, Jon gave me a list of the top 10 concerns from the community,” Andres said. “That list basically guided everything we’re doing in Q1 – fixing pain points, improving basics, making sure the experience just works. That’s how we build trust.”

“We’re seeing real conversations, not just complaints.”

Jon Siddoway, community manager for Mozilla Connect

As Mozilla Connect enters its third year, the vision is clear: keep growing, but stay grounded in what works. That means more engagement from product teams, more ways for users to test early features (like Firefox Labs), and maybe even community events — virtual or in-person.

But the core mission won’t change: meaningful conversations between the people building Firefox and the people using it every day.

As Jon puts it: “It’s that middle space where we reach out, users reach out, and we meet in the middle.”

An illustration shows the Firefox logo, a fox curled up in a circle.

Get the browser that puts your privacy first — and always has

Download Firefox

The post Built together: How Firefox fans help shape the browser appeared first on The Mozilla Blog.

Mozilla Open Policy & Advocacy BlogNew Mozilla Research: Civil Liability Along the AI Value Chain

What happens when AI systems fail? Who should be held responsible when they cause harm? And how can we ensure that people harmed by AI can seek redress?

READ THE REPORT HERE

As AI is increasingly integrated into products and services across sectors, these questions will only become more pertinent. In the EU, a 2022 proposal for an AI Liability Directive (AILD) catalyzed debates around this issue. Its recent withdrawal by the European Commission leaves a wide range of questions open, as businesses and consumers will need to navigate fragmented liability rules across the EU’s 27 member states.

To answer these questions, policymakers will need to ask themselves: what does an effective approach to AI and liability look like?

New research published by Mozilla tackles these thorny issues and explores how liability could and should be assigned across AI’s complex and heterogeneous value chain.

Solving AI’s “problem of many hands” 

The report, commissioned from Beatriz Botero Arcila — a professor at Sciences Po Law School and a Faculty Associate at Harvard’s Berkman Klein Center for Internet and Society — explores how liability law can help solve the “problem of many hands” in AI: that is, determining who is responsible for harm in a value chain where a variety of different companies and actors might contribute to the development of any given AI system. This is aggravated by the fact that AI systems are both opaque and technically complex, making their behavior hard to predict.

Why AI Liability Matters

To find meaningful solutions to this problem, different kinds of experts have to come together. This resource is designed for a wide audience, and we indicate how specific audiences can best make use of different sections, overviews, and case studies.

Specifically, the report:

  • Proposes a three-step analysis of how liability should be allocated along the value chain: 1) the choice of liability regime, 2) how liability should be shared among actors along the value chain, and 3) whether and how information asymmetries will be addressed.
  • Argues that where ex-ante AI regulation is already in place, policymakers should consider how liability rules will interact with these rules.
  • Proposes a baseline liability regime where actors along the AI value chain share responsibility if fault can be demonstrated, paired with measures to alleviate or shift the burden of proof and to enable better access to evidence — which would incentivize companies to act with sufficient care and address information asymmetries between claimants and companies.
  • Argues that in some cases, courts and regulators should extend a stricter regime, such as product liability or strict liability.
  • Analyzes liability rules in the EU based on this framework.

Why Now?

We have already seen examples of AI causing harm, from biased automated recruitment systems to predictive AI tools used in public services and law enforcement generating faulty outputs. As the number of such examples increases with AI’s diffusion across the economy, affected individuals should have effective ways of seeking redress and justice — as we argued in our initial response to the AILD proposal in 2022 — and businesses should be incentivized to take sufficient measures to prevent harm. At the same time, they should not be overburdened with ineffective rules, and they should have legal certainty rather than facing a patchwork of varying rules across the jurisdictions in which they operate. A well-designed, targeted, and robust liability regime for AI could address all of these challenges, and we hope the research released today contributes to a more grounded debate around this issue.

The post New Mozilla Research: Civil Liability Along the AI Value Chain appeared first on Open Policy & Advocacy.

This Week In RustThis Week in Rust 593

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is candystore, a fast, persistent key-value store that does not require LSM or WALs.

Thanks to Tomer Filiba for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Let us know if you would like your feature to be tracked as a part of this list.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

438 pull requests were merged in the last week

Compiler
Library
Rustdoc
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

A positive week, with a lot of primary improvements and just a few secondary regressions. A single big regression was reverted.

Triage done by @panstromek. Revision range: 4510e86a..2ea33b59

Summary:

(instructions:u)             mean     range              count
Regressions ❌ (primary)      -        -                  0
Regressions ❌ (secondary)    0.9%     [0.2%, 1.5%]       17
Improvements ✅ (primary)     -0.4%    [-4.5%, -0.1%]     136
Improvements ✅ (secondary)   -0.6%    [-3.2%, -0.1%]     59
All ❌✅ (primary)             -0.4%    [-4.5%, -0.1%]     136

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs
Rust
Rust RFCs
Cargo
Other Areas

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

Upcoming Events

Rusty Events between 2025-04-02 - 2025-04-30 🦀

Virtual
Asia
Europe
North America
Oceania
South America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

If you write a bug in your Rust program, Rust doesn’t blame you. Rust asks “how could the compiler have spotted that bug”.

Ian Jackson blogging about Rust

Despite a lack of suggestions, llogiq is quite pleased with his choice.

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, U007D, joelmarcey, mariannegoldin, bennyvasquez, bdillo

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Mozilla Security BlogUpdated GPG key for signing Firefox Releases

The GPG key used to sign the Firefox release manifests is expiring soon, so we’re switching over to a new signing subkey shortly.

The GPG fingerprint is 14F2 6682 D091 6CDD 81E3 7B6D 61B7 B526 D98F 0353. The new signing subkey’s fingerprint is 09BE ED63 F346 2A2D FFAB 3B87 5ECB 6497 C1A2 0256, and it expires 2027-03-13.

The public key can be fetched from the KEY files included with the latest Firefox Nightly, from keys.openpgp.org, or from below. It can be used to validate existing releases signed with the current key, as well as future releases signed with the new key.

-----BEGIN PGP PUBLIC KEY BLOCK-----

mQINBFWpQAQBEAC+9wVlwGLy8ILCybLesuB3KkHHK+Yt1F1PJaI30X448ttGzxCz
PQpH6BoA73uzcTReVjfCFGvM4ij6qVV2SNaTxmNBrL1uVeEUsCuGduDUQMQYRGxR
tWq5rCH48LnltKPamPiEBzrgFL3i5bYEUHO7M0lATEknG7Iaz697K/ssHREZfuuc
B4GNxXMgswZ7GTZO3VBDVEw5GwU3sUvww93TwMC29lIPCux445AxZPKr5sOVEsEn
dUB2oDMsSAoS/dZcl8F4otqfR1pXg618cU06omvq5yguWLDRV327BLmezYK0prD3
P+7qwEp8MTVmxlbkrClS5j5pR47FrJGdyupNKqLzK+7hok5kBxhsdMsdTZLd4tVR
jXf04isVO3iFFf/GKuwscOi1+ZYeB3l3sAqgFUWnjbpbHxfslTmo7BgvmjZvAH5Z
asaewF3wA06biCDJdcSkC9GmFPmN5DS5/Dkjwfj8+dZAttuSKfmQQnypUPaJ2sBu
blnJ6INpvYgsEZjV6CFG1EiDJDPu2Zxap8ep0iRMbBBZnpfZTn7SKAcurDJptxin
CRclTcdOdi1iSZ35LZW0R2FKNnGL33u1IhxU9HRLw3XuljXCOZ84RLn6M+PBc1eZ
suv1TA+Mn111yD3uDv/u/edZ/xeJccF6bYcMvUgRRZh0sgZ0ZT4b0Q6YcQARAQAB
tC9Nb3ppbGxhIFNvZnR3YXJlIFJlbGVhc2VzIDxyZWxlYXNlQG1vemlsbGEuY29t
PokCTwQTAQIAIgUCValABAIbAwYLCQgHAwIGFQgCCQoLBBYCAwECHgECF4AAIQkQ
Ybe1JtmPA1MWIQQU8maC0JFs3YHje21ht7Um2Y8DU1CqD/9Gvr9Xu4uqsjDHRQWS
fI0lqxElmFSRjF0awsPXzM7Q1rxV7dCxik4LeiOmpoVTOmqboo2/x5d938q7uPdY
av2Q+RuNk2CG/LpXku9rgmTE7oszEqQliqKoXajUZ91rw19wrTwYXLgLQvzM3CUA
O+Z0yjjfza2Yc0ZtNN+3sF5VpGsT3Fb14aYZDaNg6yPFvkyxp0B1lS4rwgL3lkeV
QNHeAf0qqF9tBankGj3bgqK/5/YlTM2usb3x46bVBvwX2t4/NnYM5hEnI57inwam
X6SiMJc2e2QmBzAnVrXJETrDL1HOl4GUJ6hC4tL3Yw2d7515BlSyRNkWhhdRp1/q
9t1+ovSe48Ip2X2WF5/VA3ATfQhHKa3p+EkIV98VCMZ14x9KIIeBwjyJyFBuvOEE
IYZHdsAdqf1zYRtD6m6obcBrRiNfoNsYmNY4joDrVupI96ksIxVpepXaZkQhplZ1
mQ4eOdGtToIl1cb/4PibVgFnBgzrR4mQ27h4wzAwWdGweJZ/tuGoqm3C6TwfIgan
ajiPyKqsVFUkRsr9y12EDcfUCUq6D182t/AJ+qE0JIGO73tXTdTbqPTgkyf2etnZ
QQZum3L7w41NvfxZfn+gLrUGDBXwqLjovDJvt8iZTPPyMTzemOHuzf40Iq+9sf5V
9PXZ/5X9+ymE3cTAbAk9MLd9fbkCDQRkVUBzARAA1cD3n5ue0sCcZmqX2FbtIFRs
k39rlGkvuxYABsWBTzr0RbRW7h46VzWbOcU5ZmbJrp/bhgkSYRR3drmzT63yUZ62
dnww6e5LJjGSt19zzcber9BHELjqKqfAfLNsuZ7ZQ5p78c6uiJhe8WpbWogbspxJ
20duraLGmK4Kl23fa3tF0Gng1RLhoFcSVK/WtDZyC+elPKpch1Sru6sw/r8ktfuh
NIRGxdbj/lFHNVOzCXb3MTAqpIynNGMocFFnqWLZLtItphHxPUqVr6LKvc3i3aMl
C6IvLNg0Nu8O088Hg3Ah9tRmXKOshLjYjPeXqM9edqoWWqpzxDTNl6JlFMwP+Oac
MKsyX7Wq+ZXC/o3ygC/oclYUKtiuoGg47fSCN2GS3V2GX2zFlT6SEvEQQb2g5yIS
LX9Q/g9AyJdqtfaLe4Fv6vM4P1xhOUDnjmdoulm3FGkC701ZF7eFhMSRUM9QhkGH
6Yz2TvS4ht6Whg7aVt4ErIoJfj9jzJOp6k9vna5Lmgkj8l19NTiUQ7gk98H3wW4m
RrINxZ2yQD47V/LJ+tUamJc5ac+I0VP7c15xmKEJ2rfGCGhiSWQwZZw7Y2/qoADS
BlI28RlBTuRP2i6AdwyJU+75CzxGzMpr/wBLhZT+fNRV4HHd5dgR3YxajpkzZ6wX
L2aaJhznFEmLBLokOwMAEQEAAYkEcgQYAQoAJhYhBBTyZoLQkWzdgeN7bWG3tSbZ
jwNTBQJkVUBzAhsCBQkDwmcAAkAJEGG3tSbZjwNTwXQgBBkBCgAdFiEErdcHlHlw
Dcrf3VM34207E/PZMnQFAmRVQHMACgkQ4207E/PZMnRgdg/+LAha8Vh1SIVpXzUH
Vdx81kPyxBSaXtOtbBw6u9EiPW+xCUiF/pyn7H1lu+hAodeNFADsXmmONKcBjURV
fwO81s60gLKYBXxpcLLQXrfNOLrYMnokr5FfuI3zZ0AoSnEoS9ufnf/7spjba8Rl
dV1q2krdw1KtbiLq3D8v4E3qRfx5SqCA+eJSavaAh3aBi6lvRlUSZmz8RWwq6gP9
Z4BiTTyFp5jQv1ZKJb5OJ+44A0pS+RvGDRq/bAAUQULLIJVOhiTM74sb/BPmeRYU
S++ee10IFW4bsrKJonCoSQTXQexOpH6AAFXeZDakJfyjTxnl3+AtA4VEp1UJIm0Y
we0h6lT0isSJPVp3RFZRPjq0g+/VniBsvYhLE/70ph9ImU4HXdNumZVqXqawmIDR
wv7NbYjpQ8QnzcP3vJ5XQ4/bNU/xWd1eM2gdpbXI9B46ER7fQcIJRNrawbEbfzuH
y5nINAzrznsg+fAC76w2Omrn547QiY2ey7jy7k79tlCXGXWAt9ikkJ95BCLsOu5O
TxPi4/UUS2en1yDbx5ej7Hh79oEZxzubW1+v5O1+tXgMOWd6ZgXwquq50vs+X4mi
7BKE2b1Mi6Zq2Y+Kw7dAEbYYzhsSA+SRPu5vrJgLTNQmGxxbrSA+lCUvQ8dPywXz
00vKiQwI9uRqtK0LX1BLuHKIhg4OgxAAnmFSZgu7wIsE2kBYwabCSIFJZzHu0lgt
RyYrY8Xh7Pg+V9slIiMGG4SIyq5eUfmU8bXjc4vQkE6KHxsbbzN6gFVLX1KDjxRK
h+/nG/RDtfw/ic7iiXZfgkEqzIVgIrtlDb/DK6ZDMeABnJcZZTJMAC4lWpJGgmnZ
xfAIGmtcUOA0CKGT43suyYET7L7HXd0TM+cJRnbEb7m8OexT9Xqqwezfqoi1MGH2
g8lRKQE4Z2eEFvCiuJnCw547wtpJWEQrGw1eqL3AS8Y051YqblbXLbgf5Oa49yo6
30ehq9OxoLd7+GdWwYBlr/0EzPUWezhdIKKvh1RO+FQGAlzYJ6Pq7BPwvu3dC3YY
dN3Ax/8dj5036Y+mHgDsnmlUk8dlziJ0O3h1fke/W81ABx4ASBktXAf1IweRbbxq
W8OgMhG6xHTeiEjjav7SmlD0XVOxjhI+qBoNPovWlChqONxablBkuh0Jd6kdNiaS
EM9cd60kK3GT/dBMyv0yVhhLci6HQZ+Mf4cbn0KtayzuQLOcdRCN3FF/JNQH3v6L
A1MdRfmJlgC4UdiepBb1uCgtVIPizRuXWDjyjzePZRN/AqaUbEoNBHhIz0nKhQGD
bst4ugIzJWIX+6UokwPC3jvJqQQttccjAy6kXBmxfxyRMB5BEeLY0+qVPyvOxpXE
GnlSHYmdIS65Ag0EZ9KQfQEQAOVIyh0sZPPFLWxoFT0WhPzHw8BhgnCBNdZAh9+S
M0Apq2VcQKSjBjKiterOTtc6EVh0K2ikbGKHQ1SvwNdsYL01cSkJSJORig/1Du1e
h+2nlo8nut7xT//V+2FQyWFCLDeQvLlAs3QHMrMYxTcwNk3qi/z1Z5Q4e6Re2aKR
U00LtSomD6CKWy9nAaqTRNzzdndJwIyCyshX4bbUzAzE7Wbgh/E0/FgBGw87LYIT
qyU6US4lvoUXB+89XxwMxO9I74L118gXEyybz+JN0/w87hXAKnaKjasSvobKE4ma
u8SXqmOO66MxiMaF4Xsmr3oIwo8q9W5d+hA+t225ipq2rZZErmPL44deMCeKmepj
LTa9CoxX2oVpDWGOYFRyJRkLDyyH4O3gCo/5qv4rOTJqPFfKPtrjWFJKGf4P4UD0
GSBX2Q+mOf2XHWsMJE4t8T7jxQCSAQUMwt6M18h1auIqcfkuNvdJhcl2GvJyCMIb
kA3AoiuKaSPgoVCmJdbc6Ao9ydmMUB5Q1rYpMNKCMsuVP9OcX8FoHEVMXOvr0f6W
fj+iHytfO2VTqrw/cqoCyuPoSrgxjs1/cRSz5g9fZ0zrOtQyNB5yJ3YPTG3va1/X
LflrjPcT4ZUkej9nkFpCNWdEZVWD/z3vXBGSV11N9Cdy60QbD4yZvDjV2GQ+dwAF
1o1BABEBAAGJBHIEGAEKACYWIQQU8maC0JFs3YHje21ht7Um2Y8DUwUCZ9KQfQIb
AgUJA8JnAAJACRBht7Um2Y8DU8F0IAQZAQoAHRYhBAm+7WPzRiot/6s7h17LZJfB
ogJWBQJn0pB9AAoJEF7LZJfBogJW9I4QAJbv4Rhb4x6Jl75x2Lfp46/e3fZVDhzU
dLjK8A/acRF7JRBuJVJRaijJ5tngdknmlmbzfqlyzsMWUciAwVJRvijNFDeicet5
zJpBRsXEUAug3iVCD1KlVvLzjCi9Eb9s6xCQjSJ8DZE020s41wdqtb1nziDASAkg
+YH2DzpTEaZVNM39uNDKbaJLYIjKA9MV1YHArqUldFsoofBe4zIZRFyvMD7Gmr7X
m0IWYLrfmnenm1JJYIkvGUeVoP8dEonAVhLVwvwwufobV0qdtMfhZsgFwf1XSHI9
MtD4yAVtBqBTkfFeRLnBjJK/ywYxGqbadt1b57I4ywTQ16oXNrlTF1Su0I8i/fo0
i/9ohNl3opN3LbaEbhT37M4xpy4MgL2Fthddc2gWvF/8TFRaXw7LaLSR7HwO+Y0C
pOtV/Ct4RzKEulY5DpV9b1JQJhpLcjMz+pBDAM3KJuiV6Bcfoz5PZowFy74UmE02
Vzk/oyuI/o4KMihy0UzWQVkOZTTu4eONktgGiZOnRFdiLKVgeLEDXTLdhbuwGS2+
wX3I7lLP9AWpK8Ahc81eUwU6MwdbfwfJ1ELtKaa/JmMjaWkr5aGrp88d8ePR9jYA
47Z2q0esB67pRJVe0McVJlu9GQGq05S7lZKs6mi9dHTzeHwua//IXHMK0s3WhMU7
vGwJ3E2+pTstf8AQALSwkezD3QchPV+5CAUYY7CmMXB6zzIU18wCS61Y8QdDvqmt
WHdMVTp4xT14fS6cvB4uFzacGQJ7CVIWeZgwEFzZiev3dKpnUOGg0WQSwmQQA0JC
g6/qS0AeUPINjhWtNcR7voCqAYeRcjo47UJclD/KKNTCn27btHRaEmpTdTtC6sxi
VElFObb3a9tHXqwLWp8gJ+NZ+6mlrvvH2hm1CAyQTDRYC7nN69QJrKHR8HA3AeR5
figQHLwvmfQlV2erZE17GT+L5t0HxX/HKZCim91PApqa+7iY0eKPAG5iacABrBi9
zzh/ex0ovvuxsBDKUFCSu7HIivnAVrdS/kbO1qJ5I3MBMp0dlQ6PS6LeZIRhxts0
aPPZedsXytoL7kFLISfJ55AuhJpskz+55uviJhp/H3zNBYtQ+dmFmp4RRk/Nvu0z
v6OGtaZy6M5X24Pbzb/OApBML84cEmb3iZie9J2ZYW68/D96sP09x6GItCJlCIdQ
ZkRcwmkQwgtq9sJDw92/vSGeYdRn+oCAxJ14eObCsVwcfJARLt45btEnx+zRCAHA
HQHpV6qTGT6nqg57XuM9iNNdyTGKRU+Iklgb9LRxVAQfbn5uXYb5j2ox5pjxtbXT
f9Lbo7RkygcWSKZPWmYgGsKS6jmXkDa/TyOlPxkbaknpPbYMBztRT4Ju0VU4
=8qIP
-----END PGP PUBLIC KEY BLOCK-----

The post Updated GPG key for signing Firefox Releases appeared first on Mozilla Security Blog.

Mozilla Open Policy & Advocacy BlogMozilla Mornings: Unleashing PETs – Regulating Online Ads for a Privacy-First Future

Our first edition of Mozilla Mornings in 2025 will explore the state of online advertising and what needs to change to ensure a fairer, healthier, and privacy-respecting ads ecosystem where everyone stands to benefit.

The European regulatory landscape for online advertising is at a turning point: regulators are stepping up enforcement under the GDPR, the DMA, and the DSA, and industry players are exploring alternatives to cookies. Despite these advancements, online advertising remains an area where users do not enjoy strong privacy protections, and the withdrawal of the ePrivacy Regulation proposal can only exacerbate these concerns.

The industry’s reliance on invasive tracking, excessive profiling, and opaque data practices makes the current model deeply flawed. At the same time, online advertising remains central to the internet economy, supporting access to information, content creators, and journalism.

This Mozilla Mornings session will bring together policymakers, industry experts and civil society to discuss how online advertising can evolve in a way that benefits both users and businesses.

  • How can we move towards a more privacy-respecting and transparent advertising ecosystem while maintaining the economic sustainability of the open web?
  • How can regulatory reforms, combined with developments in the space of Privacy-Enhancing Technologies (PETs) and Privacy-Preserving Technologies (PPTs), provide a viable alternative to today’s surveillance-based advertising?
  • And what are the key challenges in making this shift at both the policy and technological levels?

To discuss these issues, the panel will welcome:

  • Rob van Eijk, Managing Director at Future of Privacy Forum
  • Svea Windwehr, Associate Director Public Policy at Electronic Frontier Foundation
  • Petra Wikström, Senior Director Public Policy at Schibsted
  • Martin Thomson, Distinguished Engineer at Mozilla

The discussion will also feature a fireside chat with Prof. Dr. Max von Grafenstein from Einstein Center Digital Future at the UdK Berlin.

  • Date: Wednesday 9th April 2025
  • Time: 08:45-10:15 CET
  • Venue: L42, Rue de la Loi 42, 1000 Brussels

To register, click here.

The post Mozilla Mornings: Unleashing PETs – Regulating Online Ads for a Privacy-First Future appeared first on Open Policy & Advocacy.

Firefox Developer ExperienceNetwork override in Firefox DevTools

With Firefox 137 comes a new feature for the Network Monitor: network response override!

Screenshot of Firefox DevTools network panel, showing several requests and a context menu with the "Set Network Override" item selected

Override all the things!

A long, long time ago, when I was building rather serious web applications, one of my worst fears was a frontend bug that only occurred with some specific live production data. We didn’t really have source maps back then, so debugging the minified code was already complicated. And if I was lucky enough to understand the issue and write a fix, it was hard to be sure that it would fully address the problem.

Thankfully all that changed the moment I installed a proxy tool (in my case, Fiddler). Now I could pick any request captured on the network and set up a rule to redirect future similar requests to a local file. All of a sudden, minified JS files could be replaced with development versions, API endpoints could be mocked to return reduced test cases, and, most importantly, I could verify a frontend patch against live production data from my machine without having to wait for a two-month release cycle.

We really wanted to bring this feature to Firefox DevTools, but it was a complex task, and took quite some time and effort. But we are finally there, so let’s take a look at what is available in Firefox 137.

Debugger Local Script Override

You may or may not know, but Firefox DevTools already had an override feature: the Debugger Local Script Override. This functionality was added in Firefox 113 and allows you to override JavaScript files from the Debugger Source Tree.

Screenshot of the Firefox DevTools' Debugger Source Tree, with a context menu opened on a JS file showing the "Add script override" menu item.

After opening the context menu on a JS file in the Debugger Source Tree and selecting “Add script override”, you are prompted to create a new local file which will initially have the same contents as the file you selected. But since this is a local file, you can modify it. The next time the script is loaded, Firefox will use your local file as the response.

Thanks to Local Script Override, you could already modify and test JavaScript changes on any website, even without direct access to the sources. However, this feature was limited to JS. If you had to modify inline scripts in HTML pages, or the data returned by an API endpoint, you were out of luck.

Introducing Network Response Override

Without going into details, the main reason this feature was limited to the Debugger and to JS files is that our trick for overriding responses involved redirecting the request to a data URI containing the overridden content. While this was OK for scripts, the hack didn’t work for HTML files or other resources. In the meantime, however, we worked on overriding responses for WebDriver BiDi and implemented a solution that works for any response. After that, it was only a matter of reusing that solution in DevTools and updating the UI to support overriding the response of any request in Firefox DevTools.

The workflow is similar to the Debugger Local Script Override. First you find the request you want to override in the Network Monitor, open the context menu and select “Set Network Override”.

Network Panel context menu: Set Network Override

After that, you will be prompted to create a new local file with the same content as the original response you want to override. Open this file in the editor of your choice to modify it. Back in the DevTools Network panel, you should notice that a new column called “Override” has appeared, showing a purple circle on the row where you added the override.

Network Panel shows a purple circle for overridden requests

In case you forget the path of the file you created, just hover over the override icon and it will display the path again. Note that the Override column cannot be hidden manually: it is automatically displayed when any override is enabled, and it disappears once all overrides have been removed.

Now that the override is set, go ahead and modify the file locally, reload your tab, and you should see the updated content. You might want to check the “Disable Cache” option in the Network panel to make sure the browser sends a new request so that your override is used – we have a bug filed to do this automatically. Again, you can use this feature with any request in the Network Monitor: HTML, CSS, JS, images, etc.

Once you are done with testing you can remove the override by opening the context menu again and selecting “Remove Network Override”.

Network Panel context menu: Remove Network Override

Limitations and next steps

I am very happy to be able to use network overrides directly from Firefox DevTools without any additional tool, but I should still mention some known limitations and issues with the current feature.

First of all, overrides are not persisted after you close DevTools or the tab. In a sense that’s good, because it makes it easy to get rid of all your overrides at once. But if you have a complicated setup that requires overriding several requests, it would be nice to be able to persist some of that configuration.

Also, the Override “status” only indicates that you enabled an override for a given request, not that the response was actually overridden. It would be great if it also indicated whether the response for this request was actually overridden (bug).

We also currently don’t support network overrides in remote debugging (bug).

In terms of user experience, we might also look into what Chrome DevTools is doing for network overrides, where you can set a folder to store all your network overrides.

Finally, we are open to suggestions about which network debugging tools would be useful to you. For example, it would be nice to allow modifying response headers or delaying responses. But you probably have other ideas, and we would be happy to read them, either in the comments down below or directly on Discourse, Bugzilla, or Element.

In the meantime, thanks for reading and happy overrides!

The Mozilla BlogSpring cleaning? Watch out for these product categories while online shopping

Illustration of an online shopping website with warning icons over a shopping cart, five-star review, and credit card, along with a magnifying glass and a checkmark symbol, representing scrutiny of online product reviews.

Spring is in the air, which means it’s time to swap stuffy winter layers for fresh air — and maybe confront the dust bunnies that moved in while you were busy. If you’re gearing up for a deep spring clean or some much-needed home maintenance, chances are you’re heading online to stock up on supplies. But before you fill up that cart, let’s talk about something a little less refreshing: the flood of unreliable product reviews that can make finding quality products harder than it should be.

Discounted prices? Thousands of five-star reviews? That must mean it’s a great buy, right? Not always. Some of the trendiest products come with a bummer: unreliable reviews. That’s where Fakespot comes in. 

Fakespot is your secret weapon for smarter online shopping. The AI-powered browser extension helps millions of shoppers make better purchasing decisions by analyzing product reviews as you shop on Amazon, Walmart, Best Buy and more. It identifies which product reviews are reliable and can flag product pages that warrant extra caution. Whether you’re shopping for cleaning supplies or home upgrades, Fakespot’s Review Grades offer a clear A-F rating system to help you navigate review reliability. It also evaluates seller credibility on platforms like eBay and Shopify, ensuring consumers can shop with confidence across the web.

Reliable reviews aren’t just about trust — they’re about saving you time, money, and frustration. A product propped up by fake reviews might seem like a steal, but if it falls apart after one use or doesn’t work as promised, you’re stuck dealing with returns or, worse, adding to the landfill. When you shop with confidence using Fakespot, you’re not just avoiding duds — you’re making smarter choices that keep quality products in your home and bad ones out of your cart (and the trash).

Breaking down the most & least reliable home products — according to reviews

Fakespot analyzed customer reviews on Amazon to highlight the categories with the most and least trustworthy feedback. Before you stock up on cleaning supplies or home upgrades, here’s what to know: 

Categories you can count on

When it comes to these spring refresh essentials, the reviews are as trustworthy as they get.

  • Furnace Filters (82% reliable) — Not the most glamorous buy, but absolutely necessary. If you haven’t swapped yours in the last 3-6 months, consider this your sign. The good news? You can trust the reviews when picking a replacement.
  • Shower Curtain Liners (80% reliable) — A purchase no one gets excited about, but at least you don’t have to stress over fake reviews. Swap out that soap-scummed liner with confidence.
  • Horizontal Blinds (79% reliable) — Let the light in and upgrade those dusty blinds. The reviews in this category shouldn’t leave you in the dark.
  • Tools & Home Improvement (79% reliable) — Whether you’re stocking a new toolbox or tackling a weekend project, these reviews are as solid as the tools themselves.
  • Dish Cloths & Towels (75% reliable) — TikTok loves Swedish dishcloths, and if you’re thinking about making the switch, the reviews here will steer you in the right direction.
  • All-Purpose Cleaners (74% reliable) — Restock your go-to cleaner or try that trendy pink goop—either way, the reviews are scrubbing up strong.
Two side-by-side lists comparing product categories. The left side shows categories with the most reliable reviews (e.g., furnace filters, shower curtain liners), marked with a smiley face and thumbs-up. The right side shows categories with the most unreliable reviews (e.g., ultrasonic repellers, steam cleaners), marked with red flags.

Categories to be wary of

The reliability of these product reviews may be questionable. For each product category with a high number of unreliable reviews, we’ve included a product with an A or B review grade.

  • Ultrasonic Repellers (65% unreliable) — Uninvited critters moved in over the winter? Bad news: more than half of the reviews on these so-called solutions might be unreliable. While we can recommend a reliable alternative such as a trap or spray, this might be a case for the exterminator.
  • Pressure Washers (45% unreliable) — The outside of your home deserves a quality clean just as much as the inside. Unfortunately, buying this online might not lead you to the best possible washer.
    • Reliable Alternative: Westinghouse Electric Pressure Washer
    • Review Grade:
    • Review highlight: “Super easy to assemble, user manual included, easy to use with excellent power to do any job!” 
  • Stick Vacuums & Electric Brooms (51% unreliable) — A dupe might look tempting, but with reviews this sketchy, you might just be sweeping your money away.
    • Reliable Alternative: Dyson Cordless Vacuum 
    • Review Grade: A
    • Review highlight: “From the moment I took it out of the box, it was clear that this vacuum was built well.” 
  • Handheld Vacuums (43% unreliable) — Turns out, the smaller the vacuum, the bigger the review problem.
    • Reliable Alternative: Black & Decker Dustbuster Handheld Vacuum
    • Review Grade: A
    • Review highlight: “The pivoting nozzle is extendable to get into nooks and crannies, able to lock into the position needed.” 
  • Personal Fans (45% unreliable) — Want a breeze? Don’t let unreliable reviews blow you off course.
    • Reliable Alternative: Vornado Mid Size Room Fan 
    • Review Grade: A
    • Review highlight: “If you want a quality fan that doesn’t rattle, doesn’t have a high pitched whine, and moves air without sounding like a boeing 737 on takeoff, this is the fan for you.” 
  • Electric Space Heaters (43% unreliable) — When it comes to heating, reliability is everything—too bad the reviews don’t measure up.

How Fakespot helps you shop smarter

Fakespot’s free browser extension cuts through the noise of misleading reviews, helping you make smarter purchasing decisions. The Review Grade system works on an A-F scale:

  • A & B: Reliable reviews
  • C: Mixed reliability, approach with caution
  • D & F: Unreliable, buyer beware

Before you stock up on seasonal necessities, download Fakespot on Firefox, Chrome, or Safari to shop with confidence. Make informed choices and keep your home refresh hassle-free!

The post Spring cleaning? Watch out for these product categories while online shopping appeared first on The Mozilla Blog.

Firefox Developer ExperienceFirefox DevTools Newsletter — 136

Developer Tools help developers write and debug websites on Firefox. This newsletter gives an overview of the work we’ve done as part of the Firefox 136 release cycle.

Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla:

  • like Karan Yadav, who fixed the box model dimensions for elements with display:none (#1007374), made it possible to save a single Network request to HAR (#1513984), and fixed the “offline” setting in the throttling menu of the Responsive Design Mode (#1873929).
  • Meike [:mei] added the pt unit in the Fonts panel (#1940009)

Want to help? DevTools are written in HTML, CSS and JS so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues.

Highlights

  • Show profile details in the network throttling menu items (#1770932)
  • [Image: The throttling menu in the Network Monitor. Each throttling profile lists the simulated download, upload and ping values (for example, for GPRS: download 50Kbps, upload 20Kbps, ping 500ms).]
  • The JSON Viewer parses the JSON string it displays, which caused trouble when some values can’t be accurately represented in JS (for example, JSON.parse('{"large": 1516340399466235648}') returns { large: 1516340399466235600 }). In such cases, we now properly show the source value, as well as a badge that shows the JS-parsed value to avoid any confusion (#1431808)
  • [Image: The Firefox JSON viewer showing a JSON file with multiple items, some of them having a badge prefixed with “JS”. For example, there’s a `big: 1516340399466235648` property, and next to it a badge with `JS:1516340399466235600`. A tooltip is displayed with the text “JavaScript parsed value”.]
  • Links to MDN were added in the Network Monitor for Cross-Origin-* headers (#1943610)
  • We made the “Raw” network response toggle persist: once you check it, all the requests you click on will show the raw response (until you uncheck it) (#1555647)
  • We drastically improved the Network Monitor performance, especially when it has a lot of requests (#1942149, #1943339)
  • A couple issues were fixed in the Inspector Rules view autocomplete (#1184538, #1444772), as well as autocomplete for classes with non-alpha characters in the markup view search (#1220387)
  • Firefox 132 added support for CSSNestedDeclarations rules, which changed how declarations set after a nested declaration were handled. Previously, those declarations were “moved up”, before any nested declarations. This could be confusing, and the specification was updated to better align with developers’ expectations. Unfortunately, this caused a few breakages in the Inspector when dealing with nested rules; for example, when adding a declaration, it would appear twice and wouldn’t be placed at the right position. This should now behave correctly (#1946445), and we have a few other fixes coming around nested declarations.
  • We fixed an issue that was preventing cookies with Domain attributes from being deleted (#1947240)
  • Finally, after many months of hard work, we successfully migrated the Debugger to use CodeMirror 6 (#1942702)

That’s it for this month, thank you for reading this and using our tools, see you in a few weeks for a new round of updates 🙂

Full list of fixed bugs in DevTools for the Firefox 136 release:

Firefox Developer ExperienceFirefox DevTools Newsletter — 135

Developer Tools help developers write and debug websites on Firefox. This newsletter gives an overview of the work we’ve done as part of the Firefox 135 release cycle.

Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla:

Want to help? DevTools are written in HTML, CSS and JS so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues.

Highlights

  • You can now navigate the Debugger call stack panel using the keyboard up/down arrow keys (#1843324)
  • We added an option in the Debugger so you can control whether WebExtensions scripts should be visible in the sources tree (#1413872, #1933194)
[Image: The Debugger sources tree with a cog-icon button in the top-right corner. A menu anchored to the button includes a checked “Show content script” item. The sources tree shows entries for a few WebExtensions (for example, uBlock Origin).]
  • Did you know that you can set a name for Workers? The threads panel in the Debugger will now use this name when it’s defined for a worker (#1589908)
  • We fixed an issue where the Preview popup wouldn’t show the value for the hovered expression (#1941269)
  • File and data URI requests are now visible in the Network Monitor (#1903496, #972821)
  • The partition key for CHIPS cookies is now displayed in the storage panel (#1895215)
  • You probably know that the WebConsole comes with nice helpers to interact with the page. For example $$() is kind of an alias for document.querySelectorAll() (except it returns an Array, while the latter returns a NodeList). In Firefox 135, we added a $$$() helper which returns elements matching the passed selector, including elements in the shadow DOM.
  • Finally, we improved the stability of the toolbox, especially when your machine is under heavy load (#1918267)

That’s it for this month, thank you for reading this and using our tools, see you in a few weeks for a new round of updates 🙂

Full list of fixed bugs in DevTools for the Firefox 135 release:

Mitchell BakerGlobal AI Summit on Africa

Mitchell BakerTopic Areas for Building a Better World Through Technology

What does the evolution of open source from “radical” niche developers into the consumer mainstream teach us?

How does one use business as a tool to promote a mission, or to support a mission based organization?

Time of Disruption

Mitchell BakerBuilding a Better World Through Technology

Firefox Developer ExperienceFirefox DevTools Newsletter — 134

Developer Tools help developers write and debug websites on Firefox. This newsletter gives an overview of the work we’ve done as part of the Firefox 134 release cycle.

Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla:

Want to help? DevTools are written in HTML, CSS and JS so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues.

Highlights

  • The “Pause on caught exception” setting wasn’t persisted after DevTools was reloaded; this is now fixed (#1930687)
  • We made the information in the “Why Paused panel” (i.e. the section in the Debugger that indicates that the page is paused and why) properly accessible to screenreaders (e.g. NVDA) by making it a live region (#1843320, #1927108)
  • Still around accessibility, we improved the inspector markup view breadcrumb component, properly setting the aria-pressed attribute on the selected item
  • We fixed an annoying issue in the Inspector where clicking an element would impact horizontal scroll (#1926983)
  • Finally, we removed the unhelpful (fallback document) URL element in WebExtensions toolboxes (#1928336)

That’s it for this month, thank you for reading this and using our tools, see you in a few weeks for a new round of updates 🙂


Full list of fixed bugs in DevTools for the Firefox 134 release:

The Mozilla BlogNolen Royalty, known as eieio, keeps the internet fun with experimental multiplayer games

[Image: Nolen Royalty, who's behind eieio games, smiling in front of water with gaming and heart icons on a pixel grid background.] Nolen Royalty is a game developer and software engineer who makes experimental massively multiplayer games and elaborate technical jokes at eieio.games.

Here at Mozilla, we are the first to admit the internet isn’t perfect, but we know the internet is pretty darn magical. The internet opens up doors and opportunities, allows for human connection, and lets everyone find where they belong — their corners of the internet. We all have an internet story worth sharing. In My Corner Of The Internet, we talk with people about the online spaces they can’t get enough of, the sites and forums that shaped them, and how they would design their own corner of the web.

We caught up with Nolen Royalty, known as eieio, the creator of experimental multiplayer games like One Million Checkboxes and Stranger Video. He talks about the forums that shaped him, the deep dives he can’t get enough of, and why he still believes in an internet made for fun.

What is your favorite corner of the internet? 

Fun question!

Growing up, I spent a *ton* of time on internet forums for different games. I moved a lot as a kid and forums were a fun way to stay in touch with a consistent group of people as I moved. I was on too many forums to list, but I spent the most time on this site called TeamLiquid that was (at the time) a place where people gathered to talk about the Korean professional Starcraft scene. I spent over a decade regularly posting on TeamLiquid and have a bunch of real-life friends that I met via the site. I miss forums!!

These days my favorite corners are a couple of smaller chats for some communities that I’m involved in. I spend a lot of time in the Zulip (open-source Slack) for the Recurse Center (a writer’s retreat for programmers) and in some smaller chats with other friends that do creative internet tech stuff. It’s really really nice to have a place to share your work without the incentives of social media.

What is an internet deep dive that you can’t wait to jump back into?

One of my favorite things is when I find a new internet writer whose work I enjoy – especially if they’ve been at it for a while. I looove going through the entire archive of a new (to me) blog.

I think my most recent blog binge was Adrian Hon’s excellent games blog (mssv.net). But I’m always on the lookout for new writers!

What is the one tab you always regret closing?

Ooh I normally think about the tabs that I regret opening!

I think my biggest regular regret here is when I find a new tech-internet-artist person and close out their site without remembering to bookmark it 🙁

What can you not stop talking about on the internet right now?

I probably don’t *talk* about this on the internet as much as I should, but one thing I constantly think about is how social media has warped our understanding of what exists on the internet.

Sometimes when I make a game people tell me that it reminds them of “the old internet.” When they say this I think that they basically mean that I’ve made something fun and unmonetized largely for the joy of…making something fun. And because there’s a whole professional social media ecosystem that didn’t exist 20 years ago, it can feel like there are fewer people doing that now.

But I don’t think that’s true! There is SO much cool stuff out there on the internet – I think there’s way more than when I was a kid! It’s just that there’s way more stuff *period* on the internet these days. Going on the internet is much more a part of participating in society than it was in the 2000s, and so you have to search a little more for the good stuff.

But it’s out there!

What was the first online community you engaged with?

Definitely internet forums! I *think* the first forum I joined was either for fans of the site Homestar Runner or for this game I was really into called Kingdom of Loathing. This was ~20 years ago (!) so I would have been 12 or 13 at the time. I really miss pre-social media niche communities; there’s a totally different feel to making a whole account somewhere *just* to talk about your niche interest vs surfing between a million different niches on a big social platform.

If you could create your own corner of the internet, what would it look like?

In a lot of ways I feel like I already have this! I have my small communities (the Recurse Center, my little friend groups) where folks create and share cool work. And I have my site, where I build and write about fun things. And I love all of that.

But if I could wave a magic wand and change the internet or create something new, I’d think about how to create a social media site with better incentives. That is, I think most platforms encourage you to think in terms of likes or views when sharing your work. But in my experience those metrics aren’t always aligned with making “good” work — they’re often aligned with making work that is easy to share or to understand. And I think that affects the type of work people share on those platforms in big and small ways.

I care about this a lot because when I make a massively multiplayer website – which is my favorite thing to do – I *need* a bunch of players. A website like One Million Checkboxes doesn’t work if it’s just me. And the only way that I know how to find a massive player base is with social media.

What articles and/or videos are you waiting to read/watch right now?

After an eight-year hiatus one of my favorite things on the internet – the youtube channel Every Frame a Painting – has uploaded new videos! I got to see their first new video live at XOXO last year but I’ve been saving their remaining videos for the right moment – maybe a time where I need a little inspiration or a push to aim higher and create something beautiful. Although after writing this out maybe I’ll just go watch them right now…

How does making games for the 2000s internet shape the way you think about the future of creativity online?

I got at this a little above, but I think when people talk about “the old internet” they’re mostly talking about things that are personal, small, and created for the fun of it.

I think it’s getting easier to make stuff for the internet every year. And that’s great! So I *hope* that we see more and more people making stuff that feels like it belongs to the old internet (even if it’s using technology that wasn’t available to us back then).

For myself — I think these days I can really tell whether I’m making something for myself or whether I’m making something because I think it’s what other people want. And I try to push myself to stick to the former. It’s more fun that way.


Nolen Royalty is a game developer and software engineer who makes experimental massively multiplayer games and elaborate technical jokes at eieio.games. He’s interested in getting strangers to interact in fun ways over the internet (e.g. One Million Checkboxes, Stranger Video) and running games in surprising places (e.g. playing pong inside his unclosed browser tabs). He lives in Brooklyn and is determined to keep the internet fun.

The post Nolen Royalty, known as eieio, keeps the internet fun with experimental multiplayer games appeared first on The Mozilla Blog.

Firefox Developer ExperienceFirefox WebDriver Newsletter 137

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 137 release cycle.

Contributions

Firefox is an open source project, and we are always happy to receive external code contributions to our WebDriver implementation. We want to give special thanks to everyone who filed issues, bugs and submitted patches.

In Firefox 137, several contributors managed to land fixes and improvements in our codebase:

WebDriver code is written in JavaScript, Python, and Rust so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues for Marionette, or the list of mentored JavaScript bugs for WebDriver BiDi. Join our chatroom if you need any help to get started!

General

Updated: input sources of type mouse and touch now support fractional numbers

From now on, for both WebDriver BiDi and Marionette, input sources of type mouse and touch support fractional numbers for the x and y positions of the pointerMove action.

WebDriver BiDi

New: webExtension.install and webExtension.uninstall commands

Thanks to Krzysztof Jan Modras (chrmod)’s work, WebDriver BiDi provides new webExtension.install and webExtension.uninstall commands, which allow clients to install and uninstall web extensions in the browser.

The webExtension.install command accepts one argument, extensionData, which is an object containing a type field that can have one of the following values:

  • archivePath – to install the web extension from an archive. Alongside this type, the client has to provide a path field pointing to the extension archive.
  • base64 – to install the web extension from a base64 string. In this case, the client has to provide a value field with the base64-encoded representation of the web extension.
  • path – to install the web extension from a file. Here the client has to provide the path field.

The command will return the web extension ID, which can be used as an argument to the webExtension.uninstall command to delete the previously installed web extension.

Let’s look at the example of how it could work with the type base64:

-> {
  "method":"webExtension.install",
  "params":{
    "extensionData":{
      "type":"base64",
      "value":"UEsDBBQACAAIAAAAAAAAAAAAAAAAAAAAiMrC..AGAIMBAAAeHAAAAAA="
    }
  },
  "id": 2
}

<- { 
  "type": "success", 
  "result": { 
    "extension":"[email protected]"
  },
  "id": 2
}

Then deleting this web extension would look like this:

-> {
  "method": "webExtension.uninstall",
  "params": {
    "extension":"[email protected]"
  },
  "id": 3
}

<- { "type": "success", "id": 3, "result": {} }

New: userContexts argument for the session.subscribe command

The session.subscribe command now supports a new userContexts argument, allowing clients to subscribe to events coming from certain user contexts (containers). This is especially helpful when clients want to limit subscriptions to certain browsing contexts but also want to cover browsing contexts which are not created yet.

When the userContexts argument is provided, the contexts parameter must not be present, otherwise an InvalidArgumentError will be raised.

Example of adding a subscription for a specific user context:

-> {
  "method": "session.subscribe",
  "params": {
    "events": [
      "log.entryAdded"
    ],
    "userContexts": [
      "736d454f-6745-4a2a-afae-a0beaf6341ff"
    ]
  },
  "id": 2
}

<- { 
  "type": "success", 
  "result": { 
    "subscription": "7d8fc09a-5fa6-42c1-a888-fa3e7d1e707d"
  },
  "id": 2
}

Also, it’s important to note that the only way to unsubscribe from this kind of subscription is by using the subscription ID returned by the session.subscribe command:

-> {
  "method": "session.unsubscribe",
  "params": {
    "subscriptions": ["7d8fc09a-5fa6-42c1-a888-fa3e7d1e707d"]
  },
  "id": 3
}

<- { "type": "success", "id": 3, "result": {} }

Updated: script.addPreloadScript throws an error when both contexts and userContexts arguments are provided

The script.addPreloadScript command was updated to throw an invalid argument error when both the contexts and userContexts arguments are provided.

Updated: browsingContext.navigate no longer returns immediately when the wait argument equals none and a beforeunload prompt opens

The specification around the behavior of browsingContext.navigate when the wait argument equals none was recently updated to match the timing of a new browsingContext.navigationCommitted event. As a first step toward supporting this new behavior, we updated the browsingContext.navigate command to not return immediately when the wait argument equals none and a beforeunload prompt opens. More updates will follow.

Marionette

Bug fixes

Don MartiMore money and better stuff for people in the UK

Some good news last week: Meta settles UK ‘right to object to ad-tracking’ lawsuit by agreeing not to track plaintiff. Tanya O’Carroll, in the UK, has settled a case with Meta, and the company must stop using her data for ad targeting when she uses its services. It’s not a change for everyone, though, since the settlement is just for one person. O’Carroll said she is unable to disclose full details of the tracking-free access Meta will be providing in her case but she confirmed that she will not have to pay Meta.

The Open Rights Group now has a Meta opt-out page that anyone in the UK can use to do an opt out under the UK GDPR.

If you use any Meta products – Facebook, Instagram, Meta Quest or VR, Threads or WhatsApp – you can use our tool to request that they no longer collect or process your data for advertising. This is known as your right to object, which is enshrined in data protection law. Meta had tried to get around GDPR, but by settling Tanya’s case they have admitted that they need to give their users this right.

If you’re in the UK, you can either use the form on the site, or use the mailto link to open up a new regular email from your own account pre-populated with the opt out text. This is a win not just because it could mean less money for a transnational criminal organization and more money staying in the UK, but also because it’s going to mean better products and services for the people who do it.

Opt outs are one layer in the onion.

  • Don’t do a surveilled activity

  • Block the transfer of tracking data

  • Generate tracking data that is hard to link to you

  • Set an opt out while doing the surveilled activity

  • Send an opt out or Right to Delete after doing the surveilled activity

Having access to this new tool doesn’t mean you should skip the others. Even if I could figure out how to use the Meta apps in a way that’s totally safe for me, it’s still a win to switch away because it helps build network effects for the alternatives and more safety for other people. So even if you do this opt out, it’s also a good idea to do the other effective privacy tips.

How this gets you more money and better stuff

Turning off the personalized ads is a bigger deal than it looks like. The arguments from advertising personalization fans don’t reflect the best research on the subject. Ad personalization systems, especially on Facebook, are designed to give some hard-to-overcome advantages to deceptive advertisers. Any limitations to personalization look like a big win, shopping-wise. In one study, turning on an Apple privacy setting reduced reported fraud losses by 4.7%.

The personalization of ads on Facebook helps vendors of crappy, misrepresented goods match their products to the shoppers who are most likely to fall for their bullshit. Yes, you can follow the advice in articles like Don’t Get Scammed! Tips For Spotting AI-Generated Fake Products Online on Bellingcat, but it’s a time-saver and an extra layer of protection not to get the scam ad in the first place.

Privacy tools and settings that limit ad personalization have been available for quite a while. If people who use them were buying worse stuff, the surveillance industry would have said so by now. Anyway, if you’re in the UK, go do the Meta opt-out.

In other countries, other effective privacy tips are still a win.

Related

Click this to buy better stuff and be happier Feeling left out because you’re not in the UK? This tip works everywhere that I know of.

Bonus links

THE WHITE COAT DIDN’T BETRAY YOU—THE PIXEL DID: Judge Keeps Florida Wiretap Case Against Hospital Alive by Blake Landis. (Interesting legal direction in the USA: many states have wiretapping laws that may or may not apply to the Meta Pixel and CAPI.)

Applying the Fundamental Axioms to Reduce Uncertainty by Joe Gregorio. Would it be bad form to point out that the entire edifice of “Agile” software development is built on a bed of lies? (I don’t know. Read the whole thing and make up your own mind? Related: Day-to-day experiences with bug futures.)

Why We Need Shortwave 2.0 by Kim Andrew Elliott on Radio World. Because Shortwave Radiogram is transmitted on a regular amplitude-modulated shortwave transmitter, it can be received on any shortwave radio, from inexpensive portable radios with no sideband capability, to more elaborate communications receivers, amateur transceivers (most of which nowadays have general coverage receivers), and software defined radios (SDRs). (Then you need a program to convert the encoded signal into text and/or images—or this functionality could be built into future inexpensive radios.)

Mozilla Open Policy & Advocacy BlogMozilla shares 2025 Policy Priorities and Recommendations for Creating an Internet Where Everyone Can Thrive

Mozilla envisions a future where the internet is a truly global public resource that is open, accessible, and safe for all. An internet that benefits people using online services, prioritizes the right to privacy, and enables economic dynamism. Our commitment to this vision stems from Mozilla’s foundational belief that the internet was built by people, for people and that its future should not be dictated by a few powerful organizations.

When technology is developed solely for profit, it risks causing real harm to people. True choice and control for Americans can only be achieved through a competitive ecosystem with a range of services and providers that foster innovation. However, today’s internet is far from this ideal state, and without action, is only set to become increasingly consolidated in the age of AI.

Today, Mozilla is setting out our 2025 – 2026 Policy Vision for the United States as we look to a new administration and a new congress. Our policy vision is anchored in our guiding principles for a healthy internet, and outlines policy priorities that we believe should be the ‘north star’ for U.S. policymakers and regulators. Some recommendations are long overdue, while others seek to ensure the development of a healthy and competitive internet moving forward.

Here’s how we can work together to make this happen.

Priority 1: Openness, Competition, and Accountability in AI

Promoting open source policies and approaches in AI has the potential not just to create technology that benefits individuals, but also to make AI systems safer and more transparent. Open approaches and public investment can spur increased research and development, create products that are more accessible and less vulnerable to cyberattacks, and help to catalyze investment, job creation, and a more competitive ecosystem. Mozilla’s key recommendations include:

  • Increase government use of, and support for, open-source AI. The U.S. federal government procures billions of dollars of software every year. The government should use these resources to promote and leverage open source AI when possible, to drive growth and innovation.
  • Develop and fund public AI infrastructure. Supporting initiatives like the National AI Research Resource (NAIRR) and the Department of Energy’s FASST program is crucial for developing public AI infrastructure that provides researchers and universities with access to AI tools, fosters innovation and ensures benefits for all.
  • Grow the AI talent ecosystem. It is critical that America invests in education programs to grow the domestic AI talent ecosystem. Without this talent, America will face serious difficulties competing globally.
  • Provide access to AI-related resource consumption data. At its current growth trajectory, AI could end up consuming tremendous amounts of natural resources. The government should work with the AI industry (from semiconductor developers to cloud providers to model deployers) to provide open access to resource consumption data and increase industry transparency; this can help to prevent expensive and dangerous grid failures and could lead to lower energy prices for consumers.
  • Clarify a federal position on open source AI export controls that maintains an open door for innovation. By affirming a federal position on open source AI export controls to reflect those of NTIA, and emphasizing the benefits of open models, the administration can spur further advancement in the field.

Priority 2: Protecting Online Privacy and Ensuring Accountability 

Today’s internet economy is powered by people’s information. While this data can deliver massive innovation and new services, it can also put consumers and trust at risk. Unchecked collection and opaque practices can leave people susceptible to deceptive and invasive practices, and discrimination online.  The rise of generative AI makes the issue of online privacy more urgent than ever.

Mozilla believes that privacy and innovation can coexist. With action and collaboration, policymakers can shift the internet economy toward one that prioritizes users’ rights, transparency, and trust — where privacy is not a privilege but a guarantee for all. We recommend that policymakers:

  • Pass strong comprehensive federal privacy legislation and support state efforts. Congress must enact a sufficiently strong comprehensive federal privacy law that addresses AI-specific privacy protections, upholds data minimization, ensures the security protections that encryption provides, and covers children and adults, setting a high bar for meaningful protections. This is how Congress can create an environment where people can truly benefit from the technologies they rely on without paying the premium of exploitation of their personal data. States should also move to enact strong laws to fill the gap and protect their constituents.
  • Support the adoption of privacy-enhancing technologies (PETs). This includes funding NIST and the National Science Foundation to advance fundamental and applied research, while establishing strong privacy protections that incentivize companies to prioritize PETs. The global standards development and consensus process is essential for privacy preserving technologies to develop in a sustainable manner, in particular around areas like advertising. Legislation can also incentivize companies to adopt more privacy-preserving business practices, ultimately benefiting users and supporting their right to privacy online.
  • Provide necessary resources and tools to data privacy regulators. Congress and the administration must enable and empower relevant federal regulators by providing additional resources and authorizations to facilitate privacy-related investigations and enforcement. Efforts should, in particular, target data brokers who traffic sensitive data.
  • Support critical independent research. Policymakers should ensure meaningful access to important data from major platforms for academia and civil society, enabling better research into big tech’s harms and stronger accountability. Transparency efforts like the bipartisan Platform Accountability and Transparency Act (PATA) are essential to advancing transparency while protecting public-interest research. Expanding such legislation to include AI platforms and model providers is critical to addressing privacy-related harms and ensuring accountability in the evolving digital landscape.
  • Respect browser opt-out signals. We encourage lawmakers at the state and federal level to support key privacy tools, like the Global Privacy Control (GPC) that Firefox uses. Mozilla supports bills like California’s AB 566, which would require browsers and mobile operating systems to include an opt-out setting. We encourage lawmakers at the state and federal level to advance this key privacy tool in law and meet the expectations that consumers rightly have about treatment of their personal information.

Priority 3: Increasing Choice for Consumers

Real choice and control for consumers require an open, fair, and competitive ecosystem where diverse services and providers can thrive.

Updated competition laws – and an understanding of the importance of competition at every layer of the ecosystem – are essential for the internet to be private, secure, interoperable, open, transparent, and to balance commercial profit with public benefit. Mozilla is committed to this future. To achieve this, we must advance the priorities below.

  • Update antitrust legislation to address anti-competitive business practices, such as harmful self-preferencing, that stymie innovation and limit consumer choice. Congress must pass antitrust legislation that addresses these practices and provides the necessary resources, expertise, and authority to relevant regulatory agencies.
  • Tackle harmful design practices. Harmful deceptive design practices not only manifest at the interface level, but also deeper at the operating system level – particularly in cases of vertical integration of services and features. Deploying manipulative, coercive, and deceptive tactics such as aggressive and misleading prompts, messages, and pop-ups risks overriding user choice entirely. Policymakers must hold bad actors accountable.
  • Foster competition across the ecosystem. Independent browser and browser engine developers, like Mozilla, have a long history of innovating and offering privacy- and security-conscious users a meaningful alternative to big tech browser engines. Policymakers should recognize the importance of independent browsers and browser engines in maintaining a safe, open, and interoperable web that provides meaningful choice.

So what is the path forward?

We see our vision as a roadmap to a healthier internet. We recognize that achieving this vision means engaging with legislators, regulators, and the wider policy community. Mozilla remains committed to our mission to ensure the internet is a space for innovation, transparency, and openness for all.

Read our Vision for the United States: 2025 – 2026 for a more comprehensive look at our priorities and recommendations to protect the future of the internet. 

The post Mozilla shares 2025 Policy Priorities and Recommendations for Creating an Internet Where Everyone Can Thrive appeared first on Open Policy & Advocacy.

The Rust Programming Language BlogAdopting the FLS

Some years ago, Ferrous Systems and AdaCore worked together to assemble a description of Rust called the FLS [1]. Ferrous Systems has since been faithfully maintaining and updating this document for new versions of Rust, and they've successfully used it to qualify toolchains based on Rust for use in safety-critical industries. Seeing this success, others have also begun to rely on the FLS for their own qualification efforts when building with Rust.

The members of the Rust Project are passionate about shipping high quality tools that enable people to build reliable software at scale. Such software is exactly the kind needed by those in safety-critical industries, and consequently we've become increasingly interested in better understanding and serving the needs of these customers of our language and of our tools.

It's in that light that we're pleased to announce that we'll be adopting the FLS into the Rust Project as part of our ongoing specification efforts. This adoption is being made possible by Ferrous Systems. We're grateful to them for the work they've done in making the FLS fit for qualification purposes, in promoting its use and the use of Rust generally in safety-critical industries, and now, for working with us to take the next step and to bring it into the Project.

With this adoption, we look forward to better integrating the FLS with the processes of the Project and to providing ongoing and increased assurances to all those who use Rust in safety-critical industries and, in particular, to those who use the FLS as part of their qualification efforts.

This adoption would not have been possible without the efforts of the Rust Foundation, and in particular of Joel Marcey, the Director of Technology at the Foundation, who has worked tirelessly to facilitate this on our behalf. We're grateful to him and to the Foundation for this support. The Foundation has published its own post about this adoption.

I'm relying on the FLS today; what should I expect?

We'll be bringing the FLS within the Project, so expect some URLs to change. We plan to release updates to the FLS in much the same way as they have been happening up until now.

We're sensitive to the fact that big changes to this document can result in costs for those using it for qualification purposes, and we don't have any immediate plans for big changes here.

What's this mean for the Rust Reference?

The Reference is still the Reference. Adopting the FLS does not change the status of the Reference, and we plan to continue to improve and expand the Reference as we've been doing.

We'll of course be looking for ways that the Reference can support the FLS, and that the FLS can support the Reference, and in the long term, we're hopeful we can find ways to bring these two documents closer together.

[1] The FLS stood for the "Ferrocene Language Specification". The minimal fork of Rust that Ferrous Systems qualifies and ships to their customers is called "Ferrocene", hence the name. We'll be dropping the expansion and just calling it the FLS within the Project.

Hacks.Mozilla.OrgImproving Firefox Stability in the Enterprise by Reducing DLL Injection

Beginning in version 138, Firefox will offer an alternative to DLL injection for Data Loss Prevention (DLP) deployments in enterprise environments.

DLL Injection

DLL injection into Firefox is a topic we’ve covered on the Hacks blog before. In 2023, we blogged about the Firefox capability to let users block third-party DLLs from being loaded. We explained what DLL injection is, how we deal with problematic third-party modules, our about:third-party page, and our third-party injection policy. Earlier, in 2019, we released a study of DLL injection Firefox bugs in collaboration with Polytechnique Montréal. We return to this topic now, in the context of enterprise Firefox installations.

First, a reminder of what DLL injection is and why it continues to be problematic. DLL injection is the term we use to describe third-party Windows software injecting its own DLL module code into Firefox. Third parties develop DLLs for injecting into applications to extend their functionality in some way. This is prevalent in the Windows ecosystem.

When third-party code is injected, the injected code interacts with the internals of the application. While it is not unusual for software to work together, and the internet is built on software interoperating over documented standards, DLL injection differs in that the undocumented internals of an application are not intended to be a stable interface. As such, they are a poor foundation to build software products on. When the underlying application is changed, it can result in incompatibilities, leading to crashes or other unexpected behavior.

In a modern web browser like Firefox, new features and fixes, big and small, are developed and released on a monthly schedule. Normal browser development can therefore cause incompatibilities with injected software, resulting in Firefox crashes, bypassing of security features, or other unpredictable buggy behavior. When these problems arise, they require emergency troubleshooting and engineering of workarounds for users until the problems are addressed by software updates. This often requires collaboration between the browser and the third-party application’s developers. The type of software injected into Firefox varies from small open source projects to widely-deployed enterprise security products. In an attempt to eliminate some of the most difficult DLL injection issues, we’ve turned our attention to Data Loss Prevention enterprise applications.

Data Loss Prevention (DLP) in the Enterprise

Data Loss Prevention (DLP) products are a type of software that is widely deployed by organizations to prevent unintended leaks of private data. Examples of private data include customer records such as names, addresses, credit card information or company secrets. Much like how anti-virus software is deployed across a corporation’s fleet of laptops, so too is DLP software. These deployments have become increasingly common, in large part due to compliance and liability concerns.

How does this relate to Firefox? DLP software typically uses DLL injection to monitor applications such as Firefox for activity that might leak private data. This only applies to specific operations that can leak sensitive information such as file uploads, pasting (as in copy-and-paste), drag-and-drop, and printing.

DLP and Firefox Today

Today, DLP software typically monitors Firefox activity via DLL injection as described above. Firefox and web browsers are not unique in this respect, but they are heavily used and under constant development, making DLL injection more dangerous. DLP software is typically deployed to a fleet of corporate computers that are managed by an IT department. This includes deployment of the software that injects into applications. DLP vendors take efforts to ensure that their products are compatible with the latest version of Firefox by testing beta versions and updating their DLLs as needed, but problems still occur regularly.

A common issue is that a problem is encountered by corporate users who report the problem to their IT department. Their IT staff then work to debug the problem. They may file a bug report with Firefox or the DLP vendor. When a Firefox bug is filed, it can be a challenge for Mozilla to determine that the bug was actually caused by external software. When we learn of such problems, we alert the vendor and investigate workarounds. In the interim, users have a poor experience and may have to work around problems or even use another browser. When the browser is not functional, the problem becomes a high severity incident where support teams work as quickly as possible to help restore functionality.

Browsing Privacy

When users browse on company-owned computers, their browsing privacy is often subject to corporate-mandated software. Different regions have different laws about this and the disclosures required, but on a technical level, when the device is controlled by a corporation, that corporation has a number of avenues at its disposal for monitoring activity at whatever level is dictated by corporate policy. Firefox is built on the principle that browsing activity belongs only to the user, but as an application, it cannot reasonably override the wishes of the device administrator. Insofar as that administrator has chosen to deploy DLP software, they will expect it to work with the other software on the device. If a well-supported mechanism is not available, they will either turn to opaque and error-prone methods like DLL injection, or replace Firefox with another browser.

What’s New – Reducing DLL Injection in the Enterprise

Starting with Firefox 138, DLP software can work with Firefox without the use of DLL injection. Firefox 138 integrates the Content Analysis SDK, and it can be enabled with Enterprise Policies. The SDK, developed by Google and used in Chrome Enterprise, is a lightweight protocol between the browser and a DLP agent, with the implementation being browser-specific. In other words, Firefox has its own implementation of the protocol.

The integration allows Firefox to interact with DLP software while reducing the injection of third-party code. This will improve stability for Firefox users in the enterprise and, as more DLP vendors adopt the SDK, there will be less third-party code injected into Firefox. With vendors and browsers using the same SDK, vendors can know that a single DLP agent implementation will be compatible with multiple browsers. During development of the Firefox implementation, we’ve been working with some leading DLP vendors to ensure compatibility. In addition to stability, Firefox will display an indicator when the DLP SDK is used, providing more transparency for users.

For Enterprise Use

Firefox will only enable the Content Analysis SDK in configurations where a Firefox Enterprise Policy is used. Firefox Enterprise Policies are used by organizations to configure Firefox settings across a fleet of computers. They allow administrators to configure Firefox, for example, to limit which browser extensions can be installed, set security-related browser settings, configure network proxy settings, and more. You can learn more about Firefox Enterprise Policies on our support article Enforce policies on Firefox for Enterprise.

The post Improving Firefox Stability in the Enterprise by Reducing DLL Injection appeared first on Mozilla Hacks - the Web developer blog.

Niko MatsakisDyn you have idea for `dyn`?

Knock, knock. Who’s there? Dyn. Dyn who? Dyn you have ideas for dyn? I am generally dissatisfied with how dyn Trait in Rust works and, based on conversations I’ve had, I am pretty sure I’m not alone. And yet I’m also not entirely sure of the best fix. Building on my last post, I wanted to spend a bit of time exploring my understanding of the problem. I’m curious to see if others agree with the observations here or have others to add.

Why do we have dyn Trait?

It’s worth stepping back and asking why we have dyn Trait in the first place. To my mind, there are two good reasons.

Because sometimes you want to talk about “some value that implements Trait”

The most important one is that it is sometimes strictly necessary. If you are, say, building a multithreaded runtime like rayon or tokio, you are going to need a list of active tasks somewhere, each of which is associated with some closure from user code. You can’t build it with an enum because you can’t enumerate the set of closures in any one place. You need something like a Vec<Box<dyn ActiveTask>>.
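
For concreteness, here’s a minimal sketch of that scenario; the ActiveTask name appears above, but its method and the Runtime type are invented for illustration:

trait ActiveTask {
    fn poll_task(&mut self);
}

struct Runtime {
    // User-supplied closures can't be enumerated into an enum,
    // so their types are erased behind a vtable.
    tasks: Vec<Box<dyn ActiveTask>>,
}

impl Runtime {
    fn spawn(&mut self, task: impl ActiveTask + 'static) {
        self.tasks.push(Box::new(task));
    }
}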

Because sometimes you don’t need so much code

The second reason is to help with compilation time. Rust land tends to lean really heavily on generic types and impl Trait. There are good reasons for that: they allow the compiler to generate very efficient code. But the flip side is that they force the compiler to generate a lot of (very efficient) code. Judicious use of dyn Trait can collapse a whole set of “almost identical” structs and functions into one.

These two goals are distinct

Right now, both of these goals are expressed in Rust via dyn Trait, but actually they are quite distinct. For the first, you really want to be able to talk about having a dyn Trait. For the second, you might prefer to write the code with generics but compile in a different mode where the specifics of the type involved are erased, much like how the Haskell and Swift compilers work.

What does “better” look like when you really want a dyn?

Now that we have the two goals, let’s talk about some of the specific issues I see around dyn Trait and what it might mean for dyn Trait to be “better”. We’ll start with the cases where you really want a dyn value.

Observation: you know it’s a dyn

One interesting thing about this scenario is that, by definition, you are storing a dyn Trait explicitly. That is, you are not working with a T: ?Sized + Trait where T just happens to be dyn Trait. This is important because it opens up the design space. We talked about this some in the previous blog post: it means that working with this dyn Trait doesn’t need to be exactly the same as working with any other T that implements Trait (in the previous post, we took advantage of this by saying that calling an async function on a dyn trait had to be done in a .box context).

Able to avoid the Box

For this pattern today you are almost certainly representing your task as a Box<dyn Task> or (less often) an Arc<dyn Task>. Both of these are “wide pointers”, consisting of a data pointer and a vtable pointer. The data pointer goes into the heap somewhere.

In practice people often want a “flattened” representation, one that combines a vtable with a fixed amount of space that might, or might not, be a pointer. This is particularly useful to allow the equivalent of Vec<dyn Task>. Today implementing this requires unsafe code (the anyhow::Error type is an example).

Able to inline the vtable

Another way to reduce the size of a Box<dyn Task> is to store the vtable ‘inline’ at the front of the value so that a Box<dyn Task> is a single pointer. This is what C++ and Java compilers typically do, at least for single inheritance. We didn’t take this approach in Rust because Rust allows implementing local traits for foreign types, so it’s not possible to enumerate all the methods that belong to a type up-front and put them into a single vtable. Instead, we create custom vtables for each (type, trait) pair.

Able to work with self methods

Right now dyn traits cannot have methods that take self by value. This means, for example, that you cannot build your own equivalent of a Box<dyn FnOnce()> closure (the standard library’s callable Box<dyn FnOnce()> relies on special internal support). You can work around this by using a Box<Self> method, but it’s annoying:

trait Thunk {
    fn call(self: Box<Self>);
}

impl<F> Thunk for F
where
    F: FnOnce(),
{
    fn call(self: Box<Self>) {
        (*self)()
    }
}

fn make_thunk(f: impl FnOnce() + 'static) -> Box<dyn Thunk> {
    Box::new(f)
}

Able to call Clone

One specific thing that hits me fairly often is that I want the ability to clone a dyn value:

trait Task: Clone {
    //      ----- Error: not dyn compatible
    fn method(&self);
}

fn clone_task(task: &Box<dyn Task>) {
    task.clone()
}

This is a hard one to fix because the Clone trait can only be implemented for Sized types. But dang it would be nice.
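
For reference, here’s the usual workaround today (it’s essentially what the dyn-clone crate automates): route the clone through a helper trait, at the cost of a 'static bound and some boilerplate:

trait Task: TaskClone {
    fn method(&self);
}

// Helper trait that is dyn compatible, unlike `Clone`.
trait TaskClone {
    fn clone_box(&self) -> Box<dyn Task>;
}

impl<T> TaskClone for T
where
    T: Task + Clone + 'static,
{
    fn clone_box(&self) -> Box<dyn Task> {
        Box::new(self.clone())
    }
}

impl Clone for Box<dyn Task> {
    fn clone(&self) -> Box<dyn Task> {
        self.clone_box()
    }
}

fn clone_task(task: &Box<dyn Task>) -> Box<dyn Task> {
    task.clone() // now works
}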

Able to work with (at least some) generic functions

Building on the above, I would like to have dyn traits that have methods with generic parameters. I’m not sure how flexible this can be, but anything I can get would be nice. The simplest starting point I can see is allowing the use of impl Trait in argument position:

trait Log {
    fn log_to(&self, logger: impl Logger); // <-- not dyn safe today
}

Today this method is not dyn compatible because we have to know the type of the logger parameter to generate a monomorphized copy, so we cannot know what to put in the vtable. Conceivably, if the Logger trait were dyn compatible, we could generate a copy that takes (effectively) a dyn Logger – except that this wouldn’t quite work, because impl Logger is short for impl Logger + Sized, and dyn Logger is not Sized. But maybe we could finesse it.
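
What you can write today, for comparison, is a trait that takes the logger as &mut dyn Logger explicitly (the Logger method and the log_all function here are made up for illustration):

trait Logger {
    fn log(&mut self, msg: &str);
}

// Dyn compatible, because the argument type is fixed.
trait Log {
    fn log_to(&self, logger: &mut dyn Logger);
}

fn log_all(items: &[Box<dyn Log>], logger: &mut dyn Logger) {
    for item in items {
        item.log_to(logger);
    }
}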

If we support impl Logger in argument position, it would be nice to support it in return position. This of course is approximately the problem we are looking to solve to support dyn async trait:

trait Signal {
    fn signal(&self) -> impl Future<Output = ()>;
}

Beyond this, well, I’m not sure how far we can stretch, but it’d be nice to be able to support other patterns too.

Able to work with partial traits, or traits with some associated types unspecified

One last point is that sometimes in this scenario I don’t need to be able to access all the methods in the trait. Sometimes I only have a few specific operations that I am performing via dyn. Right now though all methods have to be dyn compatible for me to use them with dyn. Moreover, I have to specify the values of all associated types, lest they appear in some method signature. You can work around this by factoring out methods into a supertrait (sketched below), but that assumes that the trait is under your control, and anyway it’s annoying. It’d be nice if you could have a partial view onto the trait.
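
Here’s a minimal sketch of that supertrait workaround; the trait and method names are invented for illustration:

// The dyn-usable operations move into a supertrait...
trait TaskCore {
    fn run(&mut self);
}

// ...while the generic method (not dyn compatible) stays behind.
trait Task: TaskCore {
    fn run_observed<F: FnMut(&str)>(&mut self, observe: F);
}

// Code that only needs the core operations can use `dyn TaskCore`:
fn run_all(tasks: &mut [Box<dyn TaskCore>]) {
    for task in tasks.iter_mut() {
        task.run();
    }
}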

What does “better” look like when you really want less code?

So what about the case where generics are fine, good even, but you just want to avoid generating quite so much code? You might also want that to be under the control of your user.

I’m going to walk through a code example for this section, showing what you can do today, and what kind of problems you run into. Suppose I am writing a custom iterator method, alternate, which returns an iterator that alternates between items from the original iterator and the result of calling a function. I might have a struct like this:

struct Alternate<I: Iterator, F: Fn() -> I::Item> {
    base: I,
    func: F,
    call_func: bool,
}

pub fn alternate<I, F>(
    base: I,
    func: F,
) -> Alternate<I, F>
where
    I: Iterator,
    F: Fn() -> I::Item,
{
    Alternate { base, func, call_func: false }
}

The Iterator impl itself might look like this:

impl<I, F> Iterator for Alternate<I, F>
where
    I: Iterator,
    F: Fn() -> I::Item,
{
    type Item = I::Item;
    fn next(&mut self) -> Option<I::Item> {
        if !self.call_func {
            self.call_func = true;
            self.base.next()
        } else {
            self.call_func = false;
            Some((self.func)())
        }
    }
}

Now an Alternate iterator will be Send if the base iterator and the closure are Send but not otherwise. The iterator and closure will be able to make use of references found on the stack, too, so long as the Alternate itself does not escape the stack frame. Great!
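
To make that concrete, here’s a small check against the generic definitions above (assert_send is just a local helper, not part of the example’s API):

fn assert_send<T: Send>(_: &T) {}

fn demo(base: impl Iterator<Item = u32> + Send, func: impl Fn() -> u32 + Send) {
    let alt = alternate(base, func);
    assert_send(&alt); // OK: I and F are Send, so Alternate<I, F> is too
}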

But suppose I am trying to keep my life simple and so I would like to write this using dyn traits:

struct Alternate<Item> { // variant 2, with dyn
    base: Box<dyn Iterator<Item = Item>>,
    func: Box<dyn Fn() -> Item>,
    call_func: bool,
}

You’ll notice that this definition is somewhat simpler. It looks more like what you might expect from Java. The alternate function and the impl are also simpler:

pub fn alternate<Item>(
    base: impl Iterator<Item = Item>,
    func: impl Fn() -> Item,
) -> Alternate<Item> {
    Alternate {
        base: Box::new(base),
        func: Box::new(func),
        call_func: false
    }
}

impl<Item> Iterator for Alternate<Item> {
    type Item = Item;
    fn next(&mut self) -> Option<Item> {
        // ...same as above...
    }
}

Confusing lifetime bounds

There’s a problem, though: this code won’t compile! If you try, you’ll find you get an error in this function:

pub fn alternate<Item>(
    base: impl Iterator<Item = Item>,
    func: impl Fn() -> Item,
) -> Alternate<Item> {...}

The reason is that dyn traits have a default lifetime bound. In the case of a Box<dyn Foo>, the default is 'static. So e.g. the base field has type Box<dyn Iterator + 'static>. This means the closure and iterators can’t capture references to things. To fix that we have to add a somewhat odd lifetime bound:

struct Alternate<'a, Item> { // variant 3
    base: Box<dyn Iterator<Item = Item> + 'a>,
    func: Box<dyn Fn() -> Item + 'a>,
    call_func: bool,
}

pub fn alternate<'a, Item>(
    base: impl Iterator<Item = Item> + 'a,
    func: impl Fn() -> Item + 'a,
) -> Alternate<'a, Item> {...}

No longer generic over Send

OK, this looks weird, but it will work fine, and we’ll only have one copy of the iterator code per output Item type instead of one for every (base iterator, closure) pair. Except there is another problem: the Alternate iterator is never considered Send. To make it Send, you would have to write dyn Iterator + Send and dyn Fn() -> Item + Send, but then you couldn’t support non-Send things anymore. That stinks and there isn’t really a good workaround.
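
Concretely, the only fix under today’s rules is to hardcode the bound, which forks the type (a sketch, with a hypothetical name):

struct AlternateSend<'a, Item> {
    base: Box<dyn Iterator<Item = Item> + Send + 'a>,
    func: Box<dyn Fn() -> Item + Send + 'a>,
    call_func: bool,
}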

Ordinary generics work really well with Rust’s auto trait mechanism. The type parameters I and F capture the full details of the base iterator plus the closure that will be used. The compiler can thus analyze an Alternate<I, F> to decide whether it is Send or not. Unfortunately dyn Trait really throws a wrench into the works – because we are no longer tracking the precise type, we also have to choose which parts to keep (e.g., its lifetime bound) and which to forget (e.g., whether the type is Send).

Able to partially monomorphize (“polymorphize”)

This gets at another point. Even ignoring the Send issue, the Alternate<'a, Item> type is not ideal. It will make fewer copies, but we still get one copy per item type, even though the code for many item types will be the same. For example, the compiler will generate effectively the same code for Alternate<'_, i32> as Alternate<'_, u32> or even Alternate<'_, [u8; 4]>. It’d be cool if we could have the compiler go further and coalesce code that is identical.1 Even better if it can coalesce code that is “almost” identical but pass in a parameter: for example, maybe the compiler could coalesce multiple copies of Alternate by passing the size of the Item type in as an integer variable.

Able to change from impl Trait without disturbing callers

I really like using impl Trait in argument position. I find code like this pretty easy to read:

fn for_each_item<Item>(
    base: impl Iterator<Item = Item>,
    mut op: impl FnMut(Item),
) {
    for item in base {
        op(item);
    }
}

But if I want to change this to use dyn, I can’t just swap impl for dyn; I have to add some kind of pointer type:

fn for_each_item<Item>(
    base: &mut dyn Iterator<Item = Item>,
    op: &mut dyn FnMut(Item),
) {
    for item in base {
        op(item);
    }
}

This then disturbs callers, who can no longer write:

for_each_item(some_iter, |item| process(item));

but now must write this

for_each_item(&mut some_iter, &mut |item| process(item));

You can work around this by writing some code like this…

fn for_each_item<Item>(
    mut base: impl Iterator<Item = Item>,
    mut op: impl FnMut(Item),
) {
    for_each_item_dyn(&mut base, &mut op)
}

fn for_each_item_dyn<Item>(
    base: &mut dyn Iterator<Item = Item>,
    op: &mut dyn FnMut(Item),
) {
    for item in base {
        op(item);
    }
}

but to me that just begs the question: why can’t the compiler do this for me, dang it?

Async functions can make Send/Sync issues crop up in functions

In the iterator example I was looking at a struct definition, but with async fn (and in the future with gen) these same issues arise quickly from functions. Consider this async function:

async fn for_each_item<Item>(
    base: impl Iterator<Item = Item>,
    mut op: impl AsyncFnMut(Item),
) {
    for item in base {
        op(item).await;
    }
}

If you rewrite this function to use dyn, though, you’ll find the resulting future is never Send nor Sync anymore:

async fn for_each_item<Item>(
    base: &mut dyn Iterator<Item = Item>,
    op: &mut dyn AsyncFnMut(Item),
) {
    for item in base {
        op(item).box.await; // <-- assuming we fixed this
    }
}

Conclusions and questions

This has been a useful mental dump; I found it helpful to structure my thoughts.

One thing I noticed is that there is kind of a “third reason” to use dyn – to make your life a bit simpler. The versions of Alternate that used dyn Iterator and dyn Fn felt simpler to me than the fully parametric versions. That might be best addressed though by simplifying generic notation or adopting things like implied bounds.

Some other questions I have:

  • Where else does the Send and Sync problem come up? Does it combine with the first use case (e.g., wanting to write a vector of heterogeneous tasks, each of which is generic over whether it is Send/Sync)?
  • Maybe we can categorize real-life code examples and link them to these patterns.
  • Are there other reasons to use dyn trait that I didn’t cover? Other ergonomic issues or pain points we’d want to address as we go?

  1. In fact, if the code is byte-for-byte identical, LLVM and the linker will sometimes do this today, but it doesn’t work reliably across compilation units as far as I know. And anyway there are often small differences. ↩︎

Mozilla Open Policy & Advocacy BlogMozilla Respond to the White House’s RFI on AI

The Future of AI Must Be Open, Competitive, and Accountable

The internet has always thrived on openness, access, and broad participation. But as we enter the AI era, these core principles are at risk. A handful of dominant tech companies are positioned to control major AI systems, threatening both competition and innovation. At Mozilla, we believe AI should serve the public interest—not just corporate bottom lines.

Earlier this month, we responded to the White House Office of Science and Technology Policy’s request for input on AI policy, where we offered a roadmap for a more open and trustworthy AI future (view Mozilla’s full submission here). Here’s what we think should happen.

1. AI Policy Must Prioritize Openness, Competition, and Accountability

Right now, too much AI development stays behind closed doors. Proprietary models dominate, creating a landscape where users and developers have little insight—or control—over the AI systems shaping our digital lives. If we want AI that benefits everyone, we need strong policies that promote:

  • Openness: Encouraging open-source AI development ensures transparency, security, and accessibility.
  • Competition: Preventing monopolistic control keeps AI innovation dynamic and diverse.
  • Accountability: Effective governance can mitigate AI’s risks while fostering responsible development.

By advancing these principles, we can build an AI ecosystem that empowers users rather than locking them into closed, corporate-controlled systems.

2. The Government Should Support Public AI Infrastructure

AI’s future shouldn’t be dictated solely by private companies. Public investment is key to ensuring broad access to AI tools and research. We support initiatives like the National AI Research Resource (NAIRR), which would provide universities, researchers, and small businesses with AI computing power and resources. We hope to see federal, state, and local governments increasingly adopt open source AI models in their workflows, which can help save taxpayers money, increase efficiency, and prevent vendor lock-in. Public AI infrastructure levels the playing field, allowing more voices to shape AI’s future and facilitating innovation across America, not just in a few tech hubs.

3. Open-Source AI Should Be Encouraged, Not Restricted

Discussions about restricting open-source AI through export controls often miss the point about how to ensure national leadership in AI. Open-source AI fosters innovation, improves security, and lowers costs—critical benefits for businesses, researchers, and everyday users around the world. For the United States, promoting open source AI means more global products would be built on top of American AI innovation.

A 2025 McKinsey report, “Open source in the age of AI,” created in collaboration with Mozilla, found that 60% of decision-makers reported lower implementation costs with open-source AI compared to proprietary tools. Restricting open models would stifle progress and put the U.S. at a competitive disadvantage. Instead, we urge policymakers to support the open-source AI ecosystem and resist governance approaches that restrict AI models overall rather than making more targeted and precise interventions for AI harms.

4. AI Energy Consumption Needs Transparency

AI systems consume enormous amounts of energy, and this demand is only growing. To prevent AI from straining our power grids and driving up costs, we need better transparency into AI’s resource consumption so that we can plan infrastructure development more effectively. The federal government should work with the industry to collect and share data on AI energy use. By understanding AI’s impact on infrastructure, we can promote sustainable innovation.

5. The U.S. Must Invest in AI Talent Development

AI leadership isn’t just about technology—it’s about people. To remain competitive, the U.S. needs a strong, diverse workforce of AI researchers and practitioners. That means investing in:

  • Community colleges and public universities to train the next generation of AI professionals.
  • Apprenticeship and retraining programs to help workers adapt to AI-driven industries and adopt AI in every type of business across the economy from manufacturing to retail.
  • Public-private partnerships that create novel education pathways for students, like Dakota State University’s collaboration with ArmyCyber.

By growing the AI talent ecosystem, we ensure that AI works for people—not the other way around.

The Path Forward

AI is one of the most transformative technologies of our time. But without strong policies, it risks becoming yet another tool for big tech consolidation and unchecked corporate power.

At Mozilla, we believe in an AI future that is open, competitive, and accountable. We call on policymakers to take bold steps—supporting open-source AI, investing in public infrastructure, and fostering fair competition—to ensure AI works for everyone.

The post Mozilla Respond to the White House’s RFI on AI appeared first on Open Policy & Advocacy.

Niko MatsakisDyn async traits, part 10: Box box box

This article is a slight divergence from my Rust in 2025 series. I wanted to share my latest thinking about how to support dyn Trait for traits with async functions and, in particular, how to do so in a way that is compatible with the soul of Rust.

Background: why is this hard?

Supporting async fn in dyn traits is a tricky balancing act. The challenge is reconciling two key things people love about Rust: its ability to express high-level, productive code and its focus on revealing low-level details. When it comes to async functions in traits, these two things are in direct tension, as I explained in my first blog post in this series – written almost four years ago! (Geez.)

To see the challenge, consider this example Signal trait:

trait Signal {
    async fn signal(&self);
}

In Rust today you can write a function that takes an impl Signal and invokes signal and everything feels pretty nice:

async fn send_signal_1(impl_trait: &impl Signal) {
    impl_trait.signal().await;
}

But what if I want to write that same function using a dyn Signal? If I write this…

async fn send_signal_2(dyn_trait: &dyn Signal) {
    dyn_trait.signal().await; //   ---------- ERROR
}

…I get an error. Why is that? The answer is that the compiler needs to know what kind of future is going to be returned by signal so that it can be awaited. At minimum it needs to know how big that future is so it can allocate space for it1. With an impl Signal, the compiler knows exactly what type of signal you have, so that’s no problem: but with a dyn Signal, we don’t, and hence we are stuck.

The most common solution to this problem is to box the future that results. The async-trait crate, for example, transforms async fn signal(&self) to something like fn signal(&self) -> Box<dyn Future<Output = ()> + '_>. But doing that at the trait level means that we add overhead even when you use impl Trait; it also rules out some applications of Rust async, like embedded or kernel development.
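
Spelled out as a trait definition, that transformation looks roughly like this (a simplified sketch: the real crate also pins the box, adds Send bounds by default, and rewrites every impl; Light is a made-up type):

use std::future::Future;
use std::pin::Pin;

trait Signal {
    // What `async fn signal(&self)` becomes after the rewrite:
    fn signal<'a>(&'a self) -> Pin<Box<dyn Future<Output = ()> + 'a>>;
}

struct Light;

impl Signal for Light {
    fn signal<'a>(&'a self) -> Pin<Box<dyn Future<Output = ()> + 'a>> {
        Box::pin(async move {
            // ...original body of the async fn...
        })
    }
}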

So the name of the game is to find ways to let people use dyn Trait that are both convenient and flexible. And that turns out to be pretty hard!

The “box box box” design in a nutshell

I’ve been digging back into the problem lately in a series of conversations with Michael Goulet (aka compiler-errors) and it’s gotten me thinking about a fresh approach I call “box box box”.

The “box box box” design starts with the call-site selection approach. In this approach, when you call dyn_trait.signal(), the type you get back is a dyn Future – i.e., an unsized value. This can’t be used directly. Instead, you have to allocate storage for it. The easiest and most common way to do that is to box it, which can be done with the new .box operator:

async fn send_signal_2(dyn_trait: &dyn Signal) {
    dyn_trait.signal().box.await;
    //        ------------
    // Results in a `Box<dyn Future<Output = ()>>`.
}

This approach is fairly straightforward to explain. When you call an async function through dyn Trait, it results in a dyn Future, which has to be stored somewhere before you can use it. The easiest option is to use the .box operator to store it in a box; that gives you a Box<dyn Future>, and you can await that.

But this simple explanation belies two fairly fundamental changes to Rust. First, it changes the relationship of Trait and dyn Trait. Second, it introduces this .box operator, which would be the first stable use of the box keyword2. It seems odd to introduce the keyword just for this one use – where else could it be used?

As it happens, I think both of these fundamental changes could be very good things. The point of this post is to explain what doors they open up and where they might take us.

Change 0: Unsized return value methods

Let’s start with the core proposal. For every trait Foo, we add inherent methods3 to dyn Foo reflecting its methods:

  • For every fn f in Foo that is dyn compatible, we add a <dyn Foo>::f that just calls f through the vtable.
  • For every fn f in Foo that returns an impl Trait value but would otherwise be dyn compatible (e.g., no generic arguments4, no reference to Self beyond the self parameter, etc), we add a <dyn Foo>::f method that is defined to return a dyn Trait.
    • This includes async fns, which are sugar for functions that return impl Future.

In fact, method dispatch already adds “pseudo” inherent methods to dyn Foo, so this wouldn’t change anything in terms of which methods are resolved. The difference is that dyn Foo is only allowed if all methods in the trait are dyn compatible, whereas under this proposal some non-dyn-compatible methods would be added with modified signatures.

Change 1: Dyn compatibility

Change 0 only makes sense if it is possible to create a dyn Trait even though it contains some methods (e.g., async functions) that are not dyn compatible. This revisits RFC #255, in which we decided that the dyn Trait type should also implement the trait Trait. I was a big proponent of RFC #255 at the time, but I’ve since decided I was mistaken5. Let’s discuss.

The two rules today that allow dyn Trait to implement Trait are as follows:

  1. By disallowing dyn Trait unless the trait Trait is dyn compatible, meaning that it only has methods that can be added to a vtable.
  2. By requiring that the values of all associated types be explicitly specified in the dyn Trait. So dyn Iterator<Item = u32> is legal but not dyn Iterator on its own (see the example below).
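
To see rule 2 in action:

// Accepted: the associated type is pinned down.
fn boxed_range() -> Box<dyn Iterator<Item = u32>> {
    Box::new(0_u32..10)
}

// Rejected today: a bare `Box<dyn Iterator>` fails with "the value of the
// associated type `Item` must be specified".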

“dyn compatibility” can be powerful

The fact that dyn Trait implements Trait is at times quite powerful. It means for example that I can write an implementation like this one:

struct RcWrapper<T: ?Sized> { r: Rc<RefCell<T>> }

impl<T> Iterator for RcWrapper<T>
where
    T: ?Sized + Iterator,
{
    type Item = T::Item;
    
    fn next(&mut self) -> Option<T::Item> {
        self.r.borrow_mut().next()
    }
}

This impl makes RcWrapper<I> implement Iterator for any type I that implements Iterator, including dyn trait types like RcWrapper<dyn Iterator<Item = u32>>. Neat.
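
For instance, this usage sketch compiles (note the unsizing coercion on the Rc):

use std::{cell::RefCell, rc::Rc};

fn demo() {
    // Coerces `Rc<RefCell<Range<u32>>>` to `Rc<RefCell<dyn Iterator<Item = u32>>>`:
    let r: Rc<RefCell<dyn Iterator<Item = u32>>> = Rc::new(RefCell::new(0_u32..3));
    let mut wrapped = RcWrapper { r };
    assert_eq!(wrapped.next(), Some(0));
}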

“dyn compatibility” doesn’t truly live up to its promise

Powerful as it is, the idea of dyn Trait implementing Trait doesn’t quite live up to its promise. What you really want is that you could replace any impl Trait with dyn Trait and things would work. But that’s just not true because dyn Trait is ?Sized. So actually you don’t get a very “smooth experience”. What’s more, although the compiler gives you a dyn Trait: Trait impl, it doesn’t give you impls for references to dyn Trait – so e.g. given this trait

trait Compute {
    fn compute(&self);
}

If I have a Box<dyn Compute>, I can’t give that to a function that takes an impl Compute:

fn do_compute(i: impl Compute) {
}

fn call_compute(b: Box<dyn Compute>) {
    do_compute(b); // ERROR
}

To make that work, somebody has to explicitly provide an impl like

impl<I> Compute for Box<I>
where
    I: ?Sized + Compute,
{
    fn compute(&self) {
        (**self).compute()
    }
}

and people often don’t.

“dyn compatibility” can be limiting

However, the requirement that dyn Trait implement Trait can be limiting. Imagine a trait like

trait ReportError {
    fn report(&self, error: Error);
    
    fn report_to(&self, error: Error, target: impl ErrorTarget);
    //                                ------------------------
    //                                Generic argument.
}

This trait has two methods. The report method is dyn-compatible, no problem. The report_to method has an impl Trait argument and is therefore generic, so it is not dyn-compatible6 (well, at least not under today’s rules, but I’ll get to that).

(The reason report_to is not dyn compatible: we need to make distinct monomorphized copies tailored to the type of the target argument. But the vtable has to be prepared in advance, so we don’t know which monomorphized version to use.)

And yet, just because report_to is not dyn compatible doesn’t mean that a dyn ReportError would be useless. What if I only plan to call report, as in a function like this?

fn report_all(
    errors: Vec<Error>,
    report: &dyn ReportError,
) {
    for e in errors {
        report.report(e);
    }
}

Rust’s current rules rule out a function like this, but in practice this kind of scenario comes up quite a lot. In fact, it comes up so often that we added a language feature to accommodate it (at least kind of): you can add a where Self: Sized clause to your method to exempt it from dynamic dispatch. This is the reason that Iterator can be dyn compatible even though it has a bunch of generic helper methods like map and flat_map.
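
Applied to the ReportError trait from above, that looks like the following (a sketch, with String and std::fmt::Display standing in for the post’s Error and ErrorTarget types):

trait ReportError {
    fn report(&self, error: String);

    // Exempted from dynamic dispatch, so the trait stays dyn compatible:
    fn report_to(&self, error: String, target: impl std::fmt::Display)
    where
        Self: Sized;
}

fn report_all(errors: Vec<String>, report: &dyn ReportError) {
    for e in errors {
        report.report(e);
    }
}

With report_to exempted, dyn ReportError is allowed under today’s rules, and report_all compiles as written.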

What does all this have to do with AFIDT?

Let me pause here, as I imagine some of you are wondering what all of this “dyn compatibility” stuff has to do with AFIDT. The bottom line is that the requirement that the dyn Trait type implement Trait means that we cannot put any kind of “special rules” on dyn dispatch, and that is not compatible with requiring a .box operator when you call async functions through a dyn trait. Recall that with our Signal trait, you could call the signal method on an impl Signal without any boxing:

async fn send_signal_1(impl_trait: &impl Signal) {
    impl_trait.signal().await;
}

But when I called it on a dyn Signal, I had to write .box to tell the compiler how to deal with the dyn Future that gets returned:

async fn send_signal_2(dyn_trait: &dyn Signal) {
    dyn_trait.signal().box.await;
}

Indeed, the fact that Signal::signal returns an impl Future but <dyn Signal>::signal returns a dyn Future already demonstrates the problem. All impl Future types are known to be Sized and dyn Future is not, so the type signature of <dyn Signal>::signal is not the same as the type signature declared in the trait. Huh.

Associated type values are needed for dyn compatibility

Today I cannot write a type like dyn Iterator without specifying the value of the associated type Item. To see why this restriction is needed, consider this generic function:

fn drop_all<I: ?Sized + Iterator>(iter: &mut I) {
    while let Some(n) = iter.next() {
        std::mem::drop(n);
    }
}

If you invoked drop_all with an &mut dyn Iterator that did not specify Item, how could we know the type of n? We wouldn’t have any idea how much space it needs. But if you invoke drop_all with &mut dyn Iterator<Item = u32>, there is no problem. We don’t know which next method is being called, but we know it’s returning a u32.

Associated type values are limiting

And yet, just as we saw before, the requirement to list associated types can be limiting. If I have a dyn Iterator and I only call size_hint, for example, then why do I need to know the Item type?

fn size_hint(iter: &mut dyn Iterator) -> (usize, Option<usize>) {
    iter.size_hint()
}

But I can’t write code like this today. Instead I have to make this function generic, which basically defeats the whole purpose of using dyn Iterator:

fn size_hint<T>(iter: &mut dyn Iterator<Item = T>) -> (usize, Option<usize>) {
    iter.size_hint()
}

If we dropped the requirement that every dyn Iterator type implements Iterator, we could be more selective, allowing you to invoke methods that don’t use the Item associated type but disallowing those that do.

A proposal for expanded dyn Trait usability

So that brings us to the full proposal to permit dyn Trait in cases where the trait is not fully dyn compatible:

  • dyn Trait types would be allowed for any trait.7
  • dyn Trait types would not require associated types to be specified.
  • Dyn-compatible methods would be exposed as inherent methods on the dyn Trait type. We would disallow access to a method if its signature references associated types not specified on the dyn Trait type.
  • dyn Trait types that specify all of their associated types would be considered to implement Trait if the trait is fully dyn compatible.8

The box keyword

A lot of things get easier if you are willing to call malloc.

– Josh Triplett, recently.

Rust has reserved the box keyword since 1.0, but we’ve never allowed it in stable Rust. The original intention was that the term box would be a generic term to refer to any “smart pointer”-like pattern, so Rc would be a “reference counted box” and so forth. The box keyword would then be a generic way to allocate boxed values of any type; unlike Box::new, it would do “emplacement”, so that no intermediate values were allocated. With the passage of time I no longer think this is such a good idea. But I do see a lot of value in having a keyword to ask the compiler to automatically create boxes. In fact, I see a lot of places where that could be useful.

boxed expressions

The first place is indeed the .box operator that could be used to put a value into a box. Unlike Box::new, using .box would allow the compiler to guarantee that no intermediate value is created, a property called emplacement. Consider this example:

fn main() {
    let x = Box::new([0_u32; 1024]);
}

Rust’s semantics today require (1) allocating a 4KB buffer on the stack and zeroing it; (2) allocating a box in the heap; and then (3) copying memory from one to the other. This is a violation of our Zero Cost Abstraction promise: no C programmer would write code like that. But if you write [0_u32; 1024].box, we can allocate the box up front and initialize it in place.9
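
In context, that would look like this (hypothetical syntax, since .box is not in the language today):

fn main() {
    // Proposed: the array is initialized directly into the heap
    // allocation, with no 4KB stack temporary and no memcpy.
    let x = [0_u32; 1024].box;
}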

The same principle applies to calling functions that return an unsized type. This isn’t allowed today, but we’ll need some way to handle it if we want to have async fn return dyn Future. The reason we can’t naively support it is that, in our existing ABI, the caller is responsible for allocating enough space to store the return value and for passing the address of that space into the callee, who then writes into it. But with a dyn Future return value, the caller can’t know how much space to allocate. So they would have to do something else, like passing in a callback that, given the correct amount of space, performs the allocation. The most common case would be to just pass in malloc.

The best ABI for unsized return values is unclear to me, but we don’t have to solve that right now; the ABI can (and should) remain unstable. But whatever the final ABI becomes, when you call such a function in the context of a .box expression, the result is that the callee creates a Box to store the result.10

boxed async functions to permit recursion

If you try to write an async function that calls itself today, you get an error:

async fn fibonacci(a: u32) -> u32 {
    match a {
        0 => 1,
        1 => 2,
        _ => fibonacci(a-1).await + fibonacci(a-2).await
    }
}

The problem is that we cannot determine statically how much stack space to allocate. The solution is to rewrite the function to return a boxed value. This compiles because each recursive call now gets fresh heap storage as needed.

fn fibonacci(a: u32) -> Pin<Box<impl Future<Output = u32>>> {
    Box::pin(async move {
        match a {
            0 => 1,
            1 => 2,
            _ => fibonacci(a-1).await + fibonacci(a-2).await
        }
    })
}

But wouldn’t it be nice if we could request this directly?

box async fn fibonacci(a: u32) -> u32 {
    match a {
        0 => 1,
        1 => 2,
        _ => fibonacci(a-1).await + fibonacci(a-2).await
    }
}

boxed structs can be recursive

A similar problem arises with recursive structs:

struct List {
    value: u32,
    next: Option<List>, // ERROR
}

The compiler tells you

error[E0072]: recursive type `List` has infinite size
 --> src/lib.rs:1:1
  |
1 | struct List {
  | ^^^^^^^^^^^
2 |     value: u32,
3 |     next: Option<List>, // ERROR
  |                  ---- recursive without indirection
  |
help: insert some indirection (e.g., a `Box`, `Rc`, or `&`) to break the cycle
  |
3 |     next: Option<Box<List>>, // ERROR
  |                  ++++    +

As it suggests, to work around this you can introduce a Box:

struct List {
    value: u32,
    next: Option<Box<List>>,
}

This, though, is kind of weird, because now the head of the list is stored “inline” while subsequent nodes are heap-allocated. I personally usually wind up with a pattern more like this:

struct List {
    data: Box<ListData>
}

struct ListData {
    value: u32,
    next: Option<List>,
}

Now, however, I can’t create values with List { value: 22, next: None } syntax, and I also can’t do pattern matching. Annoying. Wouldn’t it be nice if the compiler could just suggest adding a box keyword when you declare the struct:

box struct List {
    value: u32,
    next: Option<List>, // ok now
}

and have List { value: 22, next: None } automatically allocate the box for me? The ideal is that the presence of the box is completely transparent, so I can construct values, pattern match, and so forth as if it weren’t there:

box struct List {
    value: u32,
    next: Option<List>, // ok now
}

fn foo(list: &List) {
    let List { value, next } = list; // etc
}

boxed enums can be recursive and right-sized

Enums too cannot reference themselves. Being able to declare something like this would be really nice:

box enum AstExpr {
    Value(u32),
    If(AstExpr, AstExpr, AstExpr),
    ...
}

In fact, I still remember when I used Swift for the first time. I wrote a similar enum and Xcode helpfully prompted me, “do you want to declare this enum as indirect?” I remember being quite jealous that it was such a simple edit.

However, there is another interesting thing about a box enum. The way I imagine it, creating an instance of the enum would always allocate a fresh box. This means that the enum cannot be changed from one variant to another without allocating fresh storage. This in turn means that you could allocate that box to exactly the size you need for that particular variant.11 So, for your AstExpr, not only could it be recursive, but when you allocate an AstExpr::Value you only need to allocate space for a u32, whereas a AstExpr::If would be a different size. (We could even start to do “tagged pointer” tricks so that e.g. AstExpr::Value is stored without any allocation at all.)

boxed enum variants to avoid unbalanced enum sizes

Another option would be to have particular enum variants that get boxed but not the enum as a whole:

enum AstExpr {
    Value(u32),
    box If(AstExpr, AstExpr, AstExpr),
    ...
}

This would be useful in cases where you do want to be able to overwrite one enum value with another without necessarily reallocating, but you have enum variants of widely varying size, or some variants that are recursive. A boxed variant would basically be desugared to something like the following:

enum AstExpr {
    Value(u32),
    If(Box<AstExprIf>),
    ...
}

struct AstExprIf(AstExpr, AstExpr, AstExpr);

clippy has a useful lint large_enum_variant that aims to identify this case, but once the lint triggers, it’s not able to offer an actionable suggestion. With the box keyword there’d be a trivial rewrite that requires zero code changes.

box patterns and types

If we’re enabling the use of box elsewhere, we ought to allow it in patterns:

fn foo(s: box Struct) {
    let box Struct { field } = s;
}

Frequently asked questions

Isn’t it unfortunate that Box::new(v) and v.box would behave differently?

Under my proposal, v.box would be the preferred form, since it would allow the compiler to do more optimization. And yes, that’s unfortunate, given that there are 10 years of code using Box::new. Not really a big deal though. In most of the cases we accept today, it doesn’t matter and/or LLVM already optimizes it. In the future I do think we should consider extensions to make Box::new (as well as Rc::new and other similar constructors) be just as optimized as .box, but I don’t think those have to block this proposal.

Is it weird to special case box and not handle other kinds of smart pointers?

Yes and no. On the one hand, I would like the ability to declare that a struct is always wrapped in an Rc or Arc. I find myself doing things like the following all too often:

struct Context {
    data: Arc<ContextData>
}

struct ContextData {
    counter: AtomicU32,
}

On the other hand, box is very special. It’s kind of unique in that it represents full ownership of the contents which means a T and Box<T> are semantically equivalent – there is no place you can use T that a Box<T> won’t also work – unless T: Copy. This is not true for T and Rc<T> or most other smart pointers.

For myself, I think we should introduce box now but plan to generalize this concept to other pointers later. For example I’d like to be able to do something like this…

#[indirect(std::sync::Arc)]
struct Context {
    counter: AtomicU32,
}

…where the type Arc would implement some trait to permit allocating, deref’ing, and so forth:

trait SmartPointer: Deref {
    fn alloc(data: Self::Target) -> Self;
}

The original plan for box was that it would be somehow type overloaded. I’ve soured on this for two reasons. First, type overloads make inference more painful and I think are generally not great for the user experience; I think they are also confusing for new users. Finally, I think we missed the boat on naming. Maybe if we had called Rc something like RcBox<T> the idea of “box” as a general name would have percolated into Rust users’ consciousness, but we didn’t, and it hasn’t. I think the box keyword now ought to be very targeted to the Box type.

How does this fit with the “soul of Rust”?

In my soul of Rust blog post, I talked about the idea that one of the things that make Rust Rust is having allocation be relatively explicit. I’m of mixed minds about this, to be honest, but I do think there’s value in having a property similar to unsafe – like, if allocation is happening, there’ll be a sign somewhere you can find. What I like about most of these box proposals is that they move the box keyword to the declaration – e.g., on the struct/enum/etc – rather than the use. I think this is the right place for it. The major exception, of course, is the “marquee proposal”, invoking async fns in dyn trait. That’s not amazing. But then… see the next question for some early thoughts.

If traits don’t have to be dyn compatible, can we make dyn compatibility opt in?

The way that Rust today automatically detects whether traits are dyn compatible, versus having it be declared, is, I think, not great. It creates confusion for users and also permits quiet semver violations, where a new defaulted method makes a trait no longer be dyn compatible. It’s also been the source of a lot of soundness bugs over time.

I want to move us towards a place where traits are not dyn compatible by default, meaning that dyn Trait does not implement Trait. We would always allow dyn Trait types and we would allow individual items to be invoked so long as the item itself is dyn compatible.

If you want to have dyn Trait implement Trait, you should declare it, perhaps with a dyn keyword:

dyn trait Foo {
    fn method(&self);
}

This declaration would add various default impls. This would start with the dyn Foo: Foo impl:

impl Foo for dyn Foo /*[1]*/ {
    fn method(&self) {
        <dyn Foo>::method(self) // vtable dispatch
    }

    // [1] actually it would want to cover `dyn Foo + Send` etc too, but I'm ignoring that for now
}

But also, if the methods have suitable signatures, it would include some of the impls you really ought to have to make a trait well-behaved with respect to dyn trait:

impl<T> Foo for Box<T> where T: ?Sized + Foo { }
impl<T> Foo for &T where T: ?Sized + Foo { }
impl<T> Foo for &mut T where T: ?Sized + Foo { }

In fact, if you add in the ability to declare a trait as box, things get very interesting:

box dyn trait Signal {
    async fn signal(&self);
}

I’m not 100% sure how this should work, but what I imagine is that dyn Signal would be pointer-sized and implicitly contain a Box behind the scenes. It would probably automatically box the results from async fns when invoked through dyn Trait, so something like this:

impl Signal for dyn Signal {
    async fn signal(&self) {
        <dyn Signal>::signal(self).box.await
    }
}

I didn’t include this in the main blog post but I think together these ideas would go a long way towards addressing the usability gaps that plague dyn Trait today.


  1. Side note: one interesting thing about Rust’s async functions is that their size must be known at compile time, so we can’t permit alloca-like stack allocation. ↩︎

  2. The box keyword is in fact reserved already, but it’s never been used in stable Rust. ↩︎

  3. Hat tip to Michael Goulet (compiler-errors) for pointing out to me that we can model the virtual dispatch as inherent methods on dyn Trait types. Before I thought we’d have to make a more invasive addition to MIR, which I wasn’t excited about since it suggested the change was more far-reaching. ↩︎

  4. In the future, I think we can expand this definition to include some limited functions that use impl Trait in argument position, but that’s for a future blog post. ↩︎

  5. I’ve noticed that many times when I favor a limited version of something to achieve some aesthetic principle I wind up regretting it. ↩︎

  6. At least, it is not dyn compatible under today’s rules. Conceivably it could be made to work but more on that later. ↩︎

  7. This part of the change is similar to what was proposed in RFC #2027, though that RFC was quite light on details (the requirements for RFCs in terms of precision have gone up over the years and I expect we wouldn’t accept that RFC today in its current form). ↩︎

  8. I actually want to change this last clause in a future edition. Instead of having dyn compatibility be determined automatically, traits would declare themselves dyn compatible, which would also come with a host of other impls. But that’s worth a separate post all on its own. ↩︎

  9. If you play with this on the playground, you’ll see that the memcpy appears in the debug build but gets optimized away in this very simple case, but that can be hard for LLVM to do, since it requires reordering an allocation of the box to occur earlier and so forth. The .box operator could be guaranteed to work. ↩︎

  10. I think it would be cool to also have some kind of unsafe intrinsic that permits calling the function with other storage strategies, e.g., allocating a known amount of stack space or what have you. ↩︎

  11. We would thus finally bring Rust enums to “feature parity” with OO classes! I wrote a blog post, “Classes strike back”, on this topic back in 2015 (!) as part of the whole “virtual structs” era of Rust design. Deep cut! ↩︎

William DurandFirefox AI & WebExtensions

I gave an introduction to the Firefox AI runtime and WebExtensions at a local French conference this month. This article is a loose transcript of what I said.

Let’s talk about Firefox, AI, and WebExtensions.

Browser extensions

Browser extensions are tiny applications that modify and/or add features to a web browser. Nowadays, these small programs can be written in such a way that they should be compatible with different browsers.

That’s because there exists a cross-browser system called “WebExtensions”, which – among other things – provides a set of common APIs that browser extensions can use. In addition to that, browsers can also expose their own APIs, and we’ll see that in a moment.

You’ll find a lot more information on this MDN page.

Note: During my talk, I used the Borderify extension to walk the audience through an example of a web extension. I then concluded that it’s super easy to get started but also very powerful. Extensions like uBlock Origin, Dark Reader, 1Password, etc. are rather powerful and sophisticated features.

Firefox AI runtime

Firefox has a new component based on Transformers.js and the ONNX runtime to perform local inference directly in the browser. In short, this runtime makes it possible to use a model from Hugging Face (like GitHub, but for Machine Learning models) directly in Firefox1, without the need to send data to any servers. Every operation is performed on the user’s machine once the model files have been downloaded by Firefox.

While every website could technically load Transformers.js and a model, this isn’t very efficient. Say two websites use the same model: you end up with two copies of the model files, and those aren’t exactly small.

This Firefox component – also known as the Firefox (AI) runtime – addresses this problem by ensuring that models are shared. In addition, this runtime takes care of managing resources, and model inference is isolated from the rest of Firefox in an inference process.

Note: During my talk, I mentioned that – while we call this “AI” now – Mozilla has been doing Machine Learning (ML) for a very long time2. For instance, Firefox Translations isn’t exactly new, and whether you like it or not, this is clearly an application of Generative AI3. Same thing, still.

Anyway, let’s see how we can interact with this runtime. We’re going to generate text that describes an image in the rest of this section.

The “hacker’s approach” is probably to open a Browser console in Nightly, and run some privileged JavaScript code:

const { createEngine } = ChromeUtils.importESModule(
  "chrome://global/content/ml/EngineProcess.sys.mjs"
);

const options = {
  taskName: "image-to-text",
  modelHub: "mozilla",
};
const engine = await createEngine(options);

const [res] = await engine.run({
  args: ["https://williamdurand.fr/images/posts/2014/12/brussels.jpg"],
});
// res => { generated_text: "A large poster of a man on a wall." }

As mentioned previously, the Firefox runtime is based on Transformers.js, which is why the code looks familiar when you know Transformers already. For instance, instead of passing an actual model here, we pass a task name. That’s an abstraction coming from Transformers. Don’t worry, we can also pass a model name and a lot of other (pipeline) options!

For those looking for a more graphical approach to play with this AI runtime, Firefox Nightly provides an about:inference page that looks like this:

Screenshot of the about:inference page

That’s cool but… Why? Well, it turns out this example isn’t a random example. This code snippet is an overly simplified version of a feature in Firefox’s PDF reader (PDF.js ❤️): alt text generation4.

Screenshot of an “Edit alt text” dialog in PDF.js (inside Firefox): the image description has been automatically generated.

Note: During my talk, someone asked a question about the use of GPU, for which I didn’t have the answer. I do have it now, though. The Firefox AI runtime runs on CPU by default, but it is possible to run on GPU via WebGPU. It’s worth mentioning that this runtime doesn’t feel as fast as a more “native” solution (like Ollama) yet but the team at Mozilla is working on it!

Anyhow, let’s move on to the final part of this article.

WebExtensions ML API

The best of both worlds, yay!

We shipped an experimental WebExtensions API to allow extensions to do local inference (docs), leveraging the Firefox AI runtime under the hood. Expect things to evolve, it’s bleeding edge technology!

At the time of writing this, we can rewrite the example from the previous section into “extension code” as follows:

const options = {
  taskName: "image-to-text",
  modelHub: "mozilla",
};
await browser.trial.ml.createEngine(options);

const [res] = await browser.trial.ml.runEngine({
  args: ["https://williamdurand.fr/images/posts/2014/12/brussels.jpg"],
});
// res => { generated_text: "A large poster of a man on a wall." }

This looks similar, right? That’s on purpose. For extension developers, the WebExtensions API namespace is trial.ml, and the associated permission is named trialML, which extensions must request at runtime.

What can we do with that, though? Well, what if we were to provide the alt-text-generation feature not just in PDFs but for any image on any website?

That’s what we have done in a demo extension (code, docs), which we can see in action in the screencast below:

  • At 00:00, the demo extension has been loaded in Firefox Nightly.
  • At 00:02, we open the context menu on an image in the current web page, and we click on “Generate Alt Text”. This menu entry has been added by the extension using the menus API by the way.
  • At 00:05, we can see that Firefox is downloading the model files (the UI is provided by the extension, which receives events from Firefox). This means the model was not used before so the Firefox AI runtime has to download the model files first. This step is only necessary when Firefox doesn’t already have the model used by the extension.
  • At 00:09, the model inference starts.
  • At 00:12, the result of the inference, which is the description of the image in this case, is returned to the extension, and the extension shows it to the user.

Previously, I mentioned that browser extensions can be cross-browser. They can run on different platforms as well. In Bug 1954362, I updated this demo extension so that it can run on Firefox for Android 😎

The screencast below shows the same extension running in Firefox for Android:

  • At 00:00, we can see a dialog because the extension has just been installed.
  • At 00:01, the extension opened a page in a new tab to request permission to interact with the Firefox AI runtime. Requesting permissions ahead of time like this is pretty standard for browser extensions.
  • At 00:06, we load a web page with an image.
  • At 00:10, we use long-press on the image to trigger the extension because the menus API is not supported on Android yet (Bug 1595822).
  • At 00:12, similar to the previous screencast, Firefox starts by downloading the model files. This takes a lot of time because my emulator isn’t exactly fast. Do remember that this is only needed once, though.
  • At 01:09, the model inference starts.
  • At 01:12, the result of the inference, which is – again – the description of the image, is returned to the extension, and the extension shows it to the user.

And that’s basically it.

I am personally looking forward to seeing what extension developers could do with this new capability in Firefox! And since this is related to my work at Mozilla, feel free to get in touch if you have questions.

  1. Mozilla has its own “hub” too, so it isn’t just from Hugging Face. 

  2. I wrote a bit about my use of ML at work in 2019 in this article

  3. Firefox Translations generates text so this can be considered Generative AI. A major difference is that this doesn’t rely on a Large Language Model (LLM). Instead, Translations uses (Marian) Neural Machine Translation (NMT) models and the Bergamot runtime. 

  4. My colleague Tarek wrote an extensive Hacks article about this in 2024

Don Martipower moves, signaling, and a helpful book for understanding Big Tech

I’m still waiting for my copy of Careless People by Sarah Wynn-Williams, so I don’t have anything more on the content of the book than what I have seen in other reviews. The local bookstore had a stack—more than they normally order for new hardcovers—but I hesitated and they were gone next time I went in there. So yes, I am a little behind on this.

But come on, people.

Careless People is a best-seller because Meta decision-makers want it to be a best-seller.

In other Big Tech news, Google is delivering ads for obvious malware, with a landing page featuring an unauthorized copy of one of Google’s own logos. Even worse, they got spotted placing ads on Child Sexual Abuse Material. At first these look like embarrassing self-owns, especially for a company that’s contending for favorable PR in the AI business. Is their AI really that bad at classifying landing pages, extension listings, and the content of sites where their ads appear? The search ad malware thing is particularly egregious—the whole point of the deceptive ads that are all over Google Search now is to impersonate some well-known company. It should be a high school level coding project to filter out some of these.

But Big Tech’s apparent eagerness to appear in bad news makes sense when you look at the results. Out of all the people who read and were outraged by Careless People over the weekend, how many are going to come in to work on Monday and delete their Meta tracking pixel or turn off Meta CAPI? And how many people concerned about Google’s malware, CSAM, and infringing content problems are going to switch to inclusion lists and validated SupplyChain objects and stop with the crappy, often illegal ad placements that Google recommends and legit ad agencies don’t? For Big Tech, doing crimes in an obvious way is a power move, a credible, costly signal. If there were a Meta alternative that didn’t do genocide, or an honest alternative to Google search advertising, then advertising decision-makers would have switched to them already. All these embarrassing-looking stories are a signal: don’t waste your time looking for an alternative to paying us. The publisher’s page for Careless People has a Meta pixel on it.

I do have a book recommendation that might be a little easier to get a hold of. Codes of the Underworld by Diego Gambetta was the weekly book recommendation on A Collection of Unmitigated Pedantry. I’m glad to see that it is still in print, because it’s a useful way to help understand the Big Tech companies. Actions that might not have made sense in a company’s old “create more value than you capture” days are likely to be easier to figure out after understanding the considerations applied by other criminal organizations.

Codes of the Underworld by Diego Gambetta

Criminals have hard-to-satisfy communications needs, such as the need to convey a credible threat to a victim without attracting the attention of enforcers. This is related to the signaling problem faced by honest advertisers, but in reverse. How can a representative of a protection racket indicate to a small business that they represent a true threat, and aren’t just bluffing? Gambetta digs into a variety of signaling problems. It’s a 2009 book, so many of the Big Tech firms were still legit when it came out, but a lot of the communications methods from back then apply to the companies of today.

Is there a solution? As Gambetta points out, real-life organized crime perpetrators tend to copy from the movies, and today they’re copying the “partnership with a friendly government” subplot from The Godfather Part II. Maybe it’s time to watch that movie again.

Related

Update 12 Apr 2025: tante/Jürgen Geuter makes a similar point, in Vulgar Display of Power. It is a display of power: You as an artist, an animator, an illustrator, a writer, any creative person are powerless. We will take what we want and do what we want. Because we can.

imho AI-generated images used to illustrate a blog post (and not specifically to discuss AI images) usually send a louder message than the writing does. Gareth Watkins: AI: The New Aesthetics of Fascism

some ways that Facebook ads are optimized for deceptive advertising

trying to think about European tech policy in context

Advertisers Aren’t Thrilled With Zuckerberg’s Embrace Of Hate Speech by Mike Masnick. (But did they pull their Meta Pixels? Nope—for Meta, the heinous stuff is a flex.)

Bonus links

Meta settles UK right to object to ad-tracking lawsuit by agreeing not to track plaintiff by Natasha Lomas. (No details on how to replicate this result, though.) (discuss)

Google and Its Confederate AI Platforms Want Retroactive Absolution For AI Training Wrapped in the American Flag by Chris Castle. (If only they could be this patriotic when it’s time to pay their damn taxes.)

Privacy-Respecting European Tech Alternatives by Jonah Aragon. [T]he United States certainly does not have a monopoly on the best technologies, and many of our favorite recommended tools come from Europe and all over the world. Tools from the European Union also generally benefit from much stronger data protection laws, thanks to the EU’s General Data Protection Regulation (GDPR). Related: But how to get to that European cloud?

Please stop externalizing your costs directly into my face by Drew DeVault. Whether it’s cryptocurrency scammers mining with FOSS compute resources or Google engineers too lazy to design their software properly or Silicon Valley ripping off all the data they can get their hands on at everyone else’s expense…

Instead of F-35, Portugal turns to Europe in search of new fighter by Ricardo Meier. Melo stated that the predictability of our allies is a greater asset to take into account. We have to believe that, in all circumstances, these allies will be on our side.

Niko MatsakisRust in 2025: Language interop and the extensible compiler

For many years, C has effectively been the “lingua franca” of the computing world. It’s pretty hard to combine code from two different programming languages in the same process–unless one of them is C. The same could theoretically be true for Rust, but in practice there are a number of obstacles that make that harder than it needs to be. Building out silky smooth language interop should be a core goal of helping Rust to target foundational applications. I think the right way to do this is not by extending rustc with knowledge of other programming languages but rather by building on Rust’s core premise of being an extensible language. By investing in building out an “extensible compiler” we can allow crate authors to create a plethora of ergonomic, efficient bridges between Rust and other languages.

We’ll know we’ve succeeded when…

When it comes to interop…

  • It is easy to create a Rust crate that can be invoked from other languages and across multiple environments (desktop, Android, iOS, etc). Rust tooling covers the full story from writing the code to publishing your library.
  • It is easy1 to carve out parts of an existing codebase and replace them with Rust. It is particularly easy to integrate Rust into C/C++ codebases.

When it comes to extensibility…

  • Rust is host to a wide variety of extensions, ranging from custom lints and diagnostics (“clippy as a regular library”) to integration and interop (ORMs, languages) to static analysis and automated reasoning.

Lang interop: the least common denominator use case

In my head, I divide language interop into two core use cases. The first is what I call Least Common Denominator (LCD), where people would like to write one piece of code and then use it in a wide variety of environments. This might mean authoring a core SDK that can be invoked from many languages but it also covers writing a codebase that can be used from both Kotlin (Android) and Swift (iOS) or having a single piece of code usable for everything from servers to embedded systems. It might also be creating WebAssembly components for use in browsers or on edge providers.

What distinguishes the LCD use-case is two things. First, it is primarily unidirectional—calls mostly go from the other language to Rust. Second, you don’t have to handle all of Rust. You really want to expose an API that is “simple enough” that it can be expressed reasonably idiomatically from many other languages. Examples of libraries supporting this use case today are uniffi and diplomat. This problem is not new, it’s the same basic use case that WebAssembly components are targeting as well as old school things like COM and CORBA (in my view, though, each of those solutions is a bit too narrow for what we need).

When you dig in, the requirements for LCD get a bit more complicated. You want to start with simple types, yes, but quickly get people asking for the ability to make the generated wrapper from a given language more idiomatic. And you want to focus on calls into Rust, but you also need to support callbacks. In fact, to really integrate with other systems, you need generic facilities for things like logs, metrics, and I/O that can be mapped in different ways. For example, in a mobile environment, you don’t necessarily want to use tokio to do an outgoing networking request. It is better to use the system libraries since they have special cases to account for the quirks of radio-based communication.

To really crack the LCD problem, you have to solve a few other problems too:

  • It needs to be easy to package up Rust code and upload it into the appropriate package managers for other languages. Think of a tool like maturin, which lets you bundle up Rust binaries as Python packages.
  • For some use cases, download size is a very important constraint. Optimizing for size right now is hard, for starters. What’s worse, your binary has to include code from the standard library, since we can’t expect to find it on the device—and even if we could, we couldn’t be sure it was ABI compatible with the one you built your code with.

Needed: the “serde” of language interop

Obviously, there’s enough here to keep us going for a long time. I think the place to start is building out something akin to the “serde” of language interop: the serde package itself just defines the core trait for serialization and a derive. All of the format-specific details are factored out into other crates defined by a variety of people.
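
As a refresher, here is the shape of that split in serde itself (this example assumes serde with the derive feature plus the serde_json backend crate):

use serde::Serialize; // the core crate: just the traits and the derive

#[derive(Serialize)]
struct Point {
    x: i32,
    y: i32,
}

fn main() {
    // The backend: one of many independently maintained format crates.
    let json = serde_json::to_string(&Point { x: 1, y: 2 }).unwrap();
    assert_eq!(json, r#"{"x":1,"y":2}"#);
}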

I’d like to see a universal set of conventions for defining the “generic API” that your Rust code follows and then a tool that extracts these conventions and hands them off to a backend to do the actual language specific work. It’s not essential, but I think this core dispatching tool should live in the rust-lang org. All the language-specific details, on the other hand, would live in crates.io as crates that can be created by anyone.

Lang interop: the “deep interop” use case

The second use case is what I call the deep interop problem. For this use case, people want to be able to go deep in a particular language. Often this is because their Rust program needs to invoke APIs implemented in that other language, but it can also be that they want to stub out some part of that other program and replace it with Rust. One common example that requires deep interop is embedded developers looking to invoke gnarly C/C++ header files supplied by vendors. Deep interop also arises when you have an older codebase, such as the Rust for Linux project attempting to integrate Rust into their kernel or companies looking to integrate Rust into their existing codebases, most commonly C++ or Java.

Some of the existing deep interop crates focus specifically on the use case of invoking APIs from the other language (e.g., bindgen and duchess) but most wind up supporting bidirectional interaction (e.g., pyo3, napi-rs, and neon). One interesting example is cxx, which supports bidirectional Rust-C++ interop, but does so in a rather opinionated way, encouraging you to make use of a subset of C++’s features that can be readily mapped (in this way, it’s a bit of a hybrid of LCD and deep interop).

Interop with all languages is important. C and C++ are just more so.

I want to see smooth interop with all languages, but C and C++ are particularly important. This is because they have historically been the languages of choice for foundational applications, and hence there is a lot of code that we need to integrate with. Integration with C today in Rust is, in my view, “ok” – most of what you need is there, but it’s not as nicely integrated into the compiler or as accessible as it should be. Integration with C++ is a huge problem. I’m happy to see the Foundation’s Rust-C++ Interoperability Initiative as well as projects like Google’s crubit and of course the venerable cxx.

Needed: “the extensible compiler”

The traditional way to enable seamless interop with another language is to “bake it in”: Kotlin, for example, has very smooth support for invoking Java code, and Swift and Zig can natively build C and C++. I would prefer for Rust to take a different path, one I call the extensible compiler. The idea is to enable interop via, effectively, supercharged procedural macros that can integrate with the compiler to supply type information, generate shims and glue code, and generally manage the details of making Rust “play nicely” with another language.

In some sense, this is the same thing we do today. All the crates I mentioned above leverage procedural macros and custom derives to do their job. But procedural macros today are the “simplest thing that could possibly work”: tokens in, tokens out. Considering how simplistic they are, they’ve gotten us remarkably far, but they also have distinct limitations. Error messages generated by the compiler are not expressed in terms of the macro input but rather the Rust code that gets generated, which can be really confusing; macros are not able to access type information or communicate information between macro invocations; macros cannot generate code on demand, as it is needed, which means that we spend time compiling code we might not need and also that we cannot integrate with monomorphization. And so forth.
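
For context, here is roughly what that interface looks like today: a proc-macro crate (one with proc-macro = true in its Cargo.toml) receives and returns nothing but tokens. This sketch is a do-nothing attribute macro:

use proc_macro::TokenStream;

// "Tokens in, tokens out": the macro sees the item's tokens but
// cannot ask the compiler for type information, share state with
// other invocations, or defer code generation until it is needed.
#[proc_macro_attribute]
pub fn passthrough(_attr: TokenStream, item: TokenStream) -> TokenStream {
    // Real macros parse `item` (typically with the syn crate) and
    // emit modified tokens; this one returns the item unchanged.
    item
}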

I think we should integrate procedural macros more deeply into the compiler.2 I’d like macros that can inspect types, that can generate code in response to monomorphization, that can influence diagnostics3 and lints, and maybe even customize things like method dispatch rules. That would allow people to author crates that provide awesome interop with all those languages, but it would also help people write crates for all kinds of other things. To get a sense for what I’m talking about, check out F#’s type providers and what they can do.

The challenge here will be figuring out how to keep the stabilization surface area as small as possible. Whenever possible I would look for ways to have macros communicate by generating ordinary Rust code, perhaps with some small tweaks. Imagine macros that generate things like a “virtual function” that has an ordinary Rust signature but where the body for a particular instance is constructed by a callback into the procedural macro during monomorphization. And what format should that body take? Ideally, it’d just be Rust code, so as to avoid introducing any new surface area.

Not needed: the Rust Evangelism Task Force

So, it turns out I’m a big fan of Rust. And, I ain’t gonna lie, when I see a prominent project pick some other language, at least in a scenario where Rust would’ve done equally well, it makes me sad. And yet I also know that if every project were written in Rust, that would be so sad. I mean, who would we steal good ideas from?

I really like the idea of focusing our attention on making Rust work well with other languages, not on convincing people Rust is better.4 The easier it is to add Rust to a project, the more people will try it – and if Rust is truly a better fit for them, they’ll use it more and more.

Conclusion: next steps

This post pitched a north star where

  • a single Rust library can be easily used across many languages and environments;
  • Rust code can easily call and be called by functions in other languages;
  • this is all implemented atop a rich procedural macro mechanism that lets plugins inspect type information, generate code on demand, and so forth.

How do we get there? I think there are some concrete next steps:

  • Build out, adopt, or extend an easy system for producing “least common denominator” components that can be embedded in many contexts.
  • Support the C++ interop initiatives at the Foundation and elsewhere. The wheels are turning: tmandry is the point of contact for the project goal there, and we recently held our first lang-team design meeting on the topic (this document is a great read, highly recommended!).
  • Look for ways to extend proc macro capabilities and explore what it would take to invoke them from other phases of the compiler besides just the very beginning.
    • An aside: I also think we should extend rustc to support compiling proc macros to WebAssembly and use that by default. That would allow for strong sandboxing and deterministic execution, as well as easier caching to support faster build times.

  1. Well, as easy as it can be. ↩︎

  2. Rust’s incremental compilation system is pretty well suited to this vision. It works by executing an arbitrary function and then recording what bits of the program state that function looks at. The next time we run the compiler, we can see if those bits of state have changed to avoid re-running the function. The interesting thing is that this function could just as well be part of a procedural macro; it doesn’t have to be built into the compiler. ↩︎

  3. Stuff like the diagnostics tool attribute namespace is super cool! More of this! ↩︎

  4. I’ve always been fond of this article: Rust vs Go, “Why they’re better together”. ↩︎

The Rust Programming Language Blog – Announcing Rust 1.85.1

The Rust team has published a new point release of Rust, 1.85.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.85.1 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website.

What's in 1.85.1

Fixed combined doctest compilation

Due to a bug in the implementation, combined doctests did not work as intended in the stable 2024 Edition. Internal errors with feature stability caused rustdoc to automatically fall back to its “unmerged” method instead, as in previous editions.

Those errors are now fixed in 1.85.1, realizing the performance improvement of combined doctest compilation as intended! See the backport issue for more details, including the risk analysis of making this behavioral change in a point release.
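
For readers unfamiliar with the feature: a doctest is an example embedded in a documentation comment that rustdoc extracts, compiles, and runs as part of cargo test. In the 2024 Edition these tests are merged into a single binary where possible, which is the compile-time win this release restores. A minimal example (my_crate is a placeholder crate name):

/// Adds one to the input.
///
/// ```
/// assert_eq!(my_crate::add_one(1), 2);
/// ```
pub fn add_one(x: i32) -> i32 {
    x + 1
}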

Other fixes

1.85.1 also resolves a few regressions introduced in 1.85.0.

Contributors to 1.85.1

Many people came together to create Rust 1.85.1. We couldn't have done it without all of you. Thanks!

Spidermonkey Development Blog – SpiderMonkey Newsletter (Firefox 135-137)

Hello everyone,

Matthew here from the SpiderMonkey team. As the weather whipsaws from cold to hot to cold, I have elected to spend some time whipping together a too brief newsletter, which will almost certainly not capture the best of what we’ve done these last few months. Nevertheless, onwards!

🧑‍🎓Outreachy

We hosted an Outreachy intern, Serah Nderi, for the most recent Outreachy cycle, with Dan as her mentor. Serah worked on implementing the Iterator.range proposal as well as a few other things. We were happy to host her, and grateful to her for joining. Read about her internship project here.

🥯HYTRADBOI: Have You Tried Rubbing a Database On It

HYTRADBOI is an interesting independent online-only conference, which this year had a strong programming-languages track. Iain from the SpiderMonkey team produced a stellar video talk called A quick ramp-up on ramping up quickly, in which he helps the audience reinvent our baseline interpreter in 10 minutes. The talk is fun and short, so go forth and watch it!

👷🏽‍♀️ New features & In Progress Standards Work

We have done a whole bunch of shipping work this cycle. By far the most important thing is that Temporal has now been shipped on Nightly. We must extend our enormous gratitude to André Bargull, who has been implementing this proposal for years, providing reams of feedback to champions, and making it possible for us to ship so early. We’ve also been working on improving error messages reported to developers, and have a list of “good first bugs” available for people interested in getting started contributing to SpiderMonkey or Firefox.

In addition to Temporal, Dan has worked on shipping a number of our complete proposal implementations, including Atomics.pause.

🚀 Performance

🚉 SpiderMonkey Platform Improvements

Don Marti – privacy laws for slacker states

It has come to my attention that there are still 15 or so states in the USA without privacy laws. This is understandable. We all have a lot of stuff to deal with. And of course there’s the problem of privacy law compliance turning into a time-suck for small businesses. The more the laws and regulations pile up, the harder it is to pick out everything you need to do from all those damn PDFs. And it’s not just small companies. Honda just got around to dealing with some obvious differences between GDPR compliance and CCPA compliance that I pointed out back in 2020. And that’s an old PDF and a big company.

But the good news for slacker states is that doing the most work, cranking out the most lines of code, or the most pages of PDFs, or whatever, does not necessarily produce the best results. Given the amount of work that other states, and jurisdictions like the European Union, have already done on privacy, a slacker state can, right now, get not just the best privacy protection but also save a lot of time and grief for state employees and for business people in your state.

You need two laws. And we know that people are going to print them out, so please keep them short. (Maybe do a printer-ink right-to-refill law next year?)

First, surveillance licenses for Big Tech. This gets you a few benefits.

  • Focus on the riskiest companies with the most money and staff for compliance—don’t put extra work on small local businesses.

  • Save your state’s attorney general and their staff a bunch of time. They’re not Big Tech’s support department. If a Big Tech company drops the ball on user support, just suspend their surveillance license until they clean up their act, the way a problem bar gets its liquor license suspended.

  • You can define surveillance really briefly in the law and make the big out-of-state companies do the work of describing their surveillance practices in their license application.

That one is pretty easy to do as long as you focus purely on inbound data, the surveillance part, and don’t touch anything that sounds like speech from the company to others. And you can push most of the work off onto Big Tech and a new surveillance licensing board. I’m sure every state has people who would be willing to get on one of those.

Second, copy all the details from other states and countries. The other law would be focused on maximum privacy, minimum effort. The goal is to make a law that small business people can comply with, without even reading it, because they already had to do some privacy thing for somewhere else. Two parts.

  • Any privacy feature offered in some other jurisdiction must be offered here, too. A company only breaks the law if someone out-of-state gets a privacy feature that someone in-state doesn’t.

  • This law may be enforced by anyone except a state employee. (Borrow the Texas S.B. 8 legal hack, to protect yourself from Big Tech industry groups trying to block the law by starting an expensive case.)

A small business that operates purely locally can just do their thing. But if they already have some “Your California Privacy Rights” feature or whatever, they just turn it on for this state too. Easier compliance project for the companies, better privacy for the users, no enforcement effort for the state: it’s a win-win-win. After all, state legislators don’t get paid by the page, and we each only get one set of carpal tunnels.

Related

there ought to be a law (Some stuff you can do to look busy next year maybe?)

advertising personalization: good for you?

predictions for 2025

Bonus links

Meta, Apparently, Really Wants Everyone To Read This Book (By Trying To Ban It) by Mike Masnick. Macmillan showed up just long enough to point out the blazingly obvious: they never signed any agreement with Meta and thus can’t be bound by arbitration. The arbitrator, displaying basic common sense, had to admit they had no jurisdiction over Macmillan.

Micah Lee writes, Not only is Substack right-wing broligarchy garbage, it’s way more expensive than Ghost. Substack takes a 10% cut of every transaction, while Ghost doesn’t take any cut at all. Instead, Ghost charges based on the number of newsletter subscribers you have.

AI Search Has A Citation Problem by Klaudia Jaźwińska and Aisvarya Chandrasekar. Chatbots were generally bad at declining to answer questions they couldn’t answer accurately, offering incorrect or speculative answers instead. (related: fix Google Search)

It’s Official: the Cybertruck is More Explosive than the Ford Pinto Update: In case you were wondering, are these sample sizes statistically significant? The resident scientist over at Some Weekend Reading demonstrates: yes they are! Tesla Cybertruck vs Ford Pinto: Which is the Bigger Fire-Trap? (The fatality rate may also be related to the electric doors problem: Testimony Reveals Doors Would Not Open on Cybertruck That Caught Fire in Piedmont, Killing Three. It’s possible that some of the people listed as victims would have survived if they had been able to exit.)

Don Marti – Links for 14 March 2025: autonomous drones in the news

How Ukraine integrates machine vision in battlefield drones by Oleksandr Matviienko, Bohdan Miroshnychenko & Zoriana Semenovych. In November 2024, the government procured 3,000 FPV drones with machine vision and targeting technologies. Reports also suggested that the procurement would be expanded to 10,000 units.

Preparing for the next European war by Azeem Azhar. One challenge will be the simple rate of innovation in the actual battlefield. Drone warfare in Ukraine has shown iteration cycles measuring weeks not years. So any systems procured today need to be future-proofed for those dynamics.

Thread by Trent Telenko The logistical facts are that the FM-MAG machine gun, the 60 mm & 81mm mortars, LAWS, Javelins, any infantry crew served weapon you care to name are all going to be most to fully replaced with drones and drone operators, because of the logistical leverage drones represent on the battlefield.

Long-range drone strikes weakening Russia’s combat ability, senior Ukrainian commander says by Deborah Haynes. Some of the drones are remotely piloted, others work via autopilot. Russia’s war has forced Ukraine to use technology and innovation to fight back against its far more powerful foe. It has accelerated the use of autonomous machines in an irreversible transformation of the warzone that everyone is watching and learning from. Brigadier Shchygol said: Right now, Ukraine’s battlefield experience is essentially a manual for the world.

Ukraine Drives Next Gen Robotic Warfare by Mick Ryan. Another more interesting trend has arisen which will force policy makers and military strategists to undertake an even more careful analysis of Ukraine war trends, and how these trends apply in other theatres, particularly the Pacific. This trend, robotic teaming, has emerged over the past year with the advent on drone-on-drone combat in the air and on the ground. In particular, several recent combat actions in Ukraine provide insights that need to be studied and translated for their employment in the massive ocean expanses, tens of thousands of kilometres of littoral, thousands of large and small islands and at least three continents that constitute the Pacific theatre.

DEEP DIVE: Taiwan miltech aims to undermine Chinese components by Tim Mak. Taiwan has learnt the central tech lesson from the war in Ukraine: the next global conflicts will heavily feature cheap, small drones—and in large numbers. So as an electronics and hardware component giant—especially relative to its size and diplomatic status—it is trying not only to develop a domestic industry, but also become an arsenal for the free world, building drones and devices for allied militaries worldwide.

Why America fell behind in drones, and how to catch up again by Cat Orman and Jason Lu. Also Building Drones for Developers: A uniquely open architecture on the F-11 means that every part of the drone is truly built around the [NVIDIA] Orin [GPU]. This enables sophisticated autonomy applications in which ML models are able to not only analyze data obtained in-flight, but actually use that analysis to inform flight actions in real time.

Mozilla Thunderbird – VIDEO: The Thunderbird Design System

In this month’s Community Office Hours, Laurel Terlesky, Design Manager, is talking about the new Thunderbird Design System. In her talk from FOSDEM, “Building a Cross-Platform, Scalable, Open-Source Design System,” Laurel describes the Thunderbird design journey. If you are interested in how the desktop and mobile apps have gotten their new look, or in the open source design process (and how to take part), this talk is for you!

Next month, we’ll be chatting with Vineet Deo, a Software Engineer on the Desktop team who will walk us through the new Account Hub on the Desktop app. If you want a sneak peek at this new streamlined experience, you can find it in the Daily channel now and the Beta channel starting March 25.

February Office Hours: The Thunderbird Design System

As Thunderbird has grown over the past few years, so has its design needs. The most recent 115 and 128 releases, Supernova and Nebula, have introduced a more modern, streamlined look to the Thunderbird desktop application. Likewise, the Thunderbird for Android app has incorporated Material 3 in its development from the K-9 Mail app. When we begin working on the iOS app, we’ll need to work with Apple’s Human Interface Guidelines. Thus, Laurel and her team have built a design system that provides consistency across our existing and future products. This system’s underlying principles also embrace user choice and privacy while emphasizing human collaboration and high design standards.

Watch, Read, and Get Involved

We’re so grateful to Laurel for joining us! We hope this video helps explain more about how we design our Thunderbird products. Want to know more about this new Thunderbird design system? Want to find out how to contribute to the design process? Watch the video and check out our resources below!

VIDEO (Also on Peertube):

Thunderbird Design Resources:

The post VIDEO: The Thunderbird Design System appeared first on The Thunderbird Blog.

Mozilla Security Blog – Enhancing CA Practices: Key Updates in Mozilla Root Store Policy, v3.0

Mozilla remains committed to fostering a secure, agile, and transparent Web PKI ecosystem. The new Mozilla Root Store Policy (MRSP) v3.0, effective March 15, 2025, introduces critical updates to strengthen Certificate Authority (CA) practices and enhance compliance.

A major focus of MRSP v3.0 is tackling the long-standing challenge of delayed certificate revocation—an issue that has historically weakened the security and reliability of TLS certificate management. The updated policy establishes clearer revocation expectations, improved incident reporting, subscriber education by CAs, revocation planning, and automated certificate issuance to ensure that certificate replacement and revocation can be handled promptly and at scale.

Beyond improving revocation, MRSP v3.0 also introduces policies to move CA operators toward dedicated hierarchies for TLS and S/MIME certificates and to enhance CA private key security with improved lifecycle tracking. All of these updates raise the bar for CA operations, reinforcing security and trust across the broader internet ecosystem.

Addressing Delayed Certificate Revocation

One of the most persistent challenges in certificate management has been ensuring that TLS server certificates are revoked quickly when necessary. Many website operators struggle to replace certificates efficiently, while security advocates emphasize the need for rapid revocation and automated certificate lifecycle management to reduce risks.

To strike the right balance between security, stability, and operational feasibility, MRSP v3.0 introduces several key changes towards clearer and more comprehensive revocation expectations.

No Exceptions to Revocation Requirements

Previously, some CA operators expressed uncertainty about whether Mozilla could grant exceptions to revocation timelines in certain situations. MRSP v3.0 explicitly reiterates that Mozilla does not grant exceptions to the TLS Baseline Requirements for revocation. This will promote more consistent enforcement of revocation policy.

Stronger Subscriber Communication and Contractual Clarity

CA operators must proactively warn subscribers about the risks of relying on publicly trusted certificates in environments that cannot tolerate timely revocation. Additionally, Subscriber Agreements must explicitly require cooperation with revocation timelines, ensuring CA operators can act without unnecessary delays.

Mass Revocation Preparedness

Historically, large-scale certificate revocations have been challenging, leading to operational slowdowns, and ecosystem-wide risks when urgent action is required. To prevent revocation delays, MRSP v3.0 mandates mass revocation readiness to help ensure that CA operators proactively plan for such scenarios. CA operators will be required to develop, maintain, and test comprehensive plans to revoke large numbers of certificates quickly when necessary. And, to further strengthen mass revocation preparedness, MRSP v3.0 introduces a third-party assessment requirement. Assessors will verify that CA operators:

  • Maintain well-documented, actionable plans for large-scale revocation,
  • Demonstrate feasibility through regular testing, and
  • Continuously improve their approach based on lessons learned.

These measures ensure CA operators are fully prepared for high-impact security events.

By strengthening mass revocation preparedness–and investing in CRLite–Mozilla is working to make certificate revocation a reliable security control.

Enhancing Automation in Certificate Issuance and Renewal

Automation plays a critical role in ensuring certificates can be replaced in a timely manner. To further encourage adoption of automation, MRSP v3.0 introduces new requirements for CA operators seeking root inclusion with the “websites” trust bit enabled, including offering automation options for Domain Control Validation (DCV), certificate issuance, and renewal (demonstrated by a publicly accessible test website showing automated certificate replacement at least every 30 days). Test website details must be disclosed in the Common CA Database (CCADB), adding transparency to this requirement. This push for more automation aligns with industry best practices, reducing reliance on manual processes, improving security, and minimizing mismanagement risks.

Phasing Out Dual-Purpose (TLS and S/MIME) Root CAs 

A significant change introduced in MRSP v3.0 is the phase-out of dual-purpose root CAs—those with both the “websites” trust bit and the “email” trust bit enabled. The industry is already moving toward separating TLS and S/MIME hierarchies due to their distinct security needs. Keeping these uses separate at the root certificate level ensures more focused compliance, increases CA agility, reduces complexity, and enhances security.  Going forward, Mozilla’s Root Store will require that new root CA certificates are dedicated to either TLS or S/MIME, and CA operators with existing dual-purpose roots will need to submit a transition plan to Mozilla by April 15, 2026, and complete a full migration to separate roots by December 31, 2028. This move enhances clarity and security by ensuring TLS and S/MIME compliance requirements remain distinct and enforceable.

Strengthening CA Key Security with “Cradle-to-Grave” Monitoring

Another major enhancement in MRSP v3.0 is the introduction of stricter key lifecycle monitoring to protect “parked” CA private keys. A “parked” key is a private key that the CA operator has generated for future use, but not yet used in a CA certificate. MRSP v3.0 adds mandatory reporting of parked key public hashes (corresponding to the parked CA private key) in annual audits. By enforcing transparency and accountability, Mozilla strengthens protections against undetected key compromise or misuse.

Conclusion

MRSP v3.0 represents a major step forward in ensuring stronger CA accountability with more reliable certificate revocation processes, better automation and operational resilience, and enhanced security for CA private keys. In all, these changes help modernize the Web PKI and ensure that CA operations will remain transparent, accountable, and secure.  We encourage you to engage with the Mozilla community and to contribute to these efforts and our shared mission of ensuring a secure and trustworthy online experience for all users.

The post Enhancing CA Practices: Key Updates in Mozilla Root Store Policy, v3.0 appeared first on Mozilla Security Blog.

Firefox Nightly – Turn Tabs To Their Side – These Weeks in Firefox: Issue 177

Highlights

Friends of the Firefox team

Resolved bugs (excluding employees)

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • Fixed a regression related to using the mouse to select text inside an extension options page embedded in about:addons – Bug 1939206
  • Fixed a leak triggered by the private browsing checkbox included in the install dialog (caught by a shutdown-leak failure in one of our tests) – Bug 1943031
  • Starting from Firefox 136, users will be able to undo the installation of a new theme directly from the notification popup – Bug 1931402
    • Thanks to Becca King for contributing this enhancement 🎉
WebExtensions Framework
  • A new background.preferred_environment manifest property has been introduced in Firefox 136 in order to let cross-browser Manifest V3 extensions use a background service worker on Chrome and a background event page on Firefox with the same manifest file – Bug 1930334

DevTools

WebDriver BiDi

Lint, Docs and Workflow

Information Management

New Tab Page

Performance

Performance Tools (aka Firefox Profiler)

Search and Navigation

Address bar Scotch Bonnet Project
  • Improved the feature callout for the Unified Search Button, enabled Scotch Bonnet by default in Beta, and re-enabled Scotch Bonnet in performance tests. (Bug 1937666, Bug 1942357, Bug 1923381)
  • Removed redundant default search engine favicon in the Unified Search Button. (Bug 1908929)
Suggest
  • AMP & Dismissal Fixes: Hooked up AMP-suggestion-matching strategy and fixed dismissals for Merino suggestions when Suggest features are disabled. (Bug 1942222, Bug 1942435)
Address bar
  • Set the deduplication threshold to zero and fixed Ctrl-Alt-b / Ctrl-Alt-f on Mac to properly move the cursor. (Bug 1943946, Bug 1481157)
  • Added exponent support to the calculator and updated the UrlbarUtils icon protocol list. (Bug 1942623, Bug 1943272)
Search 
  • Search improvements with Nimbus: Refactored the Search Nimbus setup to split Search experiments between two Nimbus feature sections – one for search-feature experiments, the other for search-configuration experiments. (Bug 1830056, Bug 1883685, Bug 1942099, Bug 1942658)
Places
  • Added support for exporting interactions via CSV and resolved a process crash caused by missing chrome/resource URLs. (Bug 1940782, Bug 1945152)

Mozilla Thunderbird – Thunderbird Monthly Development Digest – February 2025

Hello again Thunderbird Community! Despite the winter seeming to last forever and the world being in a state of flux, the Thunderbird team has been hard at work both in development and planning strategic projects. Here’s the latest from the team dedicated to making Thunderbird better each day:

Monthly Releases are here!

The concept of a stable monthly release channel has been in discussion for many years, and I’m happy to share that we recently changed the default download on Thunderbird.net to point at our most feature-rich and up-to-date stable version. A lot of work went into this release channel, and for good reason – it brings the very latest in performance and UX improvements to users with a frequent cadence of updates. This means you don’t have to wait a year to benefit from features that have been tested and have already spent time on our more experimental Daily and Beta release channels. Some examples of features that you’ll find on the monthly release channel (but not on ESR) are:

  • Linux System Tray
  • Dark reader Support
  • Folder compaction improvements
  • Hundreds of UI enhancements
  • ICS Import
  • Calendar printing improvements
  • Appearance settings UI
  • Many, many more

Download it over the top of your ESR installation and get the benefits today!

Developing Standards

As privacy and security legislation evolves, the Thunderbird team often finds itself in the heart of discussions that have the potential to define industry solutions to emerging problems. In addition to the previously-mentioned research underway to develop post-quantum encryption support, we’re also currently considering solutions to EU laws (EU NIS2) that require multi-factor authentication be in place for critical digital infrastructure and services. We’re committed to solving these issues in a way that gives users and system administrators other options besides Google & Microsoft, and we’ll be sharing our thoughts on the matter soon, with the resulting decisions documented in our new ADR process.

For now, you can follow a healthy and colourful discussion on the topic of OAuth2 Dynamic Client Registration here.

Calendar UI Rebuild is underway

The long awaited UI/UX rebuild of the calendar has begun, with our first step being a new event dialog that we’re hoping to get into the hands of users on Daily via a preference switch. Turning the pref on will allow the existing calendar interface to launch the new dialog once complete. The following pieces of work have already landed:

  • Dialog container
  • Generic row container
  • Calendar row
  • Close button
  • Generic subview
  • Title

Keep track of feature delivery via the [meta] bug 

Exchange Web Services support in Rust

A big focus for February has been to grow our team, so we’ve been busy interviewing and evaluating the tremendously talented individuals who have stepped forward to show interest in joining the team. In the remaining time, the team has managed to deliver another set of features and is heading toward a release on Daily that will result in most email features being made available for testing. Here’s what landed and started in February:

  • Display refactor
  • Basic testing framework
  • Sync folder – delete
  • Sync folder read/unread
  • Integration testing
  • Complete composition support (reply/forward)

Keep track of feature delivery here.

Account Hub

Since my last update, tasks related to density and font awareness, the Exchange add-on, and keyboard navigation were completed, with the details of each step available to view in our Meta bug & progress tracking. Watch out for this feature being rolled out as the default experience for the Daily build this week and on beta after the next merge on March 25th!

Global Message Database

The New Zealand team are in the middle of a work week to shout at the code together, have a laugh, console each other, and plan out work for the next several weeks. Their focus has been a sprint to prototype the integration of the new database with existing interfaces, with a positive outcome meaning we’re a little closer to producing a work breakdown that paints a more accurate picture of what lies ahead. Onward!

In-App Notifications

Phase 3 of the project is underway to finalize our uplift stack and add in last-minute features! It is expected that our ESR version will have this new feature enabled for a small percentage of users at some point in April. If you use the ESR release, watch out for an introductory notification!

 Meta Bug & progress tracking.

New Features Landing Soon

Several requested features and fixes have reached our Daily users and include…

As usual, if you want to see things as they land, and help us squash some early bugs, you can always check the pushlog and try running daily, which would be immensely helpful for catching things early.

If you’re interested in joining the technical discussion around Thunderbird development, consider joining one or several of our mailing list groups here.

Toby Pilling
Senior Manager, Desktop Engineering

The post Thunderbird Monthly Development Digest – February 2025 appeared first on The Thunderbird Blog.

Firefox Developer Experience – Firefox WebDriver Newsletter 136

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 136 release cycle.

Contributions

Firefox is an open source project, and we are always happy to receive external code contributions to our WebDriver implementation. We want to give special thanks to everyone who filed issues, bugs and submitted patches.

In Firefox 136, several contributors managed to land fixes and improvements in our codebase:

WebDriver code is written in JavaScript, Python, and Rust so any web developer can contribute! Read how to setup the work environment and check the list of mentored issues for Marionette, or the list of mentored JavaScript bugs for WebDriver BiDi. Join our chatroom if you need any help to get started!

General

Bug Fixes

  • Firefox now handles WebSocket port conflicts for the RemoteAgent more efficiently. If the port specified via the --remote-debugging-port command-line argument cannot be acquired within 5 seconds, such as when another Firefox process is already using it, Firefox will now shut down instead of hanging. (See the example invocation after this list.)
  • Navigations using the HTTP scheme, triggered by the WebDriver:Navigate command in Marionette or browsingContext.navigate in WebDriver BiDi, will no longer be automatically upgraded to HTTPS. These requests will now remain on HTTP, as originally intended.
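
For reference, the flag in question is passed when launching Firefox from the command line; the port number below is arbitrary:

firefox --remote-debugging-port 9222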

WebDriver BiDi

Updated: session.subscribe now returns a subscription ID

The return value of the session.subscribe command has been updated to include a session.Subscription, a unique ID identifying the event subscription.

Example: adding a global subscription for network events:

-> {
  "method": "session.subscribe",
  "params": {
    "events": ["network"]
  },
  "id": 1
}

<- {
  "type": "success",
  "id": 1,
  "result": {
    "subscription": "9a29c77b-5e7b-43f1-bc6f-20b5228bf207"
  }
}

New: subscriptions parameter for session.unsubscribe

The subscription IDs can be used with the new subscriptions parameter of session.unsubscribe to remove specific event subscriptions. Previously, it was necessary to unsubscribe using the same attributes provided to session.subscribe, and if several calls to session.unsubscribe were performed from different areas of the test, there could be unexpected side effects.

With the new subscriptions parameter, only the subscriptions matching the provided subscription IDs will be removed. The parameter must be an array of subscription IDs. If any ID doesn’t match a known (and not yet removed) subscription ID, an InvalidArgumentError is raised.

When subscriptions is provided, the events or contexts parameters should not be present, otherwise an InvalidArgumentError will be raised.

Example: adding a second subscription for a specific context:

-> {
  "method": "session.subscribe",
  "params": {
    "events": ["browsingContext.contextCreated"],
    "contexts": ["304a76e3-e177-4355-b598-9f5f02808556"]
  },
  "id": 2
}

<- {
  "type": "success",
  "id": 2,
  "result": {
    "subscription": "e1a54927-dc48-4301-b219-812ceca13496"
  }
}

Now removing both subscriptions at once:

-> {
  "method": "session.unsubscribe",
  "params": {
    "subscriptions": ["9a29c77b-5e7b-43f1-bc6f-20b5228bf207", "e1a54927-dc48-4301-b219-812ceca13496"]
  },
  "id": 3
}

<- { "type": "success", "id": 3, "result": {} }

Using the subscriptions parameter allows clients to remove global and context-specific subscriptions simultaneously, without unexpected side effects.

Deprecated: contexts parameter for session.unsubscribe

It remains possible to globally unsubscribe by using only the events parameter. However, the additional contexts parameter is now deprecated and will be removed in a future version. To remove context-specific subscriptions, please use the subscriptions parameter instead.
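
For example, globally unsubscribing from network events using only the events parameter (continuing the id sequence from the examples above):

-> {
  "method": "session.unsubscribe",
  "params": {
    "events": ["network"]
  },
  "id": 5
}

<- { "type": "success", "id": 5, "result": {} }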

New: userContexts parameter for script.addPreloadScript

The script.addPreloadScript command now supports a userContexts parameter, allowing clients to specify in which user contexts (containers) a script should be automatically loaded — including new browsing contexts created within those user contexts.

Note: If userContexts is provided, the contexts parameter must not be included. Using both at the same time will raise an InvalidArgumentError.

Example: adding a preload script for a specific user context:

-> {
  "method": "script.addPreloadScript",
  "params": {
    "arguments": [], 
    "functionDeclaration": "(arg) => { console.log('Context created in test user context'); }",
    "userContexts": ["528545b6-89d6-486e-a85c-517212654c13"]
  },
  "id": 4
}

<- { "type": "success", "id": 4, "result": { "script": "168cff9c-9de0-44e7-af95-0c478ad2596f" } }

Updated: browsingContext.contextDestroyed event returns full tree

Thanks to Liam’s contribution, the browsingContext.contextDestroyed event now returns the full serialized tree of destroyed contexts — including all its child contexts.

Example: closing a tab with two iframes:

{
  "type": "event",
  "method": "browsingContext.contextDestroyed",
  "params": {
    "children": [
      {
        "children": [],
        "context": "30064771075",
        "originalOpener": null,
        "url": "https://example.bidi.com/frame1",
        "userContext": "1c88a947-3297-4733-9cd7-c16a863a602d"
      },
      {
        "children": [],
        "context": "30064771076",
        "originalOpener": null,
        "url": "about:blank",
        "userContext": "1c88a947-3297-4733-9cd7-c16a863a602d"
      }
    ],
    "context": "ab8c33a7-6bc8-45cc-a068-7cdcef518577",
    "originalOpener": null,
    "url": "https://example.bidi.com/",
    "userContext": "1c88a947-3297-4733-9cd7-c16a863a602d",
    "parent": null
  }
}

Mozilla Addons Blog – Root certificate will expire on 14 March — users need to update Firefox to prevent add-on breakage

UPDATE – 19 March

We’ve discovered a bug impacting some older extensions for users on Firefox 128+ or ESR 115 (in this case even updating Firefox won’t resolve the root certificate expiration). We’re working on fixes that will ship soon.

Otherwise, we recommend keeping Firefox and your extensions up to date. This will resolve the vast majority of root certificate issues.


On 14 March a root certificate (the resource used to prove an add-on was approved by Mozilla) will expire, meaning Firefox users on versions older than 128 (or ESR 115) will not be able to use their add-ons. We want developers to be aware of this in case some of your users are on older versions of Firefox that may be impacted.

Should you see bug reports or negative reviews reflecting the effects of the certificate expiration, we recommend alerting your users to this support article that summarizes the issue and guides them through the process of updating Firefox so their add-ons work again.

The post Root certificate will expire on 14 March — users need to update Firefox to prevent add-on breakage appeared first on Mozilla Add-ons Community Blog.

Niko Matsakis – Rust in 2025: Targeting foundational software

Rust turns 10 this year. It’s a good time to take a look at where we are and where I think we need to be going. This post is the first in a series I’m calling “Rust in 2025”. This first post describes my general vision for how Rust fits into the computing landscape. The remaining posts will outline major focus areas that I think are needed to make this vision come to pass. Oh, and fair warning, I’m expecting some controversy along the way—at least I hope so, since otherwise I’m just repeating things everyone knows.

My vision for Rust: foundational software

I see Rust’s mission as making it dramatically more accessible to author and maintain foundational software. By foundational I mean the software that underlies everything else. You can already see this in the areas where Rust is highly successful: CLI and development tools that everybody uses to do their work and which are often embedded into other tools1; cloud platforms that people use to run their applications2; embedded devices that are in the things around (and above) us; and, increasingly, the kernels that run everything else (both Windows and Linux!).

Foundational software needs performance, reliability—and productivity

The needs of foundational software have a lot in common with all software, but everything is extra important. Reliability is paramount, because when the foundations fail, everything on top fails also. Performance overhead is to be avoided because it becomes a floor on the performance achievable by the layers above you.

Traditionally, achieving the extra-strong requirements of foundational software has meant that you can’t do it with “normal” code. You had two choices. You could use C or C++3, which give great power but demand perfection in response4. Or, you could use a higher-level language like Java or Go, but in a very particular way designed to keep performance high. You have to avoid abstractions and conveniences and minimize allocations so as not to trigger the garbage collector.

Rust changed the balance by combining C++’s innovations in zero-cost abstractions with a type system that can guarantee memory safety. The result is a pretty cool tool, one that (often, at least) lets you write high-level code with low-level performance and without fear of memory safety errors.
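
A tiny illustration of that combination (my own sketch, not from the post): the iterator chain below allocates nothing and typically compiles to the same machine code as a hand-written loop, while the type system guarantees the slice access is safe:

fn sum_of_squares(values: &[i64]) -> i64 {
    // High-level style with no garbage collector and no allocation.
    values.iter().map(|v| v * v).sum()
}

fn main() {
    assert_eq!(sum_of_squares(&[1, 2, 3]), 14);
}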

Empowerment and lowering the barrier to entry

In my Rust talks, I often say that type systems and static checks sound to most developers like “spinach”, something their parents forced them to eat because it was “good for them”, but not something anybody wants. The truth is that type systems are like spinach—Popeye spinach. Having a type system to structure your thinking makes you more effective, regardless of your experience level. If you are a beginner, learning the type system helps you learn how to structure software for success. If you are an expert, the type system helps you create structures that will catch your mistakes faster (as well as those of your less experienced colleagues). Yehuda Katz sometimes says, “When I’m feeling alert, I build abstractions that will help tired Yehuda be more effective”, which I’ve always thought was a great way of putting it.
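
A small, hypothetical example of the kind of structure he means: encode your states as an enum, and the compiler flags every match that must be updated when a state is added:

enum ConnState {
    Idle,
    Connecting,
    Ready,
}

fn describe(state: &ConnState) -> &'static str {
    // Adding a new variant to ConnState turns this match into a
    // compile error until it is handled, so tired-you can't forget.
    match state {
        ConnState::Idle => "idle",
        ConnState::Connecting => "connecting",
        ConnState::Ready => "ready",
    }
}

fn main() {
    println!("{}", describe(&ConnState::Ready));
}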

What about non-foundational software?

When I say that Rust’s mission is to target foundational software, I don’t mean that’s all it’s good for. Projects like Dioxus, Tauri, and Leptos are doing fascinating, pioneering work pushing the boundaries of Rust into higher-level applications like GUIs and web pages. I don’t believe this kind of high-level development will ever be Rust’s sweet spot. But that doesn’t mean I think we should ignore these areas—in fact, quite the opposite.

Stretch goals are how you grow

The traditional thinking goes that, because foundational software often needs control over low-level details, it’s not as important to focus on accessibility and ergonomics. In my view, though, the fact that foundational software needs control over low-level details only makes it more important to try and achieve good ergonomics. Anything you can do to help the developer focus on the details that matter most will make them more productive.

I think projects that stretch Rust to higher-level areas, like Dioxus, Tauri, and Leptos, are a great way to identify opportunities to make Rust programming more convenient. These opportunities then trickle down to make Rust easier to use for everyone. The trick is to avoid losing the control and reliability that foundational applications need along the way (and it ain’t always easy).

Cover the whole stack

There’s another reason to make sure that higher-level applications are pleasant in Rust: it means that people can build their entire stack using one technology. I’ve talked to a number of people who expected just to use Rust for one thing, say a tail-latency-sensitive data plane service, but they wound up using it for everything. Why? Because it turned out that, once they learned it, Rust was quite productive and using one language meant they could share libraries and support code. Put another way, simple code is simple no matter what language you build it in.5

“Smooth, iterative deepening”

The other lesson I’ve learned is that you want to enable what I think of as smooth, iterative deepening. This rather odd phrase is the one that always comes to my mind, somehow. The idea is that a user’s first experience should be simple–they should be able to get up and going quickly. As they get further into their project, the user will find places where it’s not doing what they want, and they’ll need to take control. They should be able to do this in a localized way, changing one part of their project without disturbing everything else.

Smooth, iterative deepening sounds easy but is in fact very hard. Many projects fail either because the initial experience is hard or because the step from simple-to-control is in fact more like scaling a cliff, requiring users to learn a lot of background material. Rust certainly doesn’t always succeed–but we succeed enough, and I like to think we’re always working to do better.

What’s to come

This is the first post of the series. My current plan6 is to post four follow-ups that cover what I see as the core investments we need to make to improve Rust’s fit for foundational software. In my mind, the first three talk about how we should double down on some of Rust’s core values:

  1. achieving smooth language interop by doubling down on extensibility;
  2. extending the type system to achieve clarity of purpose;
  3. leveling up the Rust ecosystem by building out better guidelines, tools, and leveraging the Rust Foundation.

After that, I’ll talk about the Rust open-source organization and what I think we should be doing there to make contributing to and maintaining Rust as accessible and, dare I say it, joyful as we can.


  1. Plenty of people use ripgrep, but did you know that when you do full text search in VSCode, you are also using ripgrep? And of course Deno makes heavy use of Rust, as does a lot of Python tooling, like the uv package manager. The list goes on and on. ↩︎

  2. What do AWS, Azure, CloudFlare, and Fastly all have in common? They’re all big Rust users. ↩︎

  3. Rod Chapman tells me I should include Ada. He’s not wrong, particularly if you are able to use SPARK to prove strong memory safety (and stronger properties, like panic freedom or even functional correctness). But Ada’s never really caught on broadly, although it’s very successful in certain spaces. ↩︎

  4. Alas, we are but human. ↩︎

  5. Well, that’s true if the language meets a certain base bar. I’d say that even “simple” code in C isn’t all that simple, given that you don’t even have basic types like vectors and hashmaps available. ↩︎

  6. I reserve the right to change it as I go! ↩︎

The Servo Blog – This month in Servo: new elements, IME support, delegate API, and more!

Servo now supports more HTML and CSS features:

servoshell showing new support for <details>, <meter>, and <progress> elements, plus layout support for <slot> elements

Plus several new web API features:

<slot> elements are now fully supported, including layout (@simonwuelker, #35220, #35519), and we’ve also landed support for the ‘::slotted’ selector (@simonwuelker, #35352). Shadow roots are now supported in devtools (@simonwuelker, #35294), and we’ve fixed some bugs related to shadow DOM trees (@simonwuelker, #35276, #35338), event handling (@simonwuelker, #35380), and custom elements (@maxtidev, @jdm, #35382).

We’ve landed layout improvements around ‘border-collapse’ (@Loirooriol, #35219), ‘align-content: normal’ (@rayguo17, #35178), ‘place-self’ with ‘position: absolute’ (@Loirooriol, #35208), the intrinsic sizing keywords (@Loirooriol, #35413, #35469, #35471, #35630, #35642, #35663, #35652, #35688), and ‘position: absolute’ now works correctly in a ‘position: relative’ grid item (@stevennovaryo, #35014).

Input has also been improved, with better IME support (@dklassic, #35535, #35623) and several fixes to touch input (@kongbai1996, @jschwe, @shubhamg13, #35450, #35031, #35550, #35537, #35692).

Servo-the-browser (servoshell)

Directory listings are now enabled for local files (@mrobinson, #35317).

servoshell showing a local directory listing

servoshell’s dialogs now use egui (@chickenleaf, #34823, #35399, #35464, #35507, #35564, #35577, #35657, #35671), rather than shelling out to a program like zenity (@chickenleaf, #35674), making them more secure and no longer falling back to terminal input.

egui-based dialogs for alert(), confirm(), prompt(), and HTTP authentication

We’ve also fixed a bug when closing a tab other than the current one (@pewsheen, #35569).

Servo-the-engine (embedding)

We’ve simplified our embedding API by merging all input event delivery into WebView::notify_input_event (@mrobinson, @mukilan, #35430), making bluetooth optional (@jdm, @webbeef, #35479, #35590), making the “background hang monitor” optional (@jdm, #35256), and eliminating the need to depend on webxr (@mrobinson, #35229). We’ve also moved some servoshell-only options out of Opts (@mrobinson, #35377, #35407), since they have no effect on Servo’s behaviour.

We’ve landed our initial delegate-based API (@delan, @mrobinson, @mukilan, #35196, #35260, #35297, #35396, #35400, #35544, #35579, #35662, #35672), which replaces our old message-based API for integrating Servo with your app (@mrobinson, @delan, @mukilan, #35284, #35315, #35366). By implementing WebViewDelegate and ServoDelegate and installing them, you can have Servo call back into your app’s logic with ease.

We’ve simplified the RenderingContext trait (@wusyong, @mrobinson, #35251, #35553) and added three built-in RenderingContext impls (@mrobinson, @mukilan, #35465, #35501), making it easier to set up a context Servo can render to.

We’ve heavily reworked and documented our webview rendering model (@mrobinson, @wusyong, @mukilan, #35522, #35621), moved image output and shutdown logic out of the compositor (@mrobinson, @wusyong, #35538), and removed some complicated logic around synchronous repaints when a window is resized (@mrobinson, #35283, #35277). These changes should make it a lot clearer how to get Servo’s webviews onto your display.

One part of this model that we’re starting to move away from is the support for multiple webviews in one rendering context (@mrobinson, @wusyong, #35536). First landed in #31417, this was an expedient way to add support for multiple webviews, but it imposed some serious limitations on how webviews could be interleaved with other app content, and the performance and security were inadequate.

We’ve updated our winit_minimal example to take advantage of these changes (@webbeef, #35350, #35686), simplify it further (@robertohuertasm, #35253), and fix window resizing (@webbeef, #35691).

Perf and stability

The compositor now notifies the embedder of new frames immediately (@mrobinson, #35369), not via the constellation thread.

Servo’s typical memory usage has been reduced by over 1% thanks to Node object optimisations (@webbeef, #35592, #35554), and we’ve also improved our memory profiler (@webbeef, #35618, #35607).

We’ve fixed a bug causing very high CPU usage on sites like wikipedia.org (@webbeef, #35245), as well as bugs affecting requestAnimationFrame (@mukilan, #35387, #35435).

You can now configure our tracing-based profiler (--features tracing) with servo --tracing-filter instead of SERVO_TRACING (@jschwe, #35370).
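
A usage sketch, assuming a servoshell binary built with --features tracing (the filter value follows the tracing crate’s EnvFilter syntax; the exact invocation shown here is an assumption):

servo --tracing-filter "info" https://example.org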

We’ve continued reducing our use of unsafe in script (@nscaife, @stephenmuss, #35351, #35360, #35367, #35411), and moving parts of script to script_bindings (@jdm, #35279, #35280, #35292, #35457, #35459, #35578, #35620). Breaking up our massive script crate is absolutely critical for reducing Servo’s build times.

We’ve fixed crashes that happen when moving windows past the edge of your monitor (@webbeef, #35235), when unpaired UTF-16 surrogates are sent to the DOM (@mrobinson, #35381), when focusing elements inside shadow roots (@simonwuelker, #35606), and when calling getAsString() on a DataTransferItem (@Gae24, #35699). We’ve also continued working on static analysis that will help catch crashes due to GC borrow hazards (@augustebaum, @yerke, @jdm, @Gae24, #35541, #35593, #35565, #35610, #35591, #35609, #35601, #35596, #35595, #35594, #35597, #35622, #35604, #35616, #35605, #35640, #35647, #35646).

Donations

Thanks again for your generous support! We are now receiving 4363 USD/month (+13.7% over January) in recurring donations. This helps cover the cost of our self-hosted CI runners and Outreachy internships.

Servo is also on thanks.dev, and already 21 GitHub users (+5 over January) that depend on Servo are sponsoring us there. If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.


As always, use of these funds will be decided transparently in the Technical Steering Committee. For more details, head to our Sponsorship page.

Don Marti – Pro tips: links for 9 March 2025

Jason Lefkowitz cövers höw to set up the Cømpose key (and make everything you type awesöme™), in Make special characters stupid easy: meet the compose key

switching.software offers Ethical, easy-to-use and privacy-conscious alternatives to well-known software

Pro tip: avoid generative AI images in blog posts (even if your CMS says you should have one for SEO purposes) unless you want to make a political statement: AI: The New Aesthetics of Fascism by Gareth Watkins

Got third-party tracking scripts or pixels on your site? Avoid legal grief, take them off. Caught with Their Hand in the Cookie Jar: CNN’s Privacy Lawsuit is Served Fresh and the Court is Taking a Bite by Blake Landis. (Highest priority is to get rid of the Meta pixel. That’s not just a pro-evil-dictator tattoo for your web site, it’s really easy for lawyers to check for.)

Add data poisoning for AI scrapers hitting your GitHub Pages site: Trapping AI from the Algorithmic Sabotage Research Group (ASRG)

Got a small business? Like riding bikes? Relocating to the Netherlands with DAFT

If you need an integer and all you have is four 2s, Eli Bendersky has some math advice: Making any integer with four 2s

Nearly a Year Later, Mozilla is Still Promoting OneRep (Part of the Mozilla Monitor Plus service. Protip: check Have I Been Pwned directly)

Why you need a radio (yes, you!) by Audrey Eschright

The Linux kernel project can’t use code from sanctioned countries. Other projects need to check compliance with sanctions, too. US Blocks Open Source ‘Help’ From These Countries by Steven J. Vaughan-Nichols

Jake Archibald covers The case against self-closing tags in HTML (you don’t need <br />, just <br>).

John D. Cook makes rounding numbers much easier (if you use balanced ternary) in A magical land where rounding equals truncation

Understanding the legal issues for small community sites under UK law: #2: Five things you need if you run a small, low-risk user-to-user service by Rachel Coldicutt

Don Martiadvertising personalization: good for you?

A new paper is out, collecting some of the top arguments in favor of personalized advertising: The Intended and Unintended Consequences of Privacy Regulation for Consumer Marketing by Jean-Pierre Dubé, John G. Lynch, Dirk Bergemann, Mert Demirer, Avi Goldfarb, Garrett Johnson, Anja Lambrecht, Tesary Lin, Anna Tuchman, and Catherine E. Tucker. It’s probably going to get cited this privacy law season. But, as an Internet optimist, I’m still not buying the argument that personalized advertising has important benefits that need to be balanced against privacy. Looking at the literature, it is more likely that certain risks are inherent to personalization as such, and that reduced personalization is a bonus benefit of privacy protection rather than a trade-off.

Some notes and links follow.

p. 3 We do not consider legal arguments for consumer privacy as a fundamental right or concerns about access to personal data by malign actors or governments.

Avoiding malign actors is the big reason for restricting personalized ads. And malign actors are numerous. The high-profile national security threats are already in the news, but most people will encounter miscellaneous malware, scams, rip-offs and other lesser villainy enabled by ad personalization more often than they have to deal with state or quasi-state adversaries. There is no hard line between malign actors and totally legit sellers—not only does the personalized ad business have plenty of halfway crooks, you can find n/m-way crooks for arbitrary values of n and m.

Ad personalization gives a bunch of hard-to-overcome advantages to deceptive sellers. Although scams are generally illegal and/or against advertising platform policies, personalization makes the rules easier to evade, as we see with some ways that Facebook ads are optimized for deceptive advertising. Most personalized ads aren’t clustered at the good (high-quality pair of shoes in your size, on sale, next door!) or bad (malware pre-configured for your system) ends of the spectrum. Advertisers at all levels of quality and honesty are present, so any framework for thinking about ad personalization needs to take that variability into account.

p. 3 Some privacy advocates assume, incorrectly, that personalized marketing based on granular consumer data is automatically harmful…

Treating personalized advertising as harmful by default is not an assumption but a useful heuristic, based on both theoretical models and real-world experience. Personally, I don’t pay attention to your ad if it’s personalized to me; it’s as credible as a cold call. But I might pay attention to your ad if it’s run in a place where the editors of sites that cover your industry would see it, or your mom would. Yes, it is possible for professors to imagine a hypothetical world in which personalization is beneficial, but that only works if you make the unrealistic simplifying assumptions that all sellers are honest and that the only impact of personalization is to show people ads that are more or less well matched to them. The theoretical arguments in favor of personalized advertising break down as soon as you level up your economic model to consider the presence of both honest and deceptive advertisers in a market.

See Gardete and Bart, Tailored Cheap Talk: The Effects of Privacy Policy On Ad Content and Market Outcomes: “Our research suggests that another peril of sharing very high quality targeting information with advertisers is that ad content may become less credible and persuasive to consumers.” An advertising medium that allows for personalization is incapable of conveying as much information from an honest seller to a potential buyer as an advertising medium that does not support personalization.

Mustri et al., in Behavioral Advertising and Consumer Welfare, find that products found in behaviorally targeted ads are likely to be associated with lower quality vendors and higher product prices compared to competing alternatives found among search results.

p. 8 Which Consumers Care Most About Privacy, and Do Privacy Policies Unintentionally Favor the Privileged?

Lots of studies show that, basically, some people really want cross-context personalized advertising, some people don’t, and for the largest group in the middle, it depends how you ask (references at the 30-40-30 rule). But the difference in consumer preferences is not about privilege, it’s about information level. See Turow et al., Americans Reject Tailored Advertising and Three Activities That Enable It. That study includes a survey of privacy preferences before and after informing the participants about data practices—and people were more likely to say they do not want tailored advertising after getting the additional information.

In the Censuswide study Who’s In the Know: The Privacy Pulse Report, the experienced advertisers surveyed in the USA (people with 5 or more years of ad experience) were more likely than average to use an ad blocker (66% > 52%), and privacy is now the number one reason for people to use one. It is reasonable for policy-makers to consider the preferences of better-informed people—which is already a thing in fields such as transportation safety and public health.

p. 11 Poorer consumers live in data deserts (Tucker 2023), causing algorithmic exclusion due to missing or fragmented data. This exclusion thwarts marketing outreach and may deprive them of offers, exacerbating data deserts and marginalization.

Instead of speculating about this problem, personalized advertising proponents who are concerned about some people not being tracked enough can already look at other good examples of possibly under-surveilled consumers. Early adopters of privacy tools and preferences are helpfully acting as the experimental group for a study that the surveillance business hasn’t yet run. If people on whom less data is collected are getting fewer win-win offers, then the privacy early adopters should have worse consumer outcomes than people who leave the personalization turned on. For example, Apple iOS users with App Tracking Transparency (ATT) set to allow tracking should be reporting higher satisfaction and doing fewer returns and chargebacks. So far, this does not seem to be happening. (For a related result, see Bian et al., Consumer Surveillance and Financial Fraud: consumers who deliberately placed themselves in a data desert by changing ATT to disallow tracking reported less fraud.)

And there’s little evidence to suggest that if a personalized ad system knows someone to be poor, that they’ll receive more of the kind of legit, well-matched offers that are targeted to the more affluent. Poor people tend to receive more predatory finance and other deceptive offers, so may be better off on average with ads less well matched to their situation.

p. 13 More broadly, without cross-site/app identity, consumers enjoy less free content

This depends on how you measure content and how you define enjoy. The Kircher and Foerderer paper states that, although children’s games for Android got fewer updates on average after a targeted advertising policy change by Google,

Only exceptionally well-rated and demanded games experienced more feature updates, which could be interpreted as a sign of opportunity due to better monetization potential or weakened competition. However, considering that we observed these effects only for games in the highest decile of app quality and demand and given that the median user rating of a game is 4.1 of 5, our findings suggest widespread game abandonment.

By Sturgeon’s Law, a policy change that benefits the top 10% of games but not the bottom 90% (which, in total, account for a small fraction of total installs and an even smaller fraction of gameplay) is a win for the users.

Another relevant paper is Kox, H., Straathof, B., and Zwart, G. (2014). Targeted advertising, platform competition and privacy.

We find that more targeting increases competition and reduces the websites’ profits, but yet in equilibrium websites choose maximum targeting as they cannot credibly commit to low targeting. A privacy protection policy can be beneficial for both consumers and websites.

When both personalized and non-personalized ad impressions are available in the same market, the personalized impressions tend to go for about double the non-personalized. But it doesn’t work to artificially turn off some data collection for a fraction of ad impressions, observe that revenue for those impressions is lower (compared to impressions with the data that are still available), and then extrapolate the revenue difference to a market in which no impressions have the data available.

It is also important to consider the impact of extremely low-quality and/or illegal content in the personalized advertising market. Much of the economic role of ad personalization is not to match the right ad to the right user but to monetize a higher-value user on lower-value content. The surveillance economy is more like the commodification economy. Surveillance advertising companies are willing to pursue content commodification even to the point of taking big reputational risks from feeding ad money to the worst people on the Internet (Hiding in Plain Sight: The Ad-Supported Piracy Ring Driving Over a Billion Monthly Visits - deepsee.io, Senators Decry Adtech Failures as Ads Appear On CSAM Site). If advertising intermediaries were more limited in their ability to put a good ad on a bad site using user tracking, the higher-quality content sites would enjoy significantly increased market power.

p. 14 Restrictions to limit the effectiveness of digital advertising would likely disproportionately disadvantage small businesses, since nine out of ten predominantly use digital advertising, especially on Meta

Are small businesses really better off in the surveillance advertising era? Although personalized Big Tech advertising is the main ad medium available to small businesses today, there is clearly some survivorship bias going on here. The Kerrigan and Keating paper states that, “While entrepreneurship has rebounded since the Great Recession and its aftermath, startup activity remains weak by historical standards.” This period overlaps with the golden age of personalized advertising, after widespread adoption of smartphones but before Apple’s ATT, the EU’s GDPR, and California’s CCPA. If personalized advertising is so good for small businesses, where are the extra small businesses enabled by it? We should have seen a small business boom in the second half of the 2010s, and we didn’t.

Jakob Nielsen may have provided the best explanation in 2006’s Search Engines as Leeches on the Web, which likely applies not just to search, but to other auction-based ad placements like social media advertising. An auction-based advertising platform like those operated by Google and Meta is able to dynamically adjust its advertising rates to capture all of the expected incremental profits from the customers acquired through it.

Part of the missing small business effect may also be caused by platform concentration. If, instead of an advertising duopoly, small businesses had more options for advertising, the power balance between platform (rentier) and small business (entrepreneur) might shift more toward the latter. See also Crawford et al., The antitrust orthodoxy is blind to real data harms. Policy makers might choose to prioritize pro-competition privacy legislation such as surveillance licensing for the largest, riskiest platforms in order to address competition concerns in parallel with privacy ones.

p. 15 Since PETs are costly for firms to implement, forward-looking regulation should consider how to incentivize PET adoption and innovation further.

In a section about how so-called privacy-enhancing technologies (PETs) are perceived as just as privacy-violating as conventional personalization, and raise bigger competition issues, why recommend incentivizing PETs? The works cited would better support a recommendation to have a more detailed or informative consent experience for PETs than for cookie-based tracking. Because PETs obfuscate real-world privacy problems such as fraud and algorithmic discrimination, it would be more appropriate to require additional transparency, and possibly licensing, for PETs.

PETs, despite their mathematical appeal to many at Big Tech firms, have a long list of problems when applied to the real world. The creeped-out attitude of users toward PETs is worth paying attention to, as people who grow up in market economies generally develop good instincts about information in markets—just like people who grow up playing ball games can get good at catching a ball without consciously doing calculus. Policymakers should pay more attention to user perceptions—which are based on real-world market activity—than to mathematical claims about developers’ PET projects. PETs should be considered from the point of view of regulators investigating discrimination and fraud complaints, which are often difficult to spot on large platforms. Because PETs have the effect of shredding the evidence of platform misdeeds, enabling the existing problems of adtech, just in a harder-to-observe way, they need more scrutiny, not incentivization.

Coming soon: a useful large-scale experiment

Policymakers may soon be able to learn from what could be the greatest experiment on the impact of ad personalization ever conducted.

If Meta is required to offer Facebook users in the European Union a meaningfully de-personalized ad experience (and not just the less personalized ads option that still allows for personalization using fraud risk factors like age, gender, and location) then there will be a chance to measure what happens when users can choose personalized or de-personalized ads on a service that is otherwise the same.

Personally, I bet that users with the personalization turned off will have better outcomes as consumers, but we’ll see. I’m pretty confident that personalized ads will turn out to be worse because tools and settings that tend to make personalization less effective have been available for a while, and if choosing the privacy option made you buy worse stuff, the surveillance companies would have said so by now.

Conclusion

I put these links and notes together to help myself out when someone drops a link to the Dubé et al. paper into an Internet argument, and put them up here in the hope that they will help others. Hardly anyone will read all the literature in this field, but a lot of the most interesting research is still found in corners of the library that Big Tech isn’t actively calling attention to.

Thanks to Fengyang Lin for reviewing a draft of this post.

Related

Protecting Privacy, Empowering Small Business: A Path Forward with S.71 by Melanie Ensign, Founder and CEO, Discernible Inc. Small business owner testimony on S.71, the Vermont Data Privacy and Online Surveillance Act.

Mozilla ThunderbirdThunderbird for Android January/February 2025 Progress Report

Hello, everyone, and welcome to the first Android Progress Report of 2025. We’re ready to hit the ground running, improving the Thunderbird for Android experience for all of our users. Our January/February update looks at improvements to the account drawer and folders on our roadmap, gives an update on Google and K-9 Mail, and explores our first step towards Thunderbird on iOS.

Account Drawer Improvements

As we noted in our last post on the blog, improving the account drawer experience is one of our top priorities for development in 2025. We heard your feedback and want to make sure we provide an account drawer that lets you navigate between accounts easily and efficiently. Let’s briefly go into the most common feedback:

  • Accounts on the same domain or with similar names are difficult to distinguish from the two letters shown.
  • It isn’t clear how the account name influences the initials.
  • The icons seem to jump around, which is especially obvious with 3–5 accounts.
  • There is a lot of spacing in the new drawer.
  • Users would like more customization options, such as an account picture or icon.
  • Some users would like to see a broader view that shows the whole account name.
  • With just one account, the accounts sidebar isn’t very useful.

Our design folks are working on some mockups of where the journey is taking us. We’re going to share them on the beta topicbox where you can provide more targeted feedback, but for a sneak peek, here is a medium-fidelity mockup of what the new drawer and settings could look like:

On the technical side, we’ve integrated an image loader for the upcoming pictures. We now need to gradually implement the mockups, beginning with the settings screen changes and then adapting the drawer itself.

Notifications and Error States

Some of you had the feeling your email was not arriving quickly enough. While email delivery is reliable, a few settings in Thunderbird for Android and K-9 Mail aren’t obvious, which leads to confusion. When permissions are not granted, functionality is simply turned off instead of telling the user they need to grant the alarms permission for us to do a regular sync. Or maybe the sync interval is simply set to the default of 1 hour.

We’re still in the process of mapping out the best experience here, but will have more updates soon. See the notifications support article in case you are experiencing issues. A few things we’re aiming for this year:

  • Show an indicator in the foreground service notification when push isn’t working for all configured folders
  • Show more detailed information when the foreground service notification is tapped
  • Move most error messages from system notifications to an in-app area, to clearly identify when there is an error
  • Make authentication errors, certificate errors, and persistent connectivity issues use the new in-app mechanism
  • Make the folder synchronization settings clearer (ever wondered why there is “sync” and “push”, and whether you should have both enabled?)
  • Prompt for permissions when they are needed, such as the aforementioned alarms permission
  • Indicate to the user if permissions are missing for their folder settings
  • Provide a better debug tool in case of notification issues

Road(map) to the Highway

Our roadmap is currently under review from the Thunderbird council. Once we have their final approval, we’ll update the roadmap documentation. While we’re waiting, we would like to share some of the items we’ve proposed:

  • Listening to community feedback on Mozilla Connect and implementing HTML signatures and quick filter actions, similar to the Thunderbird Desktop
  • Backend refactoring work on the messages database to improve synchronization
  • Improving the message display so that you’ll see fewer prompts to download additional messages
  • Adding Android 15 compatibility, which is mainly Edge to Edge support
  • Improving the QR code import defaults (relates to notification settings as well)
  • Making better product decisions by (re-)introducing a limited amount of opt-in telemetry

Does that sound exciting to you? Would you like to be a part of this but don’t feel you have the time? Are you good at writing Android apps in Kotlin and interested in multi-platform work? Well, do I have a treat for you! We’re hiring an Android Senior Software Engineer to work on Thunderbird for Android!

K-9 Mail Blocked from Gmail

We briefly touched on this in the last update as well: some of our users on K-9 Mail have noticed issues with an “App Blocked” error when trying to log into certain Gmail accounts. Google is asking K-9 Mail to go through a new verification process and has introduced some additional requirements that were not needed before. Users that are already logged in or have logged in recently should not be affected currently.

Meeting these requirements depended on several factors beyond our control, so we weren’t able to resolve this immediately.

If you are experiencing this issue on K-9 Mail, the quickest workaround is to migrate to Thunderbird for Android, or check out one of the other options on the support page. Thunderbird for Android uses application keys that have so far not been blocked, and our account import feature will make the transition pretty seamless. For those interested, more technical details can be found in issue 8598.

We’ve been able to make some major progress on this: we have a vendor for the required CASA review and expect the letter of validation to be shared soon. We’re still hitting a wall with Google, as they are giving us inconsistent information on the state of the review and imposing some privacy policy requirements that sound more like they are intended for web apps. We’ve made an effort to clarify this further and hope that Google will accept our revised policy.

If all goes well we’ll get approval by the end of the month, and then need to make some changes to the key distribution so that Thunderbird and K-9 use the intended keys. 

Our Plans for Thunderbird on iOS

If you watched the Thunderbird Community Office Hours for January, you might have noticed us talking about iOS. You heard right – our plans for the Thunderbird iOS app are getting underway! We’ve been working on some basic architectural decisions and plan to publish a barebones repository on GitHub soon. You can expect a readme and some basic tools, but the real work will begin when we’ve hired a Senior Software Engineer who will lead development of a Thunderbird app for the iPhone and iPad. Interviews for some candidates have started and we wish them all the best!

With this upcoming hire, we plan to have alpha code available on TestFlight by the end of the year. To set expectations up front, functionality will be quite basic. A lot of work goes into writing an email application from scratch. We’re going to focus on a basic display of email messages first, and then expand to triage actions. Sending basic emails is also on our list.

FOSDEM

Our team recently attended FOSDEM in Brussels, Belgium. For those unfamiliar with FOSDEM, it’s the Free and Open Source Software Developers’ European Meeting—an event where many open-source enthusiasts come together to connect, share knowledge and ideas, and showcase the projects they’re passionate about.

We received a lot of valuable feedback from the community on Thunderbird for Android. Some key areas of feedback included the need for Exchange support, improvements to the folder drawer, performance enhancements, push notifications (and some confusion around their functionality), and much more.

Our team was highly engaged in listening to this feedback, and we will take all of it into account as we plan our future roadmap. Thunderbird has always been a project developed in tandem with our community and it was exciting for us to be at FOSDEM to connect with our users, contributors and friends.

In other news…

As always, you can join our Android-related mailing lists on TopicBox. And if you want to help us test new features, you can become a beta tester.

This blog post talks a lot about the exciting things we have planned for 2025. We’re also hiring for two positions, and may have a third one later in the year. While our software is free and open source, creating a world-class email application isn’t without cost. If you haven’t already made a contribution in January, please consider supporting our work with a financial contribution. Thunderbird for Android relies entirely on user funding, so without your support we could likely only get to a fraction of what you see here. Making a contribution is really easy if you have Thunderbird for Android or K-9 Mail installed: just head over to the settings and sign up directly from your device.

See you next month,

The post Thunderbird for Android January/February 2025 Progress Report appeared first on The Thunderbird Blog.

Don Martimlp-2025-03-06

  • It is time to make the Online Safety Act 2023 fit for purpose
  • AI: The New Aesthetics of Fascism
  • https://news.sky.com/story/bluesky-13320824
  • CAUGHT WITH THEIR HAND IN THE COOKIE JAR?: CNN’s Privacy Lawsuit is Served Fresh and the Court is Taking a Bite
  • Firefox deletes promise to never sell personal data, asks users not to panic
  • https://nofreeviewnoreview.org/
  • https://algorithmic-sabotage.github.io/asrg/trapping-ai/
  • Stick with the Weirdos: Marie Davidson’s Favourite Books
  • “Emergent Misalignment” in LLMs
  • https://v.st/daft
  • https://futurism.com/microsoft-ceo-ai-generating-no-value
  • A US federal judge dismisses a lawsuit against TikTok and YouTube over “choking challenge” videos, saying the case failed on its merits and citing Section 230
  • The Real Goal of the Trump Economy
  • Making any integer with four 2s
  • Fyre Festival 2 is coming, and it already sounds bananas (and not in a good way)
  • 1,000 artists release ‘silent’ album to protest UK copyright sell-out to AI
  • Why Data Minimization Is A Very Big Deal For Ad Tech
  • https://www.npr.org/2025/02/25/nx-s1-5307965/consumer-confidence-sentiment-inflation-trump-tariffs
  • https://switching.software/
  • DeepSeek goes beyond “open weights” AI with plans for source code release
  • Profiles In Cowardice: The Nobody Saw This Coming Brigade
  • Tesla sales are tanking in Europe. Is Musk to blame?
  • Shadow IT, shadow research, and democratizing research
  • Time for a Reset
  • https://wallethub.com/blog/google-quality-issues-report/147091
  • https://www.joanwestenberg.com/you-dont-have-to-monetize-the-things-you-love/
  • No, Privacy is Not Dead: Beware the All-or-Nothing Mindset
  • Open source LLMs hit Europe’s digital sovereignty roadmap
  • The EU AI Act is Coming to America
  • What the US’ first major AI copyright ruling might mean for IP law
  • This Is the Age of the Coward
  • Meta in Myanmar (full series)
  • Wallfacing
  • Baking Soda Is the Key to Perfectly Browned Ground Beef
  • Ukraine Drives Next Gen Robotic Warfare
  • The prophet of parking
  • Google faked a Gemini AI answer in its Super Bowl commercial
  • ‘I am overwhelmed’: Luigi Mangione sends Valentine’s Day message as he launches fan site
  • Advertisers Are Losing Trust In TAG and MRC After Damning CSAM Report
  • The Worst Place To Show An Ad
  • the decline of kids’ creativity
  • https://newsletter.counteroffensive.pro/p/deep-dive-taiwan-miltech-aims-to-undermine-chinese-components
  • https://webonastick.com/fonts/routed-gothic/
  • ‘The Hardest Working Font in Manhattan’
  • Privacy Loves Company
  • Nearly a Year Later, Mozilla is Still Promoting OneRep
  • Apple’s app tracking privacy framework could fall foul of German antitrust rules
  • A win at last: Big-time blow to AI world in training data copyright scrap
  • Ad Buyers Blast Google, Amazon, and Others After Ads Appear on Site Hosting Child Abuse Content
  • Jason Snell Went There, Calls for iOS to Follow the Mac Model for Software Distribution
  • Why you need a radio (yes, you!)
  • Antoine Beaupré: Qalculate hacks
  • Pop-up Ads in Your Jeep, the Latest Stellantis Innovation
  • How WikiTok Was Created
  • Metal Panels Fall Off of Cybertrucks, Revealing the Limits of a Glued-Together Truck
  • It’s Official: the Cybertruck is More Explosive than the Ford Pinto
  • ‘Torrenting From a Corporate Laptop Doesn’t Feel Right’
  • With the Support of Check My Ads Institute’s Advocacy, Congress Launches Bipartisan Inquiry into Adtech Firms’ Monetization of Child Abuse
  • ICE gaming GOOGLE to create mirage of mass deportations…
  • Lessons of Trump’s First Trade War
  • https://hbr.org/2020/01/advertising-makes-us-unhappy
  • The New DVD Bargain Bin
  • Scrum Doesn’t Say…
  • Why it makes perfect sense for this bike to have two gears and two chains
  • Mob Rule
  • What Happened Here
  • https://www.crummy.com/2025/02/02/0
  • Elon Musk’s X is suing more advertisers over ad ‘boycott’
  • Quoted in Ars Technica’s article on tarpits for AI crawlers
  • Reading newsletters via an RSS reader is still great ↦
  • Advertisers Aren’t Thrilled With Zuckerberg’s Embrace Of Hate Speech
  • US Blocks Open Source ‘Help’ From These Countries
  • “We ran out of columns” - The best, worst codebase
  • The Concrete Club.
  • Ars Technica – Democrat teams up with movie industry to propose website-blocking law
  • Take Control of Your Data: Practical Tips for Data Privacy Week 2025
  • Ukraine prepares to showcase game-changing defense tech innovations at February forum in Kyiv
  • The questions the Chinese government doesn’t want DeepSeek AI to answer
  • People’s Privacy Act Introduced in Washington State
  • dial down
  • Coalition of Jewish groups say they’re leaving X over Musk’s behavior
  • Democrats Flip Trump +21 District in Iowa
  • Can we get the benefits of transitive dependencies without undermining security?
  • https://medium.com/%40colin.fraser/generative-ai-is-a-hammer-and-no-one-knows-what-is-and-isnt-a-nail-4c7f3f0911aa
  • Did you know its only $55 to get a lifetime license to Microsoft Office (that comes with Windows 11 Pro)
  • https://www.adexchanger.com/online-advertising/people-managing-google-ad-campaigns-are-getting-their-accounts-seized-by-scammers/
  • notoriously vague term
  • breaking compatibility while independent alternatives keep on going
  • How a top Chinese AI model overcame US sanctions
  • The case against self-closing tags in HTML
  • A magical land where rounding equals truncation
  • https://blog.logrocket.com/getting-started-pico-css/
  • bargaining
  • Dave’s linkblog feed
  • https://therecord.media/texas-probes-four-more-car-companies-data-collection-sharing
  • “I prefer to meet people where they are” says reasonable-sounding white dude holding court at a table in the back of a Nazi Bar, redux.
  • Authors Seek Meta’s Torrent Client Logs and Seeding Data in AI Piracy Probe
  • https://www.nathanrabin.com/happy-place/2020/11/9/the-short-sad-strange-life-of-mr-delicious
  • How the United States Learned to Love Internet Censorship
  • Supreme Court upholds the TikTok ban.
  • Profiting from addiction
  • Whoops! Facebook trained Llama AI model on pirate site LibGen, with Zuckerberg’s OK
  • Malicious extensions circumvent Google’s remote code ban
  • Bot-ily Harm.
  • Apple’s AI helpfully rewords scam messages to make them look legitimate (https://www.crikey.com.au/2025/01/08/apple-new-artificial-intelligence-rewords-scam-messages-look-legitimate/)
  • https://www.canarymedia.com/articles/virtual-power-plants/sonnen-solrite-to-offer-free-batteries-and-solar-to-texas-homeowners
  • Interim note 5: web media and web dev employment
  • Hotel chain ditches Google search for DuckDuckGo — ‘subjected to fraud attempts daily’
  • https://www.washingtonpost.com/home/2025/01/13/online-shopping-product-roundups/?pwapi_token=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJyZWFzb24iOiJnaWZ0IiwibmJmIjoxNzM2NzQ0NDAwLCJpc3MiOiJzdWJzY3JpcHRpb25zIiwiZXhwIjoxNzM4MTI2Nzk5LCJpYXQiOjE3MzY3NDQ0MDAsImp0aSI6IjA0M2ZlMjY0LTBjYzItNDgyMC04MTlmLTY1Njc5YjNkNTY2YiIsInVybCI6Imh0dHBzOi8vd3d3Lndhc2hpbmd0b25wb3N0LmNvbS9ob21lLzIwMjUvMDEvMTMvb25saW5lLXNob3BwaW5nLXByb2R1Y3Qtcm91bmR1cHMvIn0.vR-rq38Ql_H984trq0Ja79Ft4CqcqERIqCw7IzoN9E4
  • Congestion Tolls versus Congestion Pricing
  • #2: Five things you need if you run a small, low-risk user-to-user service
  • How extensions trick CWS search
  • See What WebAssembly Can Do in 2025
  • Chromium - Policy List
  • More Than Half of All Google Search Takedowns Now Come from Link-Busters
  • My PhD advisor rewrote himself in bash (2010)
  • Why Individual Rights Can’t Protect Privacy
  • https://russ.garrett.co.uk/2024/12/17/online-safety-act-guide/
  • https://natlawreview.com/article/hashing-it-out-jornayas-data-tech-victory-over-cipa-claims
  • https://www.marketingbrew.com/stories/2024/12/09/meta-plans-crackdown-on-health-related-user-data
  • Long-range drone strikes weakening Russia’s combat ability, senior Ukrainian commander says

Spidermonkey Development BlogImplementing Iterator.range in SpiderMonkey

In October 2024, I joined Outreachy as an Open Source contributor and in December 2024, I joined Outreachy as an intern working with Mozilla. My role was to implement the TC39 Range Proposal in the SpiderMonkey JavaScript engine. Iterator.range is a new built-in method proposed for JavaScript iterators that allows generating a sequence of numbers within a specified range. It functions similarly to Python’s range, providing an easy and efficient way to iterate over a series of values:

for (const i of Iterator.range(0, 43)) console.log(i); // 0 to 42

But also things like:

function* even() {
  for (const i of Iterator.range(0, Infinity)) if (i % 2 === 0) yield i;
}

In this blog post, we will explore the implementation of Iterator.range in the SpiderMonkey JavaScript engine.

Understanding the Implementation

When I started working on Iterator.range, the initial groundwork had been done, i.e., adding a preference for the proposal and making the builtin accessible in the JavaScript shell.

At that point, Iterator.range simply returned false, a stub indicating that the actual implementation was not yet written, which is where I came in. As a start, I created a CreateNumericRangeIterator function that Iterator.range delegates to. Following that, I implemented the first three steps of the Iterator.range function. Next, I initialised variables and parameters for the NUMBER-RANGE data type in the CreateNumericRangeIterator function.

I focused on implementing sequences that increase by one, such as Iterator.range(0, 10). Next, I created an IteratorRangeGenerator* function (i.e., step 18 of the Range proposal) which, when called, doesn’t execute immediately but returns a generator object that follows the iterator protocol. Inside the generator function, yield statements mark where the function suspends its execution and provides a value back to the caller. Additionally, I updated the CreateNumericRangeIterator function to invoke IteratorRangeGenerator* with the appropriate arguments, aligning with step 19 of the specification, and added tests to verify its functionality.

The generator pauses at each yield and will not continue until the next method is called on the generator object. The NumericRangeIteratorPrototype (step 27.1.4.2 of the proposal) is the object that holds the iterator prototype for the numeric range iterator. A next() method is added to NumericRangeIteratorPrototype; calling next() on an object created from it doesn’t directly return a value, but makes the generator yield the next value in the series, effectively resuming the suspended generator.

The first time you invoke next() on the generator object created via IteratorRangeGenerator*, the generator will run up to the first yield statement and return the first value. When you invoke next() again, the NumericRangeIteratorNext() will be called.

This method uses GeneratorResume(this), which means the generator will pick up right where it left off, continuing to iterate the next yield statement or until iteration ends.
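
To make this suspend-and-resume flow concrete, here is a rough plain-JavaScript approximation of the generator-based shape. The real implementation is self-hosted inside SpiderMonkey, so the function name and details below are illustrative only, not the actual code:

// Plain-JS sketch of a generator-based range (assumes a positive step).
function* IteratorRangeGenerator(start, end, step) {
  let currentValue = start;
  while (currentValue < end) {
    yield currentValue; // execution suspends here until next() is called again
    currentValue += step;
  }
}

const gen = IteratorRangeGenerator(0, 3, 1);
console.log(gen.next()); // { value: 0, done: false } (runs up to the first yield)
console.log(gen.next()); // { value: 1, done: false }
console.log(gen.next()); // { value: 2, done: false }
console.log(gen.next()); // { value: undefined, done: true }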

Generator Alternative

After discussions with my mentors Daniel and Arai, I transitioned from a generator-based implementation to a more efficient slot-based approach. This change involved defining slots to store the state necessary for computing the next value (a rough sketch of the idea follows the list below). The reasons included:

  • Efficiency: Directly managing iteration state is faster than relying on generator functions.
  • Simplified Implementation: A slot-based approach eliminates the need for generator-specific handling, making the code more maintainable.
  • Better Alignment with Other Iterators: Existing built-in iterators such as StringIteratorPrototype and ArrayIteratorPrototype do not use generators in their implementations.
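
For illustration, here is a minimal plain-JavaScript sketch of the slot-based idea. The class name and properties below are hypothetical stand-ins for the engine’s reserved slots, and the sketch only handles increasing Number ranges. Each value is computed as start + step * count rather than by repeated addition, which avoids accumulating floating-point error across iterations:

// Hypothetical sketch: iterator state lives in plain properties
// (standing in for reserved slots); no generator frame is suspended.
class NumericRangeIteratorSketch {
  constructor(start, end, step) {
    this.start = start;
    this.end = end;
    this.step = step;
    this.currentCount = 0; // stand-in for a reserved slot
  }
  next() {
    // Compute the value from the count instead of accumulating it.
    const value = this.start + this.step * this.currentCount;
    if (value >= this.end) {
      return { value: undefined, done: true };
    }
    this.currentCount++;
    return { value, done: false };
  }
  [Symbol.iterator]() {
    return this;
  }
}

for (const n of new NumericRangeIteratorSketch(0, 10, 2)) {
  console.log(n); // 0, 2, 4, 6, 8
}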

Performance and Benchmarks

To quantify the performance improvements gained by transitioning from a generator-based implementation to a slot-based approach, I ran comparative benchmarks using the same test in the current mozilla-central and in the revision that used the generator-based approach. My benchmark tested two key scenarios:

  • Floating-point range iteration: iterating from 0 to 100,000 with a step of 0.1
  • BigInt range iteration: iterating from 0n to 1,000,000n with a step of 2n

Each test was run 100 times to eliminate anomalies. The benchmark code was structured as follows:

// Benchmark for Number iteration: iterate from 0 to 100,000 with a step
// of 0.1, repeated 100 times
var sum = 0;
for (var i = 0; i < 100; ++i) {
  for (const num of Iterator.range(0, 100000, 0.1)) {
    sum += num;
  }
}
print(sum);

// Benchmark for BigInt iteration: iterate from 0n to 1,000,000n with a
// step of 2n, repeated 100 times
var sum = 0n;
for (var i = 0; i < 100; ++i) {
  for (const num of Iterator.range(0n, 1000000n, 2n)) {
    sum += num;
  }
}
print(sum);

Results

Implementation    Execution Time (ms)    Improvement
Generator-based   8,174.60               -
Slot-based        2,725.33               66.70%

The slot-based implementation completed the benchmark in just 2.7 seconds compared to 8.2 seconds for the generator-based approach. This represents a 66.7% reduction in execution time, or in other words, the optimized implementation is approximately 3 times faster.

Challenges

Implementing BigInt support was straightforward from a specification perspective, but I encountered two blockers:

1. Handling Infinity Checks Correctly

The specification ensures that start is either a Number or a BigInt in steps 3.a and 4.a. However, step 5 states:

  • If start is +∞ or -∞, throw a RangeError.

Despite following this, my implementation still threw an error stating that start must be finite. After investigating, I found that the issue stemmed from using a self-hosted isFinite function.

The specification requires isFinite to throw a TypeError for BigInt, but the self-hosted Number_isFinite returns false instead. This turned out to be more of an implementation issue than a specification issue.

See the GitHub discussion here.

  • Fix: Explicitly check that start is a number before calling isFinite:
// Step 5: If start is +∞ or -∞, throw a RangeError.
if (typeof start === "number" && !Number_isFinite(start)) {
  ThrowRangeError(JSMSG_ITERATOR_RANGE_START_INFINITY);
}

2. Floating Point Precision Errors

When testing floating-point sequences, I encountered an issue where some decimal values were not represented exactly due to JavaScript’s floating-point precision limitations. This caused incorrect test results.

There’s a GitHub issue discussing this in depth. I implemented an approximatelyEqual function to compare values within a small margin of error.

  • Fix: Using approximatelyEqual in tests:
const resultFloat2 = Array.from(Iterator.range(0, 1, 0.2));
approximatelyEqual(resultFloat2, [0, 0.2, 0.4, 0.6, 0.8]);

This function ensures that minor precision errors do not cause test failures, improving floating-point range calculations.
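
A helper along these lines can be quite small. Here is a minimal sketch (the function name comes from the tests above, but the epsilon value and error reporting are my assumptions, not the exact test code):

// Compare two arrays element-wise, tolerating tiny floating-point error.
// The epsilon default is an assumed tolerance for illustration.
function approximatelyEqual(actual, expected, epsilon = 1e-10) {
  if (actual.length !== expected.length) {
    throw new Error(`length mismatch: ${actual.length} vs ${expected.length}`);
  }
  for (let i = 0; i < actual.length; i++) {
    if (Math.abs(actual[i] - expected[i]) > epsilon) {
      throw new Error(`element ${i}: expected ${expected[i]}, got ${actual[i]}`);
    }
  }
}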

Next Steps and Future Improvements

There are different stages a TC39 proposal goes through before it can be shipped. This document shows the different stages that a proposal goes through, from ideation to consumption. The Iterator.range proposal is currently at stage 1, the Draft stage. Ideally, the proposal should advance to stage 3, which means that the specification is stable and no changes to the proposal are expected, although some necessary changes may still occur due to web incompatibilities or feedback from production-grade implementations.

This implementation is currently in its early stages. It is only built in Nightly and disabled by default until the proposal reaches stage 3 or 4 and no further revisions to the specification are expected.

Final Thoughts

Working on the Iterator.range implementation in SpiderMonkey has been a deeply rewarding experience. I learned how to navigate a large and complex codebase, collaborate with experienced engineers, and translate a formal specification into an optimized, real-world implementation. The transition from a generator-based approach to a slot-based one was a significant learning moment, reinforcing the importance of efficiency in JavaScript engine internals.

Beyond technical skills, I gained a deeper appreciation for the standardization process in JavaScript. The experience highlighted how proposals evolve through real-world feedback, and how early-stage implementations help shape their final form.

As Iterator.range continues its journey through the TC39 proposal stages, I look forward to seeing its adoption in JavaScript engines and the impact it will have on developers. I hope this post provides useful insights into SpiderMonkey development and encourages others to contribute to open-source projects and JavaScript standardization efforts.

If you’d like to read more, here are my blog posts that I made during the project: