Browser Elements Part 2: Worklets and Script Runners

March 4, 2024
Chapter 2: Browser Elements

Introduction to Worklets and Script Runners

This post covers the next unique element in the browser that has been adapted for the Google Privacy Sandbox: worklets.  Actually, not worklets per se, but a special version of worklets developed specifically for the Google Privacy Sandbox called script runners, which you can completely miss unless you read the HTML version of the Protected Audiences API spec carefully.  99% of the documentation around the Protected Audiences API uses the term ‘worklets’ when it actually means ‘script runners’.  I have argued with the powers that be that they should convert references to ‘worklets’ in the documentation to ‘script runners’, but have had no luck so far.  My guess is that developers are more familiar with the worklet concept, so referring to script runners in that fashion makes it easier for developers to understand what is happening, even if it means the business folks get confused.  Go figure.

Worklets were introduced in Chrome 61 (2017) specifically for performance-critical tasks related to audio processing, video manipulation, and animation. They:

  • allow for multi-threaded execution off the main JavaScript thread.
  • were designed for tight integration with browser APIs.
  • have restricted capabilities to ensure security and minimize attack vectors.  

The main driver for their development was the need to handle highly specialized tasks within the browser engine with strong security measures for sensitive operations.  

Worklets have been adapted into script runners by the Google Privacy Sandbox for three specific uses:

  • Running auctions
  • Bidding on auctions
  • Reporting on the results of auctions

We deal only superficially with these use cases in this post; it sets the stage for later discussions that delve into script runner functionality in greater detail.  What this post should help you understand is why Google chose worklets and script runners as the best technology to implement those use cases.

To discuss script runners, we have to wend our way first through worklets and their unique features. And before that, there are browser elements called web workers from which worklets were themselves derived. So we start the discussion there.

What are Web Workers?

To understand web workers, it is important to go back in time to the early 2000s.  Web sites were relatively simple then and ran an amount of JavaScript that could be processed in the main thread without unduly impacting the browser’s rendering speed. Over time, developers started to build more computationally expensive applications in the browser, for example large-scale image processing.  The result was an obvious need for some mechanism allowing these computationally expensive elements to run in a way that reduced performance impacts on the main JavaScript thread and maintained an acceptable rendering speed.  The Web Workers API was the solution.  It was developed in the Web Hypertext Application Technology Working Group (WHATWG) in 2009 as part of HTML5.  Web workers are now part of the main HTML specification.

Web workers perform computationally intensive or long-running tasks in a separate thread, improving the responsiveness of the main thread.  They were intended for long-running scripts with high startup performance costs and high per-instance memory costs that should not be interrupted by scripts responding to user-generated interactions.  This allows long-running tasks to execute without yielding computational priority, keeping the web page responsive.  Workers were always considered relatively heavyweight, and they are meant to be used sparingly in any given application.

Figure 1 - A Simple Example of a Web Worker. You define the worker first, and can then send or post messages to the worker as it runs in parallel with the main thread

Web workers are general purpose and handle a wide range of functionalities.  They have no direct access to the DOM, but they can interact with network resources, for example fetching data or making AJAX requests. Communication with the main thread happens primarily through the postMessage call in JavaScript. postMessage requires data to be serialized, which limits the amount of data that can be transferred without impacting performance.  Any effect a worker has on the DOM is indirect, mediated by postMessage, which reduces the risk of a worker manipulating the main page content.
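The serialization step behind postMessage is the structured clone algorithm, which modern browsers (and Node 17+) also expose directly as structuredClone. A minimal sketch of its two key properties, the copy cost and the hard limits on what can cross the boundary:

```javascript
// Sketch: the structured clone algorithm that postMessage uses to move
// data between threads. structuredClone() applies the same algorithm,
// so it can illustrate the behavior without spinning up a worker.

const payload = { pixels: new Uint8Array([1, 2, 3]), meta: { frame: 42 } };

// The receiving side gets a deep copy, not a shared reference.
const copy = structuredClone(payload);
const isCopy = copy !== payload && copy.meta !== payload.meta;

// Functions (and DOM nodes) cannot be cloned, which is one reason a
// worker can never receive live page objects -- only serialized data.
let cloneFailed = false;
try {
  structuredClone({ callback: () => {} });
} catch (e) {
  cloneFailed = true; // DataCloneError
}
```

Because every message is a full copy, shipping large payloads to a worker has a real cost, which is why the text above notes that transfer size impacts performance.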

Besides limitations on DOM access, web workers have other security restrictions that help reduce certain attack vectors:

  • Limited API Access. While they have access to some APIs, they lack access to sensitive APIs like localStorage or geolocation.
  • Same-Origin Policy. Web workers are subject to the same-origin policy, meaning they cannot access resources from different origins unless explicitly allowed.

These relatively limited security restrictions are a major reason why web workers are not adequate for use in the Google Privacy Sandbox.

What are Worklets?

As mentioned in a prior post, worklets are a new concept that was part of the CSS Houdini specification and were released in Chrome 61 in 2017.  Worklets are a lightweight version of web workers geared to specific use cases.  They allow developers to extend the CSS rendering engine to handle custom CSS properties, functions, and animations.  Worklets are similar to web workers in that some types of worklets, specifically audio and animation worklets, can run scripts independent of the main JavaScript execution environment.

Worklets were specifically designed to give developers more control over how browsers render pages, allowing them to extend beyond the limitations of CSS.  Instead of using declarative rules to render a specific element, worklets allow the developer to write code that produces the actual pixels on the page.

Before delving further into worklets, you may be wondering how something designed for managing UI and content elements applies to backend processing functionality like auctions, bidding, and reporting.  This is where things get a bit hazy.  Nowhere online can I find a discussion of how, when, and why worklets began being used for use cases other than rendering.  Yet at some point, developers realized that the enhanced security and isolation provided by worklets, as well as some of their other features, made them the best choice for running processes unrelated to rendering.  You might call this an “off-specification use.”

The best guess regarding how worklet use cases evolved comes from the Chromium documentation and Mozilla main documentation pages on worklets.  The Chromium page identifies four types of worklets broken into two classes:

  • Main thread worklets (Paint Worklets and Layout Worklets): A worklet of this type runs on the main thread.
  • Threaded worklets (Audio Worklets and Animation Worklets): A worklet of this type runs on a worker thread.

The Mozilla main documentation page on worklets, on the other hand, has a table (Table 1) that identifies the following types of worklets:

Table 1 - Types of Worklets in Mozilla Worklets Documentation Page

| API | Description | Location | Specification |
| --- | --- | --- | --- |
| AudioWorklet | For audio processing with custom AudioNodes. | Web Audio render thread | Web Audio API |
| AnimationWorklet | For creating scroll-linked and other high performance procedural animations. | Compositor thread | CSS Animation Worklet API |
| LayoutWorklet | For defining the positioning and dimensions of custom elements. | — | CSS Layout API |
| SharedStorageWorklet | For running private operations on cross-site data, without risk of data leakage. | Main thread | Shared Storage API |
Note: Paint worklets, defined by the CSS Painting API, don't subclass Worklet. They are accessed through a regular Worklet object obtained using CSS.paintWorklet.

Source: https://developer.mozilla.org/en-US/docs/Web/API/Worklet 

Notice the last row of the table - for Shared Storage worklets.  These are part of the Shared Storage API, which is one storage type specifically used by the Google Privacy Sandbox.  We will deep dive into the Shared Storage API in a later post on the Privacy Sandbox’s storage elements.  This is a new API, currently still in draft, that was developed as a complement to storage partitioning, which was described in our last post.

Storage partitioning was designed to reduce the likelihood of cross-site tracking.  The problem with partitioned storage is that there exist legitimate AdTech use cases that require some form of shared storage to implement.  The Shared Storage API (shown as a storage service in our services architecture diagram in a prior post) is used for two very specific purposes in the Google Privacy Sandbox:

  • Reporting data across auctions, advertisers, and publishers in a manner that prevents cross-site leakage.  The worklet uses a number of technologies, including adding noise to the data that is pulled from storage, to prevent recombining data across sites that would allow for cross-site leakage.
  • Rendering of the winning ad from an auction into a fenced frame using cross-site data in a way that limits the potential for mixing data between two entities. The developer uses JavaScript to select a URL (in this case an opaque URL) pointing to ad creative from a list of available ads that were placed in shared storage during the bidding process.   The developer can then use the API to render the ad from the winning bidder into a fenced frame.
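The second use case can be sketched in pure JavaScript. This is modeled loosely on the Shared Storage API's URL-selection pattern, but the registry, class, and data shapes here are simplified stand-ins for illustration, not the actual API surface (the real operation runs inside a shared storage worklet and returns an opaque handle):

```javascript
// Illustrative sketch of the ad-selection operation described above.
// A named operation class is registered, the host invokes it with the
// candidate URLs plus cross-site data, and only an index comes back --
// the embedding page never sees the data that drove the choice.

const operations = new Map();
function register(name, OperationClass) {
  operations.set(name, new OperationClass());
}

class SelectWinningAdOperation {
  run(urls, data) {
    const i = urls.findIndex((u) => u.includes(data.winningBidder));
    return i >= 0 ? i : 0; // fall back to the first candidate
  }
}
register("select-winning-ad", SelectWinningAdOperation);

const candidates = [
  "https://ads.example/creative/dsp-a",
  "https://ads.example/creative/dsp-b",
];
const index = operations
  .get("select-winning-ad")
  .run(candidates, { winningBidder: "dsp-b" });
```

The point of the design is the narrow return channel: the page gets back a selection, not the cross-site signals behind it, which is what limits data mixing between the two entities.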

The intention of the Shared Storage API is to not partition storage by top-frame site, although elements like iFrames and fenced frames would still be partitioned by origin.  How then to prevent cross-site re-identification of users?  Basically, the designers require that data located in shared storage can only be read in a restricted environment that has carefully constructed ways in which the data is shared.  

Thus was born the notion of shared storage worklets, because their fundamental design provides an excellent mechanism to allow shared storage while minimizing the attack surface for potential cross-site re-identification of users.

Chrome 86 introduced shared storage worklets as an experimental feature. They still remain experimental, according to Mozilla.  They allow developers to run private operations on cross-site data without the risk of data leakage. This is particularly useful for scenarios like fenced frames where isolation and privacy are crucial.  As an experimental API, the Shared Storage API has limited documentation (in the W3C draft Shared Storage API specification and the Shared Storage API explainer in the GitHub repository), and its availability and functionality might differ across browsers and could change in the future.

The Shared Storage worklet is the first official indication we have that worklets can do more than just improve the performance of audio and CSS rendering.  We will study it in greater detail in the post about shared storage.  For now, note that extending worklets beyond their original use cases has already been considered and implemented as part of the Privacy Sandbox. 

Unique Features of Worklets

Let’s now turn back to the differences between web workers and worklets.  There are some core differences between the two elements that make worklets the best platform for background processes in the Privacy Sandbox.

  • Worklets have stronger isolation than web workers.  Web workers run in a separate thread, providing isolation from the main thread and other web workers. This prevents JavaScript code running on the main thread from directly modifying data or interfering with the worker's execution. However, workers can still exchange data with the main thread through message passing, affect the DOM indirectly that way, and potentially leak information through side channels. Worklets have far more restricted access to the DOM, significantly reducing the risk of manipulating the main page content or leaking information through DOM elements.
  • Worklets have a reduced API surface.  Worklets restrict access to a number of APIs that are available to web workers and that could provide opportunities for information leakage through side channels.  Table 2 shows the list of restricted APIs and why those restrictions are in place.

Table 2 - API Restrictions in Worklets vs. Web Workers

| API | Web Workers | Worklets | Reason for Restriction in Worklets |
| --- | --- | --- | --- |
| DOM manipulation (e.g., document.getElementById) | Yes | No | Prevents unauthorized modification of the main page content and potential information leakage |
| Access to browser location (e.g., navigator.geolocation) | Yes | No | Protects user location data and prevents tracking activities |
| Access to browser history (e.g., history.pushState) | Yes | No | Protects user browsing history and prevents tracking individuals based on their browsing behavior |
| Access to cookies (e.g., document.cookie) | Yes | No | Protects user data stored in cookies and prevents unauthorized access (this will obviously go away once cookies are deprecated) |
| Full access to fetch API | Yes | Potentially limited access | Has restrictions on specific URLs or data types to prevent data exfiltration |
| Access to certain Web APIs like WebSockets | Yes | Potentially limited access | May be restricted to specific use cases aligned with the worklet's purpose |
| Access to specific web components or custom elements | Yes | No | Prevents interaction with UI elements on the main page, reducing potential information leakage |
| Access to window object properties (e.g., window.navigator) | Yes | Potentially limited access | Restrictions on specific properties that could provide access to sensitive information |

  • Worklets are thread-agnostic. Worklets are not designed to run on a dedicated separate thread, like each worker is. Implementations can run worklets wherever they choose (including on the main thread).  This feature allows the Sandbox to utilize worklets within the main thread without compromising isolation. The reduced need for dedicated worker threads simplifies the isolation management within the Sandbox environment.

    This is important from a performance perspective.  The browser can leverage the main thread's existing resources for less intensive worklets, potentially improving overall responsiveness. 
  • Worklets can have multiple duplicate instances of their global scope created, for the purpose of parallelism. While traditional web workers have a single global scope, worklets allow creating multiple instances of the same global scope. This enables parallelism within a single worklet instance.

    In a later post we will discuss that this can be critically important for auctions and bidding.  It could, for example, allow a bidder to bid on multiple auctions on a single page without having to create separate worklets and the computational and memory overhead they represent.
  • Worklets do not use an event-based API. Instead, classes are registered on the global scope, whose methods are invoked by the user agent. This design choice potentially simplifies the security model for worklets as it reduces the attack surface compared to event-based communication, which involves registering and processing various event listeners.

    This feature is important to the Privacy Sandbox because registering and managing numerous event listeners, potentially across different objects, could allow malicious code to register for events it shouldn't, or poorly designed code to handle them incorrectly, providing a potential side-channel for information leakage.  

    A class-based API, on the other hand, has a well-defined set of methods exposed to the user agent. This reduces the attack surface, as attackers have fewer entry points to exploit vulnerabilities.  In the context of the Google Privacy Sandbox, Sandbox implementations might define specific classes and methods around use cases that would be allowed within the worklet versus other use cases that would not be. This enables fine-grained control over the functionalities available to the worklet, further restricting unauthorized code execution and enhancing security.
  • Worklets have a lifetime for their global object that is specified by the browser vendor. Web worker global objects are typically tied to the worker's lifetime: they terminate when the worker terminates. Unlike web workers, with their more explicit termination model, worklet global object lifetime is defined by the implementation, not the developer. This means the browser vendor determines how long the worklet and its associated data persist.

    This implementation-defined nature can be leveraged by the Privacy Sandbox in specific ways:
  • Controlled Persistence. The Privacy Sandbox might define specific policies for worklet lifetimes within its environment. This could involve:
    • Short-lived worklets. For tasks involving more sensitive or temporary data, the worklet and its global object might be terminated shortly after task completion.  For example, reporting worklets currently have a fixed 50ms time limit for gathering information. There has actually been a request from some of the FOT #1 participants not only to make this fixed time longer, but also to provide a range, so that ad servers whose code called by the reporting worklet has different (more time-consuming) performance characteristics can complete their task.
    • Delegating Time Limits for Specific Use Cases.  Worklets can delegate worklet lifetime to the developer for specific use cases.  This capability is used by the Privacy Sandbox for its auction and bidding services, as auctions and bids have specific timeouts that often differ situationally.
  • Enforced Termination. The Sandbox can enforce stricter termination policies, ensuring worklets and their associated data are not retained for longer than necessary, mitigating potential privacy risks.
  • Worklets behave differently from workers when the browser context changes.   Both workers and worklets, as a rule, have a scope limited to a single browser tab.  If you change tabs, as you might when checking email while reading an article on a publisher site, then both the worker and the worklet can go into background mode and are usually paused.

However, when focus returns to the original browser tab, the worker will typically resume the communication between the main script and the web worker where it left off, depending on its implementation. Worklets, on the other hand, can, and often must, re-initialize or refresh their state when focus returns to the original tab, especially if they rely on elements or data specific to that tab. For a publisher wishing to start a new auction when the browser focus returns to their page, worklets provide a better vehicle than workers.
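The class-based registration model described earlier in this list can be sketched in a few lines. The registry and class names below are illustrative stand-ins (real worklets use entry points such as registerProcessor or registerPaint), but the shape of the contract is the same: author code registers a class, and the user agent, not the script, decides when to instantiate it and which methods to call.

```javascript
// Sketch of the class-based (rather than event-based) worklet model.
// There are no event listeners for code to enumerate or hijack; the
// only exposed surface is the well-defined methods of the class.

const registry = new Map();
function registerWorkletClass(name, cls) {
  registry.set(name, cls);
}

// Author code: register exactly one class under a known name.
registerWorkletClass(
  "gain-processor",
  class {
    process(samples, gain) {
      return samples.map((s) => s * gain);
    }
  }
);

// "User agent" code: instantiate and drive the class on its own schedule.
function hostInvoke(name, method, ...args) {
  const Cls = registry.get(name);
  return new Cls()[method](...args);
}

const out = hostInvoke("gain-processor", "process", [1, 2], 3);
```

Because the host controls instantiation, it can also create several instances of the same global scope for parallelism, or tear them down on whatever lifetime policy the vendor chooses, which is exactly the flexibility the bullets above describe.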

Script Runners are “Worklet Like” But are Not Worklets

Script runners, as their name implies, are a script execution environment.  Superficially they are similar to worklets in that they run scripts independent of the main execution thread with a flexible implementation model.  

However, script runners differ in significant ways that make them “worklet-like” but not actually worklets.  These differences are at a fine-grained technical level.  I will do my best to keep the discussion “high level”, but there is only so much I can do to up-level the discussion and still make the differences understandable.  In all these cases, I will try to provide examples that will make the technical concepts clearer. 

  • Script Runners are scoped to a user agent as they are spun up by an interest group.  Worklets are scoped to a single document.  The Protected Audiences API involves user-agent-level decisions about data access based on interest groups. Script runners, scoped to the user agent, can access information across documents within an interest group for better decision-making. This wouldn't be possible with document-scoped worklets.

    Here’s an example.  You are browsing a news website which wants to access your location data to display personalized news stories.  However, it turns out you're part of a "Privacy Focused" interest group that restricts location sharing.  That restriction doesn’t apply to just a single page; it must be enforced across the publisher’s entire website.  Worklets can’t handle this because they are document-specific and are not scoped to span an entire website.  Script runners, with their scope at the user agent level, can.
  • Script Runners have a more flexible agent cluster allocation model.  An agent cluster refers to a group of processes within the browser that work together to execute specific tasks. These processes are often isolated from each other for security and performance reasons.  Each agent cluster is like its own walled garden.  Scripts and data running in one cluster typically cannot directly access or influence scripts and data in another cluster. This isolation helps prevent malicious code from interfering with other parts of the browser or websites a user visits.

    The agent cluster allocation model defines how scripts and web content are assigned to specific agent clusters for execution.  By default, scripts and content from the same website typically run in the same agent cluster. This ensures some level of coherence for website functionality.

    In worklets, the website code and a script share the same execution environment, potentially allowing the website to glean information about the script’s access to data.  This presents a privacy risk where data about an interest group can be leaked to the browser.

    Protected Audiences utilizes script runners because they have a more flexible allocation model.  The script runner executes in a different agent cluster than the HTML document. This creates a physical separation between website code and the scripts contained in the script runner.  The website cannot directly observe the script runner's actions, making it harder to infer information about your interest groups or data access decisions.
  • Script Runners,  unlike worklets, limit WebIDL interfaces.  Web Interface Definition Language (WebIDL) is a core browser technology that allows coders to define how various scripts and functions can interact.  The Protected Audiences API specifies a set of WebIDL interfaces available to script runners.  Any other WebIDL interfaces are restricted.
  • Script runners have restrictions on ECMAScript APIs.  ECMAScript is a specification that provides standards for writing scripting languages that run in browsers; JavaScript, for example, is an ECMAScript-compliant scripting language.  Worklets have access to a broad set of ECMAScript APIs.  Script runners restrict access to only those ECMAScript APIs needed for data access decisions.  This limits exposure to both security and privacy risks.

    Imagine a script that needs to compare your system’s current date with a specific threshold to determine if location access should be granted based on time-related settings in an interest group. With the ECMAScript limitation in a script runner, the script wouldn't have direct access to the Date object for date manipulation. Instead, the Protected Audiences API might provide a specific function for this purpose within its allowed set of APIs, ensuring controlled access to time-related data.
  • Script runners are not module scripts, and are evaluated as if they were classic scripts.  JavaScript was, and still is in some cases, written in-line in the browser, with code being run sequentially.  Historically, this was a limitation compared to most imperative languages.  ECMAScript 6 introduced the concept of modules to JavaScript.  This made code easier to write and more efficient at runtime, in exchange for allowing more complex interactions within the scripts.  By opting for classic scripts, Protected Audiences script runners maintain a simpler, more controlled execution model that is well-suited for their core task of making secure data access decisions based on interest groups.
  • Script runners have other limits versus traditional HTML to improve isolation.   Without going into a great deal of detail, script runners do not have access to certain standard HTML functionality, in order to provide further isolation and better performance.  These include a lack of event loops, no access to settings objects, and no microtask checkpoints.
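The "reduced global surface" idea running through these bullets can be illustrated with a toy sketch. This is emphatically not how Chromium constructs script runner global scopes; it just shows the principle of evaluating author code with the powerful globals shadowed out and only a purpose-built API handed in (the blocked names and the api object are our own invention, echoing the Date example above):

```javascript
// Toy illustration: evaluate a script with sensitive globals shadowed,
// exposing only an allowlisted capability object. Real script runners
// achieve this at the engine level via restricted WebIDL/ECMAScript
// surfaces, not by parameter shadowing.

function runRestricted(source, allowedApi) {
  // Shadow a few sensitive names so the script sees `undefined` for them.
  const blocked = ["fetch", "Date", "localStorage", "document"];
  const fn = new Function(...blocked, "api", `"use strict"; ${source}`);
  return fn(...blocked.map(() => undefined), allowedApi);
}

// The only capability handed in is a narrow, purpose-built function,
// mirroring how a fixed set of interfaces is exposed to the script.
const api = { isBeforeDeadline: (t) => t < 1000 };

const ok = runRestricted("return api.isBeforeDeadline(500);", api);
const dateBlocked = runRestricted("return typeof Date;", api);
```

Inside the restricted environment, the script can still do its job through the provided api, but a direct reach for Date (or fetch) finds nothing there to exploit.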

So as you can see, script runners look a lot like worklets, but have a substantial number of key differences at a deep technical level.  

According to the leaders of the Protected Audiences API working group, there is currently no plan to have script runners turned into a new “standard” worklet concept in the HTML specification.  So we are on our own when it comes to deciding how much we want to consider them as worklets versus a new species of HTML element.

Upleveling: Why Script Runners and Not Other Elements

What makes script runners the vehicle of choice for auction and bidding functionality versus workers or “pure” worklets?  There are three main areas of concern for the Privacy Sandbox for which script runners provide an excellent platform:

  • Performance
  • Security
  • Data Isolation

We’ll examine each of these in order.

Consistent Performance

As any person familiar with real-time bidding is aware, there can be multiple auctions on a page, with multiple bidders for each auction.  The Google Privacy Sandbox moves the ad server into the browser.  As a result, we now potentially have significant performance issues, since browsers were never designed to handle this kind of real-time processing, and definitely not at scale with tens of bidders or more for each auction.  Because they are based on worklets, script runners are able to run multiple activities in parallel, with script runners being created and closed on different timelines, without impacting the main JavaScript thread.  Each auction would have its own script runner, as would each bidder whose interest groups qualify for the auction.  Web workers were never designed to handle this type of dynamic workload. Moreover, script runners, like worklets, allow for the creation of multiple instances with the same global scope. This enables parallelism within a single script runner instance, which is critically important for auctions and bidding: it could, for example, allow a bidder to bid on multiple auctions on a single page without having to create separate script runners, with the computational and memory overhead they represent.
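The fan-out described above can be sketched synchronously in a few lines. generateBid is the real Protected Audiences entry-point name for buyer logic, but everything else here is simplified for illustration: the actual functions take richer signals, run asynchronously in separate script runners, and the seller scores bids with its own scoreAd logic rather than a simple max.

```javascript
// Simplified sketch of one auction: each buyer's bidding function runs
// in isolation (in reality, each in its own script runner), then the
// seller-side pass picks a winner from the positive bids.

const buyers = [
  { origin: "https://dsp-a.example",
    generateBid: (signals) => ({ bid: signals.base * 1.1, render: "https://dsp-a.example/ad" }) },
  { origin: "https://dsp-b.example",
    generateBid: (signals) => ({ bid: signals.base * 1.4, render: "https://dsp-b.example/ad" }) },
  { origin: "https://dsp-c.example",
    generateBid: () => ({ bid: 0, render: "" }) }, // declines to bid
];

function runAuction(signals) {
  // Each bid is computed without visibility into the other bidders.
  const bids = buyers.map((b) => ({ origin: b.origin, ...b.generateBid(signals) }));
  // Seller-side scoring: here, the highest positive bid simply wins.
  return bids
    .filter((b) => b.bid > 0)
    .reduce((winner, b) => (b.bid > winner.bid ? b : winner));
}

const winner = runAuction({ base: 1.0 });
```

With multiple ad slots on a page, several such auctions run concurrently, and a single bidder's script runner instances can participate in all of them, which is where the parallelism discussed above pays off.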

Much of the work in the early TurtleDove experiments, and now in FOT #1, is centered on optimizing the performance of the auction and bidding script runners.  There is still a very large question mark around how well script runners will scale once we move beyond the 1% of Chrome traffic being tested (proposed for Q3 2024).  It is one of the reasons so much urgent work and testing is happening around server-side auction and bidding functionality in a Trusted Execution Environment.  Over time I do not doubt we will see innovation that pushes more of the browser-side functionality to the server side without impacting the privacy standards the Sandbox is being designed to maintain.

Lastly, script runners allow for consistent performance within the browser when multiple script runners need to run the same functionality.  An example of this was discussed in a particular issue in the FLEDGE GitHub repository.  Certain functions, like private aggregation functions, were initially able to run in the main JavaScript thread (top-level script context) of a script runner.  But in cases where this top-level script ran once across all available script runners for different players in the auction, the effects of the top-level call to these functions in subsequent script runners were undefined and inconsistent.  Moving these functions into the script runner proper provided both better performance and consistency of execution.

Security

One important item not mentioned above about script runners has to do with something called attestation. Candidate organizations and their developers who wish to employ the Google Privacy Sandbox must formally enroll in the Sandbox platform to be allowed to participate.  There is an offline enrollment process with an enrollment form that must be submitted and reviewed by Google.   Additionally, there is a second process, called attestation, which is used to confirm that a participant in the Privacy Sandbox has agreed to use specific APIs according to the rules established by Google.  

Here is an English version of the core privacy attestation from the attestation GitHub repository:

The attesting entity states that it will not use the Privacy Sandbox APIs or services for the purpose of learning that you are the same user across different sites or apps, and that it will not otherwise circumvent the privacy protections of the Privacy Sandbox.

Developers who submit an enrollment form are then sent a file that contains the attestations for the APIs they requested to use.  These are stored in a specific location on their website (e.g. https://www.example.com/.well-known/privacy-sandbox-attestations.json) and checked regularly by Google to ensure they have not been tampered with. We will discuss attestation at length in a later post, but for now it is enough to know that if the calling site has not included the Protected Audiences API in its Privacy Sandbox enrollment and made its attestations, a request to add a script runner of this type will be rejected.
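The well-known location and the enrollment gate can be sketched as follows. The URL pattern is the one given above; the helper names and the attestation-file shape (the enrolled_apis field) are our own invention for illustration, not the actual file schema:

```javascript
// Sketch of the attestation lookup and gate described above. The
// well-known path matches the text; the JSON shape is hypothetical.

function attestationUrlFor(origin) {
  return new URL("/.well-known/privacy-sandbox-attestations.json", origin).href;
}

// Hypothetical gate: a request to create a Protected Audiences script
// runner is rejected unless the caller's enrollment covers that API.
function mayCreateScriptRunner(attestations, api) {
  return Array.isArray(attestations.enrolled_apis) &&
    attestations.enrolled_apis.includes(api);
}

const url = attestationUrlFor("https://www.example.com");
const allowed = mayCreateScriptRunner(
  { enrolled_apis: ["protected-audience"] },
  "protected-audience"
);
```

The browser performs the real version of this check itself, against the file Google has verified, so a site cannot simply claim enrollment it does not have.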

The limitation to a single code module, the WebIDL and ECMAScript restrictions, and the handling of script runners as classic scripts, among other features, also provide protection against sloppy coding or the insertion of additional code modules by bad actors without the knowledge of the script runner's owner.

Isolation

Isolation of user data between ad tech players, to prevent reconstruction of a browser’s identity through cross-site data collection, is always at the heart of anything to do with the Privacy Sandbox.  The much tighter isolation of script runners - no access to the DOM, a reduced API surface, restricted access to geolocation and browser data, a flexible agent cluster allocation model, limits on WebIDL interfaces, to name a few examples - provides a better isolation substrate for Privacy Sandbox functionality.

The fact that script runners, like worklets, can have an explicit lifetime is another critical feature for auction and bidding.  Publishers or SSPs must put time limits on auctions in order to ensure that ads are returned to all available slots within the browser rendering window.  

Conclusion

That was a fairly long discussion, but I hope that after wading through it you now have an understanding as to why this incredibly important new browser element is fundamental to the design of the Google Privacy Sandbox.  We will be revisiting script runners again and again as we talk about how the various product-level APIs are implemented.  So stay tuned.