Core Services of the Google Privacy Sandbox

March 2, 2024
Chapter 1: Introduction

The previous post ended with a high-level diagram of the revised Chrome browser that is adapted for the Google Privacy Sandbox (Figure 1).  In this article, we will explore in more detail the core products and services that form the browser side of the Privacy Sandbox.  Subsequent articles will highlight each element in Figure 1 and explain how it supports and ties into the products and services that need to be delivered (Figure 2).  After that, I will delve deeply into each of the services and how they work, referring only to those API calls that are most critical to understanding.  Lastly, we will tie together the entire current flow of a transaction through these browser elements.  That is how the Privacy Sandbox works today - the server-side elements are still a long way from implementation.  We will therefore cover those elements and their impact on the overall architecture in later articles.

Figure 1- The Browser with Updates for Google Privacy Sandbox

The Core Sandbox Technical APIs/Product Elements

Now it may seem like I am violating my promise not to drill into APIs, but in order to understand the Privacy Sandbox you first need to understand the core product elements, and these products are packaged as APIs with completely separate functions.  Just mentally "remove" the term API as I describe them and you will be able to see them as product names.

There are three core browser-centric products in the overall Google Privacy Sandbox "Suite", with many supporting elements (also defined by APIs).  

  • Topics
  • Protected Audiences
  • Attribution Reporting

There are also two core server-side products that make up the complete suite which we will cover later:

  • Key Management Server (there are at least two in order to provide Multi-party Computation)
  • k-anonymity server

I do not think of the balance of the technologies, such as Fenced Frames or DNS over HTTPS, as "products" per se because they are technologies designed to support the core products, not products in and of themselves.  Many are evolutions of browser standards that already exist, or they are additions to the browser, such as secure Shared Storage, which will be available to more than just the Privacy Sandbox.

Topics API

The Topics API delivers targeting for what are typically thought of as contextual audiences, without cookies, as part of the Privacy Sandbox.  Contextual audiences are relatively easy to create.  You index all the pages on various websites and categorize them by some kind of audience taxonomy.  Then you capture what page a particular browser visits and serve an ad based on the content of that page.

The Topics API goes a bit further. It looks at all the pages a browser visits and algorithmically determines whether the browser "fits" into one or more audiences in a pre-defined taxonomy.  If so, it stores that information in the browser for later targeting during an ad auction. This mechanic is why Google does not consider these contextual audiences, but something more sophisticated. Going forward, I will refer to this type of audience as "topical audiences".
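
To make this concrete, here is a minimal sketch of how a caller (for example, an adTech script on a publisher page) might ask the browser for those topical audiences. The document.browsingTopics() call is the published entry point; the logging and exact result handling here are just my illustration.

```typescript
// A caller (e.g. an adTech iframe) querying the Topics API. The result is
// expected to contain up to one topic per recent epoch, each identified by
// an ID in Google's advertising taxonomy.
async function getTopicalAudiences(): Promise<void> {
  if (!('browsingTopics' in document)) {
    console.log('Topics API not available in this browser.');
    return;
  }
  const topics = await (document as any).browsingTopics();
  for (const t of topics) {
    console.log(`topic ${t.topic} (taxonomy v${t.taxonomyVersion}, model v${t.modelVersion})`);
  }
}
```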

For example, the IAB has a ~1,500 element audience taxonomy that can easily be used for topical targeting.  Google is using a 471-element taxonomy as part of the Topics API.  If you were to ask me why Google is not using the IAB taxonomy to provide consistent contextual targeting across Google, publisher sites, and other third-party adTech platforms, I would hazard that the answer lies in the need to maintain k-anonymity for purposes of complying with privacy requirements.  In general, an audience must have at least 50-100 members for it to be considered sufficiently anonymous for targeting purposes.  A too fine-grained taxonomy makes it difficult to create a large enough audience to meet the anonymity requirement at a time when you are only testing on 1% of all Internet traffic. 

The Topics API evolved from what I consider the first true "product" to emerge from the process that led to today's Google Privacy Sandbox: Federated Learning of Cohorts, or FLoC.  Federated learning is a data science approach that allows PII (or any) data to reside remotely (in this case in the browser) and, when needed, be sent to a central server in anonymized form to update the weights of an algorithm.  The updated weights are then sent back to the remote location, and the algorithm is run against the local data.

Google came up with an approach that used federated learning to create topical audiences.  A cohort was a short name shared by a large number (thousands) of people, derived from their browsing histories. The browser would update the cohort over time as its user viewed pages on the web.  In FLoC, the browser used a local algorithm to develop a cohort based on the sites that an individual visited. However, the weights used by the algorithm for each feature were centrally calculated: the browser's local data was sent in anonymized form to a secure server that "federated" the data to generate new weights. The new weights were then returned to the browser and used to algorithmically update the browser's inclusion in a specific audience based on its ongoing behavior. The central idea that maintained privacy was that these input features to the algorithm, including the web history, were kept only in the browser and were not uploaded elsewhere. The browser exposed only the generated cohort to publishers and advertisers - not any of the user's browsing data, not the algorithm, and not the feature weights.
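
For historical context, the FLoC origin trial exposed this through a single call, document.interestCohort(), which returned only the cohort identifier. The sketch below shows roughly how a site called it; the API has since been removed from Chrome, and the exact result shape shown is from my recollection of the origin-trial documentation.

```typescript
// The now-removed FLoC call: the page saw only the cohort identifier, never
// the browsing history, the algorithm, or the feature weights.
async function getFlocCohort(): Promise<void> {
  if (!('interestCohort' in document)) {
    console.log('FLoC is not available (the API has been removed).');
    return;
  }
  const { id, version } = await (document as any).interestCohort();
  console.log(`cohort ${id}, algorithm version ${version}`);
}
```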

The FLoC API was developed in 2019 - 2020 and tested in 2021.  Testing ended in July 2021 for the following reasons, and these learnings were incorporated into the current Topics API:

  • FLoC ended up not using federated learning.  Google and others found that on-device computation was faster and less resource intensive.  So by definition the whole approach (and naming, obviously) had to change.
  • FLoC did not provide enough protection against cross-site identifying information.  Because of this, device fingerprinting was still possible.  Two academics from MIT found that more than 95 percent of user devices could be uniquely identified after only four weeks.
  • The adTech industry wanted more transparency and control over how the contextual categories were created.  In FLoC, contextual audiences were created automatically by the algorithm rather than drawn from a fixed taxonomy.  The process was also unpredictable, which meant cohorts could be created around sensitive topics, and adTech providers would be unable to prevent advertisers’ ads from showing in contexts unsuitable for specific brands.

We will drill into more detail on all of these issues when we talk about topical audience creation under the Privacy Sandbox.

Protected Audiences API

The Protected Audiences API is the core product discussed in articles about on-going testing and evolution of the Privacy Sandbox.  It started life as something called TurtleDove.  To this day I don’t know why bird names were chosen, even though I still have emails from Michael Kleber (of Google, one of the core technical leaders of the Privacy Sandbox initiative) about setting up the repository.  A series of other bird-named APIs came in - PIGIN, DoveKey, TERN, SPARROW, PARRROT, SPURFOWL, SWAN. Ultimately TurtleDove and the best suggestions from these other API proposals were merged into FLEDGE, which stands for First Locally-Executed Decision over Groups Experiment. FLEDGE was then renamed the Protected Audience API (abbreviated as PAAPI or just PA) in April 2023, once the technology looked reasonably viable and a more “product-oriented” name was needed.

The Protected Audience API allows advertisers and publishers to target audiences in the browser based on behavior they have seen - for example, from purchases made on their website - without being able to combine that with other information about the person: who they are, what pages they visit across the web, or what other publishers/advertisers know about them. That ability for publishers and advertisers to combine data from others in the programmatic ad system is what Google calls "cross-site (re)identification". It is a term you will see repeatedly in these posts because preventing cross-site reidentification is at the heart of the Google Privacy Sandbox (and, in fact, of all privacy-preserving solutions on the market or in design today). PA calls these audiences interest groups, but I find that quite confusing, because I tend to think of interest groups as being associated with contextual targeting (i.e. people who read certain pages have an interest in that topic).  Even the Topics API shows this same issue with defining audiences:

“Interest-based advertising (IBA) is a form of personalized advertising in which an ad is selected for the user based on interests derived from the sites that they’ve visited in the past. This is different from contextual advertising, which is based solely on the interests derived from the current site being viewed (and advertised on).”

The term "interests", as in interest groups, is used for audience concepts in both the Protected Audiences and Topics APIs.  Yet these are very different types of audiences and are stored in different browser storage locations (once again, read “files on the hard drive”).

Moving forward, we will be exact and use "topical audiences" to refer to audiences in the Topics API, and "interest-based audiences" or "interest groups" to refer to audiences in the Protected Audiences API.
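
To ground the terminology, here is a minimal sketch of how an advertiser (or its DSP) asks the browser to join one of those interest groups. The owner, group name, URLs, and lifetime are hypothetical values I chose for illustration; the navigator.joinAdInterestGroup() call and the field names follow the public Protected Audience explainer.

```typescript
// An advertiser page (or its DSP script) recording that this browser belongs
// to a "running shoes prospects" interest group, for later bidding.
async function joinRunningShoesGroup(): Promise<void> {
  const interestGroup = {
    owner: 'https://dsp.example',                    // origin allowed to bid with this group
    name: 'running-shoes-prospects',                 // advertiser-chosen group name
    biddingLogicURL: 'https://dsp.example/bid.js',   // worklet that computes bids at auction time
    ads: [{ renderURL: 'https://cdn.example/ad1.html' }],
  };
  // Second argument is the membership lifetime in seconds (30 days here).
  await (navigator as any).joinAdInterestGroup(interestGroup, 30 * 24 * 60 * 60);
}
```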

The Protected Audiences API is Where Auctions and Bidding Are Handled

The Protected Audiences API is where in-browser auction and bidding functionality are defined, as documented in the main Privacy Sandbox GitHub repository.  This is why all effort right now is on testing PA: it is where bid requests and bid responses for both topical and interest-based audiences occur.  PA also specifies where and how the ad for the winning bid is delivered to the browser, and how this operates within the new fenced frames object. So while the Protected Audiences API defines how interest-based audiences are created, stored, and used, it is the core product of the three because it encompasses all the other services needed to bid for and deliver ads.
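
A minimal sketch of the seller side of that in-browser auction, using navigator.runAdAuction(), looks something like the following. The seller origin, worklet URL, and buyer list are hypothetical; the configuration fields follow the public explainer.

```typescript
// A seller (SSP) script starting an on-device Protected Audience auction.
async function runOnDeviceAuction(): Promise<void> {
  const auctionConfig = {
    seller: 'https://ssp.example',
    decisionLogicURL: 'https://ssp.example/decision.js',  // seller worklet that scores each bid
    interestGroupBuyers: ['https://dsp.example'],          // owners whose interest groups may bid
  };
  const result = await (navigator as any).runAdAuction(auctionConfig);
  if (result) {
    // The winning ad is rendered in a fenced frame (or an opaque-src iframe during
    // testing); the page never learns which interest group won.
    console.log('Auction produced a winning ad:', result);
  } else {
    console.log('No winning bid; fall back to a contextual ad.');
  }
}
```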

Having said this, there are concerns, and I think rightfully so, that the computational requirements of running auctions in the browser at scale while maintaining rendering speed may be impractical where devices have limited processing power or network latency is high. So, as we will discuss at length when we get into the server-side discussions, there is a server-side Bidding and Auction Services API in development that will run in Trusted Execution Environments.

The Protected Audience API Also Covers Auction Results Reporting

Reporting on auctions and conversions is a significantly complicated topic in the Privacy Sandbox, and is not yet fully fleshed out.  Reporting on conversions, attributing them to specific ads, and the rules by which fractional attribution is done are handled by the Attribution Reporting API.  But reporting on auctions - what the auction structure was, what the winning bid was and its features, and what happened to losing bids - is all covered by PA.

There are two kinds of reports:

  • Event-level reports associated with a particular auction, bid, and ad delivery to a specific browser. These are only available to the advertiser and, in limited form, to the publisher that displayed the ad. The advertiser may delegate a subset of event-level reports to their DSP or similar adTech partner in some situations.
  • Aggregatable reports that provide rich metadata in aggregate to better support use-cases such as campaign-level performance reporting, segmentation based on topical or interest-based audiences, as well as reports combining second- or third-party data to analyze the performance of demographic, psychographic, or other segmentation schemes.

Today, PA reporting is in its infancy. For FOT #1, reporting functions in the Protected Audiences API can send event-level reports directly to the servers of participating advertisers and publishers (or their delegates).  There is a longer-term plan for doing both event-level and aggregate-level reporting in a way that prevents an adTech provider from learning which interest groups a particular browser belongs to.  The basis for this long-term approach is currently outlined in a draft proposal called the Private Aggregation API.  That API covers numerous potential use cases beyond programmatic bidding.  As a result, there is also an extension of it specifically for the Protected Audiences API that is described in the PA repository.
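
As a rough illustration of the direction the Private Aggregation proposal takes, here is a sketch of how a buyer's worklet might contribute a value to an encrypted histogram. The bucket and value encoding are hypothetical choices of mine; contributeToHistogram() is the entry point described in the proposal.

```typescript
// Inside a Protected Audience worklet (e.g. reportWin), contribute the winning
// bid to a histogram bucket keyed by campaign. The browser encrypts the
// contribution; it only becomes visible after noisy aggregation in a trusted
// aggregation service.
function recordAggregateContribution(winningBidCpm: number): void {
  (globalThis as any).privateAggregation.contributeToHistogram({
    bucket: 1234n,                            // 128-bit bucket key; here, a campaign ID
    value: Math.round(winningBidCpm * 100),   // e.g. CPM expressed in cents
  });
}
```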

Reporting is complicated even further because the Privacy Sandbox is built around fenced frames, which will be discussed in the next article.  Fenced frames are a privacy-preserving version of an iFrame.  The problem is that the reporting functions in PA, named reportResult() for publishers and reportWin() for advertisers, can see results for topical ad requests under the Topics API, but cannot “see” the results of interest-based ad events that occur within the fenced frame because of its privacy protections.  Therefore a special mechanic is required to extract information about impressions, interactions, and clicks for interest-based ads out of the fenced frame for reporting purposes.  This is handled by the Fenced Frames Ads Reporting API endpoints that are part of the PA specification.
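
Here is a minimal sketch of that mechanic under my reading of the current proposals: the buyer's reportWin() worklet registers named beacons with registerAdBeacon(), and the creative inside the fenced frame fires them with window.fence.reportEvent(). The event names and beacon URL are hypothetical.

```typescript
// --- In the buyer's reporting worklet ---
// registerAdBeacon is provided by the worklet environment; declared here so
// the sketch is self-contained.
declare function registerAdBeacon(beacons: Record<string, string>): void;

function reportWin(): void {
  // Map event names to beacon endpoints for this ad.
  registerAdBeacon({
    click: 'https://dsp.example/beacon?type=click',
  });
}

// --- Inside the fenced frame that renders the winning ad ---
function onAdClicked(): void {
  (window as any).fence.reportEvent({
    eventType: 'click',
    eventData: JSON.stringify({ ts: Date.now() }),
    destination: ['buyer'],  // route the beacon to the buyer-registered endpoint
  });
}
```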

Attribution Reporting API

The Attribution Reporting API provides measurement services for both publishers and advertisers to the Google Privacy Sandbox.  As described in its documentation, the Attribution Reporting API enables measurement when an ad click or view leads to a conversion on an advertiser site, such as a sale or a sign-up. The API enables two types of attribution reports:

  • Event-level reports associate a particular event on the ad side (a click, view, or touch) with coarse conversion data. To preserve user privacy, conversion-side data is coarse, and reports are noised and time-delayed. The number of conversions is also limited.
  • Aggregatable reports provide a mechanism for rich metadata to be reported in aggregate, to better support use-cases such as campaign-level performance reporting or conversion values.

The API allows advertisers and ad tech providers to measure conversions from:

  • Ad clicks and views.
  • Ads in a third-party iframe, such as ads on a publisher site that uses a third-party adTech provider.
  • Ads in a first-party context, such as ads on a social network or a search engine results page, or a publisher serving their own ads.

Each browser captures the activity and sends encrypted event reports to an adTech server. The adTech server, whether it belongs to the publisher or the advertiser (or their proxies, such as an SSP or DSP), cannot see the individual events.  The decryption and aggregation happen in a Trusted Execution Environment, which combines the individual browser actions into aggregate, privacy-preserving reports.  These are the only reports that the advertiser and publisher can see from this API.

One key difference between the Attribution Reporting API and the standard reporting in the Protected Audiences API is that the Attribution Reporting API involves a two-sided event. The first event is the ad being shown and the activity around it.  The second is a purchase or some other conversion event on the advertiser's site.  The ad is the “attribution source” (registered by an adTech “reporting origin”) and carries a unique source event ID, while the conversion action happens on the “destination” site.  The two events are tied together because the destination is registered with the attribution source at the time the source is created.

There are two other important aspects of the Attribution Reporting API that distinguish it from auction-based reporting.  First, ads can be given priorities.  These priorities represent how much weight they will be given in a fractional attribution system.  Second, there is an attribution window, which is the amount of time after the ad is displayed (or the campaign ends) during which a conversion will be counted against that impression/campaign.  The default is 30 days, but the advertiser can set it to anywhere between 1 and 30 days.  As of now, 30 days is the maximum conversion window allowed.  My guess is this will be extended at some point, since automobile advertisers tend to use longer attribution windows.
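
To illustrate how a source registration carries the destination, priority, and expiry together, here is a sketch of an adTech server responding to an ad request with the Attribution-Reporting-Register-Source header. The IDs, origin, and values are hypothetical; the header name and JSON fields follow the Attribution Reporting documentation.

```typescript
// A hypothetical adTech endpoint registering an attribution source when it
// serves (or records a click on) an ad requested with an attributionsrc attribute.
import * as http from 'node:http';

http.createServer((req, res) => {
  res.setHeader(
    'Attribution-Reporting-Register-Source',
    JSON.stringify({
      source_event_id: '412444888111012',          // ties later reports back to this ad event
      destination: 'https://advertiser.example',   // site where the conversion is expected
      expiry: '2592000',                           // attribution window in seconds (30 days, the maximum)
      priority: '100',                             // weight used when attributing a conversion
    }),
  );
  res.end();
}).listen(8080);
```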

A Services View of the Google Privacy Sandbox

Figure 1 showed the physical elements in the browser that support the Google Privacy Sandbox.  However, we can take a different view when thinking about the three core products, which are really in themselves nothing more than services delivered through APIs.  This view is helpful because it shows all the other services and APIs on which the three core products depend, many of which have their own W3C standards, W3C working groups, and Github repositories.  This view is displayed in Figure 2.  

Figure 2 - A Services View of the Google Privacy Sandbox

To reiterate a point made in a prior post, I am not trying to show the entire services architecture of Chrome or any other browser.  I am only trying to represent enough of the features and services to explain how the Privacy Sandbox works.  

The Microsoft Variant

Before closing out, I do want to mention one evolution of the Google Privacy Sandbox that has occurred recently in the marketplace. Microsoft has announced its own version of the Privacy Sandbox, which I will refer to as the Microsoft Privacy Preserving Advertising platform (MPPA); as far as I can tell, they do not yet have a name for the overarching system. MPPA is intended to be largely compatible with the Privacy Sandbox, but it uses a variant of the Protected Audiences API with substantive changes, appropriately called the Ad Selection API (Figure 3).

Figure 3 - Microsoft's Version of the Privacy Sandbox Services Architecture

We will discuss the differences in these two architectures in detail when we get into the details of auction and bidding for the Privacy Sandbox. But let me give a quick summary of the main differences in how they will operate. I say "how they will" because Microsoft's version is still under development and won't have a first origin trial until late in 2024.

  • MPPA, unlike PA, allows multi-domain, multi-party, and multi-device processing in transient, trusted, and opaque environments with differential privacy and k-anonymity output gates. One result of this is that MPPA allows the use of bidding signals owned across domains in opaque processes.
  • MPPA is server-side only and avoids running auctions in the browser. Microsoft believes that this reduces scalability and other risks associated with a new browser-based model. It also maintains operational design and control with the adTech providers who have the experience and knowledge of their systems to quickly and effectively add the new capabilities. In my mind, this is one of the most significant differences and, as a product manager who always worries about risk, this is certainly a more appealing approach as a transition to a pure client-side auction model.
  • MPPA avoids shared services and failure points across all API users.
  • Under MPPA, machine learning can run and feed online/offline models back into the opaque auction in real-time.
  • Another big difference: MPPA allows creatives to be selected dynamically in the auction. This has been a significant point of discussion in the FLEDGE weekly meetings. Advertiser-side providers see this as a key feature that is missing from PA.
  • MPPA enables critical use cases such as page caps, competitive exclusion, and responsive ads through multi-tag and multi-size support.

That’s all for today.  In the next article, I will return to the core browser elements and tie them to the products/services that have been today's focus.