Do targeting parameters and algorithms used in the targeting and delivery of advertising discriminate?

Targeting ad audiences is an exercise in reaching the largest possible audience of people interested in what you are advertising. Since media budgets are limited and the advertiser cannot reach everybody, a trade-off arises: targeting parameters are used to screen out people for whom the ad is likely to be less relevant.

Critics have questioned whether targeting advertising to groups of people is discriminatory, and charges have been laid against some media providers.1

To answer that question, this paper must explain how audiences are defined and how ad delivery relates to the audience data.

Most traditional media offer audience research indicating that certain magazines or TV shows appeal to some groups more than others. Kids’ shows, for instance, have more kids watching than news shows. Historically, audiences have been described by their age and gender skews. Media suppliers can also add reader/viewer interests or match purchase patterns through surveys, but surveys are limited in how much information they can collect. This generally reduces the audience definitions used by most media to basic age/gender information, which for the most part serves advertisers fairly well.

However, while the ad audience may be targeted, no one is excluded from seeing the ad, watching the show, or reading the magazine. The show is the show: all viewers see the same ad regardless of age, gender or any other audience characteristic. Although advertisers call this “media waste”, they accept a degree of off-target delivery because it reaches light purchasers and prospective buyers, supporting both sales and long-term growth.

Digital advertising adds more capabilities to both audience definition and audience delivery. First, the audience can be defined by all sorts of characteristics, whether volunteered by the consumer (an interest in gardening declared on social media) or observed (visits to multiple car sites while shopping for a new car) – the options are nearly limitless. Second, because ads are delivered in real time, viewers of the same show may each see a different ad depending on what data about them is available. That real-time data can come directly from the advertiser or from third parties such as weather providers.
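As an illustration only, the sketch below shows in Python how per-viewer ad selection can work in principle; the data fields, ad names and the weather rule are hypothetical and are not drawn from any particular platform.

    # Hypothetical illustration of real-time, per-viewer ad selection.
    # Viewers of the same content can each receive a different ad,
    # depending on what data about them is available at delivery time.
    def select_ad(viewer):
        # Volunteered data: declared by the consumer (e.g. a social media interest)
        if "gardening" in viewer.get("declared_interests", []):
            return "garden_centre_spring_offer"
        # Observed data: inferred from behaviour (e.g. visits to car sites)
        if viewer.get("recent_car_site_visits", 0) >= 3:
            return "new_model_test_drive"
        # Third-party data: e.g. a weather feed supplied at delivery time
        if viewer.get("local_forecast") == "rain":
            return "umbrella_promotion"
        return "default_brand_ad"  # fallback when little or no data is available

    viewers = [
        {"declared_interests": ["gardening"]},
        {"recent_car_site_visits": 4},
        {"local_forecast": "rain"},
        {},  # no data available: this viewer still sees an ad
    ]
    for v in viewers:
        print(select_ad(v))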

Include or Exclude?

Ad delivery can be inclusionary or exclusionary: the advertiser chooses certain target groups to include while excluding others. Used in combination, these targeting choices can easily reduce “media waste”, producing some very specific audiences for the ads. Further, the campaign may be guided by an algorithm which weights the ad delivery in certain directions within the targeting parameters.
Targeting settings are chosen by the advertiser or by its agency. Algorithms can be supplied by the advertiser and/or the agency, while at the same time the media supplier may apply its own algorithm to the ad delivery. Data may come from any or all of the three: advertiser, agency or media supplier.
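A simplified, hypothetical sketch of how inclusion and exclusion settings combine with an algorithmic weighting is shown below; the parameter names and the weighting rule are illustrative and do not correspond to any platform’s actual settings.

    # Hypothetical targeting settings: an inclusion list and an exclusion list,
    # plus a simple algorithm that weights delivery within the eligible audience.
    targeting = {
        "include": {"interests": ["home improvement"], "age_range": (25, 54)},
        "exclude": {"regions": ["outside_service_area"]},
    }

    def eligible(user):
        lo, hi = targeting["include"]["age_range"]
        in_age = lo <= user["age"] <= hi
        has_interest = any(i in user["interests"] for i in targeting["include"]["interests"])
        excluded = user["region"] in targeting["exclude"]["regions"]
        return in_age and has_interest and not excluded

    def delivery_weight(user):
        # An algorithm may further skew delivery within the eligible audience,
        # e.g. toward users predicted as most likely to convert.
        return 2.0 if user.get("predicted_conversion", 0.0) > 0.5 else 1.0

    users = [
        {"age": 30, "interests": ["home improvement"], "region": "toronto", "predicted_conversion": 0.7},
        {"age": 60, "interests": ["home improvement"], "region": "toronto"},
        {"age": 40, "interests": ["home improvement"], "region": "outside_service_area"},
    ]
    for u in users:
        print(eligible(u), delivery_weight(u) if eligible(u) else 0.0)

In this sketch only the first user is both included and up-weighted; the second falls outside the age parameter and the third is excluded by region, which is exactly the kind of combined narrowing that reduces “media waste”.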

What’s the risk?

Advertisers risk being accused of discrimination under the law2 depending on the combination of the advertiser’s creative/offer, the targeting settings, the type of ad, the data feed and the algorithm(s). Some of these elements are under the advertiser’s control, others are under the platform’s control, and some are shared.

Allegations would be tested by examining the targeting settings chosen by the advertiser. The advertiser may carry more risk if it operates the settings directly itself, as opposed to working through an agency where the advertiser’s direction is communicated by a media brief. In the latter case, the brief should be examined in addition to the settings the agency used in interpreting the brief, plus applicable provisions in the Master Services Agreement between Advertiser and Agency.

Certain types of advertising are much more sensitive to allegations of discrimination; for instance, job postings, housing availability, and financial and health matters. Advertisers in these areas need to take precautions that messages are widely available, that application forms operate in a neutral manner, and that any advertising aims to generate more response through neutral, business-results-oriented optimization.
If the advertiser has ensured its own operations are not a source of discrimination, the next questions are whether the platforms, and the interaction between advertiser and platform, create a risk of discrimination.

A data feed from the advertiser’s website to the platform can be used by the platform’s algorithm to optimize the platform’s delivery of advertising. A conversion pixel is an example of such an arrangement.
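As a rough, hypothetical sketch of what such a feed involves, the code below builds a conversion event and prepares it for a placeholder platform endpoint; the event fields and the URL are illustrative and are not any platform’s real pixel API.

    # Hypothetical sketch of a conversion-pixel style data feed: a tag on the
    # advertiser's site reports a conversion event back to the platform, which
    # folds it into the data used to optimize ad delivery.
    import json
    from urllib import request

    def report_conversion(order_value, page_url):
        event = {
            "event": "purchase",   # what happened on the advertiser's site
            "value": order_value,  # order value passed back to the platform
            "page": page_url,      # page where the conversion occurred
        }
        req = request.Request(
            "https://ads.example-platform.test/pixel",  # placeholder endpoint
            data=json.dumps(event).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        # request.urlopen(req)  # not executed here: the endpoint is illustrative only
        return event

    print(report_conversion(49.99, "https://advertiser.example/checkout/thanks"))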

Conversion pixel data is usually anonymized and aggregated with a huge amount of other data. The platforms are not forthcoming about all of their data sources, nor do they disclose the policies that guide who is included or excluded. Platforms’ conversion pixels have been found gathering very sensitive health information on Canadians, which gets mixed into the data pool used for ad delivery optimization.

“Facebook has agreed to and has already begun engagements with academics, researchers, civil rights and privacy advocates, and civil society experts to study algorithmic modeling by social media platforms, and specifically the potential for unintended bias in algorithms and algorithmic systems. Algorithms are becoming ubiquitous on internet platforms and in social media. Facebook recognizes that the responsible deployment of algorithms requires an understanding of their potential for bias and a willingness to attempt to identify and prevent discriminatory outcomes. While more needs to be done, by engaging with groups to study unintended bias in algorithms on social media platforms, Facebook is taking a much needed step forward.”3


Protecting the Marketer

There is no reason why an advertiser should accept this risk. Although the platforms’ terms and conditions attempt to push various liabilities onto the users of the platform through a “use equals acceptance of the terms and conditions” clause, that does not absolve the platforms of the results of their algorithms or data sources. At the meeting of the International Grand Committee on Big Data, Privacy and Democracy, the platforms unanimously declared their responsibility for the algorithms used within their platforms.

The ACA recommends that its members prepare targeting parameters and brief their agencies to avoid discrimination through exclusionary ad delivery, especially when operating in sensitive categories such as financial services, job postings, housing and health. Creative (dynamic or static) carrying different offers, combined with media targeting, algorithmic optimization and obscure or problematic data sources, is a risky operation and should be monitored by the advertiser to ensure that unintended outcomes are not produced.

Further, the ACA has developed a notice for advertisers to use, advising that they expect a platform to comply with all applicable laws, including human rights and other non-discrimination laws, in the delivery of the advertiser’s advertising, and that the platform’s operations will be scrutinized in the event of any allegation of discrimination.

For more information or questions, contact Chris Williams


1. https://www.hud.gov/sites/dfiles/Main/documents/HUD_v_Facebook.pdf
2. For example, The Ontario Human Rights Code http://www.ohrc.on.ca/en/ontario-human-rights-code
3. https://fbnewsroomus.files.wordpress.com/2019/06/civilrightaudit_final.pdf