Pay or Consent

Op-Ed: A critical analysis of the EDPB’s “Pay or Consent” Opinion

Illustration: Dall-E rendering of just how potentially evil the European Data Protection Board (EDPB) and certain privacy rights campaigners view the “Pay or Consent” approach + large companies. I didn’t try to get any drug cartel references in there, though as a lawyer acting for adtech companies I have been likened to a drug cartel lawyer (really) by some over the past 24h. Clearly, if you are in the adtech industry or digital platform sector and you need your legal fix, Better Call Peter!


Opinion 08/2024 on Valid Consent in the Context of Consent or Pay Models Implemented by Large Online Platforms, published on 17 April 2024, is a long document by the EDPB’s standards, but one that could have benefited from a lot of editing. When reading those 42 pages, you may be struck by how repetitive some aspects are, as the same arguments are used in relation to various conditions for consent (freely given, no detriment, etc.) to show that the EDPB disapproves of a “Pay or OK” model.

This will be a long one, so here is something for the “too long; didn’t read” folks – or for some campaigners who will not want to read the entire piece anyway:

  • Scope: “large online platforms” (?): a vague new concept more likely to create confusion than clarity (and not excluding European publishers, mind you)
  • Competence: no surprises here, the EDPB considers itself competent
  • Consent & validity: the EDPB suggests that large online platforms may face an imbalance-of-power issue when collecting consent, and that they “should” therefore consider offering another alternative in order to obtain “genuine” consent – but that alternative “recommendation” has a range of flaws, both under the GDPR and under the ePrivacy Directive
  • No, really, users are unable to refuse to use a service: the EDPB alleges that there may be no real practical option to refuse to use a service, ignoring the fact that competitors exist (some favoured by certain generations) and that people choose one platform, service or product over another because they feel it is the best one for them in their particular situation (otherwise, they would use an alternative)
  • What is an “appropriate fee”? The EDPB’s position emphasises the need to document pricing and suggests that supervisory authorities have the authority to assess pricing, which will almost certainly be challenged, as supervisory authorities are not market regulators
  • Transparency and data subject rights: the EDPB requires greater granularity for consent (including between consent to functionality and consent to ads – but then how would that work if a person only consents to one, not the other?), increased transparency (even regarding the consequences of processing, something the GDPR reserves for automated decision-making) and suggests (ahem) a duration of one year for consent – all of those points could be challenged

Scope: “large online platforms” (?)

This “Consent or Pay” Opinion concerns on the face of it only the use of this business model by “large online platforms”, a new term that borrows from the “gatekeeper” and “VLOP” terminology of the Digital Markets Act (DMA) and the Digital Services Act (DSA) respectively. The EDPB defines the concept in broad terms: “large online platforms are platforms that attract a large amount of data subjects as their users” [paragraph 25].

While ostensibly aimed at certain social media platforms, the vague nature of the concept raises a lot of questions: are large national publishers “large online platforms” as well, due to their importance in a national market? In absolute terms, they might have many users in one Member State, which echoes the “large scale” concept from the actual text of the GDPR. So if that Member State is populous, are they automatically a “large online platform”? There is a reference to the DSA’s “online platform” definition, but the term “platform” is unknown in the GDPR, so time will tell whether regulators seek to apply this Opinion only in relation to DSA-type platforms or also to publishers.

Regulators often criticise controllers for their use of the conditional in privacy statements – but the conditional “may” seems fine when talking about the scope of an Opinion: “For the purposes of the present Opinion, the concept of ‘online platforms’ may cover, but is not limited to, ‘online platforms’ as defined under Article 3(i) Digital Services Act” and “The definition may cover, among others, certain controllers of ‘very large online platforms’, as defined under the DSA and ‘gatekeepers’, as defined under the DMA” [23 + 28].

This is not the EDPB at its best, in my opinion, and it illustrates that this document seems to have been drafted with one goal in mind: serving as a tool to help target one social media company in particular without actually being seen as an “individual decision”, while not, on the face of it, seeming too dangerous for a wide range of significant European players.

Unfortunately, this level of vagueness creates more problems and makes it that much more likely for those European players to be covered as well. Nor does it shield the Opinion from challenge.

Competence

The EDPB justifies its competence in adopting this Opinion on the basis of Article 51(1) and (2) GDPR, contending that this allows supervisory authorities to “assess the validity of consent used as a legal basis for the processing of personal data, including when such consent is collected in the context of ‘consent or pay’ models where personal data is processed for behavioural advertising purposes”.

We’ll get back to the issue of competence later.

Consent & validity

The EDPB then starts to look at the various GDPR conditions for valid consent, and ends up repeating many of its arguments across several of those conditions.

For this reason, I am grouping the EDPB’s concerns hereunder based on certain topics:

Dominant position & potential impact on freely given nature of consent

The EDPB refers in a few paragraphs to the Court of Justice of the European Union (CJEU)’s Bundeskartellamt judgment of 4 July 2023, which was a very important judgment in many respects.

One of the dimensions of that case was the issue of whether a dominant position from a competition law perspective affects the “freely given” nature of consent.

The EDPB is a little selective in its quotes of the CJEU judgment, but does refer to the following:

  • Para. 51: “the existence of a dominant position of a provider of online social networks ‘does not, as such, preclude the users of such a network from being able validly to consent, within the meaning of Article 4(11) of [the GDPR], to the processing of their personal data by that operator’”.
  • 52: “However, the CJEU clarified that a dominant position is ‘an important factor in determining whether the consent was in fact valid and, in particular, freely given, which is for that operator to prove’. This is because this circumstance ‘is liable to affect the freedom of choice of that user, who might be unable to refuse or withdraw consent without detriment’ and ‘may create a clear imbalance … between the data subject and the controller’”
  • 53: “In addition, although not central to the Court’s determination, the CJEU mentioned that, where it appears that certain processing operations are not necessary for the performance of a contract, ‘users must be free to refuse individually, in the context of the contractual process, to give their consent to [them], without being obliged to refrain entirely from using the service offered by the online social network operator, which means that those users are to be offered, if necessary for an appropriate fee, an equivalent alternative not accompanied by such data processing operations.’”
  • 54: “The CJEU also highlighted that as consent ‘is presumed not to be freely given if it does not allow separate consent to be given to different personal data processing operations despite it being appropriate in the individual case’ referring to Recital 43 GDPR. It further identified the ‘scale of the processing of the data’ and the ‘significant impact of that processing on the users of that network’, as well as the reasonable expectations of the users, as being particularly important factors in the case at hand.”

Basically, in the event of a dominant position, there is a need to carefully consider whether that position affects the freely given nature of consent.

The EDPB later comes back to the notion of dominance in a section dedicated to the “imbalance of power”, to say the following:

  • 102: “Controllers of large online platforms may find the considerations used to determine a company’s dominant position [under EU competition law] useful when assessing whether there is a clear imbalance of power.”
  • 103: The EDPB is of the view that “a controller does not need to have a ‘dominant position’ within the meaning of Article 102 TFEU for their market power to be considered relevant for enforcing the GDPR”
  • 105: “there might be situations where supervisory authorities might conclude on the existence of clear imbalance within the meaning of the GDPR, without a dominant position being established”

After that, “dominant position” ceases to appear in the Opinion.

The implication is clear: in the case of a dominant position, there is potentially an imbalance of power, but there might also be one for any “large online platform” without a demonstrated dominant position. (See the earlier point about the vagueness of the concept of a “large online platform”.)

And the EDPB uses this potential imbalance of power to suggest that the freely given nature of consent is at risk for any large online platforms.

Bye bye behavioural advertising, hello FAWBA?

Because of this potential imbalance of power, says the EDPB, it is that much harder for a large online platform to collect consent that is freely given, when deploying a “pay or consent” model where the “consent” option refers to a version of the service with profile-based advertising (also known as personalised ads / targeted ads / behavioural ads).

The EDPB refers for instance to the principles of data minimisation and fairness:

  • 59: “behavioural advertising may entail gathering and compiling as much personal data as possible about individuals and their activities, potentially monitoring their entire life, on- and offline. The EDPB considers that the magnitude and intrusiveness of the processing have to be taken into account while assessing compliance with the principle of data minimisation. Excessive tracking, which includes the combination of various sources of data across different websites, is thus harder to reconcile with the principle of data minimisation than, for example a system of personalized advertising in which users themselves actively and consciously determine their own preferences”

[First, “monitoring their entire life, on- and offline”? This seems straight out of a conspiracy theory. This aversion to profile-based advertising is never explained. Next, because of the context in which this Opinion was requested and its clear aim at one social media provider in particular, the EDPB’s wording regarding data minimisation appears misleading, as it gives the impression that that particular social media provider does such “excessive tracking” by default with “the combination of various sources of data across different websites”, even though said social media provider has made it very clear that this is not the case and that a separate consent mechanism applies]

  • 60: “In this regard, it is important that controllers are able to demonstrate why they consider certain choices are in line with the principle of fairness as described in the previous paragraph. This is particularly important if the controller narrows down the data subject’s range of choices […] or which may risk unduly influencing the data subject’s choice (e.g. by charging a fee that is such to effectively inhibit data subjects from making a free choice).”

And this leads the EDPB to make a proposal that is the cornerstone of everything else in the Opinion:

  • 73: “The offering of (only) a paid alternative to the service which includes processing for behavioural advertising purposes should not be the default way forward for controllers. On the contrary, when developing the alternative to the version of the service with behavioural advertising, controllers should consider providing data subjects with an ‘equivalent alternative’ that does not entail the payment of a fee, such as the Free Alternative Without Behavioural Advertising as described below in this section.”

Let’s pause here.

First, “The offering of (only) a paid alternative to the service which includes processing for behavioural advertising purposes should not be the default way forward for controllers”. “Should not” is, on the face of it, a very strong recommendation rather than a command, but one that carries a connotation of disapproval. So while the EDPB does not say “cannot”, it is indicating that it does not like the idea of “Pay or Consent” with profile-based advertising for the consent part.

Why not? The entire Opinion is very vague in this respect. As mentioned above in relation to para. 59 of the Opinion, the Opinion appears to be based on a flawed understanding of what profile-based advertising actually is, repeating myths about monitoring life online as well as offline (fortunately the myth “your phone is listening to your every word” isn’t explicitly in there, but we’re not that far). This in and of itself is already a reason to challenge the Opinion.

Next, even if we were to accept the EDPB’s aversion to profile-based advertising as legally justified, there are serious flaws with the alternative being proposed.

Let’s then look at that alternative – the “Free Alternative Without Behavioural Advertising” (FAWBA):

  • 74: “Should controllers decide to provide data subjects with an ‘equivalent alternative’ which involves the payment of a fee, […], in order to ensure genuine choice and to avoid presenting users with a binary choice between paying a fee and consenting to processing for behavioural advertising purposes, controllers should consider also offering a further alternative free of charge (‘Free Alternative Without Behavioural Advertising’)”
  • 75: “This alternative must entail no processing for behavioural advertising purposes and may for example be a version of the service with a different form of advertising involving the processing of less (or no) personal data, e.g. contextual or general advertising or advertising based on topics the data subject selected from a list of topics of interests. This is also linked to the principle of data minimisation […]: controllers should ensure that only personal data that is necessary for the purpose of placing such advertisement would be processed. Controllers should in any event bear in mind the need to comply with Article 6 GDPR and Article 5(3) of the ePrivacy Directive when applicable.”

Pause here again.

First, the word “must” appears suddenly – I wonder whether this betrays an attempt to soften an earlier version of the text that might not have been framed as a mere “recommendation”.

Next, the last sentence is extremely important: “Controllers should in any event bear in mind the need to comply with Article 6 GDPR and Article 5(3) of the ePrivacy Directive when applicable”. We’ll get back to this in a bit, but it actually undercuts the premise of the “FAWBA” approach in an important way.

  • 76: “While there is no obligation for large online platforms to always offer services free of charge, making this further alternative available to the data subjects enhances their freedom of choice. This makes it easier for controllers to demonstrate that consent is freely given.”

The first sentence is very telling of the opinion of the persons holding the pen. Article 16 of the EU Charter of Fundamental Rights explicitly recognises the fundamental freedom to conduct a business, and Recital 4 of the GDPR explicitly states the following:

The right to the protection of personal data is not an absolute right; it must be considered in relation to its function in society and be balanced against other fundamental rights, in accordance with the principle of proportionality. This Regulation respects all fundamental rights and observes the freedoms and principles recognised in the Charter as enshrined in the Treaties, in particular […] freedom to conduct a business […]

I might have to publish a bit more on this in the future [EDIT: read the in-depth article on that here – Op-Ed: Who dares question the primacy of data protection?], as this freedom gets overlooked a little too much, but the EDPB’s Opinion should never have included wording such as “no obligation for large online platforms to always offer services free of charge”.

In English, the use of the word “always” after a negative suggests that there may be situations where the obligation does exist: “While there is no obligation for large online platforms to always offer services free of charge” leaves open the interpretation that there may be some cases where large online platforms are under an obligation to offer services free of charge. It’s a very bad sentence for that reason.

“There is no obligation for any business, including a large online platform, to offer any product or service free of charge” would have been more appropriate. We are not talking about subsidised public services here, after all.

Let’s continue.

  • 77: “In the opinion of the EDPB, whether or not a Free Alternative Without Behavioural Advertising is provided is a particularly important factor to consider when assessing whether data subjects can exercise a real choice and therefore whether consent is valid. As stated in its reply to the Commission’s initiative for a cookie pledge, the EDPB considers among others relevant whether a user is offered, in addition to a service using tracking technology and a paid service, another type of service, such as one that includes a less intrusive form of advertising, when assessing the validity of consent and whether the data subject is able to exercise a real choice.”

So implicitly, there is a much higher likelihood that the EDPB and supervisory authorities will consider that there is no real choice if no FAWBA is offered. So much for the “recommendation” part.

  • 78: “The Free Alternative without Behavioural Advertising offered as a further alternative would play a relevant role to remove, reduce or mitigate the detriment that may arise for non-consenting users from either having to pay a fee to access the service or not being able to access it.”

This touches again upon the issue highlighted above about the freedom to conduct a business. With any business, I can choose whether I wish to benefit from the service or product, or not. I wrote an op-ed about this six months ago – I really recommend reading it. We’ll get back to this in a bit, as the EDPB later tries to justify this position by reference to the important role that certain platforms play in users’ daily lives (in effect, it uses the idea of network effects to somehow make a platform a “must” in every person’s life, such that free services without behavioural advertising need to be available).

  • 79: “Additionally, as previously observed by the EDPB, where a clear imbalance of power exists, consent can only be used in ‘exceptional circumstances’ and where the controller, in line with the accountability principle, can prove that there are no ‘adverse consequences at all’ for the data subject if they do not consent, notably if data subjects are offered an alternative that does not have any negative impact. In the context of this Opinion, such an alternative could be the offering of the Free Alternative Without Behavioural Advertising.”

First, the “no adverse consequences at all” part comes from one sentence in the EDPB’s earlier guidelines on consent under the GDPR (see para. 22 of the EDPB’s GDPR Consent guidelines). It is a very radical and absolute position that has yet to be properly tested in court (they are only guidelines, after all, and could be brought into question), and it raises more questions than it answers: is the slightest difference an adverse consequence? If I see even one ad and thus see less content compared to the paying version, are there really no adverse consequences “at all”? What if I see two ads, or three? Such an absolute wording is far too restrictive and puts a tremendous and disproportionate onus on controllers.

Next, those “exceptional circumstances”? The EDPB has created them, as has the CJEU, by pushing for a highly restrictive interpretation of the other legal grounds under Art. 6(1) GDPR.

Fast-forward in the Opinion, and the EDPB emphasises the need for functional equivalence:

  • 123: “If, compared to the Version With Behavioural Advertising, the Alternative Version is not of a different or degraded quality, and no functions are suppressed (unless any changes are a direct consequence of the controller not being able to process personal data for the purposes for which it sought consent), then the Alternative Version can likely be considered to be genuinely equivalent to the Version With Behavioural Advertising.”
  • 124: “The more the Alternative Version differs from the Version With Behavioural Advertising, the less likely it is for the Alternative Version to be considered as genuinely equivalent, although this remains a case-by-case assessment.”

Again, as mentioned above, this raises questions about how to handle e.g. contextual ads as opposed to profile-based ones in a manner that would satisfy the EDPB. If there are more ads on a contextual version, is it still “genuinely equivalent”?

This all leads to another broader comment: this “FAWBA” is being pushed (recommended / required) by the EDPB, but nowhere does the EDPB seem to leave any room for considerations by the controller on whether such a FAWBA is commercially (un)viable. This leads to an awkward situation: if a controller considers a “FAWBA” commercially viable, will its implementation be viewed by regulators / campaigners as recognition of the fact that the profile-based advertising version of the service goes too far in data protection terms? And if the controller considers it not to be commercially viable, can the controller refuse to follow the requirements of the EDPB and thus likely face a coordinated challenge before (or even by) EDPB members?

“Freedom to conduct a business” suddenly sounds like “obligation to conduct a business in a particular way, no matter the profits, to satisfy regulators and campaigners”.

The ePrivacy implications of a “FAWBA”

Let’s go back to para. 75 for a minute, and that sentence “Controllers should in any event bear in mind the need to comply with Article 6 GDPR and Article 5(3) of the ePrivacy Directive when applicable”.

As some readers may know, I wrote extensively on the EDPB’s proposed Guidelines on the technical scope of Art. 5(3) of the ePrivacy Directive – the so-called “cookie” rule – and contributed to a couple of submissions to the EDPB’s public consultation on the topic.

One issue that arose in that context was that the EDPB publicly declared, on the day of publication of those guidelines, that consent was required for “attribution”, a core component of any kind of online advertising, whether profile-based or contextual / based on limited data. Other core components include anti-fraud measures (to limit the risk of an advertiser having to pay for clicks/impressions generated by bots) and frequency capping (a feature to limit the chances of the same ad appearing several times on the same page or in rapid succession for a given user).
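To see why frequency capping is caught by the EDPB’s reading of Art. 5(3), it helps to see what the feature minimally involves. The sketch below is purely illustrative (all names are hypothetical, not any platform’s actual implementation): the point is that even this bare-bones version requires keeping a per-user impression counter, i.e. exactly the kind of per-user state that, if stored on or read from the user’s device, the EDPB would say needs consent.

```python
from collections import defaultdict

class FrequencyCapper:
    """Hypothetical, minimal frequency-capping sketch: limit how many
    times a given ad may be shown to a given user."""

    def __init__(self, max_impressions_per_user: int):
        self.max_impressions = max_impressions_per_user
        # Per-(user, ad) impression counts: this per-user state is what
        # brings the feature within the EDPB's reading of Art. 5(3)
        # ePrivacy if it is stored on / read from the user's device.
        self.impressions = defaultdict(int)

    def may_show(self, user_id: str, ad_id: str) -> bool:
        """True if the ad has not yet hit its cap for this user."""
        return self.impressions[(user_id, ad_id)] < self.max_impressions

    def record_impression(self, user_id: str, ad_id: str) -> None:
        self.impressions[(user_id, ad_id)] += 1

# Without the cap, the same ad could be served on every one of these
# five page loads; with it, delivery stops after the third impression.
capper = FrequencyCapper(max_impressions_per_user=3)
shown = 0
for _ in range(5):
    if capper.may_show("user-1", "ad-42"):
        capper.record_impression("user-1", "ad-42")
        shown += 1
print(shown)
```

Removing the per-user counter removes the capping entirely, which is precisely the dilemma discussed below: a “consent-free” version of the service either drops such features or triggers a consent requirement anyway.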

If the EDPB’s broadened interpretation of Art. 5(3) of the ePrivacy Directive is allowed to stand, and if consent is indeed required for such features, then no matter which form of digital advertising is used, consent would be required.

This is a problem in general, but in the EDPB’s “FAWBA” approach it is even more of a problem:

(i) What if the platform wishes to include attribution, anti-fraud measures and frequency capping, such that – based on the aforementioned EDPB position – consent is required anyway under the EDPB’s interpretation of Art. 5(3) of the ePrivacy Directive? Is that a “Free Alternative” that meets the EDPB’s requirement of no “adverse consequences at all”, given that the choice becomes “pay or consent or consent”? Would the EDPB then say that it isn’t sufficient and that a further “Free Alternative” is required?

[Or can we finally ditch the EDPB’s idea that these measures require consent?]

(ii) What if the platform doesn’t wish to include frequency capping, creating a higher likelihood of the same ad appearing constantly? Isn’t that a negative impact for the user, because the user experience is more irritating?

(iii) What if the platform doesn’t include attribution or anti-fraud measures? Wouldn’t that necessarily lead to more ads having to appear on the same page, to compensate for the fact that a large part of the ads might not be shown to actual users? Isn’t that a negative impact for the user, because the content is less visible?

Let’s leave it at that for now, but you see that the Opinion does nothing to resolve issues in that respect.

Lock-in and network effects: not-a-universal-service?

Back to the next sections of the Opinion, which now deal with the indispensable nature of certain services – and how that allegedly affects consent. These sections become much more emotionally laden, with a clear harking back to the days of universal service. [You could also call it rambling or a rant, depending on how you look at it]

  • 84: “If a data subject refuses to give their consent to the data processing for behavioural advertising purposes, and there are no other free of charge alternatives allowing them to access the same service, the data subject would face a financial consequence, as they would have to pay a fee in order to be able to use the service. This would especially be the case where there are lock-in effects present and the user has been able to use the service for a prolonged amount of time without a fee being present.”

Hold on. The fact that the user was “able to use the service for a prolonged amount of time without a fee being present” is now being used as an argument against “pay or consent”? Without consideration for the fact that – in the case of a certain social media provider – the “pay or consent” was a direct consequence of a range of legislative and regulatory evolutions? This seems like a gratuitous way to say “we didn’t mean to cause this, we actually wanted a contextual ads version from the beginning”.

  • 87: “Data subjects may suffer detriment if it becomes impossible for them to use a service that is part of their daily lives and has a prominent role. This could be the case, for instance, of a platform that is commonly and systematically used to disseminate information that may not be readily available from other sources, or of a platform whose use is necessary to have access to certain services relevant for the individual’s daily life. This may be information or exchanges which the users are reliant upon in their daily lives, which makes it harder for them not to participate on the platform. These types of situations may range from important information during public emergencies to parents receiving information regarding social activities for their children. Additionally, the platform may be a key forum for public debate on political, social, cultural and economic issues.”

This is another one of those sentences that could be rephrased as “we have allowed certain platforms to become too big and now we want them treated like a universal service without actually making them one”. There are alternatives, there are competitors (some very strong for certain generations), and let’s be honest: some social media providers have been hit by numerous scandals over the years, and I remember several movements of “Delete [Platform A]” or “Quit [Platform B]”, and some people did indeed leave those platforms. So isn’t this position a bit of an exaggeration?

  • 88: “In the same vein, the use of certain social media services might be decisive for the data subjects’ participation in social life. With rapid technological innovations and the fact that most people have an online presence, the role that social media play in data subjects’ day-to-day lives and interactions ought not to be underestimated. Many data subjects rely on these platforms as an important means to stay in contact with people that they do not physically interact with in their daily routines, such as friends and/or family. Considering that social media provide a particularly valuable and convenient alternative to in-person interactions, not having access to them can have important consequences on some users’ emotional and psychological well-being. In the above cases, the data subjects might be shut out from the social interactions taking place on the platform and feel socially isolated, especially when there is no alternative service that offers a similar experience and is also used by the data subject’s social contacts. The same goes for taking part in online discussion forums. Data subjects might be shut out of taking part in those online discussion forums, even though these now constitute an important part of online debates.”

Same comments as above, but the last sentence could be applied to any digital property allowing for user interactions, whether they are “large online platforms” or not (however that determination may be made). Should any digital platform be afraid to give moderators the power to ban someone, because that digital platform might happen to be an “important part” of the life of a particular individual and not accessing it might be a “detriment”?

  • 89: “Data subjects can also suffer detriment if, due to not paying a fee and not consenting, they are denied access to professional or employment-oriented platforms. More specifically their possibilities to find job opportunities or build and/or maintain professional networks can be negatively affected, they may feel disadvantaged compared to users that have access to the service or unable to follow important developments in their respective fields of work.”

Some job websites are specialised in a particular field. Should they be afraid to limit access to anyone? Or are we now talking about another large social media platform?

  • 90: “Further, a detriment may be more likely to occur, and possibly of a more significant nature, in case of a large online platform in which lock-in or network effects may be present. The detrimental consequences of denying access to a service can be even more important for the users of online platforms which have not been implementing ‘consent or pay’ models from the outset but have subsequently decided to introduce them.”

See the point made earlier about repetition?

And then we get to the sections where the EDPB basically says that consent or pay is likely to always involve a “detriment” to users in the case of “large online platforms”:

  • 94: “If any of the (non-exhaustive) negative consequences described in the paragraphs above are present, offering the sole choice between a paid service and a service entailing behavioural advertising based on the data subject’s consent would impact the possibility for data subjects to make a genuine choice and withhold consent without detriment.”
  • 95: “In light of the above, detriment is likely to occur when large online platforms use a ‘consent or pay’ model to obtain consent for the processing […]”

So, in effect, because certain platforms are “large” and they form an important part of people’s lives, only offering a choice between a paying version and a version funded by behavioural ads is “likely” to create a “detriment” for users (never mind that the behavioural ad version was there to start with or that the “consent or pay” model was introduced because of regulatory and legislative evolutions). This does not seem like much of a “recommendation” to implement a “FAWBA” but much more like a quasi-obligation.

No, really, users are unable to refuse to use a service

All of that text about network effects and lock-in serves to set up the EDPB’s response to the point made earlier, namely that people are free to choose whether or not to use a service:

  • 105: “[…] The crucial question is whether the controller’s position in the market, by itself or in combination with other factors, leads the data subjects to experience that there are no other realistic alternative services available to them, such as video sharing-platforms, job application portals, or platforms for buying and selling certain goods and services”
  • 110: “Another important factor in assessing imbalance is the extent to which the data subject relies on the service provided. The data subject’s experience of having a genuinely free choice is limited if the service is considered essential, e.g. to search for jobs, to get access to essential information for the data subjects’ daily life or to participate in the public debate.”

(you see the point about repetition and editing)

  • 113: “A controller may argue however, that the data subjects are not forced to consent or pay. They may opt not to use the service at all, or use another service which does not process personal data in the same manner as the controller. Firstly, the elements described above may result in a situation where there is no real practical option for the users to refuse to use the service. Secondly […] the EDPB stated in its Guidelines on consent that consent cannot be considered freely given simply because there is another similar service provided by a different controller which does not entail consenting to the processing of personal data for additional purposes.”

And here you have it. While a business is a business and a person is normally free to choose whether or not to use a service, according to the EDPB there may be cases where “there is no real practical option” to refuse to use a service. This statement is not based on anything tangible, but purely on the build-up the EDPB constructed in the preceding paragraphs about lock-in effects and network effects. If you are (too?) big as a platform, users apparently cannot refuse to use your service, and this apparently means that you cannot use the business model you would like.

There are many issues with this position.

First, who determines whether there are any real practical options? Is it one random complainant? Is it one regulator? Will this be based on any in-depth market analysis like the kind you encounter in competition law?

Second, European competitors will be very glad to know that the EDPB has real faith in them.

Third, and perhaps more fundamentally, this reasoning could easily apply to any product or service that has carved out a market for itself, no matter how many users it has in absolute terms, and no matter whether it is niche or for general use. People choose one platform, service or product over another because they feel that it is the best one for them. If there were a better option for them in their particular situation, they would choose it. On that logic, there is never a “real practical option”, because the alternatives are by definition inferior from the perspective of that particular user. Whether this is because of an existing community of users or the quality of the product or service, a user chooses a service for a reason.

What is an “appropriate fee”?

The EDPB later discusses the “appropriate fee” component.

  • 132: “[…] controllers should assess, on a case-by-case basis, both whether a fee is appropriate at all and what amount is appropriate in the given circumstances, bearing in mind the requirements of valid consent under the GDPR as well as the need of preventing the fundamental right to data protection from being transformed into a feature that data subjects have to pay to enjoy, or a premium feature reserved for the wealthy or the well-off”.

This is based on the idea that data protection cannot be a commodity, something for which you have to pay. It is a fundamentally flawed and purely emotionally charged argument.

As has been repeatedly mentioned by many (e.g. the IAB Europe letter to the EDPB on the “Pay or Consent” model), this is a misleading argument “since end-users that choose to consent do not on the same occasion waive their fundamental rights over the processing of their personal data” and “users that choose to consent do not as a result allow the online content or service provider to ignore the GDPR, since the GDPR and its principles must be complied with at all times” (see IAB Europe, Rebutting the Flawed Assumptions Surrounding the Debate on the ‘Consent-or-Pay’ Model).

The EDPB then lays the groundwork for what appears to be an attempt to control pricing:

  • 136: “The accountability principle in Article 5(2) GDPR is key in this regard. Businesses are free to set their own prices and choose how their revenue models are structured, but this right should be balanced with the fundamental right for individuals to protection of their personal data. The accountability principle entails that controllers have the responsibility to ensure and to document that consent is freely given if they charge a fee for access to the version of the service that does not entail behavioural advertising. Controllers should document their choices and assessment of whether a given fee is appropriate in the specific case to demonstrate that imposing the fee does not effectively undermine the possibility of freely given consent in the situation at hand.”
  • 137: “[…] supervisory authorities are tasked with enforcing the application of the GDPR, including the requirements of valid consent. This may also relate to the impact of any fee on the data subjects’ freedom of choice. While it is for controllers to set the amount of a fee in itself, if supervisory authorities find that consent is not freely given or that the accountability principle has not been complied with, they can intervene and impose corrective measures. In this respect, they are competent to review or evaluate the assessment of appropriateness carried out by controllers. It is for supervisory authorities to ascertain to which extent it is appropriate to investigate this matter.”

This is a very ambitious position by the EDPB, as supervisory authorities are not market regulators and have no authority to examine pricing practices. I am certain that if a supervisory authority tries to challenge any pricing, it will itself be challenged.

Transparency and data subject rights

Finally, the EDPB’s Opinion tackles transparency, notably through the prism of informed consent, as well as the question of how to handle the exercise of data subject rights.

  • 140: “Granularity of consent in relation to behavioural advertising by large online platforms merits special attention […] controllers cannot present data subjects with blanket consent for a number of different purposes, e.g. personalisation of content, personalisation of advertisements, service development, service improvement, audience measurement. In this vein, the EDPB recalls that the data subjects should be free to choose which purpose they accept, rather than being confronted with one consent request bundling several purposes. The emphasis in this regard should be placed on differentiating the purposes related to the functionality of the service from behavioural advertising purposes, and processing operations accompanied by this” – a footnote states that “Such purpose may also concern technical processing operations intrinsically linked to the advertising purpose, such as frequency capping or measuring the effectiveness of ad campaigns”.

This suggests that the EDPB wishes to have one consent request for the functionality of the service and one for behavioural advertising and related operations. This position is extremely unclear in terms of its consequences. If a user does not give consent to behavioural advertising or to contextual advertising (see point above re ePrivacy rules), but gives consent to the functionality of the service, what then? Of course the two are linked – if consent is required by law for the processing of personal data to offer a personalised service with personalised advertising, there should not be any differentiation at the level of the consent request.

This section appears to suggest a pick-and-choose approach to the service and to the funding component, when the funding component defines how the service is presented to the user under the “pay or consent” model (with or without a “FAWBA”).

  • 143: “Therefore, it is necessary to inform the data subject of certain elements that are crucial to make a genuine choice. Depending on the context, more information may be needed to allow the data subject to genuinely understand the processing operations at hand”

This has become an increasing problem in the positions of regulators. They want sufficient information to be provided to data subjects to allow them to “genuinely understand the processing operations at hand” when it comes to adtech, but they do not appear to have placed the same emphasis on any other activity, such as the processing of health data in the healthcare system or the processing of financial data by financial institutions. The adtech and metrics environment is a technically complex one – why should the processing operations have to be “genuinely” understood by data subjects before they can give their consent?

  • 148: “[…] Large online platforms should not define the purpose of the processing activity in terms that are too broad for the data subject to understand the consequences of their choice (e.g. ‘commercial purposes’ or ‘personalisation’).”
  • 149: “Large online platforms should describe in a fair and complete manner the purpose for which the consent is collected. For example, large online platforms may not limit the description of the purpose of the processing to the advantages it provides to the data subjects (e.g. a more personalised experience) if such processing also entails other consequences for them (e.g. profiling, intrusive tracking,…).”

This appears once more to be a case of double standards. Outside of the adtech & metrics ecosystem, there is no requirement to describe the consequences of processing except where Article 22 GDPR applies (on automated decision-making). Is the EDPB suggesting here that adtech automatically involves automated decision-making? Or is it suggesting that increased transparency requirements apply when online advertising is used, in a manner unique to that activity?

In terms of data subject rights, the EDPB draws upon the CJEU’s Proximus judgment of 27 October 2022 to suggest that it should be possible to withdraw consent centrally through the “large online platform”:

  • 172: “In the context of ‘consent or pay’ models to be considered here, a distinction must first be made between the exercise of the right of withdrawal as such and the user’s wish to continue the use of the service after withdrawal of consent. It is important that transparent and clearly recognizable information is provided on how the right of withdrawal can be exercised, in order to avoid giving the impression that the withdrawal would automatically lead to entering into a paid subscription. In such cases, exercising the right of withdrawal will result in the user being once again faced with the choice of whether to give consent to the processing for behavioural advertising purposes or take out a paid subscription (or opt for the Free Alternative Without Behavioural Advertising where this is offered).”
  • 176: “The conclusions of the CJEU in the Proximus judgment also apply in the context of behavioural advertising, especially under the use of online marketing methods like real time bidding. It would also be contradictory to the principle of making withdrawal as simple as consenting if the user himself had to exercise his or her right of withdrawal against each controller involved, where consent can be given to all of them with one click.”

Finally, it adds that consent should remain valid only for a limited period of time, “such as one year”:

  • 178: “In the context of behavioural advertising, considering the intrusiveness of the processing, a limited period of time during which consent remains valid, such as one year, seems appropriate”.

The reference to one year is not entirely unexpected, given the emphasis some regulators have placed on one year or just over a year for certain types of cookies etc., but it is the first specific guidance from the EDPB on this issue. It could be brought into question, though, as the period could instead be aligned with the duration of a subscription for a given “large online platform”. If, for instance, a particular platform finds that its users typically take a two-year subscription, why should consent not be able to last as long?

Closing thoughts

The analysis above shows that – in my view – there are many flaws with the EDPB’s Opinion on Pay or Consent.

I know some will dismiss my personal thoughts on the matter as biased (potentially without examining their own bias?), but let’s look at a few objective elements:

  • This Opinion is 42 pages full of repetition, vague concepts and emotional sections
  • It is based on an aversion to profile-based advertising that is never properly explained
  • It does not take into account the EDPB’s own positioning regarding Art. 5(3) ePrivacy Directive

Beyond that, some things that may indeed be more “biased” – i.e. a result of the fact that I regularly see (or develop) legal arguments for “the business side”:

  • The Opinion only pays lip service to the freedom to conduct a business and never describes how the non-absolute nature of the right to data protection should interact in a proportionate manner with that freedom
  • The Opinion introduces uncertainty where previously national regulators had been clearer
  • The Opinion pretends to go for a recommendation but shows the disapproval that (some?) regulators feel vis-à-vis the practice
  • The Opinion ignores user agency and seems to reward user apathy

All in all, I would expect this Opinion to be brought into question.

Of course, happy to read your comments – the civil ones, at least.

And if you would like to take part in our “Adtech & Metrics: EU Law Evolutions and Tomorrow’s Legal Strategies” webinar on 16 May or would like to discuss any of these topics, well, Better Call Peter!