Extension:CampaignEvents/Aggregating participants' responses

When an event ends, all the participants' responses are anonymized and presented to the event organizers in aggregate form. Our goal is to do this in a way that is useful for the organizers (who can then see statistics about their events), while at the same time protecting participant data. Many smaller events will not have enough participants for us to be able to cleverly obfuscate the data, for instance by randomizing the answers: with a small data set, any amount of noise we could add would quickly bias the data and make it almost useless. This page explains what measures were implemented in order to balance anonymization and usefulness.

Question-level threshold

For each question, we first check whether the number of responses reaches a minimum threshold. Given the expected amounts of data, we chose the value 10: if a question received fewer than 10 total responses, no data is shown for that question. This not only helps us protect the data, but it also keeps the presented information more relevant, given the next measure.

Option-level threshold

For each question that has at least 10 total responses, we show the exact number of participants who chose a given option if and only if that option received at least 5 responses (the option-level threshold); otherwise we just display "less than 5". Here, each option is evaluated independently of the others. With this criterion in mind, it becomes easier to understand the rationale behind the question-level threshold. Without it, most of the options would be reported as having received "less than 5" responses, which is not very useful. In fact, given the values that we have chosen for these thresholds, if a question received fewer than 10 responses, then there can be at most one option with at least 5 responses (two such options would already account for 10 responses). This wouldn't be very useful for the organizers, especially if the question has many answer choices.

In practice, this is what it would look like:

Q: What's your favourite animal?
Option      | # of responses | What organizers see
Cat         | 42             | 42
Dog         | 33             | 33
Elephant    | 2              | less than 5
Penguin     | 4              | less than 5
Dolphin     | 9              | 9
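
As a rough illustration, here is a minimal Python sketch of the two thresholds; the function name and data structures are hypothetical, not the extension's actual code:

```python
# Minimal sketch (not the extension's actual code) of the two thresholds
# described above: hide the whole question below 10 total responses, and
# report "less than 5" for any individual option below 5 responses.

QUESTION_THRESHOLD = 10
OPTION_THRESHOLD = 5

def aggregate_question(responses_per_option):
    """Return what organizers see for each option, or None if the question is hidden."""
    total_responses = sum(responses_per_option.values())
    if total_responses < QUESTION_THRESHOLD:
        return None  # question-level threshold: show nothing for this question
    return {
        option: str(count) if count >= OPTION_THRESHOLD else f"less than {OPTION_THRESHOLD}"
        for option, count in responses_per_option.items()
    }

# The "favourite animal" example from the table above:
print(aggregate_question({"Cat": 42, "Dog": 33, "Elephant": 2, "Penguin": 4, "Dolphin": 9}))
# {'Cat': '42', 'Dog': '33', 'Elephant': 'less than 5', 'Penguin': 'less than 5', 'Dolphin': '9'}
```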

Displaying the number of nonresponses

One of the things we wanted to include in the report is how many participants did not answer a certain question. From now on, let:

  • $N$ be the total number of participants; in our example, assume $N = 100$.
  • $R$ be the number of nonresponses that is shown to the organizer (what we want to determine).
  • $r_i$ be the number of responses to option $i$.

The naïve approach would be to derive $R$ with a simple subtraction: $R = N - \sum_i r_i$. In our example we would have $R = 100 - (42 + 33 + 2 + 4 + 9) = 10$, and therefore:

Q: What's your favourite animal?
Option      | # of responses | What organizers see
Cat         | 42             | 42
Dog         | 33             | 33
Elephant    | 2              | less than 5
Penguin     | 4              | less than 5
Dolphin     | 9              | 9
No response | 10             | 10

However, this is problematic. When we did the subtraction, we used all the $r_i$ values, even those that are below the threshold and that the organizers cannot see. In practice, from an organizer's point of view, this means that the number of nonresponses actually carries additional information. And because $R$ is derived from (and unambiguously determined by) all the $r_i$, this additional information necessarily describes the $r_i$ themselves.

Let $S$ be the sum of responses across all options with at least 5 responses; in our example, $S = 42 + 33 + 9 = 84$. The organizer still cannot know that 2 people responded "Elephant" and 4 "Penguin", but they can reverse the subtraction and discover that a total of 6 people responded either "Elephant" or "Penguin": $N - S - R = 100 - 84 - 10 = 6$. By seeing the number of nonresponses, the organizer gained additional information; this can be easily proven, for example by observing that $r_\mathrm{Elephant} = r_\mathrm{Penguin} = 0$ was a plausible scenario without seeing the number of nonresponses, but it is no longer plausible in the last table.
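
To make the leak concrete, here is a small Python sketch (hypothetical, using only the numbers visible in the table above) of the reverse subtraction an organizer could perform:

```python
# Reverse subtraction using only what the organizer can see in the table above.
N = 100                # total participants, known to the organizer
visible = [42, 33, 9]  # options shown exactly: Cat, Dog, Dolphin
R = 10                 # naively computed nonresponse count, as displayed

S = sum(visible)          # 84
hidden_total = N - S - R  # 6
print(hidden_total)       # "Elephant" and "Penguin" together got exactly 6 responses
```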

Sometimes, depending on the number of options and the distribution of responses, the additional information carried by the nonresponse number can actually be enough for the organizer to infer the obfuscated values for the whole dataset. For instance, in the following example, assuming a total of 20 participants:

Q: Are you human or dancer?
Option      | # of responses | What organizers see
Human       | 14             | 14
Dancer      | 2              | less than 5
No response | 4              | 4

The organizer could easily deduce that exactly $20 - 14 - 4 = 2$ people responded "Dancer".

Solution

To avoid leaking information, our goal is to compute $R$ without using any additional information beyond what the organizer already knows. In the naïve approach, the extra information came from using all the $r_i$ in our calculation, when the organizer was not aware of the values smaller than 5. For those, all the organizer knows is that $r_i < 5$. Or in other words, from an organizer's perspective, those values lie in the $[0, 4]$ interval, which is equivalent to saying that $0 \le r_i \le 4$.

Because these intervals are known to the organizers, we can use elementary interval arithmetic to compute $R$ without using more information than the organizer already knows. Let $h$ be the number of options with less than 5 responses. We can now change our definition: $R = N - S - \sum_{j=1}^{h} [0, 4]$.

All the values used in this formula are already known to the organizer, meaning this does not carry any additional information. The formula can also be rewritten as follows:

$R \in [\,N - S - 4h,\ N - S\,]$

In our first example, we have $N = 100$, $S = 84$ and $h = 2$, so $R \in [100 - 84 - 4 \cdot 2,\ 100 - 84] = [8, 16]$. Putting it all together:

Q: What's your favourite animal?
Option      | # of responses | What organizers see
Cat         | 42             | 42
Dog         | 33             | 33
Elephant    | 2              | less than 5
Penguin     | 4              | less than 5
Dolphin     | 9              | 9
No response | 10             | between 8 and 16
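
The computation itself is straightforward; the following Python sketch (hypothetical code, not the extension's implementation) reproduces the "between 8 and 16" row from the table above:

```python
# Interval-arithmetic formula R ∈ [N - S - 4h, N - S], using only quantities
# the organizer can already derive from the table: the exact counts of visible
# options (S) and the number of hidden options (h).
OPTION_THRESHOLD = 5

def nonresponse_interval(total_participants, responses_per_option):
    counts = responses_per_option.values()
    visible_sum = sum(c for c in counts if c >= OPTION_THRESHOLD)    # S
    hidden_options = sum(1 for c in counts if c < OPTION_THRESHOLD)  # h
    lower = total_participants - visible_sum - (OPTION_THRESHOLD - 1) * hidden_options
    upper = total_participants - visible_sum
    return lower, upper

animals = {"Cat": 42, "Dog": 33, "Elephant": 2, "Penguin": 4, "Dolphin": 9}
print(nonresponse_interval(100, animals))  # (8, 16) -> "between 8 and 16"
```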

Usefulness

One thing to note about this calculation is that the resulting interval is somewhat redundant, because it does not convey additional information: the organizer could have computed it on their own just by looking at the rest of the table. However, that's precisely our goal: we don't want the number to carry any new information. In practical terms, there's still value in showing it: the organizer doesn't have to do the subtraction themselves, which could lead to arithmetic mistakes, especially if the numbers are large.

Clamping

The formula we're using does a good job of protecting the data, but sometimes it may result in intervals that make little sense. For instance, consider our previous example with 20 participants but slightly different numbers:

Q: Are you human or dancer?
Option      | # of responses | What organizers see
Human       | 17             | 17
Dancer      | 2              | less than 5
No response | 1              | between -1 and 3

Here we have an interval with negative numbers, which clearly aren't possible in practice. This can easily be addressed by tweaking our formula so that the lower bound is clamped to 0, i.e. $R \in [\,\max(0,\ N - S - 4h),\ N - S\,]$, thus yielding "between 0 and 3".
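
In code, this is a one-line adjustment; a minimal self-contained sketch using the same hypothetical numbers:

```python
# Clamp the lower bound of the nonresponse interval at 0.
def clamped_interval(lower, upper):
    return max(0, lower), upper

# "Human or dancer" example: N = 20, S = 17, h = 1 -> raw interval (-1, 3)
print(clamped_interval(20 - 17 - 4 * 1, 20 - 17))  # (0, 3) -> "between 0 and 3"
```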

Note that the only reason we need this correction is that the "less than 5" label wasn't a tight bound in the first place. Because the organizer already knows that 17 out of 20 people responded "Human", they also know that the number of "Dancer" responses must be between 0 and 3.

Likewise, if all 20 people responded "Human", the organizer would know that "less than 5" actually means 0. This is just a limitation in the simple thresholding approach we're using for individual options, and there isn't much we can do about it.

Worst case scenario

The last thing I'd like to consider is the worst case scenario for the nonresponse interval, i.e. how large it can get. We can easily see that the length of the interval, $4h$, is maximized by minimizing $S$ (and consequently maximizing $h$). This corresponds to the scenario where all options received less than 5 responses, which leaves us with $R \in [\,\max(0,\ N - 4h),\ N\,]$. This really adds no information for the organizer, but that was our goal in the first place, so what else to say if not: mission accomplished!
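
As a hypothetical illustration of this worst case, with 100 participants and three options that all fall below the option-level threshold:

```python
# Worst case: every option is below the option-level threshold, so S = 0 and
# the (clamped) nonresponse interval degenerates to [max(0, N - 4h), N].
# Hypothetical numbers: 100 participants, 3 options with 4 responses each.
N = 100
h = 3  # number of hidden options
print((max(0, N - 4 * h), N))  # (88, 100) -> "between 88 and 100"
```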