Zhuohang Li (Vanderbilt University, Nashville, TN, USA; zhuohang.li@vanderbilt.edu), Andrew Lowy (University of Wisconsin–Madison, Madison, WI, USA; alowy@wisc.edu), Jing Liu (Mitsubishi Electric Research Laboratories, Cambridge, MA, USA; jiliu@merl.com), Toshiaki Koike-Akino (Mitsubishi Electric Research Laboratories, Cambridge, MA, USA; koike@merl.com), Kieran Parsons (Mitsubishi Electric Research Laboratories, Cambridge, MA, USA; parsons@merl.com), Bradley Malin (Vanderbilt University, Nashville, TN, USA; b.malin@vanderbilt.edu), and Ye Wang (Mitsubishi Electric Research Laboratories, Cambridge, MA, USA; yewang@merl.com)
Abstract.
In distributed learning settings, models are iteratively updated with shared gradients computed from potentially sensitive user data. While previous work has studied various privacy risks of sharing gradients, our paper aims to provide a systematic approach to analyze private information leakage from gradients. We present a unified game-based framework that encompasses a broad range of attacks including attribute, property, distributional, and user disclosures. We investigate how different uncertainties of the adversary affect their inferential power via extensive experiments on five datasets across various data modalities. Our results demonstrate the inefficacy of solely relying on data aggregation to achieve privacy against inference attacks in distributed learning. We further evaluate five types of defenses, namely, gradient pruning, signed gradient descent, adversarial perturbations, variational information bottleneck, and differential privacy, under both static and adaptive adversary settings. We provide an information-theoretic view for analyzing the effectiveness of these defenses against inference from gradients. Finally, we introduce a method for auditing attribute inference privacy, improving the empirical estimation of worst-case privacy through crafting adversarial canary records.
1. Introduction
Ensuring privacy is an important prerequisite for adopting machine learning (ML) algorithms in critical domains that require training on sensitive user data, such as medical records, personal financial information, private images, and speech. Prominent ML models, ranging from compact neural networks tailored for mobile platforms(howard2017mobilenets) to large foundation models(brown2020language; rombach2022high), are often trained on user data via gradient-based iterative optimization. In many cases, such as decentralized learning(dhasade2023decentralized; hsieh2017gaia) or federated learning (FL)(mcmahan2017communication; hard2018federated; guliani2021training), model gradients are directly exchanged in place of raw training data to facilitate joint learning, which opens up an additional channel for potential privacy leakage(lowy2022private).
Recent works have explored information leakage through this gradient channel in various forms, albeit in isolation. For instance, Nasr et al.(nasr2019comprehensive) showed that it is feasible to infer membership (i.e., single-bit information indicating the existence of a target record in the training data pool) from model updates in federated learning. Beyond membership, Melis et al.(melis2019exploiting) demonstrated inference over sensitive properties of the training data in collaborative learning. Other independent lines of work additionally explored attribute inference(lyu2021novel; driouich2022novel) and data reconstruction(zhu2019deep; geiping2020inverting; gupta2022recovering) through shared model gradients. However, some emerging privacy concerns that have so far only been considered under the centralized learning setting, such as distributional inference(suri2022formalizing; chaudhari2023snap) and user-level inference(kandpal2023user; li2022user), have not been well investigated in the gradient leakage setting.
Existing studies on information leakage from gradients have several limitations. First, the majority of the current literature focuses on investigating each individual type of inference attack under its specific threat model, lacking a comprehensive examination of inference attack performance under various adversarial assumptions, which is essential for providing a holistic view of the adversary's capabilities. For instance, from the attack's perspective, assuming the adversary has access to a reasonably-sized shadow dataset and limited rounds of access to the model's gradients helps to capture the realistic inference privacy risk under a practical threat model. Conversely, from the defense's perspective, assuming a powerful adversary with access to record-level gradients and auxiliary information about the private record helps to estimate the worst-case privacy risk, which may facilitate the design of more robust defenses. Second, while several types of heuristic defenses have been explored by prior work, their supposed effectiveness has not been fully verified under more challenging adaptive adversary settings. Moreover, existing studies do not adequately explain why some defenses succeed in reducing the inference risk over gradients while others fail, which could provide important guidance on the design of more effective defenses.
In this paper, we conduct a systematic analysis of private information leakage from gradients. We start by defining a unified inference game that broadly encompasses four types of inference attacks aimed at inferring common private information from gradients, namely, the attribute inference attack (AIA), property inference attack (PIA), distributional inference attack (DIA), and user inference attack (UIA), as illustrated in Figure 1. Under this framework, we show that information leakage from gradients can be treated as performing statistical inference over a sensitive variable upon observing samples of the gradients, with different definitions of the information encapsulated by the variable being inferred, leading to a generic template for constructing different types of inference attacks. We additionally explore different tiers of adversarial assumptions, with varying numbers of available data samples, numbers of observable rounds of gradients, and varying batch sizes, to investigate how different priors and uncertainties in the adversary's knowledge about the gradient and data distribution affect the adversary's inferential power.
We perform a systematic evaluation of these attacks on five datasets (Adult(misc_adult_2), Health(health_heritage), CREMA-D(cao2014crema), CelebA(liu2015deep), UTKFace(zhang2017age)) with three different data modalities (tabular, speech, and image). A common setting in distributed learning is that the data distribution is heterogeneous across different nodes but homogeneous within each node. Under this assumption, where the sensitive variable is common across a batch, we show that a larger batch size leads to higher inference privacy risk from gradients across all considered attacks, highlighting that solely relying on data aggregation is insufficient for achieving meaningful privacy in distributed learning. With a moderate batch size (e.g., ), we show that an adversary can launch successful inference attacks with very few shadow data samples (). For instance, in the case of property inference on the Adult dataset, the adversary can achieve AUROC with only shadow data samples. Moreover, we demonstrate that an adversary with access to multiple rounds of gradient updates can perform Bayesian inference to aggregate adversarial knowledge, eventually leading to higher confidence and better attack performance.
We apply the developed inference attacks to evaluate the effectiveness of five common types of defenses from the privacy literature(zhu2019deep; sun2021soteria; wu2023learning; jia2018attriguard; jia2019memguard; shan2020fawkes; song2019overlearning; scheliga2022precode; scheliga2023privacy), including Gradient Pruning(zhu2019deep), Signed Stochastic Gradient Descent (SignSGD)(bernstein2018signsgd), Adversarial Perturbations(madry2018towards), Variational Information Bottleneck (VIB)(alemi2016deep), and Differential Privacy (DP-SGD)(abadi2016deep), against both static adversaries that are unaware of the defense and adaptive adversaries that can adapt to the defense mechanism. We find that most heuristic defense methods only offer a weak notion of “security through obscurity”, in the sense that they defend against static adversaries empirically but can be easily bypassed by adaptive adversaries. Although DP-SGD shows consistent performance against both static and adaptive adversaries, fully preventing inference attacks often requires injecting so much noise that the utility of the learning model is diminished. We provide an information-theoretic perspective for explaining and analyzing the (in)effectiveness of these considered defenses and show that the key ingredient of a successful defense is to effectively reduce the mutual information between the released gradients and the sensitive variable, which could serve as a guideline for designing future defenses. Finally, to provide practical guidance in selecting privacy parameters, we introduce an auditing approach for empirically estimating the privacy loss of attribute inference attacks by crafting adversarial canary records to approximate the privacy risk in the worst case.
In summary, our main contributions are as follows:
- •
We provide a holistic analysis of inference privacy from gradients through a unified inference game that broadly encompasses a range of attacks concerning attribute, property, distributional, and user inference.
- •
We demonstrate the weakness of solely relying on data aggregation to achieve privacy against inference attacks in distributed learning. We do this through a systematic evaluation of the four types of attacks on datasets with different modalities under various adversarial assumptions.
- •
Our analyses reveal that reducing the mutual information between the released gradients and the sensitive variable is the key ingredient of a successful defense. This is shown by investigating five common types of defense strategies against inference over gradients from an information-theoretic perspective.
- •
Our auditing results provide an empirical justification for tolerating large DP parameters when defending against attribute inference attacks (cf. (lowy2024does)). This is achieved by implementing an auditing method for empirically estimating the privacy loss against attribute inference attacks from gradients.
2. Background and Related Work
2.1. Machine Learning Notation
A machine learning (ML) model can be denoted as a function parameterized by that maps from the input (feature) space to the output (label) space.The training of an ML model involves a set of training data and an optimization procedure, such as stochastic gradient descent (SGD). At each step of SGD, a loss function is first computed based on the current model and a batch of training samples and then a set of gradients is computed as . Finally, the model is updated by taking a gradient step towards minimizing the loss.
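As a point of reference, the following is a minimal sketch of this gradient computation and update step in PyTorch; the model, batch tensors, and learning rate are illustrative placeholders rather than the exact training setup used in our experiments.

```python
import torch
import torch.nn.functional as F

def sgd_step(model, batch_x, batch_y, lr=0.01):
    """One SGD step: compute the batch loss, take its gradient with respect to
    the model parameters, and update the parameters in the descent direction."""
    loss = F.cross_entropy(model(batch_x), batch_y)          # loss on the current batch
    grads = torch.autograd.grad(loss, model.parameters())    # gradient of the loss w.r.t. the parameters
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p -= lr * g                                      # gradient descent update
    return [g.detach() for g in grads]                       # the gradients that would be shared/observed
```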
2.2. Related Work
Developing ML models in many applications involves training on the users’ private data, which introduces privacy leakage risks from different components of the ML model across several stages of the development and deployment pipeline.
Leakage From Model Parameters ().The first avenue for exposing private information is through analyzing the model parameters. This is connected to the most prominent centralized ML setting, where the model is first developed on a local dataset and then released to the users for deployment. Various forms of privacy leakage have been studied in this setting. White-box membership inference(leino2020stolen; nasr2019comprehensive; sablayrolles2019white) aims at identifying the presence of individual records in the training dataset given access to the full model. Data extraction attacks exploit the memorization of the ML model to extract training samples(haim2022reconstructing; carlini2023extracting), whereas model inversion attacks generate synthetic data samples from the training distribution(yin2020dreaming; wang2021variational). In contrast, for distributional inference attacks(ateniese2015hacking; ganju2018property; suri2022formalizing), the attacker's goal is to make inferences about the entire training data distribution rather than individuals.
Leakage From Model Outputs ().Another source of privacy leakage is the model output, which is related to more restrictive settings such as machine learning as a service (MLaaS) in cloud APIs, where only black-box access to the ML model is granted. Under this setting, researchers have studied several privacy attacks that can be launched by querying the model and observing the outputs. For instance, query-based model inversion attacks(fredrikson2014privacy; fredrikson2015model) exploit the predicted confidence or labels from the model to make inferences about the input data instance(zhang2020secret) or attribute(mehnaz2022your). Model stealing attacks attempt to recover the confidential model weights(tramer2016stealing) or hyper-parameters(wang2018stealing) given query access to the model. Black-box membership inference attacks(salem2018ml; truex2019demystifying; sablayrolles2019white; song2021systematic) and black-box distributional inference attacks(mahloujifar2022property; chaudhari2023snap) allow an adversary to decide whether a data point was included in training or to reveal information about the training data distribution by analyzing the output prediction or confidence.
Leakage From Model Gradients ().The final source of privacy leakage is the gradient of the loss function with respect to the model parameters, which is essential for updating the model with stochastic gradient descent. This is relevant to ML settings that release intermediate model updates during model development, such as distributed training, federated learning, peer-to-peer learning, and online learning. Compared to model parameters, model gradients carry more nuanced information about the small batch of data used for computing the update and thus may reveal more information about the underlying data instances. The current literature studies different types of gradient-based privacy leakage in isolation. One line of work focuses on data reconstruction from model gradients(zhu2019deep; geiping2020inverting) or updates(salem2020updates; haim2022reconstructing) with various data types, such as image(zhu2019deep; geiping2020inverting; yin2021see; li2022auditing), text(gupta2022recovering; haim2022reconstructing), tabular(vero2023tableak), and speech data(li2023speech). However, these attacks rely on strong adversarial assumptions and do not generalize to large batch sizes(huang2021evaluating). Another line of work investigated the extraction of private attributes or properties(melis2019exploiting; feng2021attribute) of the private data from model gradients. Specifically, Melis et al.(melis2019exploiting) first revealed that gradients shared in collaborative learning can be used to infer properties of the training data that are uncorrelated with the task label. Lyu et al.(lyu2021novel) explored attribute reconstruction from epoch-averaged gradients on tabular and genomics data. Feng et al.(feng2021attribute) discovered that gradients of speech emotion recognition models leak information about user demographics such as gender and age. Dang et al.(dang2022method) showed that speaker identities can be revealed from the gradients of automatic speech recognition models. Kerkouche et al.(kerkouche2023client) demonstrated the weakness of secure aggregation without differential privacy in federated learning by designing a disaggregation attack that exploits the linearity of model aggregation and client participation across multiple rounds to capture client-specific properties. In contrast to existing studies that design separate treatments for each type of attack, in this work we take a holistic view of information leakage from gradients.
3. Problem Formalization
This section introduces four types of inference attacks from gradients, namely, attribute inference, property inference, distributional inference, and user inference. We formally define information leakage from gradients using a unified security game, following standard practices in machine learning privacy studies(salem2023sok), and discuss variants of threat models that affect the adversary's inferential power. In Section 4, we describe methods to construct these attacks.
3.1. Attack Definitions
We consider four types of information leakage from model gradients that generally involve two parties, namely, a private learner who releases model gradients computed on a private data batch, and an adversary who tries to make inferences about the private data given access to the gradients.This generic setting captures multiple ML application scenarios such as distributed training, federated learning, and online learning.
Attribute Inference. Attribute inference attacks (AIA) seek to infer a data record's unknown attribute (feature) from its gradient. Prior works in both centralized(wu2016methodology; yeom2018privacy) and federated settings(lyu2021novel; driouich2022novel) usually assume the record to be partially known; for instance, inferring a missing entry (e.g., genotype) of a person's medical record(fredrikson2014privacy). It is worth noting that, in practice, when the attributes are not completely independent, an adversary with partial knowledge about the record may be able to infer the unknown attribute just from the known ones, as in data imputation(jayaraman2022attribute).
Property Inference.Property inference attacks (PIA) aim to infer a global property of the private data batch that is not directly present in the data feature space but is correlated with some of the features (and consequently the gradients). For tabular data, these properties could be sensitive features that have been intentionally excluded from training (e.g., pseudo-identifiers in health records that are required to be removed for HIPAA compliance); for high-dimensional data like image and speech, they could be some high-level statistical features capturing the semantics of the data instance (e.g., race of a face image(melis2019exploiting) or gender of a speech recording(feng2021attribute)).
Distributional Inference. Distributional inference attacks (DIA) aim to infer the ratio of training samples that satisfy some target property (some prior work also refers to distributional inference as property inference). The majority of the current literature on DIA(ganju2018property; suri2022formalizing; mahloujifar2022property; chaudhari2023snap) is in the space of centralized learning, which captures leakage from model parameters. These studies usually define DIA as a distinguishing test between two worlds where the model is trained on two datasets with different ratios(suri2022formalizing). This can be further categorized into property existence tests, which decide if there exists any data point with the target property in the training set, and property size estimation tests, which infer the exact ratio of the property in the training data(chaudhari2023snap). In this work, we extend DIA to the gradient space and consider a general case that combines property existence and property size estimation by formulating DIA as ordinal classification over a set of ratio bins.
User Inference. User inference attacks (UIA), or re-identification attacks, aim to identify which user's data was used to compute the observed gradients. Here, the adversary does not know the user's exact data used for computing the gradients. Instead, the adversary is provided a set of candidate users and their corresponding underlying user-level data distributions. This setting shares similarities with subject-level membership inference(suri2022subject) in the sense that both attacks measure the privacy risk at the granularity of the individual. However, the user inference attack aims to infer richer information that directly exposes the user's identity, whereas the membership inference attack only discloses a single bit of information (i.e., whether a given user's data sample is involved in training). Thus user inference can be considered a generalization of the subject-level membership inference attack.
We note that except for attribute inference, which directly exposes (part of) the user's private data, the property, distributional, and user inference attacks are inferential disclosures (also known as deductive disclosures) that exploit statistical correlations in the data to infer sensitive information from the released gradients with high confidence. We exclude record-level privacy attacks such as membership inference and data reconstruction, as our analysis here focuses on distributed learning scenarios where private information can be shared across different data samples within a batch.
3.2. Unified Inference Game
Our framework aims to capture an abstraction of privacy problems in distributed learning settings, where an attacker aims to recover some sensitive information of a particular client from their shared gradients (or model updates). In practical distributed learning settings, the data may be heterogeneously split across the clients, and an attacker may take advantage of side information about a particular client's local data distribution. Generally, the objective of the attacker is to recover the sensitive information, represented by a variable that is related to the local data distribution of the client through a joint distribution. As we will detail later, specific choices in what this variable represents, and the corresponding specialized structure of the joint distribution, enable the framework to capture attribute, property, distributional, and user inference privacy problems. This joint distribution may capture both the side information available to the attacker and the inherent heterogeneity of the data. To focus on evaluating the effectiveness of gradient-based attacks and defenses, we simplify the modeling of the overall training procedure by updating the model in a centralized fashion on the entire training dataset, while generating gradients for the attacker on batches drawn according to the joint distribution.
Definition 3.1.
Unified Inference Game. Let the joint distribution, the loss function, the training algorithm, the total number of training rounds, and the set of rounds observable to the adversary be given (we use $[n]$ to denote the discrete set $\{1,\dots,n\}$). The unified inference game from gradients between a challenger (private learner) and an adversary proceeds as follows (a minimal code sketch of one run of the game follows the list of steps):
- (1)
Challenger initializes the model parameters as .
- (2)
Challenger samples a training dataset , where .
- (3)
Challenger draws the sensitive variable .
- (4)
Challenger draws a batch of data samples , where , for the given .
- (5)
Challenger computes the gradient of the loss on the data batch, .
- (6)
Challenger applies the defense mechanism to produce a privatized version of the gradient. When no defense is applied, the mechanism is simply the identity function.
- (7)
The model is updated by applying the training algorithm on the training dataset for one epoch .
- (8)
Steps (5)-(7) are repeated for rounds.
- (9)
A static adversary gets access to , , , and the set of (intermediate) model parameters and released gradients . An adaptive adversary also gets the defense mechanism .
- (10)
The adversary outputs its inference of the sensitive variable, i.e., for the static adversary, or for the adaptive adversary. The adversary wins if and loses otherwise.
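A minimal sketch of one run of the game, assuming hypothetical helper objects (joint_dist, defense, adversary, compute_grad, train_one_epoch) that stand in for the components defined in the steps above:

```python
import copy

def run_inference_game(model, train_set, joint_dist, defense, adversary,
                       num_rounds, observable_rounds, compute_grad, train_one_epoch):
    """One run of the unified inference game (steps (3)-(10) above)."""
    v = joint_dist.sample_sensitive()                         # step (3): draw the sensitive variable
    observations = []
    for t in range(num_rounds):
        batch = joint_dist.sample_batch(train_set, v)         # step (4): batch drawn conditioned on v
        g = compute_grad(model, batch)                        # step (5): gradient on the private batch
        g_priv = defense(g)                                   # step (6): privatized gradient
        if t in observable_rounds:                            # step (9): the adversary's view
            observations.append((t, copy.deepcopy(model.state_dict()), g_priv))
        train_one_epoch(model, train_set)                     # step (7): one epoch of training
    v_hat = adversary.infer(observations)                     # step (10): adversary's inference
    return v_hat == v                                         # the adversary wins if v_hat equals v
```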
In the above general game, the flexibility of the joint distribution allows capturing various scenarios. Rather than explicitly defining this joint distribution, which in any case depends on the unknown data distribution, we implicitly define it through transformations/filtering of a given dataset. Further, providing the adversary with knowledge of the distribution is realized by providing the adversary with suitable shadow datasets drawn according to such transformations and filtering operations.
Attribute Inference Game. The sensitive variable is a discrete attribute within the features. Sampling is accomplished by drawing it uniformly or according to its marginal empirical distribution within the given training dataset. Drawing the data batch according to the corresponding conditional distribution is accomplished by uniformly selecting data samples from the entire training dataset with features that possess the attribute.
Property Inference Game. This scenario is similar to attribute inference, except that the sensitive variable is a property associated with, but external to the features of, each data sample (i.e., it may be some meta-data property of each sample, but excluded from the features). Drawing the data batch is handled similarly to the attribute inference case.
Distributional Inference Game. In this class of scenarios, we have a general set of transformations, which are selected by the sensitive variable. Each transformation corresponds to implicitly realizing the corresponding conditional distribution, by applying a general transformation that involves selective sampling from the overall training set. For example, the selection of the sensitive variable may indicate a particular proportion for the prevalence of a certain attribute or property, and thus the corresponding transformation would select batches of data according to that proportion.
User Inference Game. This is a special case of property inference, where the sensitive variable specifically corresponds to the identity of the individual that provided the corresponding data samples. Unlike other inference attacks, the sensitive variable, as it represents identity, does not take on a fixed set of values. To make the attack more operational, similar to prior work on data reconstruction(hayes2024bounding), we assume the inference is over a fixed set of candidate users randomly sampled from the population at the beginning of each game.
3.3. Threat Model
In this work, we assume the adversary has no control over the training protocol and only passively observes gradients as the model is being updated. In practice, the adversary could be an honest-but-curious parameter server(li2014scaling) in a distributed or federated learning setting, a node in decentralized learning(dhasade2023decentralized), or an attacker who eavesdrops on the communication channel. The game defined in Definition 3.1 is similar to games defined in many prior works(carlini2022membership; yeom2018privacy), which capture average-case privacy, as the performance of the attack is measured by its expected value over the random draw of data samples. In Section 7, we consider an alternative game where the data samples are adversarially chosen to provide a measure of worst-case privacy for privacy auditing.
We consider the following aspects that reflect different levels of the adversary’s knowledge:
- •
Knowledge of Data Distribution. Similar to many prior works on inference attacks(shokri2017membership; melis2019exploiting; ye2022enhanced; suri2022formalizing; carlini2022membership; liu2022ml; chaudhari2023snap), we model the adversarial knowledge of the data distribution through access to data samples drawn from this distribution, which are referred to as shadow datasets. A larger shadow dataset implies a more powerful adversary that has more knowledge about the underlying data distribution. For discrete attributes, we additionally consider a more informed adversary who knows the prior distribution of the attribute, which can be estimated by drawing a large amount of data from the population.
- •
Continuous Observation. We use the observable set to capture the adversary's ability to observe the gradients continuously. Intuitively, an adversary observing multiple rounds should perform better than a single-round adversary. Assuming a powerful adversary is beneficial for analyzing and auditing defenses. For instance, the privacy analysis in DP-SGD(abadi2016deep) assumes that the adversary has access to all rounds of gradients.
- •
Adaptive Adversary. When evaluating defenses, in addition to the static adversary, we consider a stronger adaptive adversary who is aware of the underlying defense mechanism. This has been demonstrated as pivotal for thoroughly assessing the effectiveness of security defenses(carlini2017adversarial; tramer2020adaptive).
4. Attack Construction
4.1. Inference Attacks
The objective of the inference adversary is to infer the sensitive variable from the observed gradient, i.e., to model the posterior distribution of the sensitive variable given the gradient. The general strategy for implementing inference attacks from gradients is to exploit the following two adversarial assumptions defined in the unified inference game in Section 3.2. First, the adversary possesses knowledge about the underlying population data distribution. Operationally, this implies that the adversary is able to draw data samples with the corresponding sensitive variable to construct a shadow dataset. Second, the adversary has access to the training algorithm and the current model parameters, which allows the adversary to compute the gradients for each batch of samples within the shadow dataset. With this information, the adversary can train a predictive model to approximate the posterior.
Attribute & Property Inference. The attribute and property inference attacks follow a similar attack procedure, with the difference being whether the sensitive variable is internal or external to the data record. Specifically, the adversary first constructs a shadow dataset by sampling from the population distribution. Then the adversary draws data batches from the shadow dataset through bootstrapping: it repeatedly samples a value of the sensitive variable and then draws records with that value from the shadow dataset. Next, for each data batch, the adversary computes the gradient using the current model parameters. This results in a set of labeled gradient and sensitive-variable pairs, which can then be used to train an ML model that predicts the sensitive variable from gradient observations. In practice, we find that it is beneficial to train the predictive model using a balanced dataset, which can be seen as modeling the class-conditional gradient distribution, and to capture the prior knowledge in a separate term. This provides more stable performance for small shadow dataset sizes and skewed sensitive variable distributions.
It is worth noting that here we consider a more restrictive setting for attribute inference in which the adversary holds no additional knowledge about the private data besides the gradients, in contrast to prior works that assume the private record to be partially known (e.g., (lyu2021novel; driouich2022novel) assume that everything is known except for the sensitive attribute). Our framework can be easily extended to the general case where the adversary holds arbitrary additional knowledge about the private record by training a predictive model using shadow data drawn from the corresponding conditional distribution. A minimal sketch of the attribute/property attack pipeline is given below.
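The sketch below illustrates the shadow-gradient pipeline for AIA/PIA, assuming a hypothetical compute_grad helper and numpy-formatted shadow data; the random forest mirrors the adversary's model described in Section 5, though the number of trees and batches shown are illustrative defaults rather than the paper's exact settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def build_attack_model(shadow_x, shadow_v, model, compute_grad,
                       n_batches=500, batch_size=16):
    """Train a predictive model for p(v | g) from shadow data.
    shadow_x: shadow records; shadow_v: their sensitive attribute/property values."""
    values = np.unique(shadow_v)
    grads, labels = [], []
    for _ in range(n_batches):
        v = np.random.choice(values)                              # balanced sampling of the sensitive value
        idx = np.random.choice(np.where(shadow_v == v)[0], size=batch_size)
        g = compute_grad(model, shadow_x[idx])                    # gradient of the shadow batch (numpy array)
        grads.append(np.ravel(g))                                 # flatten (pooled in practice for dimensionality)
        labels.append(v)
    clf = RandomForestClassifier(n_estimators=100)                # adversary's predictive model (illustrative size)
    clf.fit(np.stack(grads), labels)
    return clf                                                    # clf.predict_proba(g) approximates the posterior
```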
Distributional Inference. In distributional inference, the sensitive variable is the index of the ratio bin to which the property ratio belongs. The adversary first samples a random bin index and then samples a property ratio within that bin. Next, the adversary draws a data batch in which the sampled fraction of records possess the property and the rest do not, and derives the corresponding gradient. This process is repeated by the adversary to collect a set of labeled gradient and ratio-bin pairs to train a predictive model. We note that in the setting of distributional inference, the sensitive variable is a series of ordinal numbers indicative of the continuous property ratio and thus should not be treated as a regular multi-class classification target. To utilize the ordering information, we adopt a simple strategy for ordinal classification(frank2001simple), which transforms the K-class ordinal classification problem into K−1 binary classification problems. Specifically, the adversary trains a series of K−1 binary classifiers, with the k-th classifier trained to decide whether or not the bin index is larger than k. The final posterior probability of each bin can then be obtained by differencing the cumulative predictions of adjacent classifiers, as sketched below.
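A minimal sketch of this ordinal decomposition, assuming the K−1 binary classifiers have already been trained and only their predicted probabilities Pr(v > k | g) are supplied:

```python
import numpy as np

def ordinal_posterior(binary_probs):
    """Combine K-1 binary classifiers into a posterior over K ordinal bins.
    binary_probs[k-1] = Pr(v > k | g) for k = 1, ..., K-1."""
    p_gt = np.concatenate(([1.0], np.asarray(binary_probs), [0.0]))  # Pr(v > 0) = 1, Pr(v > K) = 0
    post = p_gt[:-1] - p_gt[1:]                                      # Pr(v = k) = Pr(v > k-1) - Pr(v > k)
    post = np.clip(post, 0.0, None)                                  # guard against inconsistent classifiers
    return post / max(post.sum(), 1e-12)                             # renormalize into a valid distribution
```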
User Inference. In contrast to other inference attacks where the sensitive variable is sampled from a well-defined set of values, in user inference the sensitive variable is the user's identity, which does not take on a fixed set of values. Moreover, the identities that occur at test time are likely not seen during the development of the attack model. As a result, the posterior cannot be directly modeled. To resolve this, we employ a training strategy analogous to the prototypical network(snell2017prototypical) for few-shot learning. Specifically, we first train a neural network composed of an encoder that maps the gradient vector to a continuous embedding space and a classifier that takes the embedding as input and outputs the predicted user identity. Given gradient and sensitive variable pairs created from the shadow dataset, and since the number of available users in the shadow dataset is finite, the neural network can be trained end-to-end using a standard multi-class classification loss such as cross-entropy. After training, the classifier is discarded. At the time of inference, the adversary is provided with an observed gradient and a set of candidate data batches, one per candidate user. The adversary then derives the corresponding set of candidate gradients based on the current model parameters. Finally, the adversary computes the probability of each candidate identity after observing the gradient by comparing the embedding of the observed gradient with the embeddings of the candidate gradients, as sketched below.
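A minimal sketch of the candidate-scoring step, assuming a trained gradient encoder and gradients already computed for each candidate batch; the softmax over negative embedding distances is one natural instantiation of the comparison described above.

```python
import torch
import torch.nn.functional as F

def score_candidates(encoder, observed_grad, candidate_grads):
    """Score each candidate user by comparing the embedding of the observed
    gradient with embeddings of gradients computed on the candidates' data.
    observed_grad: tensor of shape (1, d); candidate_grads: list of (1, d) tensors."""
    z = encoder(observed_grad)                                   # embedding of the observed gradient, (1, e)
    z_cands = torch.cat([encoder(g) for g in candidate_grads])   # candidate embeddings, (K, e)
    dists = torch.cdist(z, z_cands).squeeze(0)                   # Euclidean distance to each candidate, (K,)
    return F.softmax(-dists, dim=0)                              # closer candidate -> higher probability
```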
4.2. Continual Attack and Adaptive Attack
The inference attack can be further improved if the adversary has access to multiple rounds of gradients or the defense mechanism.
Inference under Continual Observation. In cases where continual observation of the gradients is allowed, the adversary can use the set of observed gradients from multiple rounds to improve the attack. A naive solution would be to train a model that directly approximates the posterior given all observed gradients. However, this is generally infeasible in practice because of the high dimensionality of the joint observation. Instead, the adversary can use Bayesian updating to accumulate adversarial knowledge. Specifically, given a set of observed gradients, the log-posterior can be formulated as
$$
\begin{aligned}
\log p\big(v \mid g_{(1)}, \dots, g_{(T)}\big)
&= \log \frac{p\big(g_{(1)}, \dots, g_{(T)} \mid v\big)\, p(v)}{p\big(g_{(1)}, \dots, g_{(T)}\big)} && (1) \\
&= \log p\big(g_{(1)}, \dots, g_{(T)} \mid v\big) + \log p(v) - \log p\big(g_{(1)}, \dots, g_{(T)}\big) && (2) \\
&\approx \textstyle\sum_{t=1}^{T} \log p\big(g_{(t)} \mid v\big) + \log p(v) - \log p\big(g_{(1)}, \dots, g_{(T)}\big) && (3) \\
&= \textstyle\sum_{t=1}^{T} \log p\big(g_{(t)} \mid v\big) + \log p(v) + \mathrm{const} && (4) \\
&\approx \textstyle\sum_{t=1}^{T} \log \hat{p}_t\big(g_{(t)} \mid v\big) + \log p(v) + \mathrm{const}, && (5)
\end{aligned}
$$
where Eq. (3) makes the approximating assumption that the gradients are conditionally independent given $v$. Since the marginal $p(g_{(1)}, \dots, g_{(T)})$ does not depend on $v$, it can be treated as a constant (and it further factorizes across rounds if the gradients are additionally mutually independent). In Eq. (5), the prior term $p(v)$ is known, and each per-round likelihood $p(g_{(t)} \mid v)$ is approximated by a predictive model $\hat{p}_t$ trained afresh for each round of observation. The sensitive variable can thus be estimated as $\hat{v} = \arg\max_v \big[\sum_{t} \log \hat{p}_t(g_{(t)} \mid v) + \log p(v)\big]$; a minimal sketch of this aggregation is given below.
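The sketch below implements this aggregation under the assumption of one trained per-round predictive model per observed round (as constructed in Section 4.1, on balanced data so its outputs act as per-round likelihood scores up to a constant) and a log-prior vector over the possible values of the sensitive variable with a consistent class ordering across rounds.

```python
import numpy as np

def multi_round_posterior(round_models, round_grads, log_prior):
    """Aggregate per-round predictions following Eq. (5):
    score(v) = sum_t log p_hat_t(g_(t) | v) + log p(v), up to an additive constant."""
    scores = np.array(log_prior, dtype=float)                # log p(v) for each candidate value
    for clf, g in zip(round_models, round_grads):
        probs = clf.predict_proba(np.ravel(g).reshape(1, -1))[0]  # per-round prediction for g_(t)
        scores += np.log(probs + 1e-12)                      # accumulate per-round log-likelihood scores
    return int(np.argmax(scores))                            # estimated sensitive variable v_hat
```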
Adaptive Attack. The adversary can design adaptive attacks if the defense mechanism is known. Instead of training the predictive model on clean gradients, a simple strategy for the adaptive attack is to apply the same defense mechanism to the shadow data's gradients and use the transformed gradient and sensitive-variable pairs to train the predictive model. As we will show in Section 6, this simple strategy is sufficient to bypass several heuristic-based defenses.
5. Attack Evaluation
In this section, we evaluate the four inference attacks on datasets with different modalities to investigate the impact of various adversarial assumptions. The findings presented below indicate that the key factors affecting attack performance are: (1) Continual Observation: an adversary can improve the inference by accumulating information from multiple rounds of updates; (2) Batch Size: when the private information is shared across the batch, using a large batch averages out the effect of the other variables, making it easier to infer the sensitive variable; and (3) Adversarial Knowledge: the attack improves with the amount of knowledge of the data distribution (as captured by the number of available shadow data points).
5.1. Experimental Setup
5.1.1. Datasets and Model Architecture.
We consider the following five datasets with different data modalities (tabular, speech, and image) in our experiments.
| Dataset | Type | Task Label | Sensitive Variable | Correlation |
|---|---|---|---|---|
| Adult | Tabular | Income | Gender | -0.1985 |
| Health | Tabular | Mortality | Gender | -0.1123 |
| CREMA-D | Speech | Emotion | Gender | -0.0133 |
| CelebA | Image | Smiling | High Cheekbones | 0.6904 |
| UTKFace | Image | Age | Ethnicity | -0.1788 |
- (1)
Adult(misc_adult_2) is a tabular dataset containing records from the 1994 Census database. We train a fully-connected neural network to predict the person's annual income (whether or not it exceeds $50K a year) and use gender (male or female) as the private attribute. For property and distributional inference attacks, the sex feature is removed.
- (2)
Health(health_heritage) (Heritage Health Prize) is a tabular dataset from Kaggle that contains de-identified medical records of over patients’ inpatient or emergency room visits. We train a fully-connected neural network to predict whether the Charlson Index (an estimate of patient mortality) is greater than zero. We use the patient’s gender (male, female, or unknown) as the private attribute, which is removed for property and distributional inference attacks.
- (3)
CREMA-D(cao2014crema) is a multi-modal dataset that contains emotional speech recordings collected from actors ( male and female). Speech signals are pre-processed using OpenSMILE(eyben2010opensmile) to extract a total number of utterance-level audio features for automatic emotion recognition. Following prior work(feng2021attribute), we use EmoBase which is a standard feature set that contains the MFCC, voice quality, fundamental frequency, and other statistical features, resulting in a feature dimension of for each utterance(haider2021emotion). We train a fully connected neural network to classify four emotions, including happy, sad, anger, and neutral. We use the speaker’s gender (male or female) as the target property for inference attacks.
- (4)
CelebA(liu2015deep) contains face images, each of which is labeled with binary attributes. We resize the images to pixels and train a convolutional neural network to classify whether the person is smiling and use whether or not the person has high cheekbones as the target property.
- (5)
UTKFace(zhang2017age) consists of over face images annotated with age, gender, and ethnicity. We resize the images to pixels and select images from the four largest ethnicity groups (White, Black, Asian, or Indian) to train a convolutional neural network to classify three age groups (, , and years old). Ethnicity is used as the target property.
We split each dataset three-fold into a training set, a testing set, and a public set. The training set is considered to be private and is only used for model training and inference attack evaluation. The testing set is reserved for evaluating the utility of the ML model. The public set is accessible to both the adversary and the private learner and can be used as the shadow dataset for training the adversary's predictive model or for developing defenses as described in Section 6. We provide a summary of the datasets in Table 1, including the task label, the sensitive variable for AIA and PIA, and the Pearson correlation between the two.
5.1.2. Metrics.
We define the following metrics for measuring inference attack performance:
- (1)
Attack Success Rate (ASR): We measure the attack performance by the number of times the adversary successfully guesses the sensitive variable, i.e., , where is the total number of trials (i.e., repetitions of the inference game).
- (2)
AUROC: We additionally report the area under the receiver operating characteristic curve (AUROC). For sensitive variables that have more than two classes, we report the macro-averaged AUROC.
- (3)
Advantage: We follow prior work(yeom2018privacy; guo2023analyzing) and use the advantage metric to measure the gain in the adversary's inferential power upon observing the gradients. Specifically, the advantage of an adversary is defined by comparing its success rate to that of a baseline adversary who does not observe the gradients. The Bayes optimal strategy for the baseline adversary without observing gradients is to guess the majority class.
- (4)
TPR@FPR: Besides average performance metrics, recent work on membership inference(carlini2022membership; ye2022enhanced) argues the importance of understanding the privacy risk on worst-case training data by examining the low false positive rate (FPR) region. Inspired by this, we additionally report the true positive rate (TPR) at an FPR of 1%. (A minimal sketch of how these metrics are computed follows the list.)
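The following is a minimal sketch of these metrics for a binary sensitive variable, using scikit-learn; the baseline success rate is supplied by the caller (e.g., the majority-class frequency), and the names are illustrative.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def attack_metrics(v_true, v_pred, v_scores, baseline_asr, target_fpr=0.01):
    """ASR, advantage over the gradient-free baseline, AUROC, and TPR at a target FPR."""
    asr = np.mean(np.asarray(v_true) == np.asarray(v_pred))   # attack success rate over all trials
    advantage = asr - baseline_asr                            # gain over the baseline (majority-class) adversary
    auroc = roc_auc_score(v_true, v_scores)                   # area under the ROC curve
    fpr, tpr, _ = roc_curve(v_true, v_scores)
    tpr_at_fpr = np.interp(target_fpr, fpr, tpr)              # TPR at the target FPR (e.g., 1%)
    return {"asr": asr, "advantage": advantage, "auroc": auroc, "tpr_at_fpr": tpr_at_fpr}
```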
5.1.3. Adversary’s Model.
We conducted preliminary experiments with various types and configurations of ML models and found that random forest with estimators performs the best (especially in the low FPR region) for estimating the posterior in AIA, PIA, and DIA with small shadow dataset sizes. For UIA, we use a fully-connected network with one hidden layer as the encoder. The embedding dimension is set to for the CREMA-D dataset and for the CelebA dataset. As the gradient vector is extremely high dimensional (e.g., the gradient dimensions for the CREMA-D and CelebA datasets are and , respectively), we apply a one-dimensional max-pooling layer before the adversary's predictive model, with a kernel size of for tabular datasets and for other datasets, for dimensionality reduction.
5.1.4. Other Attack Settings.
We assume the model parameters are randomly initialized at the beginning of the inference game. During the game, the model parameters are updated at each epoch using SGD with a learning rate of . We evaluate AIA on the tabular datasets and UIA on datasets that contain user labels (CREMA-D and CelebA), while PIA and DIA are evaluated on all datasets. For AIA, PIA, and DIA, we use a training set of samples and a balanced public set that contains a default number of samples equally divided among the sensitive attribute/property classes. For UIA, we first filter out user identities that contain fewer samples than the batch size and then split the dataset according to user identities. We select and users on the CREMA-D dataset, and and users on the CelebA dataset, as the training and public sets, respectively. We select more users on the CelebA dataset because the majority of users only have very few samples (). We set for DIA, i.e., inferring over ratio bins (), and for UIA, i.e., choosing from candidate users. For AIA and PIA, we assume the adversary has access to a prior of the sensitive variable that is estimated from the population. For DIA and UIA, we assume the adversary holds an uninformed prior, and thus the baseline is simply random guessing. The default batch sizes are for AIA and PIA, for DIA, and for UIA. For AIA, PIA, and DIA, the total number of trials of each experiment is equal to the number of random draws of training batches; for UIA, it is the number of random draws of candidate sets. We repeat each experiment with different random seeds and report the mean and standard deviation of the results.
5.2. Evaluation of Inference Attacks
We evaluate each type of inference attack with a small shadow dataset ( samples) and compare the results of single-round attacks (where the adversary only observes a single round of gradients) to multi-round attacks (where the adversary gets continual observation of the gradients). Due to space limits, we only include a snapshot of the results (one dataset per attack) in Figure 2 and provide the full results in Appendix Figure LABEL:fig:sr_mr.
Attribute Inference.We present the results of AIA in FigureLABEL:fig:sr_mr_aia. We observe that the adversary is able to infer the sensitive attribute with high confidence using only shadow data samples. For instance, on the Adult dataset, the multi-round adversary is able to achieve a high average AUROC of and a TPR@FPR of . On the Health dataset, however, the AUROC of the multi-round adversary reduces slightly to while the TPR@FPR drops drastically to . This is likely because the sensitive attribute on the Health dataset contains an “unknown” class () that is uncorrelated with other features, making it hard to estimate statistically.
Property Inference.FigureLABEL:fig:sr_mr_pia depicts the results of PIA, where we observe that the adversary is able to achieve high performance across all five datasets.Namely, the average AUROCs of the multi-round adversary on the Adult, Health, CREMA-D, CelebA, and UTKFace datasets are , , , , and , respectively.This consistent high attack performance is in contrast to the general low correlation between the sensitive properties and the task labels across all datasets as indicated in Table1 (except for CelebA, where a spurious relationship exists), which suggests that the information leakage observed is intrinsic to the computed gradients(melis2019exploiting), regardless of the specific data type and learning task.
Distributional Inference.FigureLABEL:fig:sr_mr_dia summarizes the results of DIA.Although distributional inference is a more challenging task (-class ordinal classification), we observe that the multi-round adversary still performs fairly well with a batch size of , achieving an average AUROC of , , , , and on the Adult, Health, CREMA-D, CelebA, and UTKFace datasets, respectively.
User Inference.We report the results of UIA in FigureLABEL:fig:sr_mr_uia. We observe that the adversary is able to identify the user with relatively high confidence on the CelebA dataset, with an average AUROC and TPR@FPR of and for the multi-round adversary. On the CREMA-D dataset, the average AUROC of the multi-round adversary is only , which may be due to the low identifiability of the features extracted for emotion recognition.
General Observations. Additionally, we have the following general observations across different types of attacks and datasets. First, the performance of single-round attacks decreases as the training progresses. This is because the gradients of the training data become smaller in magnitude as the training loss decreases, and thus the variation within these gradients becomes harder to capture. Second, on most datasets, the multi-round attack performs better than any single-round attack, demonstrating the effectiveness of the Bayesian attack framework. Third, we observe very similar performance for AIA and PIA on the tabular datasets. This indicates that whether the sensitive variable is internal or external to the data features does not affect the inference performance.
5.3. Attack Analyses
We investigate the following factors that may affect the performance of inference attacks.
Impact of Batch Sizes. In Figure 3, we study the impact of varying batch sizes on the performance of the inference attacks. We report the results on the Adult dataset for AIA, PIA, and DIA, and results on the CREMA-D dataset for UIA. We observe that the performance of all four considered inference attacks improves as the batch size increases. This is because the records within the batch are sampled from the same conditional distribution. As the private information is shared across the batch, a larger batch size amplifies the private signal and suppresses the other varying signals, thereby improving inference performance on the sensitive variable. For distributional inference, the difference in the number of samples with the property between each ratio bin also increases with the batch size and thus becomes easier to distinguish. For AIA and PIA, we observe that the gap between the single-round adversary (solid lines) and the multi-round adversary (dashed lines) is the largest when the batch size is , and then gradually reduces as the batch size increases further due to performance saturation. This result suggests that simply aggregating more data does not protect gradients from inference; in fact, it may even increase the privacy risk in distributed learning where data are sampled from the same conditional distribution, indicating that data aggregation alone is insufficient to achieve meaningful privacy in these settings.
Impact of Adversary's Knowledge. To investigate the impact of the adversary's knowledge on the performance of the attack, we use PIA as an example and plot the attack performance with varying shadow data size and number of observations on the Adult dataset in Figure 5. We observe the general trend that the attack performance increases with the number of observations and available shadow data samples. Interestingly, the attack performance does not always increase monotonically along each axis. For instance, given a small shadow dataset of only samples, the AUROC of an adversary that observes rounds does not outperform an adversary that only observes rounds of gradients. This is likely because when the model is near convergence, the gradients are small and thus have low variance, which requires more shadow data to accurately estimate the posterior. Such errors in the predictive model will accumulate when using the summation of the log-likelihoods of all single rounds to approximate the joint distribution (Eq. (3)), eventually leading to suboptimal performance.
Impact of Model Size. In Figure 4, we use PIA as an example to study the impact of the machine learning model size. We control the size of the models by varying the model width. Specifically, for fully connected neural networks, we control the number of neurons in the hidden layer. For convolutional neural networks, we control the number of output channels of the first convolutional layer, with the remaining convolutional layers scaled accordingly. We observe that the attack performance tends to improve slightly with increasing model size, except for the Adult and UTKFace datasets, where performance is saturated. However, most of these improvements are not statistically significant (falling within the margin of error) and thus do not allow for a conclusive statement. We include additional results for the other types of inference attacks in Appendix Figure LABEL:fig:model_size, where we make similar observations. These results demonstrate that all four types of inference attacks generalize to larger model sizes.
6. Defenses
In this section, we investigate five types of strategies for defending against inference from gradients, under both static and adaptive adversaries, and analyze their performance from an information-theoretic view. The main takeaways from our analyses are: (1) heuristic defenses can defend against static adversaries but are ineffective against adaptive adversaries, (2) DP-SGD(abadi2016deep) is the only considered defense that remains effective against adaptive attacks, at the cost of sacrificing model utility, and (3) reducing the mutual information between the released gradients and the sensitive variable is a key ingredient of a successful defense.
6.1. Privacy Defenses Against Inference
Privacy-enhancing strategies in machine learning generally follow two principles: data minimization and data anonymization. Data minimization strategies, such as the application of cryptographic techniques (e.g., Secure Multi-party Computation and Homomorphic Encryption) and Federated Learning, aim to reveal only the minimal amount of information that is necessary for achieving a specific computational task, and only to the necessary parties. As shown by prior work(truex2019hybrid; elkordy2023much; lam2021gradient; kerkouche2023client), data minimization alone may not provide sufficient privacy protection and, thus, should be applied in combination with data anonymization defenses to further reduce privacy risks. However, for heuristic-based privacy defenses, it is important to conduct a careful evaluation of their effectiveness against adaptive adversaries. We consider the following five types of representative defenses from the current literature in our experiments:
- (1)
Gradient Pruning. Gradient pruning creates a sparse gradient vector by pruning gradient elements with small magnitudes. This strategy has been used as a baseline for privacy defense in federated learning(zhu2019deep; sun2021soteria; wu2023learning). By default, we set the pruning rate to be .
- (2)
SignSGD. SignSGD(bernstein2018signsgd) binarizes the gradients by applying an element-wise sign function, thereby compressing the gradients to 1 bit per dimension. Similar to gradient pruning, it has been explored in prior work(wu2023learning; yue2023gradient) as a defense against data reconstruction attacks in federated learning. Along similar lines, Kerkouche et al.(kerkouche2020federated) evaluated SignFed, a variant of the SignSGD protocol adapted for federated settings, and found it to be more resilient to privacy and security attacks than the standard federated learning scheme.
- (3)
Adversarial Perturbation. Inspired by prior research on protecting privacy through adopting evasion attacks in adversarial machine learning(jia2018attriguard; jia2019memguard; shan2020fawkes; o2022voiceblock), we explore a heuristic defense strategy against inference attacks that injects adversarial perturbation into the gradients. Specifically, at each round of observation, the defender first trains a surrogate neural network classifier to predict the sensitive variable from the gradient using a public dataset (same as the shadow dataset). Then, the defense generates a protective adversarial perturbation to cause this classifier to misclassify the perturbed gradients. We adopt norm-bounded projected gradient descent (PGD)(madry2018towards), which generates the adversarial example (perturbed gradient) by iteratively taking gradient steps. For AIA, PIA, and DIA, this defense generates an untargeted adversarial perturbation through gradient ascent within a norm ball centered around the original gradient. For UIA, the defense generates a targeted adversarial perturbation through gradient descent, to make the gradients misrecognized as a target user. By default, we set the total number of steps, the step size, and the perturbation radius to , , and , respectively.
- (4)
Variational Information Bottleneck (VIB). This defense inserts an additional VIB layer(alemi2016deep) that splits the neural network into a probabilistic encoder and a decoder , where is a latent representation that follows a Gaussian distribution.An additional Kullback-Leibler (KL) divergence term is introduced to the training loss: , where is the standard Gaussian. Optimizing this VIB objective reduces the mutual information between the representation and the input by minimizing a variational upper bound. Prior work suggests that this helps to reduce the model’s dependence on input’s sensitive attributes and improve privacy(song2019overlearning; scheliga2022precode; scheliga2023privacy). We set as the default for our experiments.
- (5)
Differential Privacy (DP-SGD). Differential privacy (DP)(dwork2006calibrating) provides a rigorous notion of algorithmic privacy. DP-SGD(abadi2016deep) achieves DP guarantees for gradient-based training by clipping per-example gradients and adding calibrated Gaussian noise to the aggregated gradient at each step. (A minimal sketch of several of these gradient transformations follows the list.)
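Below is a minimal sketch of three of these defenses viewed as transformations of the released gradient; the default parameters are illustrative, and the Gaussian mechanism shown here only illustrates the batch-level clip-and-noise step (DP-SGD proper clips each per-example gradient before averaging).

```python
import torch

def prune_gradient(g, prune_rate=0.9):
    """Gradient pruning: zero out the fraction of entries with the smallest magnitudes."""
    k = int(g.numel() * prune_rate)
    threshold = g.abs().flatten().kthvalue(k).values if k > 0 else g.new_tensor(0.0)
    return torch.where(g.abs() > threshold, g, torch.zeros_like(g))

def sign_gradient(g):
    """SignSGD: keep only the element-wise sign (1 bit per dimension)."""
    return torch.sign(g)

def gaussian_perturb(g, clip_norm=1.0, sigma=1.0):
    """Clip the gradient norm and add Gaussian noise (batch-level sketch only;
    DP-SGD clips each per-example gradient before averaging)."""
    scale = torch.clamp(clip_norm / (g.norm() + 1e-12), max=1.0)
    g = g * scale
    return g + torch.normal(0.0, sigma * clip_norm, size=g.shape)
```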
6.2. Defense Evaluation
In Figure 6, we compare the performance of defenses against static and adaptive adversaries. Due to space limits, here we focus on PIA on the Adult dataset. The full results, including all four types of inference attacks, are available in Appendix Figure LABEL:fig:defenses_full. We observe that heuristic defenses such as Gradient Pruning, SignSGD, and Adversarial Perturbation can successfully defend against static adversaries in terms of reducing the advantage of the adversary to zero. However, these defenses are ineffective against adaptive adversaries aware of the defense. For instance, in the case of gradient pruning, the adaptive adversary can achieve a high advantage () that is only slightly decreased compared to no defense (). Interestingly, in the case of Adversarial Perturbation, we found that the adaptive adversary's performance is increased, rather than decreased, compared to no defense, reaching a perfect advantage and AUROC of . For the rest of the defenses, namely VIB and DP-SGD, the attack performance is consistent across static and adaptive adversaries. However, only DP-SGD manages to effectively reduce the advantage of the adaptive adversary to near zero.
To understand the privacy-utility trade-off of these defenses, in Figure 7 we plot the PIA adversary's advantage evaluated on the training data against the AUROC of the network in predicting the task label on the test split of the Adult dataset. We consider three different sets of parameters for each type of defense (details in the Appendix). We observe that in the case of static adversaries, SignSGD achieves the best trade-off, approximating the ideal defense (upper left corner) by reducing the advantage to zero without affecting model utility. However, in the case of adaptive adversaries, only DP-SGD provides a meaningful notion of privacy, at the cost of diminished model utility. Moreover, there may exist stronger adversaries that are more resilient against these defenses. For instance, in Table 2, we show that an adversary using principal component analysis (PCA) for dimensionality reduction can bypass a DP-SGD defense at a noise level that suffices to defend against an adversary using max-pooling, and requires larger noise to thwart.
In the next section, we analyze the underlying principles of these defenses and the necessary ingredients for a successful defense.
6.3. Defense Analyses
In this section, we provide an information-theoretic perspective for understanding and analyzing defenses against inference attacks from gradients.
Information-theoretic View on Inference Privacy. The inference attacks captured in the unified game can be viewed as performing statistical inference(du2012privacy) on properties of the underlying data distributions upon observing samples of the gradients. A well-known information-theoretic result for analyzing inference is Fano's inequality, which gives a lower bound on the estimation error of any inference adversary. Formally, consider any arbitrary data release mechanism that provides an observation $W$ computed from the private discrete random variable $V$ supported on the alphabet $\mathcal{V}$. Any inference from the observation must produce an estimate $\hat{V}$ that satisfies the Markov chain $V \rightarrow W \rightarrow \hat{V}$. Let $E$ be a binary random variable that indicates an error, i.e., $E = 1$ if $\hat{V} \neq V$. Then we have
$$H_b\big(P(E{=}1)\big) + P(E{=}1)\,\log\!\big(|\mathcal{V}|-1\big) \;\ge\; H\big(V \mid \hat{V}\big), \qquad (6)$$
where $H_b$ is the binary entropy. For $H(V \mid \hat{V})$, a standard treatment is to consider the mutual information $I(V;\hat{V}) \le I(V;W)$ and $H(V \mid \hat{V}) = H(V) - I(V;\hat{V})$, and thereby we can obtain a lower bound on the error probability:
$$P(E{=}1) \;\ge\; \frac{H(V) - I(V;W) - 1}{\log|\mathcal{V}|}. \qquad (7)$$
Note that this bound is vacuous when $I(V;W) \ge H(V) - 1$, and a slightly tighter bound can be obtained by considering $H_b(P(E{=}1))$ exactly (rather than using the approximating bound of $1$) and numerically computing the lowest error probability that satisfies the inequality in (6), as noted by prior work(guo2023analyzing). The bound in inequality (7) captures both the prior (via $H(V)$) and the cardinality of the sensitive variable alphabet, indicating that data with a large degree of uncertainty is hard to infer or reconstruct, which aligns with intuition from Balle et al.(balle2022reconstructing). Inequality (7) holds generically for any data release mechanism. In the context of inference from gradients, the adversary's goal is to obtain an estimate of $V$ upon observing the released gradient $\tilde{G}$, which can be described as the Markov chain $V \rightarrow \tilde{G} \rightarrow \hat{V}$. Since the adversary's success rate is $\mathrm{ASR} = 1 - P(E{=}1)$, one can get an immediate upper bound on the adversary's advantage:
$$\mathrm{Adv} \;=\; \mathrm{ASR} - \mathrm{ASR}_{\mathrm{base}} \;\le\; 1 - \frac{H(V) - I(V;\tilde{G}) - 1}{\log|\mathcal{V}|} - \mathrm{ASR}_{\mathrm{base}}. \qquad (8)$$
As $\mathrm{ASR}_{\mathrm{base}}$ is a constant, this indicates that reducing $I(V;\tilde{G})$ increases the lower bound on the error probability and consequently diminishes the adversary's advantage. This analysis can be generalized to continuous sensitive variables by applying the continuum Fano's inequality(duchi2013distance).
Understanding Defenses. Next, we provide an explanation of the failures of heuristic defenses using the above framework and argue that a successful defense should effectively minimize the mutual information between the released gradients and the sensitive variable. The Gradient Pruning and SignSGD defenses can be viewed as trying to reduce the number of transmitted bits in the gradients; however, this does not necessarily reduce the mutual information. The neural network classifier used in the Adversarial Perturbation defense is trained to minimize cross-entropy loss, which provides an approximate upper bound on the conditional entropy $H(V \mid \tilde{G})$ and serves as a proxy for estimating the mutual information $I(V;\tilde{G})$. However, generating adversarial perturbations against this fixed classifier does not necessarily reduce the mutual information, and likely increases it. This is because the gradient steps used to generate the protective perturbation also contain information about the sensitive variable; as the perturbation generation process is deterministic, an adaptive adversary can learn to pick up these patterns and gain additional advantage. In the case of VIB, the mechanism is stochastic, but optimizing the VIB objective only gradually reduces the mutual information between the latent representation and the input, which still does not guarantee a reduction in $I(V;\tilde{G})$ during the optimization process. By design, differential privacy is not intended to protect against statistical inference, as its goal is to preserve the statistical properties of the dataset while protecting the privacy of individual samples. However, an alternative information-theoretic interpretation of differential privacy is that it places a constraint on mutual information(bun2016concentrated; cuff2016differential). An easy way to see this is that by adding Gaussian noise to the gradients, the DP-SGD algorithm essentially creates a Gaussian channel between the true and released gradients, thereby placing a constraint on $I(G;\tilde{G})$, which further bounds $I(V;\tilde{G}) \le I(G;\tilde{G})$ according to the data processing inequality. More concretely, due to the Gaussian channel, we have an upper bound given by the channel capacity if the gradients satisfy an average power constraint $P$, where $d$ is the dimensionality of the gradient. One can obtain a stronger result in cases where the sensitivity is bounded (e.g., Theorem 2 in (guo2023analyzing)).
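Spelling out the channel-capacity bound referenced above (a standard Gaussian-channel result, stated here under the assumption that the released gradient is $\tilde{G} = G + N$ with $N \sim \mathcal{N}(0, \sigma^2 I_d)$ and that the gradients satisfy the average power constraint $\mathbb{E}[\|G\|^2] \le dP$):

```latex
I(V;\tilde{G}) \;\le\; I(G;\tilde{G}) \;\le\; \frac{d}{2}\,\log\!\left(1 + \frac{P}{\sigma^2}\right),
```

where the first inequality is the data processing inequality applied to the Markov chain $V \rightarrow G \rightarrow \tilde{G}$.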
It is worth noting that the goal of our analyses here is to provide a perspective for understanding the effectiveness of a class of defense strategies, rather than deriving tight bounds. Additionally, as mutual information is a statistical quantity, the mutual information interpretation of inference privacy inherently only captures the average-case privacy risk. In the next section, we provide a privacy auditing framework for empirically estimating the privacy risk by approximating the worst-case scenario.
| ε | Adversary Type | AUROC | TPR@1%FPR | ASR | Advantage |
|---|---|---|---|---|---|
| 96.90 | MaxPooling | 0.3004 ± 0.0773 | 0.0017 ± 0.0010 | 0.5732 ± 0.1124 | 0.0001 ± 0.0002 |
| 96.90 | PCA | 0.9825 ± 0.0112 | 0.7284 ± 0.1679 | 0.9437 ± 0.0222 | 0.8239 ± 0.0694 |
| 6.46 | PCA | 0.7010 ± 0.0278 | 0.0471 ± 0.0120 | 0.6995 ± 0.0091 | 0.0598 ± 0.0286 |
7. Empirical Estimation of Privacy Risk
In the privacy game defined in Definition 3.1, the data is randomly sampled from the distribution, which only captures the average-case privacy risk and therefore cannot be used for reasoning about the minimal level of noise required for ensuring a certain level of privacy, as it may underestimate the privacy risk in the worst case. To better understand the privacy risk in the worst-case scenario, we provide a privacy auditing framework for empirically estimating the privacy leakage of a specific type of inference attack, namely, attribute inference, by allowing the data to be chosen adversarially. We start with a formal definition of per-attribute privacy following prior work(ahmed2016social; ghazi2022algorithms):
Definition 7.1.
Per-attribute DP. A randomized mechanism $\mathcal{M}$ is $(\varepsilon, \delta)$-per-attribute DP if for all pairs of inputs $D, D'$ differing only on a single attribute and for all events $S$ defined on the output of $\mathcal{M}$, the following inequality holds: $\Pr[\mathcal{M}(D) \in S] \le e^{\varepsilon} \Pr[\mathcal{M}(D') \in S] + \delta$.
One can show that DP-SGD satisfies $(\varepsilon, \delta)$-per-attribute DP. However, it is hard to derive the privacy parameter analytically, as the per-attribute sensitivity of the gradient is not readily tractable and the common technique of gradient clipping only provides a very loose bound on sensitivity. Instead, we seek to obtain an empirical estimate of the per-attribute DP guarantee for each step through the following audit game.