Adversarial Training for Improving the Robustness of Deep Neural Networks

  • Author / Creator
    Hou, Pengyue
  • Since 2013, Deep Neural Networks (DNNs) have achieved human-level performance
    on various benchmarks. At the same time, it is essential to ensure their safety and
    reliability. A recent line of research questions the robustness of deep learning models
    and shows that adversarial samples carrying human-imperceptible noise can easily fool
    DNNs. Since then, many strategies have been proposed to improve the robustness of
    DNNs against such adversarial perturbations. Among these defense strategies, adversarial
    training (AT) is one of the most recognized methods and consistently yields
    state-of-the-art performance. It treats adversarial samples as augmented data and
    uses them in model optimization (a generic sketch of this procedure follows the
    record below).
    Despite its promising results, AT has two drawbacks: (1) poor
    generalizability on adversarial data (i.e., a large gap in robust accuracy between
    training and test data), and (2) a significant drop in the model's standard performance
    on clean data. This thesis tackles these drawbacks and introduces two AT strategies.
    To improve the generalizability of AT-trained models, the first part of the thesis
    introduces a representation-similarity-based AT strategy, namely self-paced adversarial
    training (SPAT). We investigate the imbalanced semantic similarity among
    categories in natural images and discover that DNN models are easily fooled by adversarial
    samples from their hard class pairs. With this insight, we propose SPAT
    to re-weight training samples adaptively during model optimization, forcing AT to
    focus on data from hard class pairs (a hedged sketch of such a re-weighting appears
    below).
    To address the second problem, the drop in performance on clean data, the
    second part of this thesis attempts to answer the question: to what extent can a
    model's robustness be improved without sacrificing its standard performance? Toward
    this goal, we propose a simple yet effective transfer-learning-based adversarial training
    strategy that disentangles the negative effects of adversarial samples on the model's
    standard performance (a generic illustration follows below). In addition, we introduce
    a training-friendly adversarial attack algorithm that boosts adversarial robustness
    without introducing significant training complexity. Extensive experiments demonstrate
    that, compared to prior art, this training strategy leads to a more robust model while
    preserving the model's standard accuracy on clean data.

  • Subjects / Keywords
  • Graduation date
    Fall 2022
  • Type of Item
    Thesis
  • Degree
    Master of Science
  • DOI
    https://doi.org/10.7939/r3-k3jb-cm76
  • License
    This thesis is made available by the University of Alberta Library with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.
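
Illustrative sketch 1 (not code from the thesis): the generic adversarial training
loop referenced in the abstract, in its standard PGD-based form (Madry et al., 2018),
written in PyTorch. All names and hyperparameters here (model, loader, epsilon,
alpha, num_steps) are assumptions chosen for illustration.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8/255, alpha=2/255, num_steps=10):
    """Craft L-infinity-bounded adversarial examples with projected gradient descent."""
    # random start inside the epsilon-ball, clipped to the valid pixel range
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1).detach()
    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # ascend the loss, then project back into the epsilon-ball around x
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-epsilon, epsilon)).clamp(0, 1).detach()
    return x_adv

def adversarial_training_epoch(model, loader, optimizer, device="cuda"):
    """One epoch of AT: adversarial examples serve as the (augmented) training data."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()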
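
Illustrative sketch 2 (an assumption, not the thesis's scheme): a SPAT-style
re-weighted AT loss. The abstract says only that samples from hard class pairs are
up-weighted adaptively; the hardness matrix and the weighting rule below are
placeholders for whatever representation-similarity measure the thesis actually uses.

import torch
import torch.nn.functional as F

def hard_pair_weights(logits_adv, y, hardness, base=1.0):
    """Up-weight samples whose adversarial prediction lands in a hard class pair.

    hardness: assumed [num_classes, num_classes] tensor on the same device as y,
    e.g. built from inter-class representation similarity or a running confusion
    matrix (both are guesses at the thesis's measure).
    """
    pred = logits_adv.argmax(dim=1)
    # weight = base + hardness of the (true class, predicted class) pair
    return base + hardness[y, pred]

def spat_like_loss(model, x_adv, y, hardness):
    logits = model(x_adv)
    per_sample = F.cross_entropy(logits, y, reduction="none")
    w = hard_pair_weights(logits.detach(), y, hardness)
    # normalize so re-weighting does not change the effective learning rate
    return (w * per_sample).sum() / w.sum()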
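
Illustrative sketch 3 (an assumed setup): transfer-learning-based adversarial
fine-tuning. The abstract does not say how the negative effects are disentangled;
one common transfer-learning pattern, shown here, starts from a standard-pretrained
network and adversarially fine-tunes only a chosen subset of parameters, e.g.
adversarial_finetune(model, loader, pgd_attack, list(model.fc.parameters())),
where model.fc is a hypothetical final layer.

import torch
import torch.nn.functional as F

def adversarial_finetune(model, loader, attack_fn, finetune_params, lr=1e-3, device="cuda"):
    """Fine-tune only `finetune_params` on adversarial data; the rest of the
    standard-pretrained network stays frozen (a guess, not the thesis's method)."""
    finetune_params = list(finetune_params)
    for p in model.parameters():
        p.requires_grad_(False)
    for p in finetune_params:
        p.requires_grad_(True)
    optimizer = torch.optim.SGD(finetune_params, lr=lr, momentum=0.9)
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        # gradients w.r.t. the input still flow even though most weights are frozen
        x_adv = attack_fn(model, x, y)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()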