Fall 2022
Since 2013, Deep Neural Networks (DNNs) have reached human-level performance on various benchmarks. At the same time, it is essential to ensure their safety and reliability. Recently, a line of research has questioned the robustness of deep learning models, showing that adversarial samples with...
Enhancing Adversarial Robustness Through Model Optimization on Clean Data in Deep Neural Networks
Fall 2024
Adversarial robustness has emerged as a critical area in deep learning due to the increasing application of deep neural networks (DNNs) and the consequent demand for their security. Adversarial examples, which are inputs modified with imperceptible perturbations to deceive DNNs, have garnered...
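To make the notion of "inputs modified with imperceptible perturbations" concrete, below is a minimal, illustrative sketch of the fast gradient sign method (FGSM), one standard way such perturbations are crafted. It is not taken from either thesis; the model, loss, and epsilon value are assumptions for illustration only.

```python
# Illustrative FGSM sketch (not the method of either thesis): nudge an input
# by epsilon in the direction of the sign of the loss gradient, so the change
# is small per pixel yet can flip the model's prediction.
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 8 / 255) -> torch.Tensor:
    """Return an adversarially perturbed copy of x (assumed scaled to [0, 1])."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Small step along the gradient sign: imperceptible to humans,
    # but often enough to deceive the network.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```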