  • Spring 2023

    Shah, Haseeb

    Gradient descent algorithms suffer from many problems when learning representations with fixed neural network architectures, such as reduced plasticity on non-stationary continual tasks and difficulty training sparse architectures from scratch. A common workaround is continuously adapting the neural...

  • Fall 2024

    Mesbahi, Golnaz

    If we aspire to design algorithms that can run for long periods, continually adapting to new, unexpected situations, then we must be willing to deploy our agents without tuning their hyperparameters over the agent's entire lifetime. The standard practice in deep RL—and even continual RL—is to...
