Data-Enabled Optimization of Building Operations

  • Author / Creator
    Zhang, Tianyu
  • Retrofitting buildings and optimizing their operation have been at the forefront of global efforts to reduce carbon emissions over the past few decades. Intelligent control of building systems, such as Heating, Ventilation, and Air Conditioning (HVAC), presents two clear benefits: it improves human comfort, and it reduces energy consumption and carbon emissions. However, the complex interplay between building systems, coupled with the high cost of data collection, poses a significant challenge for developing the accurate models and data that model-based and data-driven control schemes rely on. To address these challenges, this thesis explores the use of readily available data to (a) train control agents capable of striking an acceptable balance between energy consumption and the thermal and visual comfort of occupants; (b) evaluate a diverse population of control agents to find the most suitable one for transfer to a new building; (c) establish accurate personal thermal comfort models; (d) learn complex building dynamics; (e) assign space to occupants such that a better trade-off between energy consumption and thermal comfort can be achieved.

    To facilitate the training and evaluation of learning-based controllers, we implement an open-source simulation platform in Python. This platform, called COBS, enables modeling occupant behavior and learning a control policy online by interacting with building systems in simulation. The first contribution of this thesis is learning a near-optimal policy for controlling a subset of actuators and setpoints spanning multiple building systems, namely the HVAC, shading, and lighting systems, using a model-free Reinforcement Learning (RL) algorithm. We show that this controller achieves a better trade-off between energy consumption and human comfort than the controllers widely used in commercial buildings today. A notable extension is a framework for training Multi-Agent Reinforcement Learning (MARL)-based HVAC control policies and evaluating them offline. This framework builds a library of policies in source building(s), where training is inexpensive, by exploiting policy and environmental diversity in RL, ensuring robust performance in various target buildings even without retraining.
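
    The model-free RL loop described above can be sketched with a toy single-zone example. Everything here is invented for illustration: the thermal dynamics, the reward weights, and the action set are assumptions, and this is not COBS's actual interface.

    ```python
    import random

    class ToyZone:
        """Minimal single-zone thermal model (an illustrative stand-in
        for a building simulator; the dynamics are invented)."""
        def __init__(self, outdoor=5.0, comfort=21.0):
            self.outdoor = outdoor
            self.comfort = comfort
            self.temp = 18.0

        def step(self, heating_power):
            # First-order dynamics: heat input minus losses to outdoors.
            self.temp += 0.5 * heating_power - 0.1 * (self.temp - self.outdoor)
            energy_cost = heating_power
            comfort_penalty = abs(self.temp - self.comfort)
            # Reward trades off energy use against thermal discomfort.
            reward = -(energy_cost + 2.0 * comfort_penalty)
            return round(self.temp), reward

    def train(episodes=200, actions=(0, 1, 2), alpha=0.1, gamma=0.9, eps=0.2):
        """Tabular Q-learning over discretized zone temperatures."""
        q = {}  # state (rounded temp) -> list of action values
        rng = random.Random(0)
        for _ in range(episodes):
            env = ToyZone()
            state = round(env.temp)
            for _ in range(48):  # one simulated day at 30-minute steps
                q.setdefault(state, [0.0] * len(actions))
                if rng.random() < eps:  # epsilon-greedy exploration
                    a = rng.randrange(len(actions))
                else:
                    a = max(range(len(actions)), key=lambda i: q[state][i])
                nxt, r = env.step(actions[a])
                q.setdefault(nxt, [0.0] * len(actions))
                q[state][a] += alpha * (r + gamma * max(q[nxt]) - q[state][a])
                state = nxt
        return q
    ```

    The learned greedy policy (argmax over each state's action values) then plays the role of the controller that the thesis compares against rule-based schemes.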

    The next key contribution of this thesis is the introduction of personal comfort models, together with a data-efficient approach for training them. Specifically, the models are trained using weak labels derived from occupants' interactions with a Personal Comfort System (PCS), reducing both the need for direct occupant engagement and the potential for subjective bias. To reduce the training cost for individuals who lack prior data, several group comfort models are trained and combined using an ensemble method.
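
    The weak-labeling and ensemble ideas can be illustrated with a minimal sketch. The labeling rule (fan use implies "too warm", heater use implies "too cold"), the threshold-based group models, and the accuracy-based weighting are all assumptions made for illustration, not the thesis's exact method.

    ```python
    def weak_label(event):
        """Derive a weak comfort label from a PCS interaction.
        event: (indoor_temp, device_action) -- a hypothetical log format."""
        temp, action = event
        if action == "fan_on":
            return temp, "too_warm"
        if action == "heater_on":
            return temp, "too_cold"
        return temp, "comfortable"  # no interaction: assume comfort

    def make_group_model(cold_below, warm_above):
        """A group comfort model reduced to two temperature thresholds."""
        def predict(temp):
            if temp < cold_below:
                return "too_cold"
            if temp > warm_above:
                return "too_warm"
            return "comfortable"
        return predict

    def fit_weights(models, labeled):
        """Weight each group model by its (smoothed) accuracy on the
        individual's few weak labels."""
        weights = []
        for m in models:
            correct = sum(1 for t, y in labeled if m(t) == y)
            weights.append((correct + 1) / (len(labeled) + 1))
        return weights

    def ensemble_predict(models, weights, temp):
        """Weighted vote over group models for one individual."""
        scores = {}
        for m, w in zip(models, weights):
            vote = m(temp)
            scores[vote] = scores.get(vote, 0.0) + w
        return max(scores, key=scores.get)
    ```

    A few logged PCS events are enough to weight the group models, which is the sense in which the approach is data-efficient for a new occupant.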

    Lastly, the thesis presents novel, efficient algorithms that can be adopted in a reservation system assigning desks to long-term and short-term occupants in shared workspaces. These algorithms place occupants with similar thermal preferences in the zone that fulfills their thermal comfort requirements in the most energy-efficient manner. This leads to more energy-efficient HVAC operation while ensuring that thermal comfort constraints are satisfied.
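
    The zone-assignment idea can be sketched as a simple sort-and-partition heuristic: group occupants with similar preferred temperatures into the same zone, then condition each zone to its occupants' mean preference. The preference data, the partitioning rule, and the discomfort metric below are invented for illustration and are not the thesis's actual algorithms.

    ```python
    def assign_zones(preferences, num_zones):
        """preferences: {occupant: preferred_temp}.
        Returns (zone -> occupants, zone -> setpoint)."""
        ordered = sorted(preferences, key=preferences.get)
        size = -(-len(ordered) // num_zones)  # ceiling division
        assignment, setpoints = {}, {}
        for z in range(num_zones):
            members = ordered[z * size:(z + 1) * size]
            if not members:
                continue
            assignment[z] = members
            # Condition each zone to the mean preference of its occupants.
            setpoints[z] = sum(preferences[o] for o in members) / len(members)
        return assignment, setpoints

    def discomfort(preferences, assignment, setpoints):
        """Total deviation between each occupant's preferred temperature
        and their zone's setpoint (a crude proxy for thermal discomfort)."""
        return sum(abs(preferences[o] - setpoints[z])
                   for z, occupants in assignment.items() for o in occupants)
    ```

    With two zones, occupants who prefer cool and warm conditions end up in separate zones, so neither zone is over-conditioned to satisfy an outlier, which is the intuition behind the energy savings.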

    Collectively, the above contributions could greatly enhance building operations and reduce the carbon footprint of the building sector without requiring expensive retrofits.

  • Graduation date
    Spring 2024
  • Type of Item
    Thesis
  • Degree
    Doctor of Philosophy
  • DOI
    https://doi.org/10.7939/r3-nqb3-hg81
  • License
    This thesis is made available by the University of Alberta Libraries with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.