Reinforcement Learning to Reduce Building Energy Consumption

December 6, 2019
in Data Science

Heating, ventilation, and air conditioning of buildings alone accounts for nearly 40% of global energy demand [1].

The need for energy savings has become increasingly fundamental to the fight against climate change. We have been working on a cloud-based Reinforcement Learning (RL) algorithm that can retrofit existing HVAC controls and obtain substantial savings.

In the last decade, a new class of controls that relies on Artificial Intelligence has been proposed. In particular, we are going to highlight data-driven controls based on Reinforcement Learning (RL), since they have shown promising results as HVAC controls from the very beginning [2].

There are two main ways to upgrade air-conditioning systems with RL: implementing RL on new systems, or retrofitting existing ones. The first approach is suitable for manufacturers of heating and air-conditioning systems, while the latter can be applied to any existing plant that can be controlled remotely.

We designed a cloud-based RL algorithm that continuously learns how to optimize power consumption by remotely reading environmental data and then defining the HVAC set-points accordingly. The cloud-based solution can scale to a significant number of buildings.
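
As a rough illustration of what this remote loop looks like, the Python sketch below polls a building gateway for telemetry and writes back a set-point. The endpoint names, the 15-minute control step, and the trivial stand-in policy are assumptions made for the example, not our actual implementation.

```python
import time
import requests  # assumed HTTP client for the cloud-to-plant link

PLANT_API = "https://example-hvac-gateway.local/api"  # hypothetical gateway endpoint

def read_environment():
    """Poll the building gateway for current indoor/outdoor conditions (illustrative schema)."""
    return requests.get(f"{PLANT_API}/telemetry", timeout=10).json()

def write_setpoint(setpoint_c):
    """Push the chosen set-point back to the HVAC controller."""
    requests.post(f"{PLANT_API}/setpoint", json={"value_c": setpoint_c}, timeout=10)

def choose_setpoint(obs):
    """Placeholder for the RL policy: map observations to a set-point."""
    return 20.0 if obs["indoor_temp_c"] > 21.0 else 22.0  # trivial stand-in policy

while True:
    observation = read_environment()
    write_setpoint(choose_setpoint(observation))
    time.sleep(15 * 60)  # one control step every 15 minutes (assumed interval)
```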

Our tests demonstrated a reduction of between 5.4% and 9.4% in primary energy consumption for two different locations, while guaranteeing the same thermal comfort as state-of-the-art controls.

Traditionally, HVAC systems are controlled by model-based controls (e.g., Model Predictive Control, MPC) and rule-based controls (RBCs):

The basic MPC concept can be summarized as follows. Suppose that we wish to control a multiple-input, multiple-output process while satisfying inequality constraints on the input and output variables. If a reasonably accurate dynamic model of the process is available, model and current measurements can be used to predict future values of the outputs. Then the appropriate changes in the input variables can be computed based on both predictions and measurements.
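
To make the receding-horizon idea concrete, here is a toy Python sketch of MPC on a first-order room model: the controller optimizes the whole input sequence over the horizon but applies only the first move. The model parameters, horizon, and quadratic cost are illustrative assumptions, not identified from any real building.

```python
import numpy as np
from scipy.optimize import minimize

# Toy first-order room model: T[k+1] = a*T[k] + b*u[k] + (1 - a)*T_out
# (illustrative parameters, not identified from a real building)
a, b, T_out = 0.9, 0.05, 5.0
HORIZON, T_REF, U_MAX = 12, 21.0, 10.0  # steps, °C, kW

def predict(T0, u):
    """Roll the model forward over the horizon for a candidate input sequence."""
    T, traj = T0, []
    for uk in u:
        T = a * T + b * uk + (1 - a) * T_out
        traj.append(T)
    return np.array(traj)

def cost(u, T0):
    """Tracking error plus a small energy penalty on the heating power."""
    traj = predict(T0, u)
    return np.sum((traj - T_REF) ** 2) + 0.01 * np.sum(u ** 2)

def mpc_step(T0):
    """Optimize the whole input sequence but apply only the first move (receding horizon)."""
    res = minimize(cost, x0=np.full(HORIZON, U_MAX / 2), args=(T0,),
                   bounds=[(0.0, U_MAX)] * HORIZON)
    return res.x[0]

print(mpc_step(T0=17.0))  # first heating-power command for a cold room
```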

In essence, MPC can fit complex thermodynamics and achieve excellent results in terms of energy savings on a single building. However, there is a significant issue: the retrofit application of this kind of model requires developing a thermo-energetic model for each existing building. Moreover, the performance of the controller relies on the quality of that model, and obtaining an accurate model is usually expensive. High initial investments are one of the main problems of model-based approaches [3]. In the same fashion, after any energy-efficiency intervention on the building, the model has to be rebuilt or re-tuned, again with the expensive involvement of a domain expert.

Rule-based modeling is an approach that uses a set of rules that indirectly specifies a mathematical model. This methodology is especially effective whenever the rule-set is significantly simpler than the model it implies, in such a way that the model is a repeated manifestation of a limited number of patterns.

RBCs are thus state-of-the-art model-free controls that represent an industry standard. A model-free solution can potentially scale up, because the absence of a model makes it easily applicable to different buildings without the need for a domain expert. The main drawback of RBCs is that they are difficult to tune optimally, because they are not adaptable enough for the intrinsic complexity of the coupled building and plant thermodynamics.
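
For contrast, here is a toy rule-based controller: a handful of fixed rules that imply the whole control law. The schedule, deadband, and temperatures are assumptions for illustration, not an industry rule set.

```python
def rbc_setpoint(hour, indoor_temp_c, occupied):
    """Toy rule-based controller: a small, fixed rule set that implies the control law."""
    if not occupied or hour < 7 or hour >= 21:
        return 16.0                      # setback outside opening hours (assumed schedule)
    if indoor_temp_c < 20.0:
        return 22.0                      # heat up toward the comfort band
    if indoor_temp_c > 24.0:
        return 21.0                      # cool back into the band
    return indoor_temp_c                 # inside the deadband: hold

print(rbc_setpoint(hour=9, indoor_temp_c=19.0, occupied=True))    # -> 22.0
print(rbc_setpoint(hour=23, indoor_temp_c=19.0, occupied=False))  # -> 16.0
```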

Before introducing the advantages of RL controls, we are going to talk briefly about RL itself. In the words of Sutton and Barto [4]:

Reinforcement learning is learning what to do — how to map situations to actions — so as to maximize a numerical reward signal. The learner is not told which actions to take, but instead must discover which actions yield the most reward by trying them. In the most interesting and challenging cases, actions may affect not only the immediate reward but also the next situation and, through that, all subsequent rewards. These two characteristics — trial-and-error search and delayed reward — are the two most important distinguishing features of reinforcement learning.

In our case, an RL algorithm interacts directly with the HVAC control system. It adapts continuously to the controlled environment using real-time data collected on-site, without needing access to a thermo-energetic model of the building. In this way, an RL solution could obtain primary energy savings, reducing operating costs while remaining suitable for large-scale application.
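
A minimal sketch of how such an adaptive controller could learn from on-site measurements is tabular Q-learning over discretized indoor temperatures and a few candidate set-points, as below. The state discretization, reward weights, and hyperparameters are assumptions for illustration; our actual algorithm is not reproduced here.

```python
import numpy as np

# Illustrative tabular Q-learning for set-point selection; states are coarse
# indoor-temperature bins, actions are candidate set-points (all values assumed).
TEMP_BINS = np.arange(16, 28)            # 16..27 °C state bins
SETPOINTS = [19.0, 20.0, 21.0, 22.0]     # candidate actions
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

Q = np.zeros((len(TEMP_BINS), len(SETPOINTS)))

def state_of(temp_c):
    """Map a measured indoor temperature to a discrete state index."""
    return int(np.clip(np.digitize(temp_c, TEMP_BINS) - 1, 0, len(TEMP_BINS) - 1))

def reward(energy_kwh, indoor_temp_c, comfort_c=21.0):
    """Penalize energy use and deviation from the comfort target (assumed weights)."""
    return -energy_kwh - 2.0 * abs(indoor_temp_c - comfort_c)

def choose_action(s):
    """Epsilon-greedy action selection over the candidate set-points."""
    if np.random.rand() < EPSILON:
        return np.random.randint(len(SETPOINTS))
    return int(np.argmax(Q[s]))

def update(s, a, r, s_next):
    """Standard one-step Q-learning update driven by on-site measurements."""
    Q[s, a] += ALPHA * (r + GAMMA * np.max(Q[s_next]) - Q[s, a])
```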

Therefore, it is desirable to introduce RL controls for large-scale applications on HVAC systems with high operating costs, such as those in charge of the thermo-regulation of a significant volume.

One of the building-use classes where it could be convenient to implement an RL solution is the supermarket. Supermarkets are, by definition, widespread buildings with variable thermal loads and complex occupancy patterns that introduce a non-negligible stochastic component from the HVAC control point of view.

We are going to formalize this problem using the framework of Reinforcement Learning; first, let us put it in context.

In RL, an agent interacts with an environment and learns the optimal sequence of actions, represented by a policy, to reach the desired goal. As reported in [4]:

The learner and decision maker is called the agent. The thing it interacts with, comprising everything outside the agent, is called the environment. These interact continually, the agent selecting actions and the environment responding to these actions and presenting new situations to the agent.
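
Translated into code, that continual interaction is just a loop. The stub below uses toy dynamics and a placeholder policy to show where the building (environment) and the controller (agent) plug in; it is a sketch of the framework, not of our system.

```python
class Environment:
    """Stub building environment: the state is the indoor temperature only (toy dynamics)."""
    def reset(self):
        self.temp = 18.0
        return {"indoor_temp_c": self.temp}

    def step(self, setpoint_c):
        """Apply a set-point; the room drifts toward it, and comfort defines the reward."""
        self.temp = 0.8 * self.temp + 0.2 * setpoint_c
        reward = -abs(self.temp - 21.0)               # toy comfort-only reward
        return {"indoor_temp_c": self.temp}, reward

class Agent:
    """Stub controller: constant set-point policy, learning rule omitted."""
    def act(self, obs):
        return 21.0

    def learn(self, obs, action, reward, next_obs):
        pass

env, agent = Environment(), Agent()
obs = env.reset()
for _ in range(24):                                   # one day of hourly decisions (assumed)
    action = agent.act(obs)
    next_obs, rew = env.step(action)
    agent.learn(obs, action, rew, next_obs)
    obs = next_obs
```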


Credit: Data Science Central. By: Enrico Busto
