Credit: Google News
Floods are the deadliest natural disaster in the world, affecting millions of people and causing thousands of fatalities and billions of dollars in economic damage every year. Early warning systems, though, can prevent up to 43% of fatalities and 35% of economic damage, said Sella Nevo, who leads Google's Flood Forecasting Initiative, during his keynote address at the Mint Digital Investment Summit in Bengaluru.
According to Nevo, it was this belief that led him and his team at Google to build a flood forecasting system trained using machine learning (ML) to predict more accurately which areas will be flooded, so that people can be alerted in time. Nevo and his team ran a pilot in Patna, Bihar, during the 2018 monsoon and achieved over 90% accuracy in representing the actual water situation on the ground.
Google’s forecasting system is based on a scalable, high-resolution hydraulic model which, according to Nevo, can simulate the behaviour of water across the floodplain more accurately. The model helps determine how much water will be within a river system and exactly where it will flow. Existing forecasting systems built on widely used elevation datasets, such as the Shuttle Radar Topography Mission (SRTM), often miss details of the topography, so simulated water overflows and spills across the whole map, argued Nevo. He added that “SRTM’s maps are almost 20 years old”.
Another factor that makes existing global datasets stale is the standard methodology used to produce them. In the stereo imagery method, for example, a satellite with two cameras set at a fixed angle takes two images of each location at the same time; the differences (parallax) between those images are used to calculate the elevation at every point. Google is taking a “slightly different” approach. Instead of using expensive specialised satellite missions, Google first purchased large amounts of standard high-resolution optical imagery that is captured anyway, from different sources, and then used it to generate high-resolution elevation maps.
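The stereo method described above can be illustrated with the classic photogrammetric parallax formula. This is a simplified textbook sketch, not Google's pipeline: `height_from_parallax`, and the numbers in the example, are illustrative assumptions.

```python
# Illustrative sketch of elevation from stereo parallax (textbook formula,
# not Google's actual method). For a nadir-looking stereo pair, a point's
# height h above the reference plane relates to the extra parallax dp it
# shows compared with the reference plane:
#     h = H * dp / (b + dp)
# where H is the camera altitude and b is the parallax of the reference plane.

def height_from_parallax(dp, H, b):
    """Height above the reference plane from a parallax difference.

    dp: extra parallax of the point vs. the reference plane (image units)
    H:  camera altitude above the reference plane (metres)
    b:  parallax of the reference plane itself (same image units as dp)
    """
    return H * dp / (b + dp)

# Hypothetical example: a rooftop shifts 2 mm more than ground level
# between the two images; with H = 5000 m and b = 90 mm the estimated
# height is roughly 109 m.
h = height_from_parallax(dp=2.0, H=5000.0, b=90.0)
```

The key point is that elevation falls out of purely geometric differences between two ordinary images, which is why Google could repurpose standard optical imagery instead of flying a dedicated elevation mission.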
Google also collaborated with governments to obtain gauge measurements and forecasts, which tell the models how much water enters a river system and how much can be expected to flow through it. Further, Google wanted a map that ignores anything water can flow through or under, and retains only what would block the water’s progression. To do this, the team used a convolutional neural network (loosely modelled on the brain) that automatically identifies structures that should be ignored. It can then remove these structures and produce the elevation map the model needs, Nevo said.
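The structure-removal step can be sketched as follows. This is a minimal assumed implementation, not Google's code: it takes a binary mask of structures (standing in for the CNN's output) and fills the masked cells from surrounding terrain so the hydraulic model sees "bare earth".

```python
import numpy as np

# Illustrative sketch (assumed, not Google's model): given an elevation
# grid and a binary mask marking structures water can pass through or
# under (e.g. bridges), delete those cells and fill them iteratively
# with the mean of their valid 4-neighbours.

def remove_structures(elevation, mask, iters=50):
    """Return an elevation grid with masked structures replaced by terrain."""
    dem = elevation.astype(float).copy()
    dem[mask] = np.nan                      # knock out the structures
    for _ in range(iters):
        holes = np.isnan(dem)
        if not holes.any():
            break
        padded = np.pad(dem, 1, constant_values=np.nan)
        # mean of up/down/left/right neighbours, ignoring NaNs
        stack = np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                          padded[1:-1, :-2], padded[1:-1, 2:]])
        fill = np.nanmean(stack, axis=0)
        dem[holes] = fill[holes]            # fill holes from neighbours
    return dem

# Hypothetical example: flat terrain at 10 m with one "bridge" cell at 30 m.
elev = np.full((4, 4), 10.0)
elev[1, 2] = 30.0
bridge_mask = np.zeros((4, 4), dtype=bool)
bridge_mask[1, 2] = True
cleaned = remove_structures(elev, bridge_mask)
```

In practice the mask would come from a segmentation network rather than being hand-drawn, but the cleanup step is conceptually this simple: remove what water ignores, keep what blocks it.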
Scaling hydraulic models globally also demands huge computational power and cost. Hence, Google is using physical equations to train the machine learning models, and is seeing some “early” positive results.
Moreover, Google has collaborated with government agencies and NGOs to make sure they have this information for timely evacuation or relief efforts.