Sea water floods the Ground Zero construction site in Manhattan.
To get an accurate and holistic view of risk, it is essential to know where your risks are located. As a result, geospatial technology is important in all aspects of the insurance process, from underwriting and pricing to modelling and claims.
The incidence of natural disasters worldwide has risen steadily since the 1970s, a trend linked to climate change, according to a report in the New England Journal of Medicine. The last few months alone have seen a number of devastating catastrophic events around the world: the recent magnitude-8.2 Iquique earthquake in Chile, droughts in Pakistan, a deadly mudslide in Washington state in the US, and floods in South Africa, New Zealand and Europe, to name a few. In each of these events, mapping and geospatial technology have played a key role in understanding the timeline of the event, the damage caused and the geographical extent of human and economic losses. In 2011, 296 separate events resulted in total insured losses of $45 billion and total economic losses of $192 billion, according to Aon Benfield Impact Forecasting's Annual Global Climate and Catastrophe Report.
The geo-factor in risk aggregation
It is well known that insurance is used to mitigate the risks of catastrophic events. Reinsurance is effectively insurance for insurance companies; it enables insurers to cover risks that they may not be able to absorb themselves. Natural catastrophes can produce losses on a scale for which reinsurance is needed. Swiss Re, one of the largest reinsurers in the world, was formed after a fire in Glarus, Switzerland in 1861, when traditional coverage proved inadequate to cover the losses. Worldwide, the reinsurance industry is estimated to be worth more than $200 billion, and this figure continues to grow. Property catastrophe cover is increasing in prevalence as people seek protection against potential natural disasters. Analytical capability is helping behind the scenes, with many of the leading insurance companies spending hundreds of millions each year on their analytical capabilities.
To get an accurate view of risk, it is essential to know where the risks are located. Location is therefore a key component in determining risk: knowing precisely where risks sit, the prevalence of catastrophic events, the topography and geology of the area, and the types of buildings found there. Much of this information has a geographical component and can be placed on a map and viewed together in a GIS or other geospatial technologies. Such technologies are important for insurers as they seek to understand where their exposure is located in relation to other factors and to quantify their risk. Geospatial technology is used extensively in all parts of the insurance process, from underwriting and pricing to modelling and claims. Map-based underwriting platforms allow the user to add a prospective customer's property to a map and determine the premium that should be charged based on a number of factors, including location, proximity to risk-prone areas and other information such as demographic data.
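As a rough illustration of how a map-based platform might turn proximity to a hazard into a premium loading, consider the sketch below. The base rate, distance bands and loading factors are entirely invented for the example; real underwriting rules are far more granular.

```python
# Hypothetical premium loading by proximity to a flood-prone area.
# Rates, distance bands and loadings are invented for illustration.

def premium(base_rate, tiv, distance_to_flood_zone_km):
    """Load a base premium according to distance from a flood zone."""
    if distance_to_flood_zone_km < 0.5:
        loading = 1.5   # very close: heavy loading
    elif distance_to_flood_zone_km < 2.0:
        loading = 1.2   # nearby: moderate loading
    else:
        loading = 1.0   # far away: no loading
    return base_rate * tiv * loading

# A $1 million property 300 m from a flood zone
print(premium(0.001, 1_000_000, 0.3))  # 1500.0
```

In practice the "distance" would come from a spatial query against hazard map layers rather than being supplied by hand.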
A catastrophe model estimates the potential loss of property and life following a major disaster, and has been central to the reinsurance industry for the last 15-20 years. The model helps with pricing catastrophe reinsurance cover for different types of events or perils, including windstorm, earthquake, flood, tsunami and storm surge. Catastrophe models became significant in the early 1990s following two large hurricanes in the US, Hugo and Andrew. Reinsurance capacity was low and many companies became insolvent following the disasters. The need for rigorous risk assessment was realised and catastrophe models became important. Until a few years ago only natural perils were modelled, but in recent years (following the World Trade Center attacks in 2001) terrorism and other man-made catastrophes have also been modelled.
Risk modelling firms usually draw on multiple data sources, such as land-use information, field surveys, satellite imagery and digital elevation models, and combine them with in-house expertise from seismologists, meteorologists, hydrologists, engineers, mathematicians, and finance, risk management and insurance professionals.
It goes without saying that the location or spatial aspect is an inherent part of catastrophe modelling. Catastrophe models generally contain four core components (hazard, vulnerability, exposure and loss) and spatial data is a vital factor in each. The hazard component represents the frequency and severity of the peril in both space and time, and is usually based on a stochastic event set. The vulnerability component classifies the susceptibility of the portfolio to the hazard: for property (re)insurance, the building type, main use, construction material, age and so on may all be modelled to give an accurate description of the building. The exposure data represents the information within the risk portfolio, such as total insured values (TIVs), deductible and limit information and reinsurance application. The loss component calculates financial losses based on the information supplied in the exposure data.
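A minimal sketch of how these four components fit together follows. The damage curve, event set and financial terms below are all hypothetical toy values, not any vendor's model; the point is only the flow from hazard intensity through vulnerability and exposure to a loss figure.

```python
# Illustrative sketch of the four catastrophe-model components.
# All intensities, rates and the damage curve are invented.

def vulnerability(intensity):
    """Toy damage function: maps a hazard intensity (0-10) to a damage ratio."""
    return min(1.0, max(0.0, (intensity - 2.0) / 8.0))

def ground_up_loss(intensity, tiv):
    """Damage ratio applied to the total insured value (TIV)."""
    return vulnerability(intensity) * tiv

def insured_loss(gross, deductible, limit):
    """Apply the financial terms held in the exposure data."""
    return min(max(gross - deductible, 0.0), limit)

# Exposure: one property with TIV, deductible and limit
exposure = {"tiv": 1_000_000, "deductible": 50_000, "limit": 500_000}

# Hazard: a small stochastic event set (event id, annual rate, local intensity)
events = [("EQ-1", 0.01, 7.0), ("EQ-2", 0.002, 9.5)]

# Loss: expected annual loss over the event set
eal = sum(rate * insured_loss(ground_up_loss(i, exposure["tiv"]),
                              exposure["deductible"], exposure["limit"])
          for _, rate, i in events)
print(eal)  # 6000.0
```

A real model would evaluate thousands of simulated events across an entire portfolio, with the local intensity at each property derived from its coordinates.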
Aggregating and disaggregating risks is a geographic problem. Often, when looking at property risks, the insurance underwriter will aggregate their risks when describing these to the reinsurer or reinsurance broker. This can lead to uncertainty in both the spatial and attribute information relating to the risks. For example, instead of knowing that a property is located at 1234 First Avenue, ZIP 40123, built in 1954, made of reinforced concrete and with a building value of $1 million, the risk might have been aggregated with 12 other properties in that particular zip or postal code. The aggregated risk information might be that there are 13 properties built between 1950 and 2000, of unknown building type, with a summed building value of $10 million.
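The example above can be sketched in a few lines of code. The addresses and values are the hypothetical ones from the example; the point is how per-address attribute detail collapses into a coarse aggregate record.

```python
# Hypothetical sketch of per-address risks collapsing into an aggregate.
properties = [
    {"address": "1234 First Avenue", "zip": "40123", "year": 1954,
     "construction": "reinforced concrete", "value": 1_000_000},
]
# 12 further properties in the same ZIP code (values invented)
properties += [{"address": None, "zip": "40123", "year": None,
                "construction": None, "value": 750_000} for _ in range(12)]

aggregate = {
    "zip": "40123",
    "count": len(properties),
    "year_range": "1950-2000",   # only a coarse band survives aggregation
    "construction": "unknown",   # attribute detail is lost
    "total_value": sum(p["value"] for p in properties),
}
print(aggregate["count"], aggregate["total_value"])  # 13 10000000
```

Disaggregation is the harder inverse problem: given only the aggregate record, the modeller must spread the $10 million back across plausible locations and building types within the ZIP code.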
The effect of spatial uncertainty in loss calculation – two PML curves showing the effect on losses when using higher resolution postcode data (left) vs. lower resolution county data (right)
For perils like earthquake and windstorm, this aggregation of location has less of an effect on modelled losses, but for flood and terrorism the geographical location of the risk has a huge impact and increases the uncertainty of the loss calculation. This uncertainty can manifest itself in less accurate pricing of reinsurance catastrophe cover.
In the last few years the reinsurance industry has seen the location component of the catastrophe model explored further and visualised in software solutions. Some catastrophe models now contain mapping and visualisation tools for understanding model inputs and outputs, with the ability to visualise exposure, hazard and loss outputs from the model.
For instance, following Windstorm Christian, which hit western Europe in October 2013 (insured losses of over $1.35 billion), the event footprint supplied by the UK Met Office was added to Impact Forecasting's catastrophe model, ELEMENTS, allowing insurers to view a footprint map of the hazard, quantify losses and then show those losses on the map.
The rise of open geographic data
Accurate data is needed as an input to the catastrophe model to reduce the locational uncertainty that aggregation introduces into loss calculation. It is often estimated that 80% of all data has a geographic component, and geographic information is present in all parts of the catastrophe model: exposure, hazard, vulnerability and loss. To accurately locate the property at risk, geocoding takes an address string and converts it into coordinates which can be understood by the model.
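The geocoding step can be illustrated with a minimal lookup sketch. Real geocoders parse and match addresses against large reference datasets such as street networks and postcode points; the one-entry table and coordinates below are invented purely for illustration.

```python
# Minimal geocoding sketch: resolving an address string to coordinates
# via a lookup table. The table and coordinates are hypothetical; real
# geocoders match against reference datasets such as postcode points.
GAZETTEER = {
    "1234 first avenue, 40123": (40.7128, -74.0060),
}

def geocode(address):
    """Return (latitude, longitude) for a known address, else None."""
    key = address.strip().lower()
    return GAZETTEER.get(key)

print(geocode("1234 First Avenue, 40123"))  # (40.7128, -74.006)
```

The quality of the match matters: a risk resolved only to a postcode centroid rather than a rooftop coordinate reintroduces exactly the locational uncertainty described above.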
Geographic information is more than just mapping exposure. For instance, the Ordnance Survey in the United Kingdom released OS OpenData in 2010. Part of the data offering, which is free, includes postcode points, street mapping and building data for Great Britain. These datasets can be used to map and understand an insurer's risk. The OS OpenData offering is part of a government-backed initiative which has seen the opening up of data to UK citizens and organisations. This influx of 'free' data is useful for insurers and reinsurers and follows similar initiatives for opening up data around the world. Similarly, open datasets from the Met Office, USGS and NASA (amongst others) are helping catastrophe model developers across the world better understand the hazard components of the phenomena they seek to model.
Location & the insurance industry
In 2012, the Association of British Insurers (ABI) estimated that the UK industry paid out $2 billion in domestic and commercial claims as a result of flood and storm damage. To help underwriters, map-based platforms allow the user to add a prospective customer's property to a map and help determine the premium that should be charged based on location, proximity to natural hazards and other information such as demographic data. There is increasing use of location-aware smartphones and social media to look at customer behaviour, which can then be used for policy and claims management. Location-enabled telematics devices in vehicles are giving insurance companies detailed information on driver behaviour.
The increasing use of geospatial technology and geographic data is helping insurers get a better handle on their risks. Communication is also key to making better decisions. For instance, in the UK, the Association for Geographic Information (AGI) acts as a membership body providing a forum for geospatial professionals. A growing number of insurance companies are members of AGI, and an AGI Insurance and Risk special interest group has recently been created to help enable this discussion.
The insurance industry as a whole is taking more interest in evolving geospatial technologies. In future, the industry will make more use of geospatial data to drive its analytical capabilities and improve its bottom line.