If you tuned in to BBC’s Countryfile on Sunday, you’ll have seen Roger Nock from our Flying Unit talking to Adam Henson at his Cotswold farm. We were talking about how the aerial imagery we fly in our planes has been used to map hedgerows for the Rural Payments Agency, and help work out subsidies for farmers. We showed an example of how 3D data can be captured and displayed over Adam’s farm.
After the programme, we received a tweet asking where the LiDAR camera was in our plane. The answer is simply that we don’t fly LiDAR (3D laser scanning of the ground); our planes capture aerial imagery instead, using a high-resolution camera on board to photograph the ground. We do, however, process this imagery in a similar way to how others work with LiDAR data.
3D mesh of Adam’s farm, with attributes attached to the data
So, what were you seeing on Countryfile?
It all started with the Flying Unit. For the programme, the plane flew over the farm at 5,000 ft, up and down and left to right across the area. The flight lines overlap each other by up to 80%: because the on-board camera looks straight down at the ground, we need to capture a wider view and more angles of the features below to create strong 3D data, especially in urban areas with many taller buildings.
Back at our head office, Jon Horgan, our Product Development Consultant, worked with the aerial imagery to create the visualisation you saw on Countryfile. Jon’s been investigating how we can extract information from the 3D topography of our built environment using our aerial imagery, and how it could benefit businesses and government.
Using classification within the point cloud to highlight features
Meet the point cloud
Jon’s visualisation of Adam’s Farm looked fantastic (in fact, it looked very much like a real-life video of the area). However, it’s the data behind the visualisation, the point cloud, and the information you can attach to it, that contains value for businesses and other users.
Compare the difference between the 2015 and 2017 imagery (both shown as point clouds):
Jon uses imagery to create a point cloud, matching each pixel in the imagery to create a point. The overall effect looks like a 3D image, but as you zoom in, you can see the individual points rather than the image. Jon can then classify features within the point cloud – such as buildings, tracks or hedgerows – to enable simple analysis to be carried out. From here, you could also link further datasets, such as OS MasterMap or AddressBase, or calculate the biomass of vegetation – making it simpler for people without geographic information system experience to run queries and analyse data.
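The idea of attaching a class label to each point can be sketched very simply. This is a toy illustration only: the thresholds, class names and flat-ground assumption below are ours, not Jon’s, and production tools use far richer geometry than height alone.

```python
# A toy sketch of classifying points in a point cloud by height above
# an assumed flat ground plane. All thresholds and class names here are
# illustrative assumptions, not the method used by the OS team.

def classify_points(points, ground_z=0.0):
    """Label each (x, y, z) point by its height above ground_z."""
    labels = []
    for x, y, z in points:
        height = z - ground_z
        if height < 0.2:
            labels.append("ground")      # tracks, bare soil
        elif height < 3.0:
            labels.append("vegetation")  # hedgerows, low shrubs
        else:
            labels.append("building")    # roofs, tall structures
    return labels

cloud = [(0.0, 0.0, 0.05), (1.0, 0.5, 1.8), (2.0, 1.0, 6.5)]
print(classify_points(cloud))  # ['ground', 'vegetation', 'building']
```

Once every point carries a label like this, counting hedgerow points or summing their volume becomes a simple query rather than a specialist GIS task.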
So far, Jon’s work with imagery and point cloud data in the Bournemouth area has been used in a 5G data project to help identify the best places to site mobile masts for wide coverage, and in Manchester for the CityVerve smart cities project. His example on Adam’s farm shows how the Rural Payments Agency could use their dataset, the OS Landscape Features Layer, to easily analyse hedgerows and assess subsidy claims for farmers. The possibilities are endless.
Find out more about the hedgerow dataset which the RPA uses
Find out more about Jon’s work with point cloud