17 June 2014

Semantic and Geometric Enrichment of 3D Building Models

Today’s guest blogger is Jon Slade. Jon’s currently working on his PhD at Cardiff University, sponsored by Ordnance Survey.

After a fruitful year spent as a GIS Specialist at Arup, I chose to change direction and return to the world of academia to study for a PhD full-time at Cardiff University. Whilst I found the commercial world, in which I have worked for 15 years, rewarding, I missed immersing myself deeply in a topic. My UCL MSc summer research project, ‘Google Maps Journey Immersion’, gave me a taste of this. And so, with experience of civil engineering from my time at Arup, a life-long love of maps and all things Ordnance Survey, plus an unashamed fondness for reality architecture shows such as Channel 4’s Grand Designs, the opportunity to study for a PhD on the Semantic and Geometric Enrichment of 3D Building Models at Cardiff, sponsored by Ordnance Survey, felt ideal. This will be my main area of work for the next three to four years.

Buildings in this case could be houses, skyscrapers, office blocks, churches or anything else that has a roof and walls. By ‘3D building model’ we mean the three-dimensional shape (geometry) of a building, held in computer form. Associated with this shape, and embedded into the model, might be semantic information such as whether a particular face is a wall or a roof. Such models are produced in the world of Computer-Aided Design (CAD), in fields including architecture, civil engineering and town planning. In recent years members of the public have begun to produce their own models and store them in online repositories such as the Trimble 3D Warehouse.
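To make the idea of geometry with embedded semantics a little more concrete, here is a minimal Python sketch of such a model. The class names, attribute names and measurements are all invented for illustration and do not follow any particular standard.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Point3D = Tuple[float, float, float]  # (x, y, z) in metres

@dataclass
class Face:
    """One planar face of a building, carrying a semantic label."""
    vertices: List[Point3D]                 # polygon boundary of the face
    semantic_type: str                      # e.g. "WallSurface" or "RoofSurface"
    attributes: dict = field(default_factory=dict)  # e.g. {"material": "brick"}

@dataclass
class BuildingModel:
    """A 3D building model: geometry plus embedded semantics."""
    building_id: str
    faces: List[Face] = field(default_factory=list)

# A simple box-shaped house; one wall and the flat roof shown for brevity.
house = BuildingModel(building_id="example-001")
house.faces.append(Face(
    vertices=[(0, 0, 0), (10, 0, 0), (10, 0, 6), (0, 0, 6)],
    semantic_type="WallSurface",
    attributes={"material": "brick"},
))
house.faces.append(Face(
    vertices=[(0, 0, 6), (10, 0, 6), (10, 8, 6), (0, 8, 6)],
    semantic_type="RoofSurface",
    attributes={"material": "slate"},
))

for f in house.faces:
    print(f.semantic_type, f.attributes)
```

In a real model the semantic labels would usually follow a standard vocabulary, a point returned to below.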


Building models can also be produced using specialist photogrammetry software. Such software uses the principles of stereoscopy, whereby a series of photos taken from the air from slightly different locations allows the software technician to infer the depth (and therefore height) of features, such as the corners of building roofs. From these points in 3D space, the software technician can then create a 3D building model.
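The underlying principle can be summed up in one classic relationship: for two photos taken a known distance apart, the depth of a matched feature is the focal length multiplied by the camera baseline, divided by the feature’s parallax (disparity) between the two images. The sketch below is a minimal illustration of that relationship using made-up numbers, not values from any real survey.

```python
def depth_from_disparity(focal_length_px: float,
                         baseline_m: float,
                         disparity_px: float) -> float:
    """Classic stereo relation: depth = focal length * baseline / disparity.

    focal_length_px : camera focal length expressed in pixels
    baseline_m      : distance between the two camera positions, in metres
    disparity_px    : shift of the matched feature between the two images, in pixels
    """
    if disparity_px <= 0:
        raise ValueError("Disparity must be positive for a finite depth.")
    return focal_length_px * baseline_m / disparity_px

# Illustrative numbers only: a roof corner shifting 8,000 px between two aerial
# photos taken 600 m apart, with a focal length equivalent to 20,000 px,
# would lie at a depth of 20,000 * 600 / 8,000 = 1,500 m below the cameras.
print(depth_from_disparity(focal_length_px=20_000, baseline_m=600, disparity_px=8_000))
```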

But why do we need to enrich existing models? Or, more specifically, why attempt to enrich using an automated computer method?

Producing models through CAD, through online means such as the Trimble 3D Warehouse, and with photogrammetry software is a labour-intensive manual process, requiring creation on a building-by-building basis. The extent of the semantic information in those models is also determined by the person who created the model.

Automated computer methods for generating models of multiple buildings in one go, across a wide area of the Earth’s surface, do exist. These techniques often use aerial imagery and apply feature detection and matching algorithms from the field of Computer Vision to match common features between images taken from different locations, enabling the shape of a building or series of buildings to be inferred automatically. Such methods are often supplemented with data from airborne LiDAR equipment, LiDAR being the emission of small bursts of laser light and the measurement of their reflections. LiDAR data can be used to create a ‘point cloud’: a series of very closely spaced points, each one representing the first solid object that the laser reached, and which will therefore include points on the surfaces of buildings. In some cases LiDAR and imagery taken from the ground (Google’s Street View car uses both of these techniques) are used to provide data for the lower, street-level parts of the multi-building models.
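As a very small sketch of how a point cloud might feed into building geometry, the following assumes a point cloud already loaded as an N×3 NumPy array and a simple rectangular footprint (both invented here), and estimates a flat roof height as a high percentile of the returns falling inside that footprint. Real pipelines are of course far more sophisticated.

```python
import numpy as np

def estimate_roof_height(points: np.ndarray,
                         x_range: tuple,
                         y_range: tuple,
                         ground_level: float = 0.0) -> float:
    """Estimate a flat roof height from LiDAR returns inside a rectangular footprint.

    points  : (N, 3) array of x, y, z coordinates in metres
    x_range : (min_x, max_x) of the building footprint
    y_range : (min_y, max_y) of the building footprint
    Returns the 95th percentile of z inside the footprint, relative to ground level,
    as a crude way of ignoring the occasional spurious high return.
    """
    inside = ((points[:, 0] >= x_range[0]) & (points[:, 0] <= x_range[1]) &
              (points[:, 1] >= y_range[0]) & (points[:, 1] <= y_range[1]))
    if not inside.any():
        raise ValueError("No LiDAR returns fall inside the footprint.")
    return float(np.percentile(points[inside, 2], 95)) - ground_level

# Synthetic example: 1,000 returns over a 10 m x 8 m footprint, clustered
# around a 6 m high roof with a little measurement noise.
rng = np.random.default_rng(0)
cloud = np.column_stack([rng.uniform(0, 10, 1000),
                         rng.uniform(0, 8, 1000),
                         rng.normal(6.0, 0.1, 1000)])
print(round(estimate_roof_height(cloud, (0, 10), (0, 8)), 2))  # roughly 6 m
```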

Example of a point cloud

A good case study of such techniques is the creation of a 3D city model of Paris.

As already mentioned, however, whilst models created individually have the potential to include a large amount of semantic information, one is reliant upon the software technician to provide that data to the model. As for standardising what information should be recorded: at the individual-building level, Building Information Modelling (BIM) standards such as the Industry Foundation Classes (IFC) are increasingly being used within civil engineering, construction and architecture, while at city scale CityGML has seen some adoption in the world of GIS and urban planning. However, just because a standard is being used does not mean that all of the semantic information it defines will be populated in the model(s); in addition, some users of the models may require information not normally catered for by such standards.
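To make the ‘standard used but not fully populated’ point concrete, the sketch below counts how many surfaces in a CityGML file actually carry one of the standard semantic surface types. It matches on local element names only, so as not to hard-code any particular CityGML version or namespace, and the file path is just a placeholder.

```python
import xml.etree.ElementTree as ET
from collections import Counter

# A selection of CityGML semantic surface element names (local names only).
SEMANTIC_SURFACES = {"WallSurface", "RoofSurface", "GroundSurface",
                     "ClosureSurface", "Door", "Window"}

def count_semantic_surfaces(citygml_path: str) -> Counter:
    """Count CityGML semantic surface elements by local name.

    Matching on the local name (the part after '}') keeps the sketch
    independent of the namespace/version used by the file.
    """
    counts = Counter()
    for _, elem in ET.iterparse(citygml_path):
        local_name = elem.tag.rsplit("}", 1)[-1]
        if local_name in SEMANTIC_SURFACES:
            counts[local_name] += 1
    return counts

if __name__ == "__main__":
    # "city_model.gml" is a placeholder path, not a real dataset.
    print(count_semantic_surfaces("city_model.gml"))
```

A model that validates against the schema can still report zero wall or roof surfaces here, which is precisely the gap this research is concerned with.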

And remember, creating individual models manually is labour-intensive. Meanwhile, by the very nature of the automated means of their creation, auto-generated multi-building models tend toward less geometric detail and less semantic content.

(So we are faced with the modern conundrum: the trade-off of detail versus time.)

Across a city, the models for some buildings may therefore have been provided with a large amount of semantic information and possess detailed geometry, such as those produced manually for larger commercial buildings built in recent years. Other models may have only rudimentary geometry and be lacking semantic information, such as those produced through the aforementioned automated means – these might be older buildings of no particular historical note, which therefore do not warrant sufficient interest to have been modelled manually and retrospectively. And in all cases, the nature of the semantic information may not suit a particular application.

(One also needs to consider that some of the building models may not be in the public domain, representing as they do the intellectual property of the designers).

In summary, then, I suggest that the richness of the semantic and geometric information in existing 3D building models is at worst limited and at best inconsistent.

The rationale for this research is that, if suitably enriched, building models could be used in desktop and mobile applications to enhance a user’s experience of the built environment. It is suggested that potential users could come, for example, from the fields of urban planning, cultural heritage and mobile phone/WiFi network planning. Imagine viewing an augmented reality version of a heritage building or city tour, accompanied by detailed annotations about the architectural style, building materials and history of each building and its facets. Or imagine, as a mobile phone or WiFi network planner, being able to use a city-wide building model which goes beyond simplified building shapes and includes semantic information and geometric detail such as windows and wall materials – such additional information could assist in the optimisation of mast/antenna placement through more accurate models of radio wave path loss, based on differing building materials.
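To illustrate the network-planning example, the sketch below combines the standard free-space path loss formula with a simple per-wall penetration loss looked up from a building model’s material tags. The per-material loss values are illustrative placeholders of my own choosing, not measured figures.

```python
import math

# Illustrative per-wall penetration losses in dB -- placeholder values,
# not measured material properties.
WALL_LOSS_DB = {"glass": 3.0, "brick": 8.0, "concrete": 15.0}

def free_space_path_loss_db(distance_m: float, frequency_hz: float) -> float:
    """Free-space path loss: 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)."""
    c = 3.0e8  # speed of light, m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(frequency_hz)
            + 20 * math.log10(4 * math.pi / c))

def total_path_loss_db(distance_m: float, frequency_hz: float,
                       walls_crossed: list) -> float:
    """Free-space loss plus a fixed penetration loss for each wall crossed,
    looked up by the wall material recorded in the building model."""
    penetration = sum(WALL_LOSS_DB.get(material, 10.0) for material in walls_crossed)
    return free_space_path_loss_db(distance_m, frequency_hz) + penetration

# A 2.4 GHz signal travelling 200 m and passing through one brick and one
# concrete wall (purely illustrative geometry).
print(round(total_path_loss_db(200, 2.4e9, ["brick", "concrete"]), 1))
```

The richer the model’s material and window information, the better the walls-crossed list, and hence the loss estimate, can be.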

This research aims to produce automated enrichment through the matching of supplementary tagged photos and annotated diagrams to the building models. To do this, it is envisaged that Computer Vision feature detection and matching algorithms will be refined or developed, and that these will then be used to match 3D building models to supplementary images – existing algorithms include SIFT, SURF, BRISK and FREAK. Once matched, opportunities will be afforded for refinement of the geometry and for the extraction and placement of annotations. A possible ancillary benefit of the process is that the texture mapping of the surfaces of the building models might be enhanced with texture from the supplementary images.
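As a minimal sketch of the kind of feature detection and matching involved, the following uses OpenCV’s BRISK detector (one of the algorithms named above and readily available in OpenCV’s core modules) to find candidate correspondences between two images – say, a rendered view of a building model and a supplementary photo of the same facade. The image paths are placeholders, and this is not the project’s actual pipeline.

```python
import cv2

def match_features(image_path_a: str, image_path_b: str, keep: int = 50):
    """Detect BRISK keypoints in two images and return the best matches.

    The images might be a rendered view of a 3D building model and a
    tagged photograph of the same facade; the paths here are placeholders.
    """
    img_a = cv2.imread(image_path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(image_path_b, cv2.IMREAD_GRAYSCALE)
    if img_a is None or img_b is None:
        raise FileNotFoundError("Could not read one of the input images.")

    detector = cv2.BRISK_create()
    kp_a, desc_a = detector.detectAndCompute(img_a, None)
    kp_b, desc_b = detector.detectAndCompute(img_b, None)

    # BRISK produces binary descriptors, so Hamming distance is appropriate.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(desc_a, desc_b), key=lambda m: m.distance)
    return kp_a, kp_b, matches[:keep]

if __name__ == "__main__":
    kp_a, kp_b, matches = match_features("model_render.png", "facade_photo.jpg")
    print(f"{len(matches)} candidate correspondences found")
```

Once such correspondences exist, annotations attached to the photo could in principle be transferred onto the matched faces of the model, and vice versa.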

Initial thoughts as to possible sources of the supplementary photo and diagram data are: online tagged photo libraries such as Flickr; architectural drawings; civil engineering drawings; heritage guides; and BIM/CAD files. As for development tools, whilst nothing has yet been decided, the image processing capabilities of MATLAB, perhaps supplemented with an object-oriented language such as Java, might be utilised.

At a high level, the following ‘Research Goals’ have provisionally been identified:
  • Identify Sources of Captioned Imagery for 3D Building Model Enrichment
  • Refine/Develop Computer Vision Algorithm for 3D Building Model to Image Matching
  • Prove Geometry Enhancement Technique between 3D Building Models & Images
  • Prove Caption Extraction & Placement Technique between 3D Building Models & Images
  • Prove Texture Mapping from Images to 3D Building Models

I plan to blog here as the research progresses.
