Metro HD Data Sets
Imagery and Derivative Product Overview
Aerial Imagery (Oblique and Nadir)
A Leica CityMapper-2 sensor exposure includes five (5) individual images (i.e., one (1) nadir image and four (4) oblique images). Oblique data is acquired and stored based on the orientation of the sensor (i.e., forward, right, backward, left at successive 90° angles with respect to the direction of flight) at the time of acquisition. All imagery data is accompanied by a text file that includes corresponding exterior orientation parameters based on a refined GNSS solution and bundle adjustment. Raw image frames are not distortion-free. Oblique imagery spectral bands are red, green and blue only. All images are radiometrically balanced to minimize the effects of different oblique viewing angles. GSD is 5.0 cm (2-inch) for nadir orientation and on average 6.7 cm (2.64 inch) for oblique.
Accuracy
RMSE x/y: 15 cm (5.91 inch)
RMSE z: 25 cm (9.84 inch)
Coordinate Reference System
See the corresponding section as per the Metro HD Program.
Format
Nadir imagery:
GSD: 5.0 cm (2-inch)
Delivered in R-G-B-NIR spectral order
Oblique imagery:
GSD: 6.7 cm (2.64 inch) average
Delivered in R-G-B spectral order
Lossless 8-bit TIFF
Internally tiled
256 x 256, or
512 x 512
Exterior Orientation Parameters
Delivered in ASCII text format
Image name
x-coordinate
y-coordinate
z-coordinate
omega
phi
kappa
Adjusted GPS timestamp
Camera parameters can be delivered as needed
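The exterior orientation file lends itself to simple scripted ingestion. The following is a minimal sketch in Python, assuming a whitespace-delimited layout with one row per image in the attribute order listed above; the actual delimiter, header lines, and angular units should be confirmed against the delivered file.

```python
# Minimal sketch of parsing the exterior orientation text file.
# Assumes whitespace-delimited rows in the order: image name, x, y, z,
# omega, phi, kappa, adjusted GPS timestamp (verify against the delivery).
from dataclasses import dataclass

@dataclass
class ExteriorOrientation:
    image: str
    x: float
    y: float
    z: float
    omega: float
    phi: float
    kappa: float
    gps_time: float

def read_eo_file(path: str) -> list[ExteriorOrientation]:
    records = []
    with open(path) as handle:
        for line in handle:
            parts = line.split()
            if len(parts) < 8:
                continue                      # skip blank or malformed lines
            try:
                records.append(ExteriorOrientation(
                    image=parts[0],
                    x=float(parts[1]), y=float(parts[2]), z=float(parts[3]),
                    omega=float(parts[4]), phi=float(parts[5]), kappa=float(parts[6]),
                    gps_time=float(parts[7]),
                ))
            except ValueError:
                continue                      # skip header or comment rows
    return records
```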
Orthorectified Imagery
Standard perspective orthomosaic images include the visual effect of building lean (i.e., the top of an object appears displaced from its base according to the object's height and its distance from the image center). As such, coordinate values represent the ground location and not the (visualized) rooftop location as it appears in the imagery. The rectification process relies upon a terrain elevation layer to remove differential scaling and includes the removal of bridge and building distortions.
True orthomosaic imagery is a raster product that goes a step further than the standard perspective offering: each pixel represents true nadir viewing geometry. This process removes building lean and perspective distortion from the final image product by using a DSM in the rectification. The result is that all surface objects are visually represented and located in their true orthogonal 2D position, with the top and bottom of each structure aligned. The image geometry provides a uniform and consistent scale across the entire image, while the nadir perspective reduces obscurations and reveals more surface features (e.g., streets, utilities, street furniture).
Orthomosaic images are available at a 5.0 cm (2-inch) ground sample distance. Depending on the purchase method, imagery is delivered as a full 4-band product (Red-Green-Blue-NearIR) or as 3+3 (Red-Green-Blue and/or Green-Red-NearIR).
Accuracy
RMSE x/y: 25 cm (9.84 inch)
RMSEr: 35.4 cm (13.94 inch)
CL95: 61.2 cm (24.09 inch)
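These horizontal figures are consistent with the FGDC NSSDA relationships, assuming equal and normally distributed errors in x and y; the short check below (illustrative only) reproduces the listed RMSEr and CL95 values.

```python
# Hedged check of the listed horizontal accuracy figures against the
# FGDC NSSDA formulas, assuming equal, normally distributed x/y errors.
import math

rmse_xy = 25.0                          # cm, stated RMSE in x and in y
rmse_r = math.hypot(rmse_xy, rmse_xy)   # sqrt(RMSEx^2 + RMSEy^2) ~= 35.4 cm
cl95_r = 1.7308 * rmse_r                # NSSDA horizontal accuracy at 95% ~= 61.2 cm
print(round(rmse_r, 1), round(cl95_r, 1))   # 35.4 61.2
```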
Format
Raster
Lossless 8-bit GeoTIFF
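For reference, a delivered orthomosaic tile can be opened with any GeoTIFF-capable library. The sketch below uses rasterio as an illustrative choice and assumes the bands arrive in the R-G-B-NIR order stated above; the file name is a placeholder.

```python
# Minimal sketch of reading a 4-band (R-G-B-NIR) orthomosaic tile with
# rasterio; library choice and file name are illustrative only.
import rasterio

with rasterio.open("ortho_tile.tif") as src:
    red, green, blue, nir = src.read()   # bands in the delivered R-G-B-NIR order
    print(src.crs, src.res)              # coordinate reference system and GSD (~0.05 m)
```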
Coordinate Reference Systems
See the corresponding section as per the Metro HD Program.
Applications
True orthomosaic imagery
Planimetric data extraction (including building footprints)
Special Considerations
Bridge and building distortions removed (standard orthoimagery)
Elevation Product Overview
LiDAR Point Cloud
Lidar data represents the raw elevation point cloud after system calibration, noise removal, and strip adjustment. Lidar point data is encoded with spectral information (i.e., Red-Green-Blue-NearIR) from the corresponding aerial imagery and includes intensity, return number, and scan angle as attributes. All points are classified as LAS Class 0 (Created, Never Classified).
Accuracy
RMSEz: 10 cm (3.94 inches)
CL95z: 19.6 cm (7.72 inches)
Coordinate Reference Systems
See the corresponding section as per the Metro HD Program.
Format
Density: 20 points per square meter (minimum allowable: 8 points per square meter)
ASPRS LAS v1.4
Point attributes: intensity, return number, scan angle
R-G-B-NIR spectral encoding
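As an illustration, a delivered LAS 1.4 tile can be inspected with laspy (an assumed library choice, not a delivery requirement); attribute names follow laspy conventions, and the NIR dimension presumes an extended point format such as 8, which should be verified against the file header.

```python
# Minimal sketch of inspecting a delivered LAS 1.4 tile with laspy;
# library choice and file name are illustrative only.
import laspy

las = laspy.read("tile.las")
print(las.header.version, las.point_format.id, len(las.points))

# Attributes listed above: intensity, return number, scan angle,
# plus per-point R-G-B-NIR values transferred from the imagery.
intensity = las.intensity
returns = las.return_number
angle = las.scan_angle            # extended scan angle for point formats 6+
rgb_nir = (las.red, las.green, las.blue, las.nir)
```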
Terrain Products (Raster)
Digital Surface Model
Digital surface models (DSM) are derived from lidar first return pulses.
Accuracy
RMSE x/y: 20 cm (7.87 inch)
RMSEz: 10 cm (3.94 inches)
Coordinate Reference Systems
See the corresponding section as per the Metro HD Program.
Format
32-bit floating point
Single band raster image
GSD based on lidar point density
GeoTIFF
Digital Elevation Model
Digital elevation models (DEM), also referred to as digital terrain models (DTM), are constructed from identified lidar ground points.
Accuracy
RMSE x/y: 20 cm (7.87 inch)
RMSEz: 10 cm (3.94 inches)
Coordinate Reference Systems
See the corresponding section as per the Metro HD Program.
Format
32-bit floating point
Single band raster image
GSD based on lidar point density
GeoTIFF
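Because the DSM and the bare-earth model share the same raster format, a common derived product is a normalized surface model (per-pixel object height above ground). The sketch below is illustrative only, uses rasterio and numpy as assumed libraries, and presumes both rasters share the same grid, extent, and nodata convention; file names are placeholders.

```python
# Minimal sketch of deriving a normalized surface model (object heights
# above ground) by differencing the DSM and the bare-earth model.
import numpy as np
import rasterio

with rasterio.open("dsm.tif") as dsm_src, rasterio.open("dem.tif") as dem_src:
    dsm = dsm_src.read(1)          # single-band 32-bit float rasters
    dem = dem_src.read(1)
    profile = dsm_src.profile      # reuse grid, CRS, and dtype for the output

ndsm = np.where(np.isfinite(dsm) & np.isfinite(dem), dsm - dem, np.nan)

with rasterio.open("ndsm.tif", "w", **profile) as dst:
    dst.write(ndsm.astype(np.float32), 1)
```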
AI Derivative Products Overview
3D Building Models
3D building models use a CityGML-compliant, XML-based encoding for the representation, storage, and exchange of digital 3D city models, facilitating efficient visualization and data analysis. Objects are standardized 3D models with respect to geometry, topology, semantics, and appearance. Building models have differentiated footprints, roof structures (e.g., pitch angle and orientation), heights (derived from the DSM), and thematically differentiated boundary surfaces, providing increased detail and a closer representation of reality. Buildings are untextured.
Accuracy
RMSE x/y: 15 cm (5.91 inch)
RMSEz: 10 cm (3.94 inches)
Coordinate Reference Systems
See the corresponding section as per the Metro HD Program.
Format
Delivered in GPKG containing vector building footprints (Default)
Minimum and maximum object height delivered as attributes
OBJ format containing simplified building models
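As an illustration, the GeoPackage delivery can be queried with geopandas (an assumed library choice); the attribute names height_min and height_max are hypothetical placeholders for the delivered minimum and maximum object heights and must be checked against the actual schema.

```python
# Minimal sketch of working with the building-footprint GeoPackage;
# file name and attribute names below are hypothetical placeholders.
import geopandas as gpd

buildings = gpd.read_file("buildings.gpkg")
print(buildings.crs, len(buildings))

# Approximate structure height from the delivered min/max object heights.
buildings["obj_height"] = buildings["height_max"] - buildings["height_min"]
print(buildings["obj_height"].describe())
```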
Applications
This model level is most appropriate for city districts and project-scale work. Past customer usage includes, but is not limited to:
Urban analysis
Estimation of solar exposure
Classifying building types
Urban planning
RF engineering/planning
3D Tree Models
Tree positions are identified from derived land cover maps, with 2D segmentation of individual trees based on heights from the DSM. Derived tree data contains stems and crowns, with each crown modeled as a modified sphere. Corresponding attribute information includes tree position, height, crown diameter, and crown volume. Trees are classified as coniferous or deciduous.
Accuracy
RMSE x/y: 15 cm (5.91 inch)*
RMSEz: 10 cm (3.94 inch)*
*Based upon the accuracy of the input data sets; the specific location of tree crowns is difficult to delineate due to surface winds
Format
Attributes delivered in CSV
Models delivered in:
OBJ (simplified tree models)
GPKG (containing tree position and height)
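As an illustration, the CSV attribute table can be summarized with pandas (an assumed library choice); the column names used below (class, height, crown_diameter, crown_volume) are hypothetical placeholders for the delivered attributes.

```python
# Minimal sketch of summarizing the tree attribute CSV; file name and
# column names are hypothetical placeholders for the delivered schema.
import pandas as pd

trees = pd.read_csv("trees.csv")
summary = trees.groupby("class")[["height", "crown_diameter", "crown_volume"]].mean()
print(summary)   # mean values for coniferous vs. deciduous trees
```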
Coordinate Reference Systems
See the corresponding section as per the Metro HD Program.
Applications
RF engineering/planning
Urban planning
Understanding the spatial composition of the urban landscape
Vegetation inventory (tree cover change)
Shadow analysis
Solar potential modeling
Urban microclimate prediction
Landcover Map
The land cover data is delivered as a raster product where each pixel is assigned to the most probable of 25 classes. The segmentation process is based on artificial-intelligence 2D semantic segmentation of the input imagery, taking advantage of a database of hand-labeled training data created from images taken at diverse locations around the world. The training database is built through a manual delineation and identification process from dedicated training-set images. The accuracy of the process is assessed against a test set of images held out of training and processing; this assessment applies to the overall algorithm and is not performed on a project-by-project basis. The 25 classes are:
Roof | Facade | Terrace | Tree | Shrub
Structure | Object | Solar panel | Vehicle | Train
Boat | Airplane | Wall | Retaining Wall | Stairs
Bridge | Object | Dirt Road | Railway | Sports Field
Water | Agriculture | Grass | Sand | Rock
Accuracy
Overall single image accuracy: 87%, averaged across all classes
Format
Single band indexed raster image
GSD: 5.0 cm (2-inch)
GeoTIFF
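As an illustration, per-class area statistics can be tallied directly from the indexed raster. The sketch below uses rasterio and numpy as assumed libraries; the mapping from pixel value to class name is delivery-specific and is not assumed here.

```python
# Minimal sketch of tallying per-class pixel counts and areas from the
# indexed land cover raster; file name is a placeholder.
import numpy as np
import rasterio

with rasterio.open("landcover.tif") as src:
    classes = src.read(1)                        # single-band indexed raster
    pixel_area_m2 = abs(src.res[0] * src.res[1]) # ~0.05 m x 0.05 m per pixel

values, counts = np.unique(classes, return_counts=True)
for value, count in zip(values, counts):
    print(value, count, round(count * pixel_area_m2, 1), "m^2")
```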
Coordinate Reference Systems
See the corresponding section as per the Metro HD Program.
Applications
RF-engineering/planning
Ecological modeling and analysis
Landscape planning
Urban heat mapping
Further feature extraction (e.g., building footprints, vegetation)
Aerial Mesh
The aerial mesh data product is derived from a combination of imagery and LiDAR to create a virtual representation of the real world. A mesh of triangles represents the surface of all natural and built-up objects within the imaged scene. Mesh texture is achieved by assigning 2D RGB image data to each corresponding triangle of the 3D mesh. The foundation of the aerial mesh is a spatial data structure that enables Hierarchical Level of Detail (HLOD), so only visible tiles are streamed and rendered, improving overall performance. The OGC 3D Tiles format defines a spatial hierarchy for fast streaming and precise rendering, balancing performance and visual quality at any scale.
Accuracy
RMSEx: 2× the nominal RMSEx accuracy of the aerial imagery data*
RMSEy: 2× the nominal RMSEy accuracy of the aerial imagery data*
RMSEz: 3× the nominal RMSEz accuracy of the lidar data*
*Areas considered to be known limitations of the processing and artifacts are excluded from this definition. This includes cases such as near roof edges, areas with single-color texture, areas with highly repetitive patterns in texture (e.g., agricultural fields), and thin structures such as wires or cables.
Format
Tiles stored as OGC 3D Tiles (built on glTF) (Default)
Optimized for web streaming
Containing five levels of detail as a minimum
Texture resolution of the highest LOD is the same as the input imagery resolution
Texture resolution is progressively halved for coarser LODs
Batched 3D Model (b3dm)
OBJ with JPG texture (best purposed for offline work)
Output at a variety of resolutions (lowest level is the default)
SLPK (scene layer package)
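As an illustration, the HLOD hierarchy of an OGC 3D Tiles delivery can be inspected by walking the tileset.json entry point defined by the 3D Tiles specification; the sketch below only traverses the tile tree and does not decode the b3dm/glTF payloads.

```python
# Minimal sketch of walking the HLOD hierarchy of an OGC 3D Tiles
# delivery via its tileset.json entry point.
import json

def walk(tile, depth=0):
    uri = tile.get("content", {}).get("uri", "")
    print("  " * depth, round(tile.get("geometricError", 0.0), 2), uri)
    for child in tile.get("children", []):
        walk(child, depth + 1)    # finer levels of detail sit deeper in the tree

with open("tileset.json") as f:
    tileset = json.load(f)

walk(tileset["root"])             # geometricError shrinks toward the leaf tiles
```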
Coordinate Reference Systems
See the corresponding section as per the Metro HD Program.
Applications
Observe and monitor change
Landscape planning and analysis
City Planning
Smart city integration and analysis
Line of sight analysis
Transportation analysis
Special Considerations
This is not the super mesh product
This is an automated process with known and inherent limitations in texture and geometry
Artifacts are corrected either by manual editing or AI to the best extent possible
Texture and geometric (i.e., incomplete or incorrect modeling) artifacts include but are not limited to:
Highly reflective surfaces (e.g., glass facades)
Moving objects (e.g., cars)
Building edges, wires, traffic signs/lights
Balconies with delicate structures (warping)
Slim/fine objects (background texture extension)
Ancillary Data Sets
The following data sets are not standard Metro HD products for sale or distribution. These files are generated as part of standard image product generation and could be provided at an organization’s request and/or to support a bid package.
Mosaic Seam Lines
Vector representation of mosaic seamlines stored as polygons in shapefile format.
Polygons shall represent the total area included from each individual image in the final mosaic
Polygons will be topologically correct, with no gaps or overlaps
Shapefile attribute table must contain, at a minimum, the following attributes:
Image acquisition date and time (time format: UTC HH:MM:SS)
Sensor manufacturer and type (Leica CityMapper, etc.)
Sensor serial number (System serial number or Nadir camera serial number is acceptable)
Tail number of aircraft which acquired the image
Triangulation Report
If Leica HxMap is used, the HxMap Triangulation Project Folder and a screenshot of the Triangulation perspective showing Statistics and Settings tabs shall be delivered
For other Triangulation workflows, a Report which details the Triangulation results:
Residuals and standard deviation of adjusted exposure station position and orientation, compared to direct georeferencing from the trajectory
Residuals and standard deviation of adjusted image measurements. Tie points and control points are summarized independently.
Residuals and standard deviation of adjusted control measurements. Control points and checkpoints (if used) are reported individually.
Datum transformation (if any) calculated in Triangulation
Raw Data
The distribution and/or sale of raw data will be handled on a case-by-case basis. Raw data will only be sold if it is proven that the organization is attempting to extract information and/or produce products that Hexagon cannot. A royalty commitment should be put in place as part of the agreement.
The following raw data items will be provided upon request:
A copy of raw data as it appears on the MMs or as created by HxMap Data Copy
Processed GPS/INS file in SOL format
Base station data