Extracting Parking Areas from Remote Sensing Imagery and Spatiotemporal Traffic Data

Julian Glaab, University of Rhode Island

Abstract

The increasing worldwide demand for mobility has turned parking space in urban areas into a scarce resource. Searching for parking is not only annoying to drivers; it is also a serious cause of urban traffic. A previous study found that vehicles searching for parking contribute as much as 30 percent of total traffic. To solve this problem, a comprehensive Intelligent Transportation System (ITS) is necessary that supplies drivers directly with information about open parking spaces in their surroundings, so they do not need to “go and look” themselves. However, creating such an ITS requires a detailed map with accurate parking area positions and additional parking information, such as cost or parking restrictions. Currently, no database provides this information above the level of individual municipalities. Mapping parking areas quickly, cost-efficiently, and reliably demands a highly scalable process that relies only on existing, widely disseminated technologies instead of exhaustive manual data collection. Remote sensing imagery and vehicle trajectories from probe vehicles, so-called “floating car data” (FCD), were identified as appropriate data sources, as they provide all necessary information and are available for most urbanized regions in standardized form. This study uses machine learning techniques to extract and aggregate parking area information from remote sensing imagery and floating car data. The multistage process for detecting parking area positions on images is based on GIS data, a convolutional neural network for detecting parked vehicles, and a recursive algorithm that derives parking areas from the positions of individual parked vehicles. Subsequently, metadata about cost, user group restrictions, and opening hours is added to every detected parking area. The metadata is obtained by analyzing characteristic temporal patterns in local parking behavior, which are retrieved from floating car data.
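The abstract does not specify the recursive algorithm in detail, but the idea of growing parking areas outward from individually detected parked vehicles can be sketched as a recursive region-growing clustering. The distance threshold `max_gap` and the flat (x, y) coordinates are assumptions for illustration, not the thesis's actual parameters.

```python
from math import hypot

def grow_area(seed_idx, positions, assigned, area, max_gap=7.0):
    """Recursively add every unassigned vehicle within max_gap metres
    of a vehicle already in the area (hypothetical threshold)."""
    area.append(positions[seed_idx])
    assigned[seed_idx] = True
    sx, sy = positions[seed_idx]
    for j, (x, y) in enumerate(positions):
        if not assigned[j] and hypot(x - sx, y - sy) <= max_gap:
            grow_area(j, positions, assigned, area, max_gap)

def derive_parking_areas(positions, max_gap=7.0):
    """Cluster detected parked-vehicle positions into parking areas:
    each area is the transitive closure of nearby detections."""
    assigned = [False] * len(positions)
    areas = []
    for i in range(len(positions)):
        if not assigned[i]:
            area = []
            grow_area(i, positions, assigned, area, max_gap)
            areas.append(area)
    return areas
```

For example, three detections spaced 5 m apart would merge into one area, while an isolated detection 100 m away would form its own. The number of detections per area also yields the capacity estimate the abstract validates against.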
To sense these patterns, machine learning classifiers are applied to a set of different features. To achieve higher accuracy, the FCD is enriched with context data, such as demographics or points of interest. Results are validated against OSM data and manually collected reference data in a test area in the city of Braunschweig, Germany. The developed process for parking area detection is robust and achieved a detection accuracy above 95 percent with respect to parking area capacity in fully exposed image areas. However, the process cannot sense parking areas that are hidden by objects such as roofs or trees. The idea of metadata extraction from floating car data is very promising, as the average classification accuracy already exceeds 80 percent on the training data set used. The remaining error is mainly due to the fact that the amount of training data currently available does not allow a spatial classifier resolution high enough to capture the parking restrictions of small individual parking areas. Future work will use more floating car data to further improve classification accuracy and to extract even more parking information from this data source. Parking area mapping based on remote sensing imagery may be improved by a dedicated convolutional network architecture for vehicle detection. Additionally, images from the autumn and winter seasons, when more ground is exposed to the classifier, could be used to increase overall detection accuracy.
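The classification of temporal parking patterns from FCD can be illustrated with a minimal stand-in: build a normalized 24-hour histogram of parking-event start times and assign the label of the closest prototype profile. The thesis uses richer features and proper machine learning classifiers; the hourly histogram, the nearest-centroid rule, and the labels below are hypothetical simplifications.

```python
def hourly_profile(event_hours):
    """Normalised 24-bin histogram of parking-event start hours,
    as could be derived from floating car data (hypothetical input)."""
    counts = [0] * 24
    for h in event_hours:
        counts[h % 24] += 1
    total = sum(counts) or 1
    return [c / total for c in counts]

def classify_profile(profile, prototypes):
    """Nearest-centroid stand-in for the study's ML classifiers:
    return the label whose prototype profile is closest in L1 distance."""
    def dist(p, q):
        return sum(abs(a - b) for a, b in zip(p, q))
    return min(prototypes, key=lambda label: dist(profile, prototypes[label]))
```

A parking area whose events cluster in business hours would then be matched to a "commercial" prototype, while evening-heavy activity would match a "residential" one; such labels could serve as proxies for cost or user-group restrictions.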

Subject Area

Geographic information science|Information science|Artificial intelligence

Recommended Citation

Julian Glaab, "Extracting Parking Areas from Remote Sensing Imagery and Spatiotemporal Traffic Data" (2017). Dissertations and Master's Theses (Campus Access). Paper AAI10615492.
https://digitalcommons.uri.edu/dissertations/AAI10615492
