The Final Solution

This is the 8th and final post on my paper “A Method for Determining and Improving the Horizontal Accuracy of Geospatial Features”. Other posts on this topic:

7. Determining the Spatial Accuracy of Polygons Using Buffer Overlay

In the last post on the subject we described how Buffer Overlay can be used to determine the horizontal accuracy of polygon features. We ran Buffer Overlay on a single Township/Range where we had 259 permits to compare against parcels. This produced 259 cumulative probability (CP) curves, one for each permit. In the sample below, 21 curves are shown. For each of these, the buffer distance at which the CP reaches 1 equals the accuracy of the permit. If the curve never crosses a CP of 1, its accuracy could not be determined.
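As a rough illustration of the idea (not the paper's actual implementation), the Goodchild–Hunter style measure can be sketched in pure Python by sampling points along the test (permit) boundary and asking what fraction falls within each buffer distance of the control (parcel) boundary. The function names and the point-sampling shortcut here are assumptions for illustration; a production version would use real buffer geometry in a GIS.

```python
import math

def _seg_dist(p, a, b):
    """Distance from point p to the segment a-b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    if L2 == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def _boundary_samples(poly, n_per_edge=20):
    """Evenly spaced sample points along a polygon's boundary."""
    pts = []
    for i in range(len(poly)):
        a, b = poly[i], poly[(i + 1) % len(poly)]
        for k in range(n_per_edge):
            t = k / n_per_edge
            pts.append((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])))
    return pts

def cp_curve(test_poly, control_poly, distances):
    """For each buffer distance d, the fraction of the test boundary
    that falls within d of the control (parcel) boundary."""
    samples = _boundary_samples(test_poly)
    edges = [(control_poly[i], control_poly[(i + 1) % len(control_poly)])
             for i in range(len(control_poly))]
    curve = []
    for d in distances:
        hits = sum(1 for p in samples
                   if min(_seg_dist(p, a, b) for a, b in edges) <= d)
        curve.append((d, hits / len(samples)))
    return curve

def horizontal_accuracy(curve, threshold=1.0):
    """Smallest buffer distance where the CP reaches the threshold;
    None when the curve never gets there (accuracy undetermined)."""
    for d, cp in curve:
        if cp >= threshold:
            return d
    return None
```

For a test square shifted 0.4 units east of its control square, the CP reaches 1 somewhere between 0.25 and 0.5, so the reported accuracy is the first buffer distance that fully captures the boundary.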

Here is our initial horizontal accuracy distribution for the 259 permits. In the graph below we have two peaks: < 2 feet, where we have about 50 records, and >= 60 feet, where we have about 35; the latter are the features for which accuracies could not be determined. The middle part of the curve shows a random distribution of accuracies.

Here is the cumulative curve. If the data were perfect, all 259 permits would have a horizontal accuracy of 0.5 feet, in which case the cumulative curve (shown below) would be a straight line across at a cumulative count of 259.

This information is visualized below, where you can see that there are many features at accuracy levels from 2–60 feet. This type of visualization is useful for manual correction of features, but we were looking to automate the process.

In most cases the last buffer and clip operation will extract a line segment from control that forms a complete ring. This means that, using standard topology tools, the extracted control lines can be built into polygons and used to replace less accurate test (permit) features.

The solution that we implemented consisted of two processes:

  • If the clipped line work formed a complete ring, the ring could easily be built into a polygon. This was the Phase 1 Correction.
  • If the clipped line work had gaps, we implemented a more complex algorithm that did a node-to-node comparison and then extended lines to form closed rings. In this case it was possible to create multiple rings, and once those rings were built into polygons their areas were compared to the original permit (test feature) to ensure that the new feature was not significantly larger or smaller than the original. This was the Phase 2 Correction.
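A minimal sketch of the two-phase idea, assuming segments are lists of coordinate pairs: Phase 1 chains the clipped control segments end-to-end into a closed ring, and the Phase 2 area sanity check compares the rebuilt polygon to the original permit. The function names, tolerance, and 10% area threshold are illustrative assumptions, not the paper's actual parameters, and the gap-bridging node-to-node extension of Phase 2 is omitted.

```python
def _close(p, q, tol=1e-6):
    """True when two endpoints coincide within tolerance."""
    return abs(p[0] - q[0]) <= tol and abs(p[1] - q[1]) <= tol

def build_ring(segments, tol=1e-6):
    """Phase 1: chain clipped control segments end-to-end into a closed
    ring. Returns the ordered vertex list, or None when the line work
    has a gap (which would fall through to a Phase 2 style correction)."""
    segs = [list(s) for s in segments]
    ring = segs.pop(0)
    while segs:
        tail = ring[-1]
        for i, s in enumerate(segs):
            if _close(s[0], tail, tol):       # segment continues forward
                ring.extend(s[1:]); segs.pop(i); break
            if _close(s[-1], tail, tol):      # segment is reversed
                ring.extend(reversed(s[:-1])); segs.pop(i); break
        else:
            return None                       # gap: ring cannot be closed
    return ring if _close(ring[0], ring[-1], tol) else None

def ring_area(ring):
    """Shoelace area of a ring given as a vertex list."""
    s = 0.0
    n = len(ring)
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def area_ok(new_ring, orig_ring, max_change=0.10):
    """Phase 2 sanity check: accept the rebuilt ring only if its area
    is within max_change (an assumed threshold) of the original permit."""
    a_new, a_orig = ring_area(new_ring), ring_area(orig_ring)
    return abs(a_new - a_orig) / a_orig <= max_change
```

Feeding it the four edges of a unit square, in any order or orientation, yields a closed ring; dropping an edge returns None, which is the case the Phase 2 algorithm handles.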

In this video, the first build is the trivial solution (Phase 1 Correction); the second build is the more complex Phase 2 Correction.

After the Phase 1 Correction we generated statistics and found that we had corrected 104 permits, or 40% of our 259 features, to parcel. The horizontal accuracy distribution before and after the Phase 1 Correction is shown below.

In this close-up we see that the Phase 1 Correction curve spikes at 0.5 feet above the Initial curve, and that the rest of the Phase 1 Correction curve is under the Initial curve, as expected.

The same improvements can be seen in the cumulative curve.

After the Phase 1 Correction we also calculated the RMSE and found a significant reduction in the horizontal error when compared to parcel (see below).

Lastly, a visual comparison was done showing the obvious improvements at the 0.5 feet horizontal accuracy (shown in yellow below).

The Phase 2 Correction made a 6% (16 record) improvement to our data. Here are some images of the corrected features: red is the original permit; the green area is the corrected feature following parcel.

The image below shows a common problem with our algorithm: on the west side of the polygon a corner has been clipped. However, in general the new feature is a better representation than the original in red.

The final horizontal accuracy measures after the Phase 2 Correction are provided below. The Phase 2 curve is under the Phase 1 curve, which is under the Initial curve, as expected.

In conclusion, we have found that Buffer Overlay analysis is an excellent tool for determining and improving the horizontal accuracy of our permit features relative to parcels. The initial accuracy assessment is easy to perform and produces data that is easy to interpret. Visualizing the data by these generated accuracies provides an avenue for manual correction of the data. However, with a little more effort, complete rings can be extracted from the last buffer and clip operation and built into more horizontally accurate polygons. This is the low-hanging fruit, but it also provides the biggest bang for the buck.

Our additional efforts in Phase 2, where we connected nodes to close gaps and create rings, only improved our data by 6%; however, we feel confident that we can tweak our algorithm to yield at least a 10–15% gain in the future.


A Method for Determining and Improving the Horizontal Accuracy of Geospatial Features


“A Method for Determining and Improving the Horizontal Accuracy of Geospatial Features” is the title of a paper I am writing and will be presenting at the 2012 South Florida GISExpo. Other posts on this research:

2. Coordinate Sample Builder

Here is part of the abstract:

  • Many geospatial data sets stewarded by GIS professionals are based on cadastre (parcel) boundaries.
  • Many of these same data sets lack positional accuracy measures that could be generated by comparing these features to parcels.
  • A simple positional accuracy measure for linear features developed by Goodchild and Hunter (1997) is used to generate positional accuracies for polygons representing geographic features closely related to parcels.
  • The method is then extended to extract and replace lower accuracy polygons with higher accuracy polygons derived from parcels.

The South Florida Water Management District (SFWMD) regulates water supply, water quality, groundwater withdrawals, and surface water runoff through the issuance of permits for these activities on specific land parcels. In the images below you can see a large (left) and small (right) Environmental Resource Permit.

Our GIS consists of ~85,000 permit polygons:

  • 40,000 Environmental Resource Permits that never expire
  • 45,000 Water Use Permits that are valid for 20 years

These are used by engineers, hydrologists, and environmental scientists to make informed decisions during the application review process and post-permit compliance.

We have been digitizing permits since the 1980s, and during that time we have used many different base maps.

  • 1980 to 1987 – USGS 1:24,000 Quad Map (7 years)
  • 1987 to 1995 – 20 Meter multi-spectral and 10 Meter SPOT panchromatic imagery (8 years)
  • 1995 to 1999 – 1 Meter USGS DOQQs (4 years)
  • 1999 to Present – Cadastre/Parcels in select counties

Which basically boils down to having 15 years of pretty awful data. Since 1999 we have been manually “correcting” our older permits to parcels, but we have not made this task into a project (too many other things to do); instead, we fix them when we run across them. In order to speed things up we decided to automate, and in the following posts I will describe how we are going about this automation.

If you would like to view the permits in Google Earth, go to our e-permitting page and for “Application #” enter “120117-3”. You should get the screen shown below, where you can click on the “View in Google Earth” link. If you're on a Mac, you will have to File – Open the downloaded KML.

SFWMD Help for Viewing Permits through Google Earth

From 1980 to 1995 we digitized permits with a less than optimal base map. We had 7 years of data digitized with USGS 1:24,000 Quad Maps, and then we went 8 years using 20 Meter multi-spectral and 10 Meter SPOT panchromatic imagery.

In 1999 we began working with parcels, and at the time the 16 counties in our jurisdiction had their parcels in different states of accuracy. The large counties with longer-established GIS operations, like Miami-Dade, Broward, and Palm Beach, had very accurate parcel data. In the less populated counties we had to tolerate a certain amount of parcel drift as these counties improved their GIS.

In addition, we were compiling the parcels into a seamless mosaic at the District, but we were not subjecting the final output to a reliable QA/QC. On one yearly update we found that the entire parcel fabric for one county had shifted tens of feet; it turned out that the previous year's data had been incorrectly projected. After this we started visually inspecting the parcels in a more systematic way by zooming in to 30 random locations and noting any abnormalities.

In 2008, when we began working on how to improve the accuracy of our permits, the first thing we needed to do was objectively quantify their accuracy against the parcels. We knew that we could get an RMSE by comparing the vertices of permits and parcels, but to do this effectively we needed a tool. More in future posts.
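The vertex-comparison RMSE mentioned above is straightforward once permit and parcel vertices have been matched up. A minimal sketch, assuming matched pairs are already available (the pairing step is the hard part the tool would handle):

```python
import math

def rmse(vertex_pairs):
    """Horizontal RMSE over matched (permit, parcel) vertex pairs:
    the root of the mean squared 2D distance between each pair."""
    sq = [(px - qx) ** 2 + (py - qy) ** 2
          for (px, py), (qx, qy) in vertex_pairs]
    return math.sqrt(sum(sq) / len(sq))
```

For example, a single vertex offset by a 3-4-5 displacement contributes 25 square units of error, so two pairs with errors of 5 and 0 feet give an RMSE of sqrt(12.5), roughly 3.54 feet.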