Picture of a Credit Card Transaction

The following is the simplest infographic I could find of how a credit card transaction works, from UniBul’s Money Blog. If you go to the page, there are actually a couple of other infographics on credit card transactions.

[Infographic: “A credit card payment is so incredibly simple, until you take a closer look,” from UniBul’s Money Blog]

Here is the prose version:

Betty swipes her card, and her account number, the expiration date, the billing address’s zip code, and the CVV code are sent to something called a front-end processor (in the above example, Authorization Step 2, it is shown as MasterCard, but this is most often farmed out to a private company). The front-end processor’s job is to quickly check that Betty’s card has enough funds to cover the payment. It forwards the information contained on her card to the network of the relevant card association (MasterCard, Visa, American Express, etc.), which figures out which issuing bank the card came from. Her transaction now moves to a separate payment processor representing the issuing bank, the one whose name is on Betty’s card and which manages her account. Once her bank has verified the validity of the information and checked for sufficient credit, a signal goes back the other way: the bank tells its processor to give the all clear to the association, which conveys it back to the front-end processor so that Farmer John and the acquiring bank can be satisfied that Betty has enough funds to cover the oranges. Within seconds Farmer John is notified of the approval.

Betty is walking away with her oranges; however, the payment system is not done. Farmer John has not been paid for delivering the oranges. For that to occur, Farmer John must send a follow-up request to his acquiring bank, usually in a batch of receipts at day’s end. The acquiring bank will pay Farmer John for those receipts, but it will need to place a request for reimbursement from the issuing bank, using an automated clearing house (ACH) network managed by either the regional Federal Reserve banks or the Electronic Payments Network of the Clearing House Payments Company, a company owned by eighteen of the world’s biggest commercial banks. Still, Betty’s bank won’t release the funds if it’s not convinced that it was really she who bought the oranges. So before it even gets the request for payment, its antifraud team has been hard at work analyzing the initial transaction, looking for red flags and patterns of behavior outside her ordinary activity. If the team is not sure about who was swiping the card, it will call Betty’s cell and home phone numbers, text her, and e-mail her, trying to get her to confirm that it really was her at the farmers market. Once her bank is satisfied that all is aboveboard, it will release the ACH settlement payment and register a debit on her credit card account. The money then flows to Farmer John’s acquiring bank, which credits Farmer John’s account. This process typically takes up to three business days to complete.

All this processing is not done for free. Each entity in bold red letters takes a cut of Betty’s transaction, which usually totals between 1 and 3 percent of the sale. This may not seem like a lot, but when you take into account all sales worldwide…
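
As a rough illustration of the scale, here is the 1–3 percent cut applied to Betty’s $1.50 sale (the exact split among acquirer, association, and issuer varies by card and merchant, so these rates are only placeholders):

```python
# Back-of-the-envelope math: a 1-3 percent cut of Betty's $1.50 orange sale.
sale = 1.50
for rate in (0.01, 0.03):
    print(f"{rate:.0%} of ${sale:.2f} is {sale * rate * 100:.1f} cents")
```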

 

Picture of Blockchain Transaction

I have been reading “The Age of Cryptocurrency” by Paul Vigna and Michael J. Casey. They provide this simple-to-understand image of how a blockchain operates.

[Image: blockchain explained]

A Practical Example of Farmer John selling Betty some oranges for $1.50

Betty goes to the Saturday morning farmers market in her town and wants to purchase $1.50 worth of oranges from Farmer John. Betty will use a cryptocurrency for this transaction (Bitcoin). Farmer John presents Betty his payment address as a quick response code:

[Image: Farmer John’s payment address as a QR code]

Betty uses a Bitcoin wallet on her smartphone to scan the code. She is presented with a screen where she can enter an amount to send to John’s address. She types ‘$1.50’ and presses send. A moment later, John’s tablet notifies him that there is an incoming payment pending, which is not yet confirmed. About ten minutes later, the payment is finalized when it gets confirmed.

Why ten minutes? That will be left for another discussion.

Under the hood

1) The Payment Transaction:
The software on Betty’s smartphone checks whether she has a sufficient balance and then creates a payment transaction. This transaction is composed of three pieces of information: which “coins” to spend, the recipient, and a signature.
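
A minimal sketch of what that payment order might look like as a data structure (field names and values are illustrative; a real Bitcoin transaction references previous transaction outputs, uses scripts, and carries an ECDSA signature):

```python
# Illustrative only: the three pieces of information in Betty's payment order.
from dataclasses import dataclass

@dataclass
class Transaction:
    inputs: list          # which "coins" (previous outputs) Betty is spending
    recipient: str        # Farmer John's address
    amount_btc: float     # value to transfer
    signature: str = ""   # proves Betty controls the coins she is spending

tx = Transaction(
    inputs=["prev_tx_id:0"],          # hypothetical unspent output
    recipient="1FarmerJohnAddress",   # hypothetical address
    amount_btc=0.006,                 # roughly $1.50 at an assumed exchange rate
)
tx.signature = "signed_with_bettys_private_key"   # placeholder, not real crypto
```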

Betty’s wallet is connected to other participants in the network. The wallet passes the transaction to all of them, who in turn pass it on to all of their connections. Within a few seconds, every participant in the network has received notification of Betty’s payment order. Each and every participant checks whether the listed “coins” exist, and whether Betty is the rightful owner.

2) Confirmation:
So far, Betty’s payment is only a promise, because it is still unconfirmed.

To change that, some network participants, which we’ll call miners, work on confirming these transactions. The miners grab all the unconfirmed transactions and try to pack them into a set. When their set doesn’t fit the requirements, they reshuffle it and try again. At some point, somebody finds a set with the right properties: A valid block.
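
The “reshuffle and try again” step is proof of work. Here is a toy sketch of the idea (real Bitcoin mining hashes an 80-byte block header against a difficulty target; the leading-zeros rule and the difficulty value below are only stand-ins):

```python
# Toy proof-of-work: keep changing a nonce until the block's hash has the
# required number of leading zeros. Difficulty here is purely illustrative.
import hashlib
import json

def mine(transactions, previous_hash, difficulty=4):
    nonce = 0
    while True:
        block = {"prev": previous_hash, "txs": transactions, "nonce": nonce}
        digest = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
        if digest.startswith("0" * difficulty):      # "the right properties"
            return block, digest
        nonce += 1                                   # reshuffle and try again

block, block_hash = mine(["Betty pays Farmer John 0.006 BTC"], previous_hash="0" * 64)
print(block_hash)
```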

Just as with the transactions before, they send this block to all their connections, who in turn forward it to theirs. Everyone checks the work (to confirm that the block follows the rules) and when satisfied, they apply the included transactions to their own ledger: The transactions get executed and the “coins” that were used by Betty get transferred to Farmer John as ordered by the transactions. Betty’s transaction (and all the others) is now confirmed. Betty can now eat her oranges and Farmer John can now spend his “coins”.

Miners are compensated for processing transactions through the issuance of newly minted cryptocurrency coins by the blockchain. Neither Betty nor Farmer John incurs any cost for this transaction.

Blockchain

I have been aware of blockchain technology since 2010, when I came across an article on the subject; however, I was slow to realize the extent of the technology’s possibilities until mid-2015, when I started reading up on the topic.

At its core blockchain is relatively easy to understand. The blockchain is a public ledger where transactions are recorded and confirmed anonymously. It’s a record of events that is shared between many parties. More importantly, once information is entered, it cannot be altered. So the blockchain is a public record of transactions.
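
The reason entered information cannot practically be altered is that each block carries a hash of the previous block, so changing any past record changes every hash after it. A minimal sketch of that chaining (illustrative, not Bitcoin’s actual block format):

```python
# Each block stores the hash of the one before it; tampering with an old block
# breaks the hashes of every block that follows, which the network would reject.
import hashlib

def block_hash(prev_hash, data):
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

chain = []
prev = "0" * 64                       # genesis placeholder
for record in ["Betty -> John: oranges", "John -> supplier: seeds"]:
    prev = block_hash(prev, record)
    chain.append({"data": record, "hash": prev})

# Altering the first record would change its hash and invalidate the second block.
```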

The blockchain was initially created to track the transactions (purchase/sale) of cryptocurrencies, but it can be used for much more. Here are some examples:

Ownership Trading – The technology can be used to track any type of digital asset, be that tickets, merchandise (digital downloads of software, music), products, or subscriptions, among many others. See Peertracks

File Storage – Peer-to-peer file-sharing networks remove the need for centralized databases and heavy storage areas. IPFS (the InterPlanetary File System), an innovative protocol, is complementing this big change. See Storj

Voting, Authorization, and Authentication – An increasing number of organizations and political parties have proposed the creation of blockchain-based systems to build a fairer and more transparent voting environment. See Factom

The list of projects is endless. See below for some applications in FinTech:

[Image: blockchain applications in FinTech]

 

Copying a Website to Your Local Hard Drive

Since its inception more than 20 years ago, LWDD has maintained a custom website, where “custom” means hand-coded HTML and scripting, which has made it difficult and costly to maintain. The website is hosted offsite and edited through a custom application.

We have been working on the development of a new website based on WordPress that will soon go live. In preparation, I had to make a copy of our old website for posterity and found WinHTTrack Website Copier, a free and simple application, to get the job done.

After installation, a copy is just a few keystrokes away. Start the application and click Next.

Create a new project, specify the location on your local drive, and click Next.

 

Specify the address of the website you want to copy and click Next.

 

You will be offered some other options, but in most cases you will not need them; click Finish.

The website will be copied to your local drive.

The website is now copied locally. Go to the location where the website was copied and you will find an index.html file; open it to view your copied website.
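
If you ever need to script a quick copy instead of using a GUI, a rough Python sketch along these lines will grab a page and the pages it links to one level deep (standard library only; it is no substitute for a full mirroring tool like WinHTTrack):

```python
# A minimal one-level "copy" of a site using only the standard library.
import os
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkParser(HTMLParser):
    """Collect href targets from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def save(url, out_dir):
    """Download a URL to out_dir and return its text for link parsing."""
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    name = urlparse(url).path.strip("/").replace("/", "_") or "index.html"
    with open(os.path.join(out_dir, name), "wb") as f:
        f.write(data)
    return data.decode("utf-8", errors="replace")

def copy_site(start_url, out_dir="site_copy"):
    os.makedirs(out_dir, exist_ok=True)
    parser = LinkParser()
    parser.feed(save(start_url, out_dir))
    host = urlparse(start_url).netloc
    for link in parser.links:
        absolute = urljoin(start_url, link)
        if urlparse(absolute).netloc == host:        # stay on the same site
            try:
                save(absolute, out_dir)
            except OSError as exc:
                print("skipped", absolute, exc)

# copy_site("http://www.example.org/")
```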


Benefits of IT Outsourcing


Studies have found that lower overall cost is a popular reason for hiring an IT outsourcing company. The latest evidence of this comes from a Nimsoft/EMA survey of senior managers and leaders at medium and large companies. The survey revealed that 41% of participants named “reduce cost” as the most important factor in selecting an IT outsourcing partner. The majority of outsourcing contracts arise from the need to achieve a higher return on investment, so this result is not too surprising. Some of the other 11 reasons listed in the report are described below.

1. Improve Technology

Technological performance influences the bottom line. Old, obsolete programs and hardware limit the potential of any company. Knowing this, many IT leaders opt to bring in an IT outsourcing company to provide an analysis of the current system. Many of these outsourcing services are conducted using the Information Technology Infrastructure Library (ITIL) guidelines, which represent the leading set of best practices in the IT field. These international guidelines explain how to catalog, evaluate, and adjust IT systems for maximum efficiency and power.

You may ask yourself, why not have in-house staff apply ITIL standards? It is certainly possible to carry out an ITIL assessment in-house, but this route often poses labor and productivity problems. IT outsourcing services can normally perform this task faster, since they have experience working with companies in different industries. In conjunction with in-depth research, this experience allows IT outsourcing services to understand what solutions work best for your particular situation.

2. Flexible Productivity

Although ITIL analysis is a key outsourcing offering, specific projects are also welcome. If the system crashes or you need additional staff to handle a surge in workload, outsourcing services are ready to help make your organization more flexible and better able to take on additional projects. Finally, an IT outsourcing company can also develop custom applications to improve overall operations.

3. Improved Efficiency

Maintaining an IT system can be a burden. Many organizations are perpetually responding to problems without the time and resources to plan and optimize their organizational IT environment; outsourcing IT can often produce greater organizational efficiency.

Don’t pass up this benefit if you already have an in-house IT department. In fact, IT workers often welcome the incorporation of an outsourcing company, since it allows them to focus largely on the strategic evolution of IT rather than devoting most of their time to putting out fires.

4. Focus on Key Objectives

IT outsourcing services can quickly complete tasks that would otherwise have required many hours. In this way, the association with an IT outsourcing company can free your employees to focus on key business processes and objectives.

Improved efficiency, enhanced technology, and greater flexibility are just some of the reasons why IT leaders consider collaborating with IT outsourcing services. However, for the majority of business leaders, decreased cost remains the number one motivation.

Juan Tobar, IT Manager

Fiber or Coaxial


Current and emerging technologies require faster Internet connections so users can run multimedia applications, VoIP, and other applications that cannot function properly without a high-speed Internet connection. As a result, many different types of Internet connections have evolved since the initial dial-up access, including fiber-optic and cable Internet access. As the new IT Manager for LWDD, one of the issues I am working on is evaluating our Internet and network speed and considering all options for better and cheaper service.

Fiber or Cable, not Fiber vs. Cable
When considering these two options, the question you should be asking is what place each of these products has in your organization. “Which is better?” does not apply, as the products are as different as apples and oranges.

With fiber you get a dedicated switch and fiber from your ISP’s node to your door. With cable you share fiber and have some length of coaxial cable from the curb to your door. For a detailed explanation of Comcast’s HSI (hybrid fiber-coax) network, take a look at:

ATTACHMENT A: COMCAST CORPORATION DESCRIPTION OF CURRENT NETWORK MANAGEMENT PRACTICES

Currently, we have 4.5 Mbit/s data and 1.5 Mbit/s VoIP over three T1 fiber lines from a major service provider in our area, for which we pay $1,500. Although speed tests can vary from moment to moment, here are the results of some of our tests:

TEST            DOWN (Mbit/s)   UP (Mbit/s)
CableTest       2.7             2.2
CharterComm     1.44            0.45
SpeedOfMe       2.58            3.79
SpeedtestNET    1.85            4.06
VerizonTest     3.15            4.23
Average         2.34            2.95

We examined a number of other fiber-to-the-door ISPs and the costs were similar:

10 Mbit fiber: $1,000/month
20 Mbit fiber: $1,200/month

Comcast Business Class advertises that its network can perform at rates of up to 100 Mbit down and 20 Mbit up at a cost of about $250/month. Here are the results of the speed tests:

TEST            DOWN (Mbit/s)   UP (Mbit/s)
CableTest       12.5            2.5
CharterComm     48.18           18.96
SpeedOfMe       4.47            15.17
SpeedtestNET    44.39           13.44
VerizonTest     105.65          21.2
Average         43.04           14.25

As we can see, fiber throughput, both up and down, is much more stable than the hybrid fiber-coax. However, on average we still get much more bang for the buck with Comcast Business Class. Comcast service varies greatly from area to area, so this solution may not work for you, but at these prices it is worth taking a look.
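
To put “bang for the buck” in numbers, here is a quick sketch using the quotes and measured averages above (download only; it ignores reliability, SLAs, and upload speed):

```python
# Rough cost per Mbit/s of download, using the prices quoted above and, for
# Comcast, the measured average rather than the advertised 100 Mbit/s.
options = {
    "10 Mbit fiber":          (10.0,  1000),
    "20 Mbit fiber":          (20.0,  1200),
    "Comcast Business Class": (43.04,  250),   # measured average down
}
for name, (down_mbit, monthly_cost) in options.items():
    print(f"{name}: ${monthly_cost / down_mbit:,.2f} per Mbit/s per month")
```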

This does not eliminate our fiber connection, as we need a backup in case Comcast goes down. Many fiber ISPs offer discounts for lines used as backup. In addition, we will be exploring a simple DSL solution as a backup.

Juan Tobar, IT Manager

Things I learned at the ESRI User Conference

ArcGIS API for Silverlight
  • Includes improved security, integration with Visual Studio, a new tool for querying, and edit tracking.
  • Stuff you will need: ArcGIS Viewer for Silverlight (Application builder, configurable viewer, extensibility kits), API for ArcGIS Silverlight, and Silverlight
  • Programming is done through VB/C# and XAMLs
ArcGIS Flex Viewer
  • Includes a new application builder that is much easier to use than previous versions.
  • Users can easily configure and deploy apps without programming.
  • Stuff you will need: Flash Player, an SDK (either Adobe Flex 4.6 or Apache Flex 4.8 or later), and the downloaded API (http://links.esri.com)
  • Programming languages are ActionScript and JavaScript (mostly used in the HTML wrapper)
  • Based on our experience with C#/.NET, Silverlight is our preferred option.
  • In this new environment, RegGSS would fork into two applications: a General Support Tool for GIS Professionals using ArcMap and a Spatial Decision Support System for Permit Review Staff using Silverlight.
ArcGIS Workflow Manager
  • Could replace the work distribution and history tracking functions in our custom coded Data Processing Center
  • Could replace the spatial notifications functions in our custom coded Early Notification Systems.
  • Regulatory data entry workflows are simple compared to the complex workflows that the tool can support. LiDAR to DEM processing would be able to use more of the workflow capabilities.
ArcGIS Data Reviewer
  • Could replace custom python coded QA/QC checks.
  • Workflow Manager and Data Reviewer separate data entry and QA/QC into two distinct functions; implementation would require a paradigm shift for Regulation, where data entry and QA/QC are performed concurrently.
Python Map Automation
  • Additional functionality is being added, but ESRI does not want all ArcObjects mapping functions converted to Python.
  • Works by modifying elements of a template .mxd and thus requires the graphic objects to be manipulated to have unique names (a minimal sketch follows this list).
  • Could replace some custom C# code in our Early Notification Systems, Area of Interest Reports, and MyApplications
  • Much sample code available at http://esriurl.com/4598…6465
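
As a rough illustration of the template approach (the paths and the “TitleText” element name are hypothetical; arcpy.mapping is the ArcGIS 10.x API discussed at the conference):

```python
# Minimal arcpy.mapping sketch: find a uniquely named layout element in a
# template map document, change its text, and export the layout to PDF.
import arcpy

mxd = arcpy.mapping.MapDocument(r"C:\maps\template.mxd")      # hypothetical template

for element in arcpy.mapping.ListLayoutElements(mxd, "TEXT_ELEMENT"):
    if element.name == "TitleText":                           # unique names are required
        element.text = "Early Notification Report"

arcpy.mapping.ExportToPDF(mxd, r"C:\maps\output\report.pdf")  # hypothetical output path
del mxd
```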
Collector App
  • Could replace the RegGSS “Report a Digitizing Error/Omission” function that allows reviewers to digitize and submit a correction.
  • High potential use for those folks that are often in the field: Environmental Resource Compliance, Everglades Regulation.
  • ESRI Staff stated they have no plans for COGO functions to be migrated to collector app.
Parcel Fabric
  • This new feature class brings highly needed flexibility and spatial accuracy to COGOed features.
  • Currently, when COGOing, metes and bounds are not preserved, and we are sometimes forced to move COGOed features for cartographic reasons. Parcel fabric solves these problems by allowing features to be shifted while preserving the underlying COGOed data.
  • Regulatory Conservation Easements and Land Management COGOed parcels should be migrated to this feature data set.
Github
  • GitHub is a code-hosting service built on the Git version control system that ESRI is using to publish sample application code in all flavors: Flex, JavaScript, and Silverlight.

A Regulatory Application of LiDAR Data

South Florida Water Management District (SFWMD) has an agreement with the U.S. Department of Agriculture and the Natural Resources Conservation Service (NRCS) to provide engineering/hydrologic modeling technical assistance to NRCS in the delivery of technical services related to fish and wildlife conservation throughout the state of Florida on non-Federal private and Tribal lands, and to deliver conservation programs that are part of the Food, Conservation, and Energy Act of 2008, also referred to as the 2008 Farm Bill.

In order to evaluate water management systems for potential off-site impacts, a computer model is generally developed for the existing condition, and the peak stages and flows generated are compared to those of the proposed system to determine impacts.

One of the most data-intensive steps in model development is the creation of the topographic information necessary to construct the stage/storage relationship. The use of digital elevation models (DEMs) derived from LiDAR data allows for quick and accurate estimates of basin storage, especially in remote areas where conventional topographic surveys are unavailable or cost prohibitive. These LiDAR-derived DEMs allow District staff to accurately delineate and evaluate watersheds because of the large extent of the area covered. In the past, many watershed hydrology and hydraulics assessments have not been possible because of a lack of topographic data. The following discussion presents a typical project in support of the 2008 Farm Bill.

The project involved the production of volume calculations (based on one-foot intervals) for each of the 47 basins identified within a large project encompassing approximately 24,500 acres of land. The steps taken to accomplish this were as follows: create a file directory for each basin and, within each directory, create a file geodatabase to store the basin boundary, DEMs, contours, TINs, and text files.

The basin boundary becomes the mask for clipping the DEM.

The clipped LiDAR DEM.


Using ESRI’s 3D Analyst Extension a contour feature class is created from the clipped DEM.
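
The clip-and-contour steps above look roughly like this in arcpy (the paths and the one-foot contour interval are illustrative; the actual processing used the District’s own scripts):

```python
# Rough arcpy sketch: clip the LiDAR DEM to the basin boundary, then build
# one-foot contours with the 3D Analyst extension.
import arcpy
from arcpy.sa import ExtractByMask

arcpy.CheckOutExtension("Spatial")
arcpy.CheckOutExtension("3D")

dem       = r"C:\lidar\county_dem"                    # hypothetical source DEM
basin_gdb = r"C:\basins\basin_01\basin_01.gdb"        # hypothetical geodatabase

clipped = ExtractByMask(dem, basin_gdb + r"\basin_boundary")
clipped.save(basin_gdb + r"\dem_clip")

# One-foot contour interval, matching the volume-calculation intervals.
arcpy.Contour_3d(basin_gdb + r"\dem_clip", basin_gdb + r"\contours_1ft", 1)
```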


A TIN is created using the contour layer as the input feature class.

In some instances the TIN will extend beyond the boundary of the basin, so it needs to be edited to remove the excess area. When clipping for this purpose, the surface feature type must be set to Hard Clip. This ensures that the basin boundary defines the clipped extent of the TIN.


At this point the 3D Analyst tool Surface Volume is needed to obtain the calculated volumes. Unfortunately, the tool creates a separate results text file for each elevation analyzed, which can result in a large number of files. To fix this problem, the standard ESRI volume calculation script was modified to produce one file with the calculated volumes for all elevations analyzed. The output is a comma-delimited text file listing the outputs of the Surface Volume function for multiple depths. You can view the modified script on GitHub.
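
The modified script is not reproduced here, but the general idea is along these lines (the paths, the elevation range, and the parsing of the tool’s text output are assumptions for illustration, not the actual script):

```python
# Sketch: run Surface Volume once per one-foot plane and append the results to
# a single comma-delimited file instead of keeping a text file per elevation.
import csv
import arcpy

arcpy.CheckOutExtension("3D")

tin     = r"C:\basins\basin_01\basin_tin"        # hypothetical clipped TIN
tmp_txt = r"C:\basins\basin_01\_volume_tmp.txt"  # per-run output from the tool
out_csv = r"C:\basins\basin_01\volumes.csv"      # combined output

with open(out_csv, "w", newline="") as combined:
    writer = csv.writer(combined)
    writer.writerow(["Plane_Height_ft", "Area_2D", "Area_3D", "Volume"])
    for plane in range(10, 23):                  # 13 one-foot elevations (example)
        arcpy.SurfaceVolume_3d(tin, tmp_txt, "BELOW", plane)
        with open(tmp_txt) as result:
            last = result.readlines()[-1].strip().split(",")
        # Assumes the last three comma-delimited fields are Area_2D, Area_3D, Volume.
        writer.writerow([plane] + last[-3:])
```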

Here is the output for 13 elevations. The last step was to QA/QC the results by comparing the basin acreage with the Area_2D of the maximum elevation. In this case, the basin acreage is 464.36 acres and the volume-calculated acreage is 20,227,347 / 43,560 = 464.355. With this basin complete, it’s time to start another one…

Linda McCafferty, Geographer 3, Regulation GIS, SFWMD
Shakir Ahmed, Geographer 3, Regulation GIS, SFWMD
Juan Tobar, Supervisor – Geographers, Regulation GIS, SFWMD

SFWMD’s New Composite Topo

In South Florida’s complex water management system, hydrologic models are used for evaluation, planning, and simulating water control operations under different climatic and hydrologic conditions. These models need accurate topographic data as part of their inputs. It has been four years since SFWMD updated its composite topographic model; here is a general description of the source data and the processing steps that went into the newest composite product for the Lower West Coast.

All work was done using ArcGIS v10.1, with a raster (grid) cell size of 100 ft, with elevations in feet relative to NAVD 1988, and X-Y coordinates in the standard State Plane Florida east zone, 1983-HARN, US feet.  The extent is X from 205000 to 750000 and Y from 270000 to 1045000, creating a gridded dataset of 5450-by-7750 cells.  Almost all of the source data had been previously compiled.  The process consisted of assembling the topographic data at 100-ft cell size, layering together the topographic data, blending the source layers to remove discontinuities along the edges, assembling and layering together the bathymetric data, joining the bathymetric with the topographic data, and filling in any remaining no-data holes along the shoreline.

Most of the project land area was covered by modern LiDAR data, which had already been processed to create DEMs at 100-ft cell size. Several areas lie in the western zone, and they had already been projected to the Florida east zone. The LiDAR data includes:

  • FDEM 2007 Coastal LiDAR project, with partial or complete coverage by county:  Lee, Charlotte, Sarasota, Collier, Monroe, and Miami-Dade.
  • SWFWMD 2005 LiDAR:  Peace River South project, covering part of Charlotte County.
  • USGS 2012 LiDAR:  Eastern Charlotte project, covering part of eastern Charlotte and western Glades counties.
  • USACE 2007 LiDAR:  The HHD_EAA project as an add-on to the FDEM Coastal LiDAR project, covering part of Hendry, Glades, Palm Beach, and Okeechobee counties.  The USACE also processed and merged in bathymetric data from Lake Okeechobee (from USGS and other boat surveys) at 100-ft cell size.
  • USACE 2010 LiDAR:  The HHD_Northwest dataset was merged from two deliverables named HHD NW Shore and Fisheating Creek, and covers parts of Okeechobee, Glades, Highlands, and Charlotte counties.
  • USACE 2003 LiDAR:  The Southwest Florida Feasibility Study (SWFFS) LiDAR covered parts of Collier, Hendry, and Glades counties.  This dataset has lower quality than the other LiDAR data.

For the Everglades and Big Cypress areas, the collection of LiDAR data is problematic due to extremely dense vegetation cover.  The USGS conducted a project through 2007 to collect high-accuracy elevation points throughout those areas, essentially using a plumb bob hanging from a helicopter equipped with GPS.  This high-accuracy elevation dataset (HAED) consists of about 50,000 points collected at 400-m intervals.  The points were converted to a gridded surface using ordinary-Kriging interpolation and resampled at 100-ft cell size.

Another problematic area is the well-known “Topo Hole” in SE Hendry and NE Collier counties, where no high-quality elevation data has been collected.  Several previous approximations of a topographic surface had been made for this area (2003 and 2005 for the SWFFS, and in early 2006 by the USACE), primarily using 5-ft contours and spot elevations from USGS topographic maps (quad sheets).  For this project, several newly available datasets from the C-139 Regional Feasibility Study were obtained, and two were included in the current processing:  ground spot-elevations from the “all-static” survey and 1-ft contours for the C-139 Annex area.  Unfortunately these new datasets cover only a small area at the margins of the Topo Hole.  The contours and spot elevations were converted to a gridded surface using the ArcGIS tool TopoToRaster, formerly known as GridTopo and sometimes referred to as the ANUDEM method, which applies “essentially a discretized thin plate spline technique” to create a “hydrologically correct digital elevation model.”  The method is not perfect, and the lack of detailed source data limits the accuracy, but still the result is better than other methods that are currently available.  A 20,000-ft buffer zone was added around the Topo Hole, and the TopoToRaster tool was applied.  Until LiDAR or similar data is collected for the Topo Hole, this is likely to remain as the best-available approximation of a topographic surface for this area.


For the Collier, Monroe, and Miami-Dade DEMs, “decorrugated” versions of the processed LiDAR data were used.  During the original processing of the accepted deliverables from the FDEM LiDAR project, significant banding was apparent.  This banding appears as linear stripes (or corn rows or corrugations) of higher and lower elevations along the LiDAR flightlines.  The DEM data can be “decorrugated” by applying a series of filters to the elevation dataset, but real topographic features can also be altered slightly in the process.  In the resulting product, the systematic errors are removed, but with the cost that every land elevation is altered to some extent; thus the decorrugated surface is considered a derivative product. The decorrugation work for the rural areas of these three counties was done in 2010, but the results had never been formally documented or added to the District’s GIS Data Catalog.  For this project, in comparison with the “original” processed DEMs, these datasets are considered the “best-available” because errors that are visually obvious have been removed.

The best-available topographic DEMs were mosaicked into a single DEM, with better data used in areas of overlap.  In order to remove discontinuities where the border of better data joins to other data, a special blending algorithm was used to “feather” the datasets into each other.  The result is a surface that is free from discontinuities along the “join” edges, but of course accuracy of the result is limited by the accuracy of the source data.  The width of the blending zone along the edges was varied according to the type of data and the amount of overlap that was available, and ranged from 4,000 to 20,000 feet.  The “blending-zone” adjusted data was retained and is available for review.
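
For readers curious what “feathering” means in practice, here is a conceptual sketch of one common approach, a distance-weighted blend on the shared grid; it is not the District’s actual blending algorithm:

```python
# Conceptual edge feathering between two overlapping DEMs on the same grid.
import numpy as np
from scipy import ndimage

def feather(better, other, better_mask, blend_cells):
    """Blend 'better' into 'other' over a zone blend_cells wide inside better_mask."""
    # Distance (in cells) from the edge of the better-data footprint, measured inward.
    dist = ndimage.distance_transform_edt(better_mask)
    weight = np.clip(dist / float(blend_cells), 0.0, 1.0)   # 0 at the edge -> 1 inside
    return weight * np.where(better_mask, better, other) + (1.0 - weight) * other

# e.g. a 4,000-ft blend zone at 100-ft cells would be blend_cells=40.
```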

The layering of the topographic data is listed here from best (top) to worst (bottom):

  • 2007 FDEM LiDAR, HHD_EAA, HHD_Northwest, and Eastern Charlotte  (all are roughly equivalent in quality)
  • SWFWMD 2005 LiDAR
  • SWFFS 2003 LiDAR
  • USGS HAED
  • Topo Hole

As noted above, two datasets for the land area were created.  One treated the major lake surfaces as flat areas, and the other used available lake bathymetry to add the lake-bottom elevations into the topo DEM surface.  The USACE-processed bathymetry was added for the area of Lake Okeechobee that had been treated as a water body in the 2007 HHD_EAA LiDAR deliverable.  Also, bathymetry data for Lake Trafford was available from a 2005 USACE project.  In the flat-surface version, elevations of 7.5 ft for Lake Okeechobee and 15.1 ft for Lake Trafford, relative to NAVD 1988, were imposed.

Offshore bathymetry was mosaicked at 100-ft cell size.  In 2005 the available data had been collected and mosaicked at 300-ft cell size.  Since that time, no significant bathymetry datasets are known to have been created within the Lower West Coast area.  In 2004, a best-available composite for the Lee County area had been created at 100-ft cell size from 2002 USACE channel surveys of the Caloosahatchee River, boat surveys by USGS for the Caloosahatchee Estuary in 2003 and other areas in 2004, experimental offshore LiDAR from USGS in 2004, and older NOAA bathymetric data for areas that were not otherwise covered.  Other bathymetric data down the shoreline consists of USGS boat surveys down to Cape Romano, Naples Bay boat surveys for SFWMD, and Florida Bay boat surveys that had been compiled in approx. 2004 by Mike Kohler.  For the remaining offshore areas, the older NOAA bathymetric data was used.  The bathymetric pieces were mosaicked together and blended along their edges.

When the offshore bathymetry was joined with the land topography, there were thousands of small no-data holes near the shoreline.

These empty spaces were grouped into three categories.  First, some small and low-lying islands (essentially mangrove rookeries) in Florida Bay had no pertinent elevation data.  These were assigned an arbitrary land elevation of +1.5 ft NAVD88.  Secondly, empty spaces adjacent to offshore bathymetry were treated as shallow offshore areas.  For each no-data hole (polygon), the maximum elevation for adjacent offshore bathymetry was assigned.  (i.e. It’s water, and we’ll give it the shallowest water value that’s nearby).  Finally, empty spaces that were inland were treated as low-lying land or marsh.  For each no-data hole (polygon), the minimum elevation for adjacent “land” was assigned.  (i.e. If it’s a low area, it can’t be above anything that’s along its edge.)
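
A conceptual sketch of the second and third rules on a NumPy grid where the holes are NaN (the arbitrary +1.5 ft island rule is omitted; this illustrates the logic, not the production workflow):

```python
# Conceptual hole-filling on an elevation grid where holes are NaN.
# offshore_mask marks cells that already hold offshore bathymetry values.
import numpy as np
from scipy import ndimage

def fill_holes(grid, offshore_mask):
    filled = grid.copy()
    hole_mask = np.isnan(grid)
    labels, count = ndimage.label(hole_mask)          # each hole = one region
    for region in range(1, count + 1):
        region_mask = labels == region
        ring = ndimage.binary_dilation(region_mask) & ~region_mask
        water = filled[ring & offshore_mask]
        land = filled[ring & ~offshore_mask & ~hole_mask]
        if water.size:                                # adjacent to bathymetry:
            filled[region_mask] = water.max()         # shallowest nearby water
        elif land.size:                               # inland hole:
            filled[region_mask] = land.min()          # lowest nearby land
    return filled
```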

After filling in all of the no-data holes, the bathymetry and land-with-lake topo were mosaicked together.  The deepest spot in the full dataset is about 100 ft at a distance of about 90 miles west of Cape Sable.  For most modeling and mapping purposes, however, the areas well off the shore (e.g. 10 miles or farther) are not needed.  Thus a buffer zone was defined as 20,000 ft beyond the LWC offshore boundary, and the full dataset was clipped to that boundary to create Lwc_TopoShore.


These products can accurately be described as composites of the best-available data.  For example, both the 2005 SWF composite and the early-2006 USACE SF-Topo composite were based on older data and did not include any of the LiDAR that was collected after 2003.  It might be possible to make slight improvements to the current source data (say, by decorrugating rural Lee County), but such modifications would involve significant time and work that would go beyond the short turn-around time allocated for this project.

Timothy Liebermann, Senior Geographer, Regulation GIS, SFWMD

Organizing Geospatial Data Collections

Capture

When organizing geospatial data collections it is important to associate data with names that facilitate discovery. This applies to the names your users see and how you name features at the root level (think SDE Feature Names).

In 1997, I was appointed GIS Coordinator for the City of Bakersfield, and while searching the Internet for standards I could use to organize the City’s data, I came across the Tri-Services Spatial Data Standards (TSSDS), later renamed the Spatial Data Standards for Facilities, Infrastructure, and the Environment (SDSFIE). You can download the old standards here.

The TSSDS was developed in 1992 by the Tri-Services CADD/GIS Technology Center at the US Army Engineer Waterways Experiment Station in Vicksburg, Mississippi. The Center’s primary mission was to serve as a multi-service (Army, Navy, Air Force, Marine Corps) vehicle to set CADD and GIS standards, coordinate CADD/GIS facilities systems within DoD, and promote CADD/GIS system integration.

I was particularly interested in the hierarchical classification that assigned geospatial features to Entity Sets, Entity Classes, and Entity Types.

  • Entity Sets were a broad classification like boundaries, cadastre, fauna, flora, hydrography, transportation, and others.
  • Entity Classes were more narrowly focused groups such as: transportation air, transportation marine, transportation vehicle, and others.
  • Entity Types represented the actual geographic features.

Some examples:
Transportation – Vehicle – Road Centerline
Cadastre – Real-estate – Parcels
Hydrography – Surface – Canal Centerline

In 1997, shapefiles were new and most of us were still using coverages, so most geospatial data was organized in folders, and this standard worked pretty well.

In 2001, at SFWMD we implemented the same standard in SDE:

Some examples:
Transportation – Vehicle – Road Centerline (TRVEH_ROAD_CENTERLINES)
Cadastre – Real-estate – Parcels (CDREL_PARCELS)
Hydrography – Surface – Canal Centerline (HYSUR_CANAL_CENTERLINES)

This system also worked pretty well, but by 2001 there were many more non-geospatial professionals using the system, and there was a need for something more user-friendly.

The solution was a look-up table that takes terse SDE names and renders them in a simpler-to-read style. In principle, such a system would consist of a parent table containing source information (SDE, services, layers, shapes, coverages, etc.) and a daughter table containing your classification schema. The parent table would be populated automatically through ETL scripts, but the daughter table would be crafted manually by a data steward. A graphical user interface would then provide users with access to the information in the daughter table, while a backend uses the parent table to retrieve the data.
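
A minimal sketch of the parent/daughter idea using SQLite (the table and column names are illustrative, not the District’s actual schema):

```python
# Parent table "source" is loaded by ETL; daughter table "catalog" is curated
# by a data steward and carries the Entity Set / Class / Type classification.
import sqlite3

con = sqlite3.connect("gis_catalog.db")
con.executescript("""
CREATE TABLE IF NOT EXISTS source (
    source_id   INTEGER PRIMARY KEY,
    sde_name    TEXT UNIQUE,                 -- e.g. HYSUR_CANAL_CENTERLINES
    source_type TEXT,                        -- SDE, service, shapefile, ...
    path        TEXT
);
CREATE TABLE IF NOT EXISTS catalog (
    catalog_id   INTEGER PRIMARY KEY,
    source_id    INTEGER REFERENCES source(source_id),
    entity_set   TEXT,                       -- Hydrography
    entity_class TEXT,                       -- Surface
    entity_type  TEXT                        -- Canal Centerline
);
""")

# The GUI searches the curated names; the backend joins back to the source.
row = con.execute("""
    SELECT s.sde_name, s.path
    FROM catalog c JOIN source s USING (source_id)
    WHERE c.entity_set = ? AND c.entity_type = ?
""", ("Hydrography", "Canal Centerline")).fetchone()
```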

Some of you may be asking why create a library from scratch when GIS packages can automatically index data for you. The answer is that these automated systems are good but still not as effective as a hand-crafted system. Automated systems perform only as well as their metadata is written. Now let me ask you: how well is your organization’s metadata written? In most sizable governmental agencies, data comes in from both internal and external sources, and unless you have a robust procedure for vetting the metadata, a lot of garbage will get in. Vetting means not only automatically verifying that required metadata elements are filled in but also verifying that they make semantic sense and that titles are standardized. Looked at from this perspective, it is much easier to standardize a feature class name in a lookup table than feature class metadata.

Juan Tobar, Supervisor – Geographers, Regulation GIS, SFWMD