Picture of Blockchain Transaction

I have been reading “The Age of Cryptocurrency” by Paul Vigna and Michael J. Casey. They provide this easy-to-understand picture of how a blockchain operates.

[Image: blockchain explained]

A Practical Example of Farmer John selling Betty some oranges for $1.50

Betty goes to the Saturday morning farmers market in her town and wants to purchase $1.50 worth of oranges from Farmer John. Betty will use a cryptocurrency (Bitcoin) for this transaction. Farmer John presents Betty with his payment address as a quick response (QR) code:

[Image: Farmer John’s payment address (QR code)]

Betty uses a Bitcoin wallet on her smartphone to scan the code. She is presented with a screen where she can enter an amount to send to John’s address. She types ‘$1.50’ and presses send. A moment later, John’s tablet notifies him that there is an incoming payment pending, which is not yet confirmed. About ten minutes later, the payment is finalized when it gets confirmed.

Why ten minutes? That is a topic for another discussion.

Under the hood

1) The Payment Transaction:
The software on Betty’s smartphone checks whether she has a sufficient balance and then creates a payment transaction. This transaction is composed of three pieces of information: Which “coins” to spend, the recipient, and a signature.
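
To make that concrete, here is a minimal Python sketch of what such a payment transaction might carry; the field names and the stand-in signature are illustrative only, not Bitcoin’s actual format.

```python
from dataclasses import dataclass
from hashlib import sha256

@dataclass
class Transaction:
    """Illustrative payment transaction (not Bitcoin's real wire format)."""
    coins: list        # references to the unspent "coins" Betty is spending
    recipient: str     # Farmer John's address (from the scanned QR code)
    amount: float      # value to transfer, e.g. 1.50
    signature: str = ""  # proves Betty controls the referenced coins

    def message(self) -> str:
        # The data that gets signed: which coins, to whom, and how much.
        return f"{self.coins}|{self.recipient}|{self.amount}"

# Betty's wallet would sign the message with her private key; a hash stands
# in for the signature here to keep the sketch self-contained.
tx = Transaction(coins=["coin-abc123"], recipient="john-address", amount=1.50)
tx.signature = sha256(tx.message().encode()).hexdigest()
print(tx.signature[:16])
```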

Betty’s wallet is connected to other participants in the network. The wallet passes the transaction to all of them, who in turn pass it on to all of their connections. Within a few seconds, every participant in the network has received notification of Betty’s payment order. Each and every participant checks whether the listed “coins” exist, and whether Betty is the rightful owner.
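
A toy sketch of how that relaying might look; the Node class, its peers list, and the is_valid check are all hypothetical simplifications of the real network and its validation rules.

```python
# Toy gossip relay: every node validates the transaction once, remembers it,
# and forwards it to all of its connections. Everything here is hypothetical.
class Node:
    def __init__(self, name):
        self.name = name
        self.peers = []
        self.seen = set()

    def is_valid(self, tx_id):
        # Stand-in for the real checks: do the listed "coins" exist, and does
        # the signature prove the sender is their rightful owner?
        return True

    def receive(self, tx_id):
        if tx_id in self.seen or not self.is_valid(tx_id):
            return
        self.seen.add(tx_id)
        for peer in self.peers:          # pass it on to all connections
            peer.receive(tx_id)

wallet, relay, miner = Node("wallet"), Node("relay"), Node("miner")
wallet.peers, relay.peers = [relay], [miner]
wallet.receive("betty-tx-001")           # within seconds everyone has seen it
print(miner.seen)                        # {'betty-tx-001'}
```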

2) Confirmation:
So far, Betty’s payment is only a promise, because it is still unconfirmed.

To change that, some network participants, which we’ll call miners, work on confirming these transactions. The miners grab all the unconfirmed transactions and try to pack them into a set. When their set doesn’t fit the requirements, they reshuffle it and try again. At some point, somebody finds a set with the right properties: A valid block.
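
Here is a rough Python sketch of that “reshuffle until it fits” idea, using a hash-below-target rule as the stand-in for the block requirements; the difficulty value and helper names are made up for illustration.

```python
from hashlib import sha256

# Toy proof-of-work: "reshuffling" is modeled as trying nonces until the
# block's hash meets an arbitrary difficulty target (leading zeros).
def mine(prev_hash, transactions, difficulty=4):
    nonce = 0
    while True:
        header = f"{prev_hash}|{transactions}|{nonce}"
        block_hash = sha256(header.encode()).hexdigest()
        if block_hash.startswith("0" * difficulty):   # the "right properties"
            return nonce, block_hash
        nonce += 1

nonce, block_hash = mine("previous-block-hash", ["betty-tx-001"])
print(nonce, block_hash[:12])
```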

Just as with the transactions before, they send this block to all their connections, who in turn forward it to theirs. Everyone checks the work (to confirm that the block follows the rules) and when satisfied, they apply the included transactions to their own ledger: The transactions get executed and the “coins” that were used by Betty get transferred to Farmer John as ordered by the transactions. Betty’s transaction (and all the others) is now confirmed. Betty can now eat her oranges and Farmer John can now spend his “coins”.

Miners are compensated for processing transactions through the issuance of newly minted cryptocurrency coins by the blockchain. In this simplified example, neither Betty nor Farmer John incurs any direct cost for the transaction.

Blockchain

I have been aware of blockchain technology since 2010, when I came across an article on the subject; however, I was slow to realize the extent of the technology’s possibilities until mid-2015, when I started reading up on the topic.

At its core, blockchain is relatively easy to understand. The blockchain is a public ledger where transactions are recorded and confirmed anonymously. It’s a record of events that is shared among many parties. More importantly, once information is entered, it cannot be altered. In short, the blockchain is a public record of transactions.
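
To see why entered information is so hard to alter, here is a toy hash chain in Python: each record commits to the hash of the one before it, so changing an old record breaks every stored hash that follows. The records and helper below are illustrative only, not Bitcoin’s actual block format.

```python
from hashlib import sha256

# A toy hash chain: each entry commits to the previous entry's hash, so
# altering an old record changes every hash after it and is easy to detect.
def entry_hash(prev_hash, record):
    return sha256(f"{prev_hash}|{record}".encode()).hexdigest()

chain, prev = [], "genesis"
for record in ["Betty pays John 1.50", "John pays Carol 0.75"]:
    prev = entry_hash(prev, record)
    chain.append((record, prev))

# Tampering with the first record no longer reproduces the stored hash.
assert entry_hash("genesis", "Betty pays John 9999") != chain[0][1]
```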

The blockchain was initially created to track the transactions (purchases/sales) of cryptocurrencies, but it can be used for much more. Here are some examples:

Ownership Trading – The technology can be used to track any type of digital asset, be it tickets, merchandise (digital downloads of software, music), products, or subscriptions, among many others. See Peertracks

File Storage – Peer-to-peer file-sharing networks remove the need for centralized databases and heavy storage areas. IPFS (the InterPlanetary File System), an innovative protocol, is complementing this big change. See Storj

Voting, Authorization, and Authentication – An increasing number of organizations and political parties have proposed the creation of a blockchain-based system to build a fairer and more transparent voting environment. See Factom

The list of projects is endless. See below for some applications in FinTech:

[Image: blockchain applications in FinTech]

 

Things I learned at the ESRI User Conference

[Image: Sunrise]
ArcGIS API for Silverlight
  • Includes improved security, integration with Visual Studio, a new querying tool, and edit tracking.
  • Stuff you will need: ArcGIS Viewer for Silverlight (application builder, configurable viewer, extensibility kits), the ArcGIS API for Silverlight, and Silverlight
  • Programming is done through VB/C# and XAML
ArcGIS Flex Viewer
  • Includes a new application builder that is much easier to use than previous versions.
  • Users can easily configure and deploy apps without programming.
  • Stuff you will need: Flash Player, an SDK (either Adobe Flex 4.6 or Apache Flex 4.8 or later), and the API download (http://links.esri.com)
  • The programming languages are ActionScript and JavaScript (mostly used in the HTML wrapper)
  • Based on our experience with C#.NET, Silverlight is our preferred option.
  • In this new environment, RegGSS would fork into a general support tool for GIS professionals using ArcMap and a spatial decision support system for permit review staff using Silverlight.
ArcGIS Workflow Manager
  • Could replace the work distribution and history tracking functions in our custom coded Data Processing Center
  • Could replace the spatial notifications functions in our custom coded Early Notification Systems.
  • Regulatory data entry workflows are simple compared to the complex workflows that the tool can support. LiDAR to DEM processing would be able to use more of the workflow capabilities.
ArcGIS Data Reviewer
  • Could replace custom python coded QA/QC checks.
  • Workflow Manager and Data Reviewer separate data entry and QA/QC into two distinct functions; implementation would require a paradigm shift for Regulation, where data entry and QA/QC are performed concurrently.
Python Map Automation
  • Additional functionality is being added, but ESRI does not want all ArcObjects mapping functions converted to Python.
  • Works by modifying elements of a template .mxd; graphic objects to be manipulated therefore must have unique names (see the sketch after this list).
  • Could replace some custom C# code in our Early Notification Systems, Area of Interest Reports, and MyApplications
  • Much sample code is available at http://esriurl.com/4598…6465
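
For illustration, a minimal arcpy.mapping sketch (ArcGIS 10.x) of driving a template .mxd by element name; the template path, element name, and output path below are hypothetical.

```python
import arcpy

# Hypothetical template and element names; the point is that every graphic
# element to be modified must carry a unique name inside the template .mxd.
mxd = arcpy.mapping.MapDocument(r"C:\templates\notification_template.mxd")

for elm in arcpy.mapping.ListLayoutElements(mxd, "TEXT_ELEMENT"):
    if elm.name == "reportTitle":          # unique name assigned in the template
        elm.text = "Early Notification - Area of Interest Report"

arcpy.mapping.ExportToPDF(mxd, r"C:\output\notification_report.pdf")
del mxd
```
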
Collector App
  • Could replace the RegGSS “Report a Digitizing Error/Omission” function that allows reviewers to digitize and submit a correction.
  • High potential use for those folks that are often in the field: Environmental Resource Compliance, Everglades Regulation.
  • ESRI staff stated they have no plans to migrate COGO functions to the Collector app.
Parcel Fabric
  • This new feature class brings highly needed flexibility and spatial accuracy to COGOed features.
  • Currently, when COGOing, metes and bounds are not preserved, and we are sometimes forced to move COGOed features for cartographic reasons. Parcel fabric solves these problems by allowing features to be shifted while preserving the underlying COGOed data.
  • Regulatory Conservation Easements and Land Management COGOed parcels should be migrated to this feature data set.
Github
  • GitHub is a version-control hosting platform that ESRI is using to publish sample application code in all flavors: Flex, JavaScript, and Silverlight.

SFWMD’s New Composite Topo

In South Florida’s complex water management system, hydrologic models are used for the evaluation, planning, and simulation of water control operations under different climatic and hydrologic conditions. These models need accurate topographic data as part of their inputs. It has been four years since SFWMD updated its composite topographic model; here is a general description of the source data and the processing steps that went into the newest composite product for the Lower West Coast.

All work was done using ArcGIS v10.1, with a raster (grid) cell size of 100 ft, with elevations in feet relative to NAVD 1988, and X-Y coordinates in the standard State Plane Florida east zone, 1983-HARN, US feet.  The extent is X from 205000 to 750000 and Y from 270000 to 1045000, creating a gridded dataset of 5450-by-7750 cells.  Almost all of the source data had been previously compiled.  The process consisted of assembling the topographic data at 100-ft cell size, layering together the topographic data, blending the source layers to remove discontinuities along the edges, assembling and layering together the bathymetric data, joining the bathymetric with the topographic data, and filling in any remaining no-data holes along the shoreline.
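
A hedged arcpy sketch of pinning down that processing environment might look like the following; the workspace path and the exact spatial-reference name string are assumptions, not the project’s actual settings.

```python
import arcpy

# Workspace path and spatial-reference name string below are assumptions.
arcpy.env.workspace = r"C:\lwc_topo\working.gdb"
arcpy.env.outputCoordinateSystem = arcpy.SpatialReference(
    "NAD 1983 HARN StatePlane Florida East FIPS 0901 (US Feet)")
arcpy.env.cellSize = 100                                   # 100-ft cells
arcpy.env.extent = arcpy.Extent(205000, 270000, 750000, 1045000)
# (750000 - 205000) / 100 = 5450 columns; (1045000 - 270000) / 100 = 7750 rows
```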

Most of the project land area was covered by modern LiDAR data, which had already been processed to create DEMs at 100-ft cell size.  Several areas lie in the western zone, and they had already been projected to the Florida east zone.  The LiDAR data includes:

  • FDEM 2007 Coastal LiDAR project, with partial or complete coverage by county:  Lee, Charlotte, Sarasota, Collier, Monroe, and Miami-Dade.
  • SWFWMD 2005 LiDAR:  Peace River South project, covering part of Charlotte County.
  • USGS 2012 LiDAR:  Eastern Charlotte project, covering part of eastern Charlotte and western Glades counties.
  • USACE 2007 LiDAR:  The HHD_EAA project as an add-on to the FDEM Coastal LiDAR project, covering part of Hendry, Glades, Palm Beach, and Okeechobee counties.  The USACE also processed and merged in bathymetric data from Lake Okeechobee (from USGS and other boat surveys) at 100-ft cell size.
  • USACE 2010 LiDAR:  The HHD_Northwest dataset was merged from two deliverables named HHD NW Shore and Fisheating Creek, and covers parts of Okeechobee, Glades, Highlands, and Charlotte counties.
  • USACE 2003 LiDAR:  The Southwest Florida Feasibility Study (SWFFS) LiDAR covered parts of Collier, Hendry, and Glades counties.  This dataset has lower quality than the other LiDAR data.

For the Everglades and Big Cypress areas, the collection of LiDAR data is problematic due to extremely dense vegetation cover.  The USGS conducted a project through 2007 to collect high-accuracy elevation points throughout those areas, essentially using a plumb bob hanging from a helicopter equipped with GPS.  This high-accuracy elevation dataset (HAED) consists of about 50,000 points collected at 400-m intervals.  The points were converted to a gridded surface using an ordinary-kriging interpolation and resampled at 100-ft cell size.
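
A hedged arcpy sketch of that interpolation step might look like this; the feature class name, elevation field, and semivariogram model are assumptions, not the project’s actual parameters.

```python
import arcpy
from arcpy.sa import Kriging, KrigingModelOrdinary

arcpy.CheckOutExtension("Spatial")

# Feature class name, elevation field, and semivariogram model are assumptions.
haed_points = "haed_survey_points"            # ~50,000 helicopter-GPS points
model = KrigingModelOrdinary("SPHERICAL")     # ordinary kriging, spherical model
haed_surface = Kriging(haed_points, "ELEV_NAVD88", model, 100)  # 100-ft cells
haed_surface.save("haed_dem_100ft")
```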

Another problematic area is the well-known “Topo Hole” in SE Hendry and NE Collier counties, where no high-quality elevation data has been collected.  Several previous approximations of a topographic surface had been made for this area (2003 and 2005 for the SWFFS, and in early 2006 by the USACE), primarily using 5-ft contours and spot elevations from USGS topographic maps (quad sheets).  For this project, several newly available datasets from the C-139 Regional Feasibility Study were obtained, and two were included in the current processing:  ground spot-elevations from the “all-static” survey and 1-ft contours for the C-139 Annex area.  Unfortunately these new datasets cover only a small area at the margins of the Topo Hole.  The contours and spot elevations were converted to a gridded surface using the ArcGIS tool TopoToRaster, formerly known as TOPOGRID and sometimes referred to as the ANUDEM method, which applies “essentially a discretized thin plate spline technique” to create a “hydrologically correct digital elevation model.”  The method is not perfect, and the lack of detailed source data limits the accuracy, but still the result is better than other methods that are currently available.  A 20,000-ft buffer zone was added around the Topo Hole, and the TopoToRaster tool was applied.  Until LiDAR or similar data is collected for the Topo Hole, this is likely to remain the best-available approximation of a topographic surface for this area.
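
For illustration, a TopoToRaster call on the new C-139 inputs might be sketched as follows; the dataset names, field names, and parameter choices are assumptions rather than the values actually used.

```python
import arcpy
from arcpy.sa import TopoToRaster, TopoPointElevation, TopoContour

arcpy.CheckOutExtension("Spatial")

# Dataset names, field names, and parameters below are assumptions.
inputs = [
    TopoPointElevation([["c139_allstatic_spot_elev", "ELEV"]]),  # spot elevations
    TopoContour([["c139_annex_contours_1ft", "ELEV"]]),          # 1-ft contours
]
topo_hole_dem = TopoToRaster(inputs, 100)   # ANUDEM-based interpolation, 100-ft cells
topo_hole_dem.save("topo_hole_dem_100ft")
```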

[Image: C-139 source data]

For the Collier, Monroe, and Miami-Dade DEMs, “decorrugated” versions of the processed LiDAR data were used.  During the original processing of the accepted deliverables from the FDEM LiDAR project, significant banding was apparent.  This banding appears as linear stripes (or corn rows or corrugations) of higher and lower elevations along the LiDAR flightlines.  The DEM data can be “decorrugated” by applying a series of filters to the elevation dataset, but real topographic features can also be altered slightly in the process.  In the resulting product, the systematic errors are removed, but with the cost that every land elevation is altered to some extent; thus the decorrugated surface is considered a derivative product.  The decorrugation work for the rural areas of these three counties was done in 2010, but the results had never been formally documented or added to the District’s GIS Data Catalog.  For this project, in comparison with the “original” processed DEMs, these datasets are considered the “best-available” because errors that are visually obvious have been removed.

The best-available topographic DEMs were mosaicked into a single DEM, with better data used in areas of overlap.  In order to remove discontinuities where the border of better data joins to other data, a special blending algorithm was used to “feather” the datasets into each other.  The result is a surface that is free from discontinuities along the “join” edges, but of course accuracy of the result is limited by the accuracy of the source data.  The width of the blending zone along the edges was varied according to the type of data and the amount of overlap that was available, and ranged from 4,000 to 20,000 feet.  The “blending-zone” adjusted data was retained and is available for review.
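
The exact blending algorithm is not reproduced here, but the idea of “feathering” can be sketched as a simple linear cross-fade across the blend zone; the function and values below are illustrative only.

```python
import numpy as np

# Weight the better dataset from 0 at its edge up to 1 inside the blend zone,
# so the join has no abrupt step. The production algorithm is not shown here.
def feather_blend(better, other, dist_into_zone, zone_width):
    """better/other: elevation arrays; dist_into_zone: distance (ft) measured
    from the edge of the better data into the blend zone."""
    w = np.clip(dist_into_zone / zone_width, 0.0, 1.0)
    return w * better + (1.0 - w) * other

better = np.array([10.0, 10.0, 10.0, 10.0])
other  = np.array([12.0, 12.0, 12.0, 12.0])
dist   = np.array([0.0, 2000.0, 4000.0, 6000.0])
print(feather_blend(better, other, dist, zone_width=4000.0))   # [12. 11. 10. 10.]
```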

The layering of the topographic data, from best (top) to worst (bottom), is:

  • 2007 FDEM LiDAR, HHD_EAA, HHD_Northwest, and Eastern Charlotte  (all are roughly equivalent in quality)
  • SWFWMD 2005 LiDAR
  • SWFFS 2003 LiDAR
  • USGS HAED
  • Topo Hole

As noted above, two datasets for the land area were created.  One treated the major lake surfaces as flat areas, and the other used available lake bathymetry to add the lake-bottom elevations into the topo DEM surface.  The USACE-processed bathymetry was added for the area of Lake Okeechobee that had been treated as a water body in the 2007 HHD_EAA LiDAR deliverable.  Also, bathymetry data for Lake Trafford was available from a 2005 USACE project.  In the flat-surface version, elevations of 7.5 ft for Lake Okeechobee and 15.1 ft for Lake Trafford, relative to NAVD 1988, were imposed.

Offshore bathymetry was mosaicked at 100-ft cell size.  In 2005 the available data had been collected and mosaicked at 300-ft cell size.  Since that time, no significant bathymetry datasets are known to have been created within the Lower West Coast area.  In 2004, a best-available composite for the Lee County area had been created at 100-ft cell size from 2002 USACE channel surveys of the Caloosahatchee River, boat surveys by USGS for the Caloosahatchee Estuary in 2003 and other areas in 2004, experimental offshore LiDAR from USGS in 2004, and older NOAA bathymetric data for areas that were not otherwise covered.  Other bathymetric data down the shoreline consists of USGS boat surveys down to Cape Romano, Naples Bay boat surveys for SFWMD, and Florida Bay boat surveys that had been compiled in approx. 2004 by Mike Kohler.  For the remaining offshore areas, the older NOAA bathymetric data was used.  The bathymetric pieces were mosaicked together and blended along their edges.

When the offshore bathymetry was joined with the land topography, there were thousands of small no-data holes near the shoreline.

These empty spaces were grouped into three categories.  First, some small and low-lying islands (essentially mangrove rookeries) in Florida Bay had no pertinent elevation data.  These were assigned an arbitrary land elevation of +1.5 ft NAVD88.  Second, empty spaces adjacent to offshore bathymetry were treated as shallow offshore areas.  For each no-data hole (polygon), the maximum elevation of the adjacent offshore bathymetry was assigned (i.e., it’s water, so we give it the shallowest water value that’s nearby).  Finally, empty spaces that were inland were treated as low-lying land or marsh.  For each no-data hole (polygon), the minimum elevation of the adjacent “land” was assigned (i.e., if it’s a low area, it can’t be above anything that’s along its edge).
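
A rough sketch of that hole-filling logic (not the production tool chain) using NumPy and SciPy follows; classifying a hole as “offshore” by whether any neighbor sits below a nominal sea level is an assumption made only for this sketch.

```python
import numpy as np
from scipy import ndimage

# For each no-data polygon: if it touches offshore values (below a nominal sea
# level), assign the shallowest adjacent water value; otherwise assign the
# lowest adjacent land value. The sea-level threshold is an assumption.
def fill_holes(dem, sea_level=0.0):
    filled = dem.copy()
    holes, n = ndimage.label(np.isnan(dem))
    for i in range(1, n + 1):
        hole = holes == i
        ring = ndimage.binary_dilation(hole) & ~hole & ~np.isnan(dem)
        neighbors = dem[ring]
        if neighbors.size == 0:
            continue
        offshore = neighbors[neighbors < sea_level]
        if offshore.size:                      # shallow offshore area
            filled[hole] = offshore.max()
        else:                                  # low-lying land or marsh
            filled[hole] = neighbors.min()
    return filled

dem = np.array([[ 2.0,  1.5, np.nan],
                [ 1.0, np.nan, -0.5],
                [-1.0, -2.0,  -3.0]])
print(fill_holes(dem))
```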

After filling in all of the no-data holes, the bathymetry and land-with-lake topo were mosaicked together.  The deepest spot in the full dataset is about 100 ft at a distance of about 90 miles west of Cape Sable.  For most modeling and mapping purposes, however, the areas well off the shore (e.g. 10 miles or farther) are not needed.  Thus a buffer zone was defined as 20,000 ft beyond the LWC offshore boundary, and the full dataset was clipped to that boundary to create Lwc_TopoShore.

[Image: Lower West Coast composite topography]

These products can accurately be described as composites of the best-available data.  For example, both the 2005 SWF composite and the early-2006 USACE SF-Topo composite were based on older data and did not include any of the LiDAR that was collected after 2003.  It might be possible to make slight improvements to the current source data (say, by decorrugating rural Lee County), but such modifications would involve significant time and work that would go beyond the short turn-around time allocated for this project.

Timothy Liebermann, Senior Geographer, Regulation GIS, SFWMD

Ruslan Enikeev’s Map of the Internet

A very impressive visualization of the entire Internet, with search capability. Type in your favorite website and see it on the map.

Data Juice

The Internet Map is the culmination of an exciting project from a team headed by Ruslan Enikeev, in which the world wide web is mapped to produce the visualisation pictured below.  Web traffic from 350 thousand websites is mapped into a system where circle size corresponds to overall visitors, and position and proximity to other websites reveal where visitors most commonly surf to and from.  Rendering the graphic in this way shows that sites tend to cluster into groups relating to country of origin, which are represented by the colours in each circle.  Click the image below to explore the map; URLs can be entered into the search bar on the top left.

written by Alastair Pidgen.
