
About this Blog

As enterprise supply chains and consumer demand chains have become globalized, they continue to share information inefficiently, “one-up/one-down”. Profound "bullwhip effects" in these chains leave managers scrambling with inventory shortages and consumers struggling to understand product recalls, especially food safety recalls. Add to this the increasing use of personal mobile devices by managers and consumers seeking real-time information about products, materials and ingredient sources. The popularity of mobile devices with consumers is inexorably tugging at enterprise IT departments to shift to apps and services. But both consumer and enterprise data are proprietary assets that must be selectively shared to be efficiently shared.

About Steve Holcombe

Unless otherwise noted, all content on this company blog site is authored by Steve Holcombe as President & CEO of Pardalis, Inc. More profile information is available on Steve Holcombe's LinkedIn profile.

Follow @WholeChainCom™ at each of its online locations:

Entries in Big Data (5)

Sunday, July 21, 2013

Beyond The Tipping Point: Interoperable Exchange of Provenance Data

Introduction

This is our fourth "tipping point" publication.

The first was The Tipping Point Has Arrived: Trust and Provenance in Web Communications. We highlighted there the significance of the roadmap laid out by the Wikidata Project - in conjunction with the W3C Provenance Working Group - to provide trust and provenance in its form of web communications. We were excited by proposals to granularize single facts, and to immutabilize the data elements to which those facts are linked. We opined this to be critical for trust and provenance in whole chain communications. But at that time, the Wikidata Project was still waiting on the W3C Provenance Working Group to establish the relevant standards. No longer is this the case.

The second post was The Tipping Point Has Arrived: Market Incentives for Selective Sharing in Web Communications. There we emphasized the emerging market-based opportunities for information sharing between enterprises and consumers. We were particularly impressed with Google’s definition of "selective sharing" (made with Google Plus in mind) to include controls for overcoming both over-sharing and the fear of sharing by information producers. Our fourth post, below, includes a similar discussion, but this time oriented toward the increasing need for data interoperability among web-based file hosting services in the Cloud.

The third post - Why Google Must - And Will - Drive NextGen Social for Enterprises - introduced common point social networking, which we defined as providing the means and functions for the creation and versioning of immutable data elements at a single location. GitHub was pointed to as a comparison, but we proposed that Google would lead in introducing common point networking (or similar) with a roadmap of means and functions it was already backing in the Wikidata Project. We identified an inviting space for common point social networking between Google's Knowledge Graph and emerging GS1 (i.e., enterprise) standards for Key Data Elements (KDEs). We identified navigational search for selectively shared proprietary information (like provenance information) as a supporting business model.

This fourth post posits the accessibility of data elements (like KDEs) from web-based data hosting services in the Cloud providing content-addressable storage. This is a particularly interesting approach in the wake of the recent revelations regarding PRISM, the controversial surveillance program of the U.S. National Security Agency. The NSA developed its Apache Accumulo NoSQL datastore based on Google's BigTable data model but with cell-level security controls. Ironically, those kinds of controls allow for tagging a data object with security labels for selective sharing. This kind of tagging of a data object within a data set represents a paradigm shift toward common point social networking (or the distributed social networking envisioned by Google's Camlistore, as described below).
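The idea of cell-level security labels can be sketched in a few lines. The sketch below is a toy in the spirit of Accumulo's visibility model, not the real Accumulo API: the label syntax (a single "&" or "|" level) and all names are our own simplification.

```python
# Toy sketch of cell-level security labels in the spirit of Apache Accumulo's
# visibility model. Label syntax and class names are our own simplification.

class Cell:
    def __init__(self, key, value, visibility):
        self.key = key
        self.value = value
        self.visibility = visibility  # e.g. "buyer&seller" or "regulator|auditor"

def visible(expression, authorizations):
    """Evaluate a flat label expression against a reader's authorizations.
    Supports one level of '&' (all required) or '|' (any suffices)."""
    if "&" in expression:
        return all(tok in authorizations for tok in expression.split("&"))
    if "|" in expression:
        return any(tok in authorizations for tok in expression.split("|"))
    return expression in authorizations

def scan(cells, authorizations):
    """Return only the cells the reader is authorized to see."""
    return [c for c in cells if visible(c.visibility, set(authorizations))]

cells = [
    Cell("lot42/origin", "Ranch A, OK", "public"),
    Cell("lot42/price", "$1.87/lb", "buyer&seller"),
    Cell("lot42/audit", "passed 2013-06-01", "regulator|auditor"),
]

# A reader holding {public, regulator} sees origin and audit, but not price.
print([c.key for c in scan(cells, ["public", "regulator"])])
```

The point for selective sharing is that the sharing decision travels with each datum rather than with the whole table or file.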

The PROV Ontology

The W3C Provenance Working Group published its PROV Ontology on 30 April 2013 in the form of "An Overview of the PROV Family of Documents". The PROV family of documents defines "a model, corresponding serializations and other supporting definitions to enable the inter-operable interchange of provenance information in heterogeneous environments such as the Web."

The W3C recommends that a provenance framework should support these 8 characteristics:

  1. the core concepts of identifying an object, attributing the object to person or entity, and representing processing steps;
  2. accessing provenance-related information expressed in other standards;
  3. accessing provenance;
  4. the provenance of provenance;
  5. reproducibility;
  6. versioning;
  7. representing procedures; and
  8. representing derivation.

These 8 recommendations are more specifically addressed at 7.1 of the W3C Incubator Group Report 08 December 2010. In effect, the W3C Provenance Working Group has now established the relevant standards for exporting (or importing) trust and provenance information about the facts in Wikidata.
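The shape of such provenance statements can be illustrated with plain records mirroring PROV-DM's core terms (Entity, Activity, Agent and the wasGeneratedBy / wasAttributedTo / wasDerivedFrom relations). Real deployments would use a PROV serialization such as PROV-O or PROV-N; the record structure and identifiers below are our own illustration.

```python
# Hand-rolled sketch of PROV-DM style statements. Identifiers are invented
# for illustration; a real system would serialize these as PROV-O or PROV-N.

provenance = []

def statement(relation, subject, obj):
    provenance.append({"relation": relation, "subject": subject, "object": obj})

# A fact (entity) in a Wikidata-like store, with its provenance trail:
statement("wasGeneratedBy", "fact:birthdate-v2", "activity:edit-2013-07-01")
statement("wasAttributedTo", "fact:birthdate-v2", "agent:editor-42")
statement("wasDerivedFrom",  "fact:birthdate-v2", "fact:birthdate-v1")

def trail(entity):
    """All provenance statements about one entity; derivation links carry
    the 'provenance of provenance' back through earlier versions."""
    return [s for s in provenance if s["subject"] == entity]

for s in trail("fact:birthdate-v2"):
    print(s["relation"], "->", s["object"])
```

Note how versioning (recommendation 6) and derivation (recommendation 8) fall out of the same derivation link between the v1 and v2 entities.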

As we observed in our first tipping point, the Wikidata Project was first addressing the deposit by content providers of data elements (e.g., someone's birth date) at a single, fixed location for supporting the semantic relationships that Wikipedia users are seeking. The export of granularized provenance information about Wikidata facts was on their wish list. Now the framework for making that wish come true has been established. Again, the key aspect for us about the Wikidata Project is that it shouldn’t matter - from the standpoint of provenance - how the accessed data at that fixed location is exchanged or transported, whether via XML meta-data, JSON documents or other means. But the fixing of the location for the granularized data provides a critical authenticating reference point within a provenance framework.

Interoperably Connecting Wikidata to Freebase/Knowledge Graph

On 14 June 2013 Shawn Simister, Knowledge Developer Relations for Google, offered the following to the Discussion list for the Wikidata project:

"Would the WikiData community be interested in linking together Wikidata pages with Freebase entities? I've proposed a new property to link the two datasets .... Freebase already has interwiki links for many entities so it wouldn't be too hard to automatically determine the corresponding Wikidata pages. This would allow people to mash up both datasets and cross-reference facts more easily."

Later on in the conversation thread with Maximilian Klein, Wikipedian in Residence, Simister also added "we currently extract a lot of data from [Wikidata Project] infoboxes and load that data into Freebase which eventually makes its way into the Knowledge Graph so [interoperably] linking the two datasets would make it easier for us to extract similar data from WikiData in the future."

See the discussion thread at http://lists.wikimedia.org/pipermail/wikidata-l/2013-June/002359.html

This is a non-trivial conversation between agents of Google and Wikipedia about interoperably sharing and synchronizing data between two of the largest data sets in the world. But we believe that the marketplace for introducing provenance frameworks is to be found among the data sets of file hosting and storage services in the Cloud.

The Rise Of File Sharing In The Cloud

Check out the comparison of file hosting and storage services (with file sharing possibility) at http://en.wikipedia.org/wiki/Comparison_of_file_hosting_services. Identified file storage services include Google Drive, Dropbox, IBM SmartCloud Connections, DocLanding and others. All provide degrees of collaborative and distributed access to files stored in the Cloud. New and emerging services allow users to perform the same activities across all sorts of devices, which requires similar sharing and synchronization of data across all of those devices. This has huge ramifications not just for the sharing of personal data in the Cloud, but also for the sharing of proprietary, enterprise data. And when one is talking about proprietary information, one has to consider the introduction of a provenance framework.

The Next Logical Step: Content-addressable Storage In the Cloud

"Content-addressable storage ... is a mechanism for storing information that can be retrieved based on its content, not its storage location. It is typically used for high-speed storage and retrieval of fixed content, such as documents stored for compliance with government regulations." Github is an example that we mentioned in our third tipping point blog. CCNx (discussed a little later) is another example. Camlistore, a Google 20 Percent Time Project, while still in its infancy, is yet another example.
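The mechanism is simple enough to sketch: a blob is stored and retrieved by a cryptographic hash of its content rather than by a path or location. The class below is a minimal illustration, not any particular product's API.

```python
# Minimal sketch of content-addressable storage: blobs are addressed by the
# SHA-256 hash of their content, not by a storage location or path.
import hashlib

class BlobStore:
    def __init__(self):
        self._blobs = {}

    def put(self, content: bytes) -> str:
        ref = hashlib.sha256(content).hexdigest()
        self._blobs[ref] = content        # identical content dedupes itself
        return ref

    def get(self, ref: str) -> bytes:
        content = self._blobs[ref]
        # Integrity comes for free: the address itself verifies the content.
        assert hashlib.sha256(content).hexdigest() == ref
        return content

store = BlobStore()
ref = store.put(b"lot 42: origin Ranch A")
print(ref[:12], store.get(ref))
```

Two properties matter here for provenance: the same content always yields the same address, and any tampering with the stored bytes is detectable from the address alone.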

"Camlistore can represent both immutable information (like snapshots of file system trees), but can also represent mutable information. Mutable information is represented by storing immutable, timestamped, GPG-signed blobs representing a mutation request. The current state of an object is just the application of all mutation blobs up until that point in time. Thus all history is recorded and you can look at an object as it existed at any point in time, just by ignoring mutations after a certain point."

Camlistore has so far only revealed that it is handling sharing similarly to GitHub. But the NSA's Apache Accumulo - and a spinoff called Sqrrl - may currently be the only NoSQL solutions with cell-level security controls. Sqrrl, a startup comprised of former members of the NSA's Apache Accumulo team, is commercially providing an extended version of Apache Accumulo with cell-level security features. Co-founder Ely Kahn of Sqrrl says, "We're essentially creating a premium grade version of Accumulo with additional analysis, data ingest, and security features, and taking a lot of the lessons learned from working in large environments." We suspect that Camlistore is similarly using security tags (though we can’t say for sure because it is a newly emerged feature not covered in Camlistore's documentation). Camlistore calls what it is doing decentralized social networking. This kind of activity (and more, below) gives us increasing reason to expect that content-addressable products will arise to break the silos of supply chains.

Surveying the Field

Global trade is a series of discrete transactions between buyers and sellers. It is generally difficult – if not impossible – to form a clear picture of the entire lifecycle of products. The proprietary data assets (including provenance data) of enterprises large and small have commonly not been shared for two essential reasons:

  1. the lack of tools for selective sharing, and
  2. the fear of sharing offered under "all or nothing" social transparency sharing models.

We believe that with the introduction of content-addressable storage in the Cloud there will occur a paradigm shift toward the availability of tools for selective sharing among people and their devices. In that context it would be interesting to see the new activities and efforts of Wikidata, Github, Camlistore and Sqrrl connected with the already existing activities and efforts made in supply chains.

Ward Cunningham, inventor of the wiki, formerly Nike’s first Code for a Better World Fellow, and now Staff Engineer at New Relic, has innovated a paragraph-level wiki for curating sustainability data. Ward shows how data and visualization plugins could serve the needs of organizations sharing material sustainability data. Cunningham’s visualization, below, once paired with content-addressable storage, would simplify achieving greater authenticity and relevancy within the enterprise.

Leonardo Bonanni started SourceMap as his Ph.D. thesis project at the MIT Media Lab. Sourcemap’s inspiring visualizations are crowdsourced from all over the world to transparently show "where stuff comes from". Again, SourceMap, when paired with content-addressable storage, simplifies selective sharing in the cloud.

There is an evolution of innovation spurring new domain-specific solutions. We've put together the following table to emphasize the technologies that we find of interest. The table represents a progression toward solving under-sharing in supply chains with content-addressable storage in the Cloud. The entities in the table below are not specially focused on supply chains, but some are thinking about them. No matter. We are attempting to join the dots looking forward based on a progression of granular sharing technologies (i.e., revision control, named data networking, informational objects).

product | creator and/or sponsor / notes | user experience | selective sharing | content-addressable storage | database
--- | --- | --- | --- | --- | ---
wikidata | google, paul allen, gordon moore | public human/machine editable | hub data source | xml api | mysql
git | git by linus torvalds | revision control, social coding software | public / private | SHA-1 content hashing | above storage
sqrrl | extends nsa security design to enterprise | tagged security labels | authorization systems | cell-level security | apache accumulo (nosql)
CCNx | parc, named data networking | federates content efficiently, avoiding congestion | public by default | content verifiable from the data itself | above storage
smallest federated wiki | wiki inventor ward cunningham | wiki like git, paragraph forking | merging like github | paragraph blobs | couchdb, leveldb
pardalis | holcombe, boulton, whole chain traceability consortium | sharing, traceability, provenance | hub data access | CCNx-like informational objects | above storage
camlistore | google 20 pct project | personal data storage system for life | private by default | github-like json blobs, no metadata | sqlite, mongodb, mysql, postgres, (appengine)

We'd like to take this opportunity to make special mention of the CCNx (Content-centric Networking) Project at PARC begun by Van Jacobson. The first CCNxCon of the CCNx Project is what brought us - Holcombe and Boulton - together. We were both fascinated with the prospects of applying CCNx to the length of enterprise supply chains. In fact, the first ever "whole chain traceability" funding from the USDA came in 2011 in no small part because author Holcombe - as the catalyzer of the Whole Chain Traceability Consortium - proposed to extend Pardalis' engineered layer of granular access controls using a content-centric networking data framework. It was successfully proposed to the USDA that the primary benefit of CCNx lay in its ability to retrieve data objects based on what the user wanted, instead of where the data was located. We perceive this even now to be critical to smoothing out the ridges of the "bullwhip effect" in supply chains.
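That retrieval-by-name idea can be sketched abstractly: a consumer expresses an interest in a hierarchical content name, and any node holding a matching object can answer, regardless of its location. The names and the flat lookup below are illustrative only, not the CCNx interest/data wire protocol.

```python
# Sketch of content-centric retrieval: data is requested by name, not by the
# host where it lives. Names and routing here are illustrative, not CCNx.

class Node:
    def __init__(self, store):
        self.store = store  # content name -> content object

def fetch(interest_name, nodes):
    """Return the first content object matching the interest name,
    regardless of which node happens to hold it."""
    for node in nodes:
        if interest_name in node.store:
            return node.store[interest_name]
    return None

nodes = [
    Node({"/supplychain/lot42/origin": "Ranch A, OK"}),
    Node({"/supplychain/lot42/coldchain": "2-4C maintained"}),
]

print(fetch("/supplychain/lot42/coldchain", nodes))
```

For a supply chain, this means a downstream partner asks for "the cold-chain record for lot 42" rather than for "the file on supplier X's server", which is exactly the decoupling described above.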

Interoperability, Content-addressable storage and Provenance

In 2007, Stonebraker et al. postulated "The End of an Architectural Era (It's Time for a Complete Rewrite)", hypothesizing that traditional database management systems are no longer able to represent the holistic answer to the variety of different requirements, and that specialized systems will emerge for specific problems. The NoSQL movement made this happen with databases. Google (and Amazon) inspired the NoSQL movement underpinning Cloud storage databases. Nearly all enterprise application startups are now using NoSQL to build datastores at scale, to the exclusion of SQL databases. The Oracle/Salesforce/Microsoft partnership announcement in late June 2013 is well framed by the rise of NoSQL, too. Now we are seeing the same thing begin to happen with an introduced layer of content-addressable storage leading to interoperable provenance.

In our third tipping point blog, we opined that Google must drive nextgen social for enterprises to overcome the bullwhip effects of supply chains. Google has been laying a solid foundation for doing so by co-funding the Wikidata Project, proposing the integration of Wikidata and Knowledge Graph/Freebase, nurturing navigational search as a business model, and gaining keen insights into selective sharing with Google Plus (and the defunct Google Affiliate Network). Call it common point social networking. Call it decentralized social networking. Call it whatever you want to. The tide is rising toward "the inter-operable interchange of provenance information in heterogeneous environments."

Is the siloed internet ever to be cracked? Well, it is safe to say that the NSA has already cracked the data silos of U.S. security agencies with its version of NoSQL, Apache Accumulo (again, based on Google's BigTable data model). Whatever your feelings or political views about surveillance by the NSA (or Google), it is an historical achievement. Now, outside of government surveillance programs, web-based file sharing is just beginning to shift toward content-addressable data storage and sharing in the Cloud. That holds forth tremendous promise for cracking the silos holding both consumer and enterprise data. There are very interesting opportunities for establishing "first mover" expectations in the marketplace for content-addressable access controls in the Cloud.

The future is here. It is an interoperable future. It is a content-addressable future. And when information needs to be selectively shared (to be shared at all), the future is also about the interoperable exchange of provenance data. Carpe diem.

_______________________________

Authors: 

Steve Holcombe
Pardalis Inc.

 

 

Clive Boulton
Independent Product Designer for the Enterprise Cloud
LinkedIn Profile

 

Tuesday, November 6, 2012

Pardalis announces issuance of fourth U.S. patent

November 6, 2012 — Pardalis, Inc. announced the issuance today of the following patent by the United States Patent & Trademark Office:

  • Common point authoring system for the complex sharing of hierarchically authored data objects in a distribution chain, U.S. Patent No. 8,307,000.

The issuance of this patent represents another milestone in the continued, global expansion of Pardalis' parent patent, U.S. Patent No. 6,671,696, and its continuation patents and related applications.

The Pardalis '696 patent was issued by the United States in 2003 and is entitled Informational object authoring and distribution system. Pardalis' '696 patent is the parent patent for the Common Point Authoring™ system. The prior art from which Pardalis' patents have been distinguished stretches back to the 1987 filing of Xerox's Updating local copy of shared data in a collaborative system (U.S. Patent 5,220,657), the 1995 publication of CrystalWeb - A distributed authoring environment for the World-Wide Web (Computer Networks and ISDN Systems), and the 1999 publication of DAPHNE - A tool for Distributed Web Authoring and Publishing (the American Society for Information Science).

"The underlying philosophy of the Common Point Authoring system is to provide people with as much granular control over their information and data experience as is possible," said Steve Holcombe, CEO, Pardalis Inc. "The irony is that in order to increase the flow of proprietary information in supply chains, more granular control over that information must be provided in information sharing systems of any kind. Pardalis' patents apply to authoring by either human participants, or the machines that they automatically program, of immutable informational objects describing the pedigree of uniquely identified products in supply chains."

The critical means and functions of the Common Point Authoring™ system are directed to a system in which an author can create data which is then fixed (immutable) and users can access that immutable data but cannot change it without the creator's permission. They provide for user-centric authoring and registration of uniquely identified, immutable objects for further granular publication, by the choice of each author, among networked systems. The benefits of CPA include minimal, precise disclosures of personal and product identity data to networks fragmented by information silos and concerns over 'data ownership' about products and their ingredients or components.
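The described means and functions can be sketched abstractly: an author fixes an object's data elements at creation, then publishes access element by element. This is a hedged illustration of the idea as described above; the class and method names are ours, not Pardalis' implementation.

```python
# Illustrative sketch of author-controlled, element-level publication of an
# immutable object. Names are invented; this is not Pardalis' actual system.

class ImmutableObject:
    def __init__(self, author, elements):
        self.author = author
        self._elements = dict(elements)   # element_id -> datum, fixed at creation
        self._grants = {}                 # element_id -> set of authorized readers

    def grant(self, requester, element_id, reader):
        """Only the author may publish an element to a new reader."""
        if requester != self.author:
            raise PermissionError("only the author controls publication")
        self._grants.setdefault(element_id, set()).add(reader)

    def read(self, reader, element_id):
        """Readers see an element only if the author has shared it."""
        if reader == self.author or reader in self._grants.get(element_id, set()):
            return self._elements[element_id]
        raise PermissionError("element not shared with this reader")

obj = ImmutableObject("ranch_a", {"breed": "Angus", "price": "$1.87/lb"})
obj.grant("ranch_a", "breed", "retailer")
print(obj.read("retailer", "breed"))   # the shared element is readable
```

The key design point is that disclosure is minimal and precise: the retailer learns the breed without ever seeing the price, and nobody can alter the fixed data.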

"There is increasing interest in the application of social networks to the enterprise," Holcombe said. "For instance the selective sharing of Google Plus is a strong step in the direction of providing more granular controls in information sharing. Salesforce.com has linked up with Facebook for targeted advertising delivery that will merge social and business-contact data. The Wikidata Project is creating a free knowledge base by first fixing data elements at a single location with authorizations that may be read and edited by humans and machines alike. All of these activities are pushing in the direction of providing more efficient market mechanisms for the sharing of proprietary information in the Cloud. The more granular the control over information, the greater the chances that information about products in global supply chains will be efficiently shared. The ramifications for global sustainability are tremendous."

Filings relevant to Pardalis' USPTO issued patents are being successfully pursued under the Patent Cooperation Treaty (PCT) in the following countries: Australia, Brazil, Canada, China (PRC), Europe, Hong Kong, India, Japan, Mexico and New Zealand.

About Pardalis, Inc.

Pardalis' Common Point Authoring™ system provides an object-oriented solution for introducing trust and provenance in web communications. For more information, see Pardalis' Global IP.

Friday, April 20, 2012

Pardalis announces issuance of Canadian patent

April 20, 2012 — Pardalis, Inc. announced today the issuance of the following patent by the Canadian Intellectual Property Office:

  • Informational Object Authoring and Distribution System, Patent No. CA 2457936 issued February 28, 2012.

The issuance of this patent represents another milestone in the continued, global expansion of Pardalis' same-named parent patent, U.S. Patent No. 6,671,696, and its continuation patents and applications.

The Pardalis 696 Patent was issued by the United States in 2003 and is also entitled Informational object authoring and distribution system. Pardalis' 696 patent is the parent patent for the Common Point Authoring™ system.

The critical means and functions of the Common Point Authoring™ system are directed to a system in which a creator can create data which is then fixed (immutable) and users can access that immutable data but cannot change it. They provide for user-centric authoring and registration of radically identified, immutable objects for further granular publication, by the choice of each author, among networked systems. The benefits of CPA include minimal, precise disclosures of personal and product identity data to networks fragmented by information silos and concerns over 'data ownership'.

“Australia, Canada, China, Hong Kong, India, Mexico, New Zealand and, of course, the United States are the countries that have so far issued one or more patents to Pardalis,” said Steve Holcombe, Pardalis’ CEO. “We also have high expectations for similar actions on our applications pending in Brazil, Europe and Japan.”

About Pardalis, Inc.

Pardalis' Common Point Authoring™ system provides an object-oriented solution for introducing trust and provenance in web communications. For more information, see Pardalis' Global IP.

Thursday, January 26, 2012

Whole Chain Traceability: A Successful Research Funding Strategy

The following work product represents a critical part of the first successful strategy for obtaining funding from the USDA relative to "whole chain" traceability. It is the work of this author as woven into a USDA National Integrated Food Safety Initiative (NIFSI) funding submission of the Whole Chain Traceability Consortium™ led by Oklahoma State University and filed in June 2011. This work highlights the usefulness of Pardalis' U.S. patents and patents pending to "whole chain" traceability. It highlights the efficacy of employing granular information objects in the Cloud for providing consumer accessibility to any agricultural supply chain. In August 2011 notification was received of an award ($543,000 for 3 years) under the USDA NIFSI for a project entitled Advancement of a whole-chain, stakeholder driven traceability system for agricultural commodities: beef cattle pilot demonstration (Funding Opportunity Number: USDA-NIFSI RFA (FY 2011), Award Number: 2011-51110-31044).

With the funding of the NIFSI project, the USDA has funded a food safety project that is distinguishable from the Food Safety Modernization Act projects being funded by the FDA and conducted by the Institute of Food Technologists (IFT). Unlike the IFT/FDA projects, the scope of the funded NIFSI project uniquely encompasses consumer accessibility to supply chain information.

A useful explanation of the benefits of a “whole chain” traceability system may be made with critical traceability identifiers (CTIDs), critical tracking events (CTEs) and Nodes as described in the IFT/FDA Traceability in Food Systems Report. CTEs are those events that must be recorded in order to allow for effective traceability of products in the supply chain. A Node refers to a point in the supply chain where an item is produced, processed, shipped or sold. A CTE may be loosely defined as a transaction. Every transaction involves a process that may be separated into a beginning, middle and end.

While important and relevant data exists in any of the phases of a CTE transaction, the entire transaction may be uniquely identified and referenced by a code referred to as a critical tracking identifier (CTID). For example, with the emergence of biosensor development for the real-time detection of foodborne contamination, one may also envision adding associated real-time environmental sampling data from each node.

What is not described or envisioned in the IFT/FDA Traceability in Food Systems Report is the challenge of using even top-of-the-line “one up/one down” product traceability systems which, notwithstanding the use of a single CTID, are inherently limited in the data sharing options they provide to both stakeholders and government regulators. Pause for a moment and compare the foregoing drawing with the next drawing. Compare CTID2 in both drawings with CTID2A, CTID2B, etc. in the next drawing. The IFT/FDA food safety projects described above are at best implementing top-of-the-line "one up/one down" product traceability systems with the use of a single CTID. But with “whole chain” product traceability, in which CTID2 is essentially assigned down to the datum level, transactional and environmental sampling data may in real-time be granularly placed into the hands of supply chain partners, food safety regulators, or even retail customers.
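The datum-level assignment of identifiers can be sketched simply: one CTE transaction receives a CTID, and each datum within the transaction receives its own derived identifier (CTID2A, CTID2B, etc.) so that it can be shared independently. The suffixing scheme below is illustrative, not a standard.

```python
# Sketch of datum-level critical tracking identifiers: every datum within a
# critical tracking event gets its own shareable sub-identifier. The suffix
# scheme (A, B, C, ...) is illustrative only.
import string

def assign_ctids(ctid, data):
    """Map each datum in a critical tracking event to its own sub-identifier."""
    return {f"{ctid}{suffix}": datum
            for suffix, datum in zip(string.ascii_uppercase, data)}

cte = assign_ctids("CTID2", ["received lot 42", "temp 3.1C", "dock 7, 08:14"])
print(cte)
# Each entry can now be selectively shared with a partner, regulator, or consumer.
```

Under a single transaction-level CTID, the whole record is shared or withheld as a block; with datum-level identifiers, each party can be granted exactly the data it needs.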

The scope of “whole chain” information sharing within the funded USDA NIFSI project goes well beyond the “one up/one down” information sharing of the IFT/FDA projects. The NIFSI project addresses a new way of looking at information sharing for connecting supply chains with consumers. This is essentially accomplished with a system in which a content provider creates data which is then fixed (i.e., made immutable) and users can access that immutable data but cannot change it.

The granularity of Pardalis' Common Point Authoring (CPA) system (as is necessary for a “whole chain” product traceability system) is characterized by the following patent drawing of an informational object (e.g., a document, report or XML object) whose immutable data elements are radically and uniquely identified. The similarities between the foregoing object containing CTID2A, CTID2B, etc., and the immutable data element identifiers of the following drawing, should be self-evident.

For the purposes of the NIFSI funding opportunity, the Pardalis CPA system invention was appropriately characterized as a “whole chain” product traceability system. A further, high-altitude drawing characterized the application of the invention to a major U.S. agricultural supply chain:

Several questions were required in the USDA's NIFSI "Review Package" to be addressed before actual funding. The responses to two of those questions were crafted by this author. They are worth inserting here ....

Question 1: A reviewer was skeptical that the system would be capable of handling different levels of data (consumer, producer, RFID, bar code) seamlessly.

There is an assumption in the reviewer’s opinion that data is different because it is consumer, producer, RFID, bar code, etc. The proposed pilot project is based on a premise that data is data. The difference in data that is perceived by the reviewer is not in its categorization per se but in its proprietary nature. That is, it is perceived to be different because it is locked up (often in categories of consumer, producer, RFID, bar code, etc.) in proprietary data silos along the supply and demand chains. It is reasonable to have this viewpoint given the prevalence of "one-up/one-down" data sharing in supply chains. As stated in the Positive Aspects of the Proposal, “[t]he use of open source software and the ability to add consumer access to the tracability (sic) system set this proposal apart from other similar proposals.” The proposed pilot project will demonstrate how an open source approach to increasing interoperability between enterprise data silos (buttressed by metadata permissions and security controls in the hands of the actual data producers) will provide new "whole chain" ways of looking at information sharing in enterprise supply and consumer demand chains. For instance, consumers could opt for retailers to automatically populate their accounts from their actual point-of-sale retail purchases. Consumers could additionally populate accounts in a multi-tenancy social network (like Facebook) using smartphone bar code image capturing applications. Supplemented by cross-reference to an industry GTIN/GLN database, the product identifiers would be associated with company names, time stamps, location and similar metadata. This could empower consumers with a one-stop shop for confidentially reporting suspicious food to FoodSafety.gov. Likewise, consumers could be provided with real-time, relevant food recall information in their multi-tenancy, social networking accounts, and their connected smartphone applications.

Question 2: A member of the panel was skeptical that the consumer accessibility would be largely attractive as this capability currently has limited appeal among consumers.

We recognize this viewpoint to be a highly prevalent opinion within an ag and food industry predominantly sharing data in a “one-up/one-down” manner. When one uses a smartphone today to scan an item in a grocery store, the probability of being able to retrieve any data from the typical ag and food supply chain is very low. However, we have been highly influenced in our thinking by the existing data showing that many consumers do not take appropriate protective actions during a foodborne illness outbreak or food recall. Furthermore, 41 percent of U.S. consumers say they have never looked for any recalled product in their home. Conversely, some consumers overreact to the announcement of a foodborne illness outbreak by not purchasing safe foods. We have been further influenced by how producers of organic and natural products are adopting rapidly evolving smartphone and mobile technologies as a way of communicating directly with consumers, and increasing their market share. We contend that by increasing supply chain transparency with real-time, whole chain technologies, “consumer accessibility” will become more and more appealing.  We contend this to be especially true when there is a product recall and the products are already in the home. And so, again, our high interest in working with FoodSafety.gov.

The foregoing strategy and comments may be freely cited with attribution to this author as CEO of Pardalis, Inc. It is offered in the spirit of the "sharing is winning" principles of the Whole Chain Traceability Consortium™ (now being rebranded as @WholeChainTrace™). However, no right to use Pardalis' patent or patents pending is conveyed thereby. If you wish to be a research collaborator with Pardalis, or to license or use Pardalis' patented innovations, please contact the author.

Go to Part II

Monday, January 9, 2012

Clive Boulton: Whole Chain Traceability, pulling a Kobayashi Maru

 

A little background information about how this presentation came to be ....

Clive Boulton made this timely, impressive presentation at a luncheon held in Stillwater, Oklahoma on 6 January 2012. Stillwater is where Oklahoma State University - lead research institution of the Whole Chain Traceability Consortium - is located. The pathway to Stillwater from the Seattle area began with the CCNx conference held at the Palo Alto Research Center in September 2011. I attended CCNxCon to make one or more connections relevant to the Whole Chain Traceability Consortium. Clive wasn't physically at the conference but he was looking in from north of Seattle via a live audio/video stream. Clive heard me asking a question from the audience about possibly applying CCN to supply chain traceability needs in food safety. Like me, Clive has a passion for food traceability and so he tweeted "Who's that?" to one of the CCNxCon managers. A Twitter introduction was made.

Clive is currently a co-organizer of the Seattle Google Technology User Group at GTUG - http://www.meetup.com/seattle-gtug. He has a "finger on the pulse" of technology developments in Seattle and Silicon Valley which he commonly blogs about at http://cliveboulton.com/. And Clive has specially blogged there about Pardalis' Common Point Authoring at http://cliveboulton.com/post/12071791931/pardalis-is-banking-on-granular-information.

Clive is particularly interested in enterprise connected-consumer solutions at web scale built with polyglot technologies. Clive has opinions on how MSFT SQL Azure (or other "Big Data" databases) may be horizontally sharded (i.e., partitioned) with immutable informational objects for massive scalability. He is also very knowledgeable about the need to balance scalability against the inherent latency issues that may result, for instance, in slow consumer access via mobile devices. And he has practical ideas about how to synergistically leverage the resources and relationships of the Whole Chain Traceability Consortium for fostering an ecosystem of API development.
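The horizontal-sharding idea mentioned above can be sketched generically: because informational objects are immutable, a shard can be chosen once from a hash of the object's identifier and never needs to move to accommodate an in-place update. This is a language-agnostic illustration, not SQL Azure's federation API.

```python
# Generic sketch of hash-based horizontal sharding of immutable objects.
# Not any particular database's API; shard count and names are illustrative.
import hashlib

def shard_for(object_id, n_shards):
    """Stable shard assignment derived from the object's identifier."""
    digest = hashlib.md5(object_id.encode()).hexdigest()
    return int(digest, 16) % n_shards

shards = [dict() for _ in range(4)]   # four in-memory "partitions"

def put(object_id, obj):
    shards[shard_for(object_id, len(shards))][object_id] = obj

def get(object_id):
    return shards[shard_for(object_id, len(shards))].get(object_id)

put("lot42/origin", "Ranch A, OK")
print(get("lot42/origin"))
```

Immutability is what makes this clean: with no updates in place, shards never need cross-partition coordination, which is exactly the property that helps scalability without aggravating the latency concerns noted above.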

As a result of his visit to Stillwater, I am pleased to announce that Clive will be serving as a consultant to the Whole Chain Traceability Consortium a/k/a @WholeChainTrace. This should make for a potent connection between the #CollabEnt (i.e., collaboration enterprise) of Clive's 20th slide and the increasingly critical need for real-time food traceability. Stay tuned.