About this Blog

As enterprise supply chains and consumer demand chains have become globalized, they continue to share information inefficiently, "one-up/one-down". Profound "bullwhip effects" in these chains leave managers scrambling with inventory shortages and consumers struggling to understand product recalls, especially food safety recalls. Add to this the increasing use of personal mobile devices by managers and consumers seeking real-time information about products, materials and ingredient sources. The popularity of mobile devices with consumers is inexorably tugging enterprise IT departments toward apps and services. But consumer and enterprise data alike is a proprietary asset that must be selectively shared if it is to be efficiently shared.

About Steve Holcombe

Unless otherwise noted, all content on this company blog site is authored by Steve Holcombe as President & CEO of Pardalis, Inc. More profile information is available on Steve Holcombe's LinkedIn profile.


Entries in Granularity (12)

Monday, January 7, 2013

The Roots of Common Point Authoring (CPA)

Common Point Authoring (CPA) is timely and relevant for ameliorating the fear factors revolving around data ownership. Those fears are multiplying from the ever-increasing usage of unique identification on the Internet as applied to both people (e.g., social security numbers) and products (e.g., unique electronic product numbers and RFID tags).

Q&A: What is an informational object?

Consider the electronic form of this document (the one you are reading right now) as an example of an informational object. Imagine that you are the author and owner of this informational object. Imagine that each paragraph of this object has a granular on/off switch that you control. Imagine being able to granularly control who sees which paragraph even as your informational object is electronically shared one-step, two-steps, three-steps, etc., down a supply chain with people or businesses you have never even heard of. Now further imagine being able to control the access to individual data elements within each of those paragraphs.
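
A minimal sketch of that idea, in hypothetical Python (the names and shape are ours, not Pardalis'): an object whose author flips per-element switches for each recipient, however far down the chain that recipient sits.

```python
from dataclasses import dataclass, field

@dataclass
class InformationalObject:
    """An authored object whose owner controls visibility element by element."""
    author: str
    elements: dict                              # element_id -> content
    grants: dict = field(default_factory=dict)  # (recipient, element_id) -> on/off

    def set_switch(self, recipient: str, element_id: str, visible: bool) -> None:
        # The author's granular on/off switch for one element and one recipient.
        self.grants[(recipient, element_id)] = visible

    def view(self, recipient: str) -> dict:
        # A recipient anywhere down the chain sees only what is switched on.
        return {eid: text for eid, text in self.elements.items()
                if self.grants.get((recipient, eid), False)}

doc = InformationalObject(
    author="alice",
    elements={"para1": "Country of origin: USA", "para2": "Unit cost: $4.10"},
)
doc.set_switch("distributor", "para1", True)   # share the origin paragraph
doc.set_switch("distributor", "para2", False)  # withhold the cost paragraph
print(doc.view("distributor"))                 # {'para1': 'Country of origin: USA'}
```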

The methods for CPA were first envisioned with regard to transforming the authoring of paper-based material safety data sheets (MSDSs) in the chemical industry into a market-driven, electronic service provided by chemical manufacturers for their supply chain customers. You may think of an MSDS as a type of chemical pedigree document authored by a chemical manufacturer and then handed down a multi-party supply chain as it follows the trading of the chemical.

At the time, we crunched some numbers and found that MSDSs offered as a globally accessible software service could be provided to downstream users for significantly less than what it cost them to handle paper MSDSs. But we further recognized that our business model for global software services wouldn’t work very well unless the fear factors revolving around MSDSs offered as a service were technologically addressed.

That is, we asked the question, “How can electronic information be granularly controlled by the original author (i.e., creator) as it is shared down a supply chain?”

When it comes to information sharing in multi-tenancies, the prior art (i.e., the prior patents and other published materials) to CPA at best refers to collaborative document editing systems where multiple parties share in the authoring of a single document. A good example of the prior art is found in a 1993 Xerox patent entitled 'Updating local copy of shared data in a collaborative system' (US Patent 5,220,657 - Xerox) covering:

“A multi-user collaborative system in which the contents as well as the current status of other user activity of a shared structured data object representing one or more related structured data objects in the form of data entries can be concurrently accessed by different users respectively at different workstations connected to a common link.”

By contrast, CPA's methods provide for the selective sharing of informational objects (and their respective data elements) without the necessity of any collaboration. More specifically, CPA provides the foundational methods for the creation and versioning of immutable data elements at a single location by an end-user (or a machine). Those data elements are accessible, linkable and otherwise usable with meta-data authorizations. This is especially important when it comes to overcoming the fear factors around the sharing of enterprise data, or allowing for the semantic search of enterprise data. Pardalis' parent patent, "Informational object authoring and distribution system" (US Patent 6,671,696), includes a representation of a granular, author-controlled, structured informational object around which CPA's methods revolve.
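
As a rough sketch (our own simplification, not the patented implementation), the core mechanic is a registry at a single location where elements are written once and never edited; a correction is a new version appended alongside the old one:

```python
import hashlib
import time

class CommonPointRegistry:
    """Toy registry of immutable, versioned data elements at a single location."""

    def __init__(self):
        self._elements = {}  # element_id -> (value, author, created_at); never mutated
        self._versions = {}  # logical name -> [element_id, ...], oldest to newest

    def register(self, name: str, value: str, author: str) -> str:
        # A change is a new immutable element, not an edit of the old one.
        created_at = time.time()
        raw = f"{name}|{value}|{author}|{created_at}"
        element_id = hashlib.sha256(raw.encode()).hexdigest()
        self._elements[element_id] = (value, author, created_at)
        self._versions.setdefault(name, []).append(element_id)
        return element_id

    def get(self, element_id: str):
        # Read access only; there is deliberately no update or delete method.
        return self._elements[element_id]

    def history(self, name: str) -> list:
        return self._versions.get(name, [])

registry = CommonPointRegistry()
v1 = registry.register("lot-42/origin", "USA", author="acme")
v2 = registry.register("lot-42/origin", "USA (Texas)", author="acme")  # new version
assert registry.history("lot-42/origin") == [v1, v2]
```

Access to such elements would then be mediated by meta-data authorizations of the kind sketched above for informational objects.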

That is, the critical means and functions of the Common Point Authoring™ system provide for user-centric authoring and registration of uniquely identified, immutable objects for further granular publication, by the choice of each author, among networked systems. The benefits of CPA include minimal, precise disclosures of personal and product identity data to networks fragmented by information silos and concerns over 'data ownership'.

When it comes to "electronic rights and transaction management", CPA's methods have further been distinguished from a significant patent held by Intertrust Technologies. See Methods for matching, selecting, narrowcasting, and/or classifying based on rights management and/or other information (US Patent 7,092,914 - Intertrust Technologies). By the way, in a 2004 announcement Microsoft Corp. agreed to take a comprehensive license to InterTrust's patent portfolio for a one-time payment of $440 million.

CPA's methods have been further distinguished worldwide from object-oriented, runtime efficiency IP held by these leaders in back-end, enterprise application integration: Method and system for network marshalling of interface pointers for remote procedure calls (US Patent 5,511,197 - Microsoft), Reuse of immutable objects during object creation (US Patent 6,438,560 - IBM), Method and software for processing data objects in business applications (US Patent 7,225,302 - SAP), and Method and system to protect electronic data objects from unauthorized access (US Patent 7,761,382 - Siemens).

For more information, see Pardalis' Global IP.

Friday, January 4, 2013

Why Google Must - And Will - Drive NextGen Social for Enterprises

Preface

This is our third "tipping point" publication.

The first was The Tipping Point Has Arrived: Trust and Provenance in Web Communications. We highlighted there the significance of the roadmap laid out by the Wikidata Project. It was our opinion that:

"[a]s the Wikidata Project begins to provide trust and provenance in its form of web communications, they will not just be granularizing single facts but also immutabilizing the data elements to which those facts are linked so that even the content providers of those data elements cannot change them. This is critical for trust and provenance in whole chain communications between supply chain participants who have never directly interacted."

The second post was The Tipping Point Has Arrived: Market Incentives for Selective Sharing in Web Communications. We there emphasized the emerging market-based opportunities for information sharing between enterprises and consumers:

"We know this is a big idea but in our opinion the dynamic blending of Google+ and the Google Affiliate Network could over time bring within reach a holy grail in web communications – the cracking of the data silos of enterprise class supply chains for increased sharing with consumers of what to-date has been "off limits" proprietary product information."

Introducing Common Point Social Networking

For the purposes of this post we introduce and define Common Point Social Networking:

Common point social networking provides the means and functions for the creation and versioning of immutable data elements at a single location by an end-user or a machine which data elements are accessible, linkable and otherwise usable with meta-data authorizations.

The software developers reading this post may recognize similarities with Github. Github is perhaps the canonical proxy for fixed, common point sharing adoption. Software developers publish open source software development projects, providing source code distribution and means for others to contribute changes to the source code back to a common repository. Version control provides a code-level audit trail.
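
The Github analogy can be made concrete: in a content-addressed store, each record's identifier is a hash of its contents and its parent, so history cannot be silently rewritten. A toy illustration (ours, not Github's internals):

```python
import hashlib
import json

def commit(parent_hash, author, content):
    """Content-addressed record: changing anything yields a new hash,
    so earlier records remain intact, which is what gives an audit trail."""
    record = {"parent": parent_hash, "author": author, "content": content}
    digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    return digest, record

h1, r1 = commit(None, "alice", "v1 of a shared element")
h2, r2 = commit(h1, "bob", "v2 contributed back to the common repository")
# Following parent pointers from h2 back to h1 replays the full audit trail.
```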

In July 2012 Github took a $100M venture capital investment from Andreessen Horowitz. There’s no doubt that some of this funding will be used by Github to compete in the enterprise space. But we further offer here that Google is better positioned to lead the current providers of enterprise software and cloud services in introducing a new generation of online social networks in the fertile ground between enterprises and consumers. We propose that Google lead by introducing, or further encouraging, the roadmap of means and functions it is already backing in the Wikidata Project. We have identified an inviting space for common point social networking to serve as a bridge between Google's Knowledge Graph and the emerging GS1 standards for Key Data Elements (KDEs).

A Sea Change in Understanding

In 2012 there was a sea change in understanding that greater access to proprietary enterprise data is necessary for creating new business models between enterprises and consumers. Yet there remains confusion on how to do so. There is much rhetorical cross-over these days between the social networking of "personal data" and "enterprise data" but enterprise data is - and will long remain - different from personal data. Again, in our opinion, enterprise data is overwhelmingly a proprietary asset that must be selectively accessed at a granular level from a fixed, common point to have any chance of being efficiently shared.

GS1 and Whole Chain Traceability

From 2010 through 2011, Pardalis Inc. catalyzed a successful research funding strategy in a series of “whole chain traceability” funding submissions seeking to employ granular, immutable data elements in networked communications.[1] The computer networking aspects of this food supply chain research were based upon a granularization of critical tracking events (CTEs) with a high-level derivation of Pardalis’ patented processes for registering immutable data elements and their informational objects at a fixed location with meta-data authorizations. See Whole Chain Traceability: A Successful Research Funding Strategy. At the solicitation of co-author Holcombe, GS1 provided an early letter of support to this process and was subsequently kept “in the loop”. This successful research funding strategy has from all appearances been given a favorable nod by GS1 in one of its recent publications, Achieving Whole Chain Traceability in the U.S. Food Supply Chain - How GS1 Standards make it possible. Here’s an excerpt -

"To achieve whole chain traceability, trading partners must be able to link products with locations and times through the supply chain. For this purpose, the work led by the Institute of Food Technologists described two foundational concepts: Critical Tracking Events (CTEs) and Key Data Elements (KDEs). With GS1 Standards as a foundation, communicating CTEs and KDEs is achievable."

So who is GS1, you ask? GS1 is "the international not-for-profit association dedicated to the development and implementation of global standards and solutions to improve the efficiency and visibility of supply and demand chains globally and across multiple sectors." You know that unique barcode symbology you see on the products you purchase? That barcode is standardized by GS1 and may include KDEs.

We applaud the introduction of KDEs by GS1. The inclusion of KDEs is a necessary step for moving beyond the lugubrious one-up/one-down information sharing that is overwhelmingly prevalent in today’s enterprise supply chains. Enterprises have long been comfortable with one-up/one-down sharing, pushing generic products down the chain. But it is a mode of information sharing that does not fit well at all into today’s consumer demand chains, which want to pull real-time, trustworthy information. Furthermore, one-up/one-down information sharing significantly contributes to the "bullwhip effect" within supply chains, which costs enterprises in a number of ways as explained in more detail in The Bullwhip Effect:

"The challenge is not one of fixing the latest privacy control issue that Facebook presents to us. Nor is the challenge fixed with an application programming interface for integrating Salesforce.com with Facebook. The challenge is in providing the software, tools and functionalities for the discovery in real-time of proprietary supply chain data that can save people's lives and, concurrently, in attracting the input of exponentially more valuable information by consumers about their personal experiences with food products (or products in general, for that matter) …."

But KDEs by themselves will not necessarily rid supply chains of the bullwhip effect. Without implementing a more social, fluid nature to the sharing of information in supply chains, KDEs may even increase the brittleness of one-up/one-down information sharing between database administrators, just more granularly so with "digital sand". For instance, industry standards for granular XML objects may be a bane … or a boon. It largely depends on the effectiveness of the hierarchical administrative decision-making processes overseeing each data silo. Common point social networking holds forth a promise for implementing KDEs in a manner that overcomes the bullwhip effect.

But even with the most efficient and effective management processes, it is almost unimaginable to us that the first movement toward enterprise-consumer social networking will come from incumbent enterprise software systems. Sure, the first movement could potentially come from that direction, but we’ve just had too many experiences with enterprises and software vendors to put much faith in that actually happening. Conversely, we can much more easily imagine a first movement toward nextgen social from the "navigational search" demands of consumers. In our second tipping point blog we illustrated this point in some respect with Google Affiliate Networks. This time we are making our point with Google’s Knowledge Graph.

Navigational Search As A Business Model

Google's Knowledge Graph was announced this year as an addition to Google's search engine. Knowledge Graph is a semantic search system. Of course it’s not the only semantic search system. Bing incorporates semantic search. So do Ask.com and Wolfram Alpha. Siri provides a natural language user interface. But no matter the semantic search engine, the results are revealed as a list of ranked, relevant “answers” (or perhaps no answer at all because there isn’t one to give). Searching for real answers in real time is still something of a navigational mess, whether by commission or omission.

"For the semantic web to reach its full potential in the cloud, it must have access to more than just publicly available data sources. It must find a gateway into the closely-held, confidential and classified information that people consider to be their identity, that participants to complex supply chains consider to be confidential, and that governments classify as secret. Only with the empowerment of technological ‘data ownership’ in the hands of people, businesses, and governments will the Semantic Cloud make contact with a horizon of new, ‘blue ocean’ data." Cloud Computing: Billowing Toward Data Ownership - Part II.

Knowledge Graph is a "baby step" toward navigational search that provides a kind of Wikipedia "look and feel" experience designed to help users navigate more easily toward specific answers. Ever used the "I’m Feeling Lucky" button provided by Google? That button taps into Google's semantic search system to provide a navigational search resulting in a single result. This is an attempt to give your search request a purposeful effect instead of an exploratory effect. Yes, it's still a "hit or miss" artifice but - make no mistake - it has been introduced to push forward navigational search as a business model. Google's business intent for navigational search is to discourage you from going to other search engines for your search needs. Knowledge Graph is designed to cut short a process of discovery which might take you away from Google to a competitive search engine. This move toward navigational search is exactly why we are proposing that now is the time for common point social networking. Without common point social networking, navigational search will largely remain a clever, albeit unsatisfactory, solution for what consumers really want: real-time, meaningful, trustworthy information about the products they buy or are interested in buying. As Amit Singhal, Senior Vice-President of Engineering at Google, says:

"We’re proud of our first baby step - the Knowledge Graph - which will enable us to make search more intelligent, moving us closer to the "Star Trek computer" that I've always dreamt of building. Enjoy your lifelong journey of discovery, made easier by Google Search, so you can spend less time searching and more time doing what you love."

Conclusion: Whole Chain Communications from Navigational Search

So much of the information that consumers desire about the products they buy - or may buy - is currently locked up in enterprise data silos. But the realistic prospects for common point social networking means that navigationally searching for enterprise data - as a business model - is no longer an impossible challenge akin to Starfleet Academy's Kobayashi Maru. The ultimate goal for Google's navigational search is essentially that of providing not just whole chain traceability but real-time, whole chain communications for consumers via their mobile devices. The ultimate goal for GS1's standards for granular whole chain traceability is to similarly provide opportunities for real-time, navigational search.

Google’s Knowledge Graph indeed represents the first step of a toddler. To fully develop a “Star Trek Enterprise computer” Google must drive nextgen social for enterprises by fostering the placement of common point social networking between the bookends of navigational search and whole chain traceability. There is no other technology company better positioned or more highly motivated to do so. And we believe that it will. In backing the Wikidata Project, Google is already on a pathway to promoting common point social.

_______________________________

Authors:

Steve Holcombe
Pardalis Inc.

Clive Boulton
Independent Product Designer for the Enterprise Cloud
LinkedIn Profile

_______________

Endnotes
1. In these funding submissions co-author Holcombe introduced and defined the phrase "whole chain traceability" in reference to his company's patents.

Tuesday, November 6, 2012

Pardalis announces issuance of fourth U.S. patent

November 6, 2012 — Pardalis, Inc. announced the issuance today of the following patent by the United States Patent & Trademark Office:

  • Common point authoring system for the complex sharing of hierarchically authored data objects in a distribution chain, U.S. Patent No. 8,307,000.

The issuance of this patent represents another milestone in the continued, global expansion of Pardalis' parent patent, U.S. Patent No. 6,671,696, and its continuation patents and related applications.

The Pardalis '696 patent was issued by the United States in 2003 and is entitled Informational object authoring and distribution system. The '696 patent is the parent patent for the Common Point Authoring™ system. The prior art from which Pardalis' patents have been distinguished stretches back to the 1987 filing of Xerox's Updating local copy of shared data in a collaborative system (U.S. Patent 5,220,657), the 1995 publication of CrystalWeb--A distributed authoring environment for the World-Wide Web (Computer Networks and ISDN Systems), and the 1999 publication of DAPHNE--A tool for Distributed Web Authoring and Publishing (the American Society for Information Science).

"The underlying philosophy of the Common Point Authoring system is to provide people with as much granular control over their information and data experience as is possible," said Steve Holcombe, CEO, Pardalis Inc. "The irony is that in order to increase the flow of proprietary information in supply chains, more granular control over that information must be provided in information sharing systems of any kind. Pardalis' patents apply to authoring by either human participants, or the machines that they automatically program, of immutable informational objects describing the pedigree of uniquely identified products in supply chains."

The critical means and functions of the Common Point Authoring™ system are directed to a system in which an author can create data which is then fixed (immutable) and users can access that immutable data but cannot change it without the creator's permission. They provide for user-centric authoring and registration of uniquely identified, immutable objects for further granular publication, by the choice of each author, among networked systems. The benefits of CPA include minimal, precise disclosures of personal and product identity data to networks fragmented by information silos and concerns over 'data ownership' about products and their ingredients or components.

"There is increasing interest in the application of social networks to the enterprise," Holcombe said. "For instance the selective sharing of Google Plus is a strong step in the direction of providing more granular controls in information sharing. Salesforce.com has linked up with Facebook for targeted advertising delivery that will merge social and business-contact data. The Wikidata Project is creating a free knowledge base by first fixing data elements at a single location with authorizations that may be read and edited by humans and machines alike. All of these activities are pushing in the direction of providing more efficient market mechanisms for the sharing of proprietary information in the Cloud. The more granular the control over information, the greater the chances that information about products in global supply chains will be efficiently shared. The ramifications for global sustainability are tremendous."

Filings relevant to Pardalis' USPTO-issued patents are being successfully pursued under the Patent Cooperation Treaty (PCT) in the following countries: Australia, Brazil, Canada, China (PRC), Europe, Hong Kong, India, Japan, Mexico and New Zealand.

About Pardalis, Inc.

Pardalis' Common Point Authoring™ system provides an object-oriented solution for introducing trust and provenance in web communications. For more information, see Pardalis' Global IP.

Wednesday, July 11, 2012

The Tipping Point Has Arrived: Market Incentives for Selective Sharing in Web Communications

By Steve Holcombe (@steve_holcombe) and Clive Boulton (@iC)

A Glimmer of Market Validation for Selective Sharing

In late 2005 Pardalis deployed a multi-tenant, enterprise-class SaaS to a Texas livestock market. The web-connected service provided for the selective sharing of data assets in the U.S. beef livestock supply chain.  Promising revenues were generated from a backdrop of industry incentives being provided for sourced livestock. The industry incentives themselves were driven by the specter of mandatory livestock identification promised by the USDA in the wake of the 2003 "mad cow" case.

At the livestock market thousands of calves were processed over several sessions. Small livestock producers brought their calves into the auction for weekly sales, where the calves were RFID tagged. Producers were charged an affordable per-calf fee that included the cost of an RFID tag. The tags' identifiers were automatically captured, a seller code was entered, and affidavit information was also entered as to the country of origin (USA) of each calf. Buyers paid premium prices for the tagged calves over and above untagged calves. The buyers made money over and above the affordable per-calf fee. After each sale, and at the speed of commerce, all seller, buyer and sales information was uploaded into an information tenancy in the SaaS that was controlled by the livestock market. For the first time ever in the industry, the livestock auction selectively authorized access to this information to the buyers via their own individual tenancies in the SaaS.
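
A simplified sketch of that arrangement (the names and structure are ours, not the deployed system's): the market's tenancy holds the full sale records and grants each buyer access to some fields while withholding others, such as the seller's identity:

```python
class AuctionTenancy:
    """Toy model of the livestock market's tenancy: it holds full sale records
    and selectively authorizes fields to each buyer's own tenancy."""

    def __init__(self):
        self._sales = []   # full records, visible only to the market
        self._grants = {}  # buyer -> set of field names that buyer may see

    def record_sale(self, rfid, seller, buyer, origin, price):
        self._sales.append({"rfid": rfid, "seller": seller, "buyer": buyer,
                            "origin": origin, "price": price})

    def authorize(self, buyer, fields):
        self._grants[buyer] = set(fields)

    def buyer_view(self, buyer):
        # Each buyer sees its own purchases, minus withheld fields (e.g., seller).
        allowed = self._grants.get(buyer, set())
        return [{k: v for k, v in sale.items() if k in allowed}
                for sale in self._sales if sale["buyer"] == buyer]

market = AuctionTenancy()
market.record_sale("982000123456789", "Smith Ranch", "FeedlotCo", "USA", 812.50)
market.authorize("FeedlotCo", {"rfid", "origin", "price"})  # seller identity withheld
print(market.buyer_view("FeedlotCo"))
```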

That any calves were processed at all would not have been possible without directly addressing the fear of information sharing held by both the calf sellers and the livestock market. The calf sellers liked that their respective identities were selectively withheld from the calf buyers. And they liked that a commercial entity they trusted – the livestock market – could stand as a kind of trustee between them and governmental regulators in case an auctioned calf later turned out to be the next ‘mad cow’. In turn the livestock market liked the selectiveness in information sharing because it did not have to share its confidential client list in an “all or nothing” manner with potential competitors on down the supply chain. At that moment in time, the immediate future of selective sharing with the SaaS looked very bright. The selective sharing design deployed by Pardalis in its SaaS fixed data elements at a single location with authorizations controlled by the tenants. Unfortunately, the model could not be continued and scaled at that time to other livestock markets. In 2006 the USDA bowed to political realities and terminated its efforts to introduce national mandatory livestock identification.

And so, too, went the regulatory-driven industry incentives. But … hold that thought.

Talking in Circles: Selective Sharing in Google+

Google+ is now 1 year old. In conjunction with Google, researchers Sanjay Kairam, Michael J. Brzozowski, David Huffaker, and Ed H. Chi have published Talking in Circles: Selective Sharing in Google+, the first empirical study of behavior in a network designed to facilitate selective sharing:

"Online social networks have become indispensable tools for information sharing, but existing ‘all-or-nothing’ models for sharing have made it difficult for users to target information to specific parts of their networks. In this paper, we study Google+, which enables users to selectively share content with specific ‘Circles’ of people. Through a combination of log analysis with surveys and interviews, we investigate how active users organize and select audiences for shared content. We find that these users frequently engaged in selective sharing, creating circles to manage content across particular life facets, ties of varying strength, and interest-based groups. Motivations to share spanned personal and informational reasons, and users frequently weighed ‘limiting’ factors (e.g. privacy, relevance, and social norms) against the desire to reach a large audience. Our work identifies implications for the design of selective sharing mechanisms in social networks."

While selective sharing may be characterized as being available on other networks (e.g. ‘Lists’ on Facebook), Google is sending signals that making the design of selective sharing controls central to the sharing model offers a great opportunity to help users manage their self-presentations to multiple audiences in the multi-tenancies we call online social networks. Or, put more simply, selective sharing multiplies opportunities for online engagement.

For the purposes of this blog post, we adopt Google’s definition of "selective sharing" to mean providing information producers with controls for overcoming both over-sharing and fear of sharing. Furthermore, we agree with Google that the design of tools for such selective sharing controls must allow users to balance sender and receiver needs, and to adapt these controls to different types of content. So defined, we believe that almost seven years after the Texas livestock market project, a tipping point has been reached that militates in favor of selective sharing from within supply chains and on to consumers. Now, a lot has happened over the last seven years to bring us to this point (e.g., the rise of social media, CRM in the Cloud, the explosion of mobile technologies, etc.). But the tipping point we are referencing "follows the money", as they say. We believe that the tipping point toward selective sharing is to be found in the incentives provided by affiliate networks like the Google Affiliate Network.

Google Affiliate Networks

Affiliate networks provide a means for affiliates to monetize websites. Here’s a recent video presentation by Google, Automating the Use of Google Affiliate Links to Monetize Your Web Site:


Presented by Ali Pasha & Shaun Cox | Published 2 July 2012 | 47m 11s

The Google Affiliate Network provides incentives for affiliates to monetize their websites based upon actual sales conversions instead of indirectly upon the number of ad clicks. These are websites (e.g., http://www.savings.com/) where ads are the raison d'etre of the site. High value consumers are increasingly scouring promotional, comparison, and customer loyalty sites like savings.com for deals and generally more information about products. Compare that with websites where ads are peripheral to other content (e.g., http://www.nytimes.com/) and where ad clicks are measured using Web 2.0 identity and privacy sharing models.

In our opinion the incentives of affiliate networks have huge potential for matching up with an unmet need in the Cloud for all participants - large and small - of enterprise supply chains to selectively monetize their data assets. For example, data assets pertaining to product traceability, source, sustainability, identity, authenticity, process verification and even compliance with human rights laws, among others, are there to be monetized.

Want to avoid buying blood diamonds? Go to a website that promotes human rights and click on a diamond product link that has been approved by that site. Want to purchase only “Made in USA” products? There’s not a chamber of commerce in the U.S. that won’t want to provide a link to the websites of its members that are also affiliates of an incentive network. Etc.

Unfortunately, these data assets are commonly not shared because of the complete lack of tools for selective sharing, and the fear of sharing (or understandable apathy) engendered under “all or nothing” sharing models. As published back in 1993 by the MIT Sloan School in Why Not One Big Database? Ownership Principles for Database Design: "When it is impossible to provide an explicit contract that rewards those who create and maintain data, ‘ownership’ will be the best way to provide incentives." Data ownership matters. And selective sharing – appropriately designed for enterprises – will match data ownership up with available incentives.

Remember that thought we asked you to hold?

In our opinion the Google Affiliate Network is already providing incentives that are a sustainable, market-driven substitute for what turned out to be unsustainable, USDA-driven incentives. We presume that Google is well aware of potential synergies between Google+ and the Google Affiliate Network. We also presume that Google is well aware that "[w]hile business-critical information is often already gathered in integrated information systems, such as ERP, CRM and SCM systems, the integration of these systems itself (as well as the integration with the abundance of other information sources) is still a major challenge."

We know this is a "big idea" but in our opinion the dynamic blending of Google+ and the Google Affiliate Network could over time bring within reach a holy grail in web communications – the cracking of the data silos of enterprise class supply chains for increased sharing with consumers of what to-date has been "off limits" proprietary product information.

A glimpse of the future may be found for example in the adoption of Google+ by Cadbury UK, but the design for selective sharing of Google+ is currently far from what it needs to attract broad enterprise usage. Sharing in Circles brings to mind Eve Maler’s blog post, Venn and the Art of Data Sharing. That’s really cool for personal sharing (or empowering consumers, as is the intent of VRM) but for enterprises Google+ will need to evolve its selective sharing functionalities. Sure, the data silos of commercial supply chains are holding personal identities close to the chest (e.g., CRM customer lists) but they’re also walling off product identities with every bit as much zeal, if not more. That creates a different dynamic that, again, typical Web 2.0 "all or nothing" sharing (designed, by the way, around personal identities) does not address.

It should be especially noted, however, that Eve Maler and the User-Managed Access (UMA) group at the Kantara Initiative are providing selective sharing web protocols that place "the emphasis on user visibility into and control over access by others". And Eve, in her capacity at Forrester, has more recently provided a wonderful update of her earlier blog post, this one entitled A New Venn of Access Control for the API Economy.

But in our opinion, before Google+, UMA or any other companies or groups working on selective sharing can have any reasonable chance of addressing "data ownership" in enterprises and their supply chains, they will need to take a careful look at incorporating fixed data elements at a single location with authorizations. It is in regard to this point that we seek to augment the current status of selective sharing. More about that line of thinking (and activities within the Wikidata Project) in our earlier “tipping point” blog post, The Tipping Point Has Arrived: Trust and Provenance in Web Communications.

What do you think? Share your conclusions and opinions by joining us at @WholeChainCom on LinkedIn at http://tinyurl.com/WholeChainCom.

Thursday, April 26, 2012

The Tipping Point Has Arrived: Trust and Provenance in Web Communications

By Steve Holcombe (@steve_holcombe) and Clive Boulton (@iC)

"The Web was originally conceived as a tool for researchers who trusted one another implicitly. We have been living with the consequences ever since." Sir Tim Berners-Lee

"One of the issues of social networking silos is that they have the data and I don't … There are no programmes that I can run on my computer which allow me to use all the data in each of the social networking systems that I use plus all the data in my calendar plus in my running map site, plus the data in my little fitness gadget and so on to really provide an excellent support to me." Sir Tim Berners-Lee.

The tipping point has arrived for trust and provenance in web communications. And it is not just because Tim Berners-Lee thinks it is a good idea. The control of immutable data in the Cloud by content providers is on the verge of moving out of research projects and into commercial platforms. The most visible, first-mover example known to us is provided by the Wikidata Project.

The rapidly emerging Wikidata Project, the next iteration of Wikipedia, will in its first phase (to be finished within the next 6 months) implement the deposit by content providers of data elements (e.g., someone's birth date) at a single, fixed location, supporting in Phase 2 (targeted to be completed by the end of 2012) the semantic relationships (i.e., ontologies) that Wikipedia users are seeking. Paul Allen's Institute for Artificial Intelligence and Google are two of the three primary benefactors of the Wikidata Project. And it is no surprise that the base of operations for this ground-breaking work is in Germany. The European Commission proposed in January 2012 a comprehensive reform of data protection rules to strengthen online privacy rights and boost Europe's digital economy.

This blog site exists to discuss whole chain communications between enterprises and consumers. Along that line the Wikipedia folks aren't really thinking about the Wikidata Project in terms of supply chains. But that is what they are backing into. Daniel Matuschek (@matuschd) would seem to agree in his blog post, Wikidata - some expectations. Here's an excerpt:

"Some ideas for open databases that could make our live easier or better [include] Product data: Almost every product has an EAN code. There are some companies building and selling databases for specific products (e.g. food, DVDs), sometimes generated with community support .... The Wikidata project is currently not addressing [this kind of database], but if a platform is available, there’s a good chance that users start creating databases like this."

And granular permissions (in the hands of content providers) over individual data elements are on Wikipedia's wish list to be introduced later this year during Phase 2:

  • O2.5. Add a more fine granular approach towards protecting single facts instead of merely the whole entity.
  • O2.6. Export trust and provenance information about the facts in Wikidata. Since the relevant standards are not defined yet, this should be done by closely monitoring the W3C Provenance WG.
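
A hypothetical rendering of what O2.5 and O2.6 could look like as data (the field names are our own invention, not Wikidata's schema):

```python
from dataclasses import dataclass

@dataclass
class Fact:
    """One Wikidata-style statement, protectable and provenance-bearing on its own."""
    subject: str
    predicate: str
    value: str
    source: str               # O2.6: provenance to be exported with the fact
    protected: bool = False   # O2.5: protect this single fact, not the whole entity

facts = [
    Fact("Douglas Adams", "date_of_birth", "1952-03-11",
         source="Encyclopaedia Britannica", protected=True),
]

def export_provenance(fact_list):
    # O2.6: export trust and provenance information about each fact.
    return [{"claim": f"{f.subject} {f.predicate} {f.value}", "source": f.source}
            for f in fact_list]

print(export_provenance(facts))
```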

We suspect that as the Wikidata Project begins to provide "trust and provenance" in its form of web communications, they will not just be granularizing single facts but also immutabilizing the data elements to which those facts are linked so that even the content providers of those data elements cannot change them. This is critical for trust and provenance in whole chain communications between supply chain participants who have never directly interacted.

What are the other signs of the "tipping point"?

Another sign is the shift to forecasting demand certainty directly from a consumer interest graph. Walmart purchased Kosmix in 2011 to push into social commerce and to integrate products with social identity. This is an important new way to give shoppers information, and to get information from them. As analysts at the research firm Booz and Company said in a 2010 report:

“Social media, or places where people congregate to share information and mutual understanding, are replacing broadcast media as the primary way many people learn about products and services.”

"Doc" Searls, co-author of The Cluetrain Manifesto, and a former Fellow of the Berkman Center for Internet & Society at Harvard University, calls this a shift to the Intention Economy, Where Consumers Take Charge. Here is an excerpt from his May, 2012 publication:

Today, Walmart and Tesco and other global grocers have to wait for the checkout register to record a sale and pass the product sale information through a network of EDI processing to reforecast demand. Imagine the improvements when Walmart can see supply chain intent before the sale. Tesco, unlike Walmart, is called tired by the FT.

Indeed, Keith Teare on TechCrunch posits that Facebook's purchase of Instagram (and Google's falling earnings) signals the end of the Web 2.0 era. In the Web 2.0 era we consumed services in a web browser monetized by display ads. Now we are moving to a mobile app-centric world without desktop display ads. This is fertile ground for a shift into sharing at the identity and granular detail level via trust and provenance.

Does the Instagram purchase signal that Facebook will become a "trusted site" for granular information saved and shared in immutable objects? Facebook has to aggregate more and more data to build better services and make its post-IPO numbers. Will Facebook services come to provide W3C-type trust and provenance? We will see. But it is interesting to imagine that the Wikidata Project will be a "tipping point" for Facebook and other Web 2.0 providers toward granular trust and provenance in the Cloud.

 

What do you think? Share your conclusions and opinions by joining us at @WholeChainCom on LinkedIn at http://tinyurl.com/WholeChainCom.