
About this Blog

As enterprise supply chains and consumer demand chains have become globalized, they continue to share information inefficiently, "one-up/one-down". Profound "bullwhip effects" in these chains leave managers scrambling with inventory shortages and consumers struggling to understand product recalls, especially food safety recalls. Add to this the increasing use of personal mobile devices by managers and consumers seeking real-time information about products, materials and ingredient sources. The popularity of mobile devices with consumers is inexorably tugging enterprise IT departments toward apps and services. But both consumer and enterprise data are proprietary assets that must be selectively shared to be efficiently shared.

About Steve Holcombe

Unless otherwise noted, all content on this company blog site is authored by Steve Holcombe as President & CEO of Pardalis, Inc. More profile information is available on Steve Holcombe's LinkedIn profile.


Entries in Identity (11)

Wednesday, Jul 11, 2012

The Tipping Point Has Arrived: Market Incentives for Selective Sharing in Web Communications

By Steve Holcombe (@steve_holcombe) and Clive Boulton (@iC)

A Glimmer of Market Validation for Selective Sharing

In late 2005 Pardalis deployed a multi-tenant, enterprise-class SaaS to a Texas livestock market. The web-connected service provided for the selective sharing of data assets in the U.S. beef livestock supply chain. Promising revenues were generated against a backdrop of industry incentives being offered for sourced livestock. Those incentives were themselves driven by the specter of mandatory livestock identification promised by the USDA in the wake of the 2003 "mad cow" case.

At the livestock market thousands of calves were processed over several sessions. Small livestock producers brought their calves into the auction for weekly sales, where the calves were RFID tagged. Producers were charged an affordable per-calf fee that included the cost of an RFID tag. The tag identifiers were automatically captured, a seller code was entered, and affidavit information was entered as to the country of origin (USA) of each calf. Buyers paid premium prices for the tagged calves over and above untagged calves, and the buyers made money over and above the affordable per-calf fee. After each sale, and at the speed of commerce, all seller, buyer and sales information was uploaded into an information tenancy in the SaaS controlled by the livestock market. For the first time ever in the industry, the livestock auction selectively authorized access to this information to the buyers via their own individual tenancies in the SaaS.

None of those calves would have been processed at all without directly addressing the fear of information sharing held by both the calf sellers and the livestock market. The calf sellers liked that their identities were selectively withheld from the calf buyers. And they liked that a commercial entity they trusted – the livestock market – could stand as a kind of trustee between them and governmental regulators in case an auctioned calf later turned out to be the next 'mad cow'. In turn, the livestock market liked the selectiveness in information sharing because it did not have to share its confidential client list in an "all or nothing" manner with potential competitors on down the supply chain. At that moment in time, the immediate future of selective sharing with the SaaS looked very bright. The selective sharing design deployed by Pardalis in its SaaS fixed data elements at a single location, with authorizations controlled by the tenants. Unfortunately, the model could not be continued and scaled at that time to other livestock markets. In 2006 the USDA bowed to political realities and terminated its efforts to introduce national mandatory livestock identification.
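For readers who think in code, here is a minimal sketch of that design. All names and structures are hypothetical, not Pardalis' actual implementation: each record is fixed (immutable) at a single location under a content-derived identifier, and the owning tenant selectively grants read access to other tenants.

```python
import hashlib
import json

class SelectiveSharingRegistry:
    """Hypothetical sketch: data elements fixed at a single location,
    with read authorizations controlled by the owning tenant."""

    def __init__(self):
        self._records = {}  # record_id -> {"owner": ..., "data": ...}
        self._grants = {}   # record_id -> set of authorized tenant ids

    def register(self, owner: str, record: dict) -> str:
        # The id is a hash of the content, so the record cannot be
        # silently altered after registration.
        record_id = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self._records[record_id] = {"owner": owner, "data": record}
        self._grants[record_id] = {owner}
        return record_id

    def grant(self, owner: str, record_id: str, tenant: str) -> None:
        # Only the owning tenant may authorize another tenant.
        if self._records[record_id]["owner"] != owner:
            raise PermissionError("only the owner may grant access")
        self._grants[record_id].add(tenant)

    def read(self, tenant: str, record_id: str) -> dict:
        if tenant not in self._grants.get(record_id, set()):
            raise PermissionError("tenant not authorized for this record")
        return self._records[record_id]["data"]

# The auction registers a sale record and authorizes only the buyer;
# the seller's identity stays behind an opaque seller code.
registry = SelectiveSharingRegistry()
sale_id = registry.register("tx_livestock_market", {
    "rfid": "840003123456789",
    "origin": "USA",
    "seller_code": "S-0042",
    "sale_price_usd": 612.50,
})
registry.grant("tx_livestock_market", sale_id, "buyer_17")
print(registry.read("buyer_17", sale_id)["origin"])  # -> USA
```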

And so, too, went the regulatory-driven industry incentives. But … hold that thought.

Talking in Circles: Selective Sharing in Google+

Google+ is now 1 year old. In conjunction with Google, researchers Sanjay Kairam, Michael J. Brzozowski, David Huffaker, and Ed H. Chi have published Talking in Circles: Selective Sharing in Google+, the first empirical study of behavior in a network designed to facilitate selective sharing:

"Online social networks have become indispensable tools for information sharing, but existing ‘all-or-nothing’ models for sharing have made it difficult for users to target information to specific parts of their networks. In this paper, we study Google+, which enables users to selectively share content with specific ‘Circles’ of people. Through a combination of log analysis with surveys and interviews, we investigate how active users organize and select audiences for shared content. We find that these users frequently engaged in selective sharing, creating circles to manage content across particular life facets, ties of varying strength, and interest-based groups. Motivations to share spanned personal and informational reasons, and users frequently weighed ‘limiting’ factors (e.g. privacy, relevance, and social norms) against the desire to reach a large audience. Our work identifies implications for the design of selective sharing mechanisms in social networks."

While selective sharing may be available in some form on other networks (e.g., 'Lists' on Facebook), Google is sending signals that making selective sharing controls central to the sharing model offers a great opportunity to help users manage their self-presentations to multiple audiences in the multi-tenancies we call online social networks. Or, put more simply, selective sharing multiplies opportunities for online engagement.

For the purposes of this blog post, we adopt Google's definition of "selective sharing" to mean providing information producers with controls for overcoming both over-sharing and the fear of sharing. Furthermore, we agree with Google that the design of such selective sharing controls must allow users to balance sender and receiver needs, and to adapt these controls to different types of content. So defined, we believe that almost seven years after the Texas livestock market project, a tipping point has been reached that militates in favor of selective sharing from within supply chains and on to consumers. Much has happened over the last seven years to bring us to this point (e.g., the rise of social media, CRM in the Cloud, the explosion of mobile technologies, etc.). But the tipping point we are referencing "follows the money", as they say. We believe the tipping point toward selective sharing is to be found in the incentives provided by affiliate networks like the Google Affiliate Network.
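To make that definition concrete, here is a small, hypothetical sketch (not Google's actual implementation) of circle-style selective sharing: content is targeted at named audiences rather than broadcast all-or-nothing, and the audience can be adapted per type of content.

```python
from dataclasses import dataclass, field

@dataclass
class Sharer:
    """Hypothetical circle-style selective sharing: each post is
    targeted at named circles instead of shared all-or-nothing."""
    circles: dict = field(default_factory=dict)  # circle name -> set of user ids
    posts: list = field(default_factory=list)

    def share(self, content: str, content_type: str, to_circles: list) -> None:
        # The sender balances reach against privacy by picking circles
        # appropriate to this type of content.
        audience = set().union(*(self.circles[c] for c in to_circles))
        self.posts.append({"content": content, "type": content_type,
                           "audience": audience})

    def visible_to(self, user: str) -> list:
        return [p["content"] for p in self.posts if user in p["audience"]]

s = Sharer(circles={"family": {"ann"}, "coworkers": {"bob", "cai"}})
s.share("vacation photos", "personal", to_circles=["family"])
s.share("conference slides", "professional", to_circles=["coworkers"])
print(s.visible_to("bob"))  # ['conference slides'] -- no over-sharing
```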

The Google Affiliate Network

The Google Affiliate Network provides a means for affiliates to monetize websites. Here's a recent video presentation by Google, Automating the Use of Google Affiliate Links to Monetize Your Web Site:


Presented by Ali Pasha & Shaun Cox | Published 2 July 2012 | 47m 11s

The Google Affiliate Network provides incentives for affiliates to monetize their websites based upon actual sales conversions instead of indirectly upon the number of ad clicks. These are websites (e.g., http://www.savings.com/) where ads are the raison d'être of the site. High-value consumers are increasingly scouring promotional, comparison, and customer loyalty sites like savings.com for deals and, more generally, for information about products. Compare that with websites where ads are peripheral to other content (e.g., http://www.nytimes.com/) and where ad clicks are measured using Web 2.0 identity and privacy sharing models.
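The economics are simple to illustrate. The numbers below are entirely hypothetical, but they show why payouts tied to conversions can dwarf payouts tied to clicks for a site whose visitors arrive intending to buy:

```python
# Entirely hypothetical numbers for 10,000 visitors to a deals site.
visitors = 10_000

# Click-based model: 2% of visitors click an ad at $0.30 per click.
cpc_revenue = visitors * 0.02 * 0.30                 # $60.00

# Conversion-based model: 0.4% of visitors buy an $85 product,
# and the affiliate earns an 8% commission on each sale.
affiliate_revenue = visitors * 0.004 * 85.0 * 0.08   # $272.00

print(f"clicks: ${cpc_revenue:.2f}  conversions: ${affiliate_revenue:.2f}")
```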

In our opinion the incentives of affiliate networks have huge potential for matching up with an unmet need in the Cloud for all participants - large and small - of enterprise supply chains to selectively monetize their data assets. For example, data assets pertaining to product traceability, source, sustainability, identity, authenticity, process verification and even compliance with human rights laws, among others, are there to be monetized.

Want to avoid buying blood diamonds? Go to a website that promotes human rights and click on a diamond product link that has been approved by that site. Want to purchase only "Made in USA" products? There's not a chamber of commerce in the U.S. that won't want to link to the websites of members who are also affiliates of an incentive network. Etc.

Unfortunately, these data assets are commonly not shared because of the complete lack of tools for selective sharing, and the fear of sharing (or understandable apathy) engendered under “all or nothing” sharing models. As published back in 1993 by the MIT Sloan School in Why Not One Big Database? Ownership Principles for Database Design: "When it is impossible to provide an explicit contract that rewards those who create and maintain data, ‘ownership’ will be the best way to provide incentives." Data ownership matters. And selective sharing – appropriately designed for enterprises – will match data ownership up with available incentives.

Remember that thought we asked you to hold?

In our opinion the Google Affiliate Network is already providing incentives that are a sustainable, market-driven substitute for what turned out to be unsustainable, USDA-driven incentives. We presume that Google is well aware of potential synergies between Google+ and the Google Affiliate Network. We also presume that Google is well aware that "[w]hile business-critical information is often already gathered in integrated information systems, such as ERP, CRM and SCM systems, the integration of these systems itself (as well as the integration with the abundance of other information sources) is still a major challenge."

We know this is a "big idea" but in our opinion the dynamic blending of Google+ and the Google Affiliate Network could over time bring within reach a holy grail in web communications – the cracking of the data silos of enterprise-class supply chains for increased sharing with consumers of what to date has been "off limits" proprietary product information.

A glimpse of the future may be found, for example, in the adoption of Google+ by Cadbury UK, but the selective sharing design of Google+ is currently far from what it needs to be to attract broad enterprise usage. Sharing in Circles brings to mind Eve Maler's blog post, Venn and the Art of Data Sharing. That's really cool for personal sharing (or for empowering consumers, as is the intent of VRM), but for enterprises Google+ will need to evolve its selective sharing functionality. Sure, the data silos of commercial supply chains are holding personal identities close to the chest (e.g., CRM customer lists), but they're also walling off product identities with every bit as much zeal, if not more. That creates a different dynamic that, again, typical Web 2.0 "all or nothing" sharing (designed, by the way, around personal identities) does not address.

It should be specifically noted, however, that Eve Maler and the User-Managed Access (UMA) group at the Kantara Initiative are providing selective sharing web protocols that place "the emphasis on user visibility into and control over access by others". And Eve, in her capacity at Forrester, has more recently provided a wonderful update of her earlier blog post, this one entitled A New Venn of Access Control for the API Economy.

But in our opinion, before Google+, UMA or any other companies or groups working on selective sharing can have a reasonable chance of addressing "data ownership" in enterprises and their supply chains, they will need to take a careful look at incorporating fixed data elements at a single location with authorizations. It is in regard to this point that we seek to augment the current status of selective sharing. More about that line of thinking (and activities within the WikiData Project) appears in our earlier "tipping point" blog post, The Tipping Point Has Arrived: Trust and Provenance in Web Communications.

What do you think? Share your conclusions and opinions by joining us at @WholeChainCom on LinkedIn at http://tinyurl.com/WholeChainCom.

Thursday, Apr 26, 2012

The Tipping Point has Arrived: Trust and Provenance in Web Communications

By Steve Holcombe (@steve_holcombe) and Clive Boulton (@iC)

"The Web was originally conceived as a tool for researchers who trusted one another implicitly. We have been living with the consequences ever since." Sir Tim Berners-Lee

"One of the issues of social networking silos is that they have the data and I don't … There are no programmes that I can run on my computer which allow me to use all the data in each of the social networking systems that I use plus all the data in my calendar plus in my running map site, plus the data in my little fitness gadget and so on to really provide an excellent support to me." Sir Tim Berners-Lee.

The tipping point has arrived for trust and provenance in web communications. And it is not just because Tim Berners-Lee thinks it is a good idea. The control of immutable data in the Cloud by content providers is on the verge of moving out of research projects and into commercial platforms. The most visible, first-mover example known to us is provided by the Wikidata Project.

The rapidly emerging Wikidata Project, the next iteration of Wikipedia, will in its first phase (to be finished within the next six months) implement the deposit by content providers of data elements (e.g., someone's birth date) at a single, fixed location. Phase 2 (targeted for completion by the end of 2012) will build on those deposits to support the semantic relationships (i.e., ontologies) that Wikipedia users are seeking. Paul Allen's Institute for Artificial Intelligence and Google are two of the three primary benefactors of the Wikidata Project. And it is no surprise that the base of operations for this ground-breaking work is in Germany: the European Commission proposed in January 2012 a comprehensive reform of data protection rules to strengthen online privacy rights and boost Europe's digital economy.

This blog site exists to discuss whole chain communications between enterprises and consumers. Along that line the Wikipedia folks aren't really thinking about the Wikidata Project in terms of supply chains. But that is what they are backing into. Daniel Matuschek (@matuschd) would seem to agree in his blog post, Wikidata - some expectations. Here's an excerpt:

"Some ideas for open databases that could make our live easier or better [include] Product data: Almost every product has an EAN code. There are some companies building and selling databases for specific products (e.g. food, DVDs), sometimes generated with community support .... The Wikidata project is currently not addressing [this kind of database], but if a platform is available, there’s a good chance that users start creating databases like this."

And granular permissions (in the hands of content providers) over individual data elements are on Wikipedia's wish list to be introduced later this year during Phase 2:

  • O2.5. Add a more fine granular approach towards protecting single facts instead of merely the whole entity.
  • O2.6. Export trust and provenance information about the facts in Wikidata. Since the relevant standards are not defined yet, this should be done by closely monitoring the W3C Provenance WG.

We suspect that as the Wikidata Project begins to provide "trust and provenance" in its form of web communications, it will not just be granularizing single facts but also immutabilizing the data elements to which those facts are linked, so that even the content providers of those data elements cannot change them. This is critical for trust and provenance in whole chain communications between supply chain participants who have never directly interacted.
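Here is a minimal sketch of what that could look like, with all structures hypothetical rather than Wikidata's actual design: each fact is stored as a content-addressed, immutable record that carries its own provenance, so any downstream party can verify it has not been altered, even by its author.

```python
import hashlib, json, time

def register_fact(store: dict, author: str, subject: str,
                  prop: str, value: str, source: str) -> str:
    """Deposit a single fact as an immutable, content-addressed record.
    The id is a hash of the content, so any later change would produce
    a different id; even the author cannot alter the original in place."""
    fact = {"author": author, "subject": subject, "property": prop,
            "value": value, "source": source, "asserted_at": time.time()}
    fact_id = hashlib.sha256(
        json.dumps(fact, sort_keys=True).encode()).hexdigest()
    store[fact_id] = fact
    return fact_id

def verify(store: dict, fact_id: str) -> bool:
    """Any downstream party holding only the id can check integrity."""
    recomputed = hashlib.sha256(
        json.dumps(store[fact_id], sort_keys=True).encode()).hexdigest()
    return recomputed == fact_id

store = {}
fid = register_fact(store, "producer_9", "lot#4411", "country_of_origin",
                    "USA", "origin affidavit")
assert verify(store, fid)  # holds for parties who never met producer_9
```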

What are the other signs of the "tipping point"?

Another sign is the shift to forecasting demand certainty directly from a consumer interest graph. Walmart purchased Kosmix in 2011 to push into social commerce and to integrate products with social identity. This is an important new way to give shoppers information, and to get information from them. As analysts at the research firm Booz and Company put it in a 2010 report:

“Social media, or places where people congregate to share information and mutual understanding, are replacing broadcast media as the primary way many people learn about products and services.”

"Doc" Searls, co-author of The Cluetrain Manifesto, and a former Fellow of the Berkman Center for Internet & Society at Harvard University, calls this a shift to the Intention Economy, Where Consumers Take Charge. Here is an excerpt from his May, 2012 publication:

Today, Walmart and Tesco and other global grocers have to wait for the checkout register to record a sale and pass the product sale information through a network of EDI processing to reforecast demand. Imagine the improvements when Walmart can see supply chain intent before the sale. Unlike Walmart, the FT calls Tesco tired.   

Indeed, Keith Teare on TechCrunch posits that Facebook's purchase of Instagram (and Google's falling earnings) signals the end of the Web 2.0 era. In the Web 2.0 era we consumed services in a web browser, monetized by display ads. Now we are moving to a mobile, app-centric world without desktop display ads. This is fertile ground for a shift into sharing at the identity and granular detail level via trust and provenance.

Does the Instagram purchase signal that Facebook will become a "trusted site" for granular information saved and shared in immutable objects? Facebook has to aggregate more and more data to build better services and make its post-IPO numbers. Will Facebook services come to provide W3C-type trust and provenance? We will see. But it is interesting to imagine that the Wikidata Project will be a "tipping point" for Facebook and other Web 2.0 providers toward granular trust and provenance in the Cloud.

 

What do you think? Share your conclusions and opinions by joining us at @WholeChainCom on LinkedIn at http://tinyurl.com/WholeChainCom.

Friday, Aug 5, 2011

A New Way of Looking at Information Sharing in Supply & Demand Chains

The Internet is achieved via layered protocols. Transmitted data, flowing through these layers, are enriched with metadata necessary for the correct interpretation of the data presented to users of the Web. Tim Berners-Lee, inventor of the Web, says: "The Web was originally conceived as a tool for researchers who trusted one another implicitly …. We have been living with the consequences ever since …." "[We need] to provide Web users with better ways of determining whether material on a site can be trusted …."

Our lives have nonetheless become better as a result of Web service providers like Google and Facebook. Consumers are now conditioned to believe that they can – or should be able to – search and find information about anything, anytime. But the service providers dictate their quality of service in a one-way conversation that exploits the advantages of the Web as it exists. What counts as trustworthy content is limited to what the service providers choose to surface. The result is that consumers cannot find real-time, trustworthy information about much of anything.

Despite all the work in academic research, there is still no industry solution that fully supports the sharing of proprietary supply chain product information between "data silos". Industry remains in the throes of one-up/one-down information sharing when what is needed is real-time "whole chain" interoperability. The Web needs to provide two-way, real-time interoperability in the content provided by information producers. Immutable objects have traditionally been used to make data communications between networked machines more efficient, but not to connect information producers. Now researchers are innovatively coming up with new ways of using immutable objects in interoperable, two-way communications between information content providers.

A New Way of Looking at Information Sharing in Supply & Demand Chains

Pardalis’ protocols for immutable informational objects make possible a value chain of two-way, interoperable sharing that makes information more available, trustworthy, and traceable. This, in turn, incentivizes increases in the quality and availability of new information leading to new business models.
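As a rough, hypothetical illustration of that value chain (not Pardalis' actual protocol): because registered informational objects are immutable and content-addressed, each participant can link its own object to the upstream object ids it received, and any authorized party can trace the whole chain without anyone being able to rewrite history.

```python
import hashlib, json

registry = {}  # object_id -> immutable informational object

def author_object(producer: str, data: dict, upstream_ids=()) -> str:
    """Register an immutable object that links to upstream objects by id,
    forming a traceable, tamper-evident whole chain."""
    obj = {"producer": producer, "data": data, "upstream": list(upstream_ids)}
    obj_id = hashlib.sha256(
        json.dumps(obj, sort_keys=True).encode()).hexdigest()
    registry[obj_id] = obj
    return obj_id

def trace(obj_id: str, depth: int = 0) -> None:
    """Walk the whole chain from any object back to its origins."""
    obj = registry[obj_id]
    print("  " * depth + f"{obj['producer']}: {obj['data']}")
    for up in obj["upstream"]:
        trace(up, depth + 1)

lot = author_object("rancher", {"lot": "4411", "origin": "USA"})
box = author_object("processor", {"box": "B-88", "cut": "ribeye"}, [lot])
sku = author_object("retailer", {"gtin": "00012345678905"}, [box])
trace(sku)  # retailer -> processor -> rancher, end to end
```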

Tuesday, Mar 1, 2011

Real-time, supply chain test marketing of new product lines 

Assume that a retailer, a class of beef product pre-retailers (i.e., wholesalers, processors and vertically integrated operators), and a class of consumers are all multi-tenant members of a centralized personal data store for sharing supply chain information.

I gave an example of a multi-tenant Food Recall Data Bank in an earlier blog post entitled Consortium seeks to holistically address food recalls. At the time I wrote that post I vacillated between calling it what I did, or calling it a VRM Data Bank. I refrained from calling it the latter because, while the technology application is potentially very good for consumers (i.e., food recalls tied to point-of-sale purchases), it still felt too rooted in the world of CRM. For more about the VRM versus CRM debate see The Bullwhip Effect.

Below is a technology application whereby supply chain tenants may register their CPA informational objects with permissions and other instructions governing how those objects may be minimally accessed, used and further shared by other supply chain participants. What one then has may more appropriately be called a VRM Data Tenancy System (VRM DTS).
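A minimal, hypothetical sketch of such a tenancy system (names invented for illustration): each registered object carries a policy stating which tenant classes may access it and whether recipients may re-share it.

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    """Hypothetical usage policy attached to an object at registration."""
    readable_by: set           # tenant classes, e.g. {"consumer", "pre-retailer"}
    resharable: bool = False   # may recipients pass the object further along?

@dataclass
class DataTenancySystem:
    objects: dict = field(default_factory=dict)

    def register(self, owner: str, obj_id: str, data: dict,
                 policy: Policy) -> None:
        self.objects[obj_id] = {"owner": owner, "data": data, "policy": policy}

    def access(self, tenant_class: str, obj_id: str) -> dict:
        rec = self.objects[obj_id]
        if tenant_class not in rec["policy"].readable_by:
            raise PermissionError(f"{tenant_class} may not read {obj_id}")
        return rec["data"]

# The retailer registers a "test market" object readable by both classes.
dts = DataTenancySystem()
dts.register("retailer", "test-market-1",
             {"product": "ethically raised beef", "poll": "interest?"},
             Policy(readable_by={"consumer", "pre-retailer"}))
print(dts.access("consumer", "test-market-1")["product"])
```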

So what can one do with this architecture? How can it get started in the marketplace of solutions? A reasonable beginning point is real-time, supply chain test marketing of new food product lines. And by supply chain test marketing, I mean something clearly more than just consumer test marketing. What I am describing below is a multi-directional feedback loop for:

  1. test marketing a new consumer product line for the purpose of driving retail sales, and
  2. concurrently generating procurement and wholesale interest and support from pre-retailers.

Assume that a retailer has been receiving word of mouth consumer interest in a particular beef product class (e.g., "ethically raised" beef products). Assume that pre-retailers have heretofore not been all that interested in raising, processing or purchasing "ethically raised" meat products for wholesale.

An initial "test market" object is authored and registered by the retailer for polling consumer interest via asynchronous authoring by individual consumers of their store outlet preferences, likely beef product quantity purchases of the new product line per week, etc. This object is revealed to a consumer class via their tenancies in the VRM DTS. The object is concurrently revealed to a class of pre-retailers via their tenancies, too. Each consumer is anonymous to the other consumers, and anonymous to the pre-retailers. Each pre-retailer is anonymous to the other pre-retailers, and anonymous to the consumers. But each consumer, as is each pre-retailer, is nonetheless privy to the consumers' real-time poll and the pre-retailers' real-time poll. The retailer watches all, being privy to the actual identities of both consumers and pre-retailers, while at the same time the retailer’s customer and pre-retailer client lists remain anonymous.

With this kind of real-time sharing of information, one can begin to imagine a competitive atmosphere arising among the pre-retailers. Furthermore, there is no reason the retailer's object could not be further authored by the retailer to solicit real-time offers from the pre-retailers to procure X quantities of the beef products for delivery to identified outlets of the retailer by dates certain, in the same specific beef product class, etc. And there's no reason the "test market" object could not be further used to finalize a procurement contract with one or more pre-retailers ...

... which at the moment of execution shares real-time, anonymized information over to consumers as to dates of delivery of X quantity of beef products at identified retail outlets.

The "test market" object could be further designed for the consumer class to asynchronously provide real-time feed-back to the retailer regarding their experiences with the purchased product, and to perhaps do so even back to the pre-retailers based upon GLNs and GTINs. Depending on the retailer's initial design of the "test market" object, this consumer feedback to pre-retailers may be anonymous or may specifically identify a branded product. And, because food safety regulators are seeking "whole chain" traceability solutions, the government can be well apprised with minimal but real-time disclosures.

The dynamic business model for employing a VRM DTS includes greater supply chain transparency (increased, ironically, with consumer and pre-retailer anonymity), food discount incentives, real-time visualizations, new data available for data mining, and new product outlets for pre-retailers who have not previously provided products to the retailer. Perhaps most significantly there is the identification by the retailer of best of breed pre-retailers and loyal, committed consumers via an “auction house” atmosphere ...

 

... created from the sharing of real-time, sometimes anonymous information, between and among the pre-retailers and consumers.

Thursday, May 20, 2010

Internet Identity Workshop 10 - Favorite Tweets

I unfortunately wasn't able to attend IIW 10 but did some retweeting. Here are the retweets in chronological order:

  • RT @marcedavis learned that #infocards support LOA2 and LOA3 ("Level Of Assurance") and #OpenID does not. #iiw @ http://twb.cc/s/712 2:09 PM May 17th via tweebus
  • RT @nobantu IIW 10 - is 3 days of Open Space in the Techie Community - specifically On Line Identity - been happening for 5 yrs now #openspace #iiw 2:19 PM May 17th via web
  • RT @xmlgrrl One refreshing thing about #IIW vs other conferences: the f'in salty language. 2:57 PM May 17th via Twitter for iPhone
  • RT @mjsoaps I don't know what's more important, Identity or Reputation #IIW 7:03 PM May 17th via web
  • RT @idworkshop Day 2 of #iiw is going to be AMAZING! Here is the twitter list of attendees http://twitter.com/idworkshop/iiw10 9:37 AM May 18th via web
  • RT @xmlgrrl Once again finding myself recommending Chris Palmer's EXCELLENT talk on fixing HTTPS. Trust On First Use (TOFU)! http://is.gd/ceSc8 #iiw 12:26 PM May 18th via Twitter for iPhone
  • RT @IdentityWoman INTRO to Internet Identity Workshop 10 now up online. http://slidesha.re/cGQ3AR #iiw please retweat 5:20 PM May 18th via web
  • RT @gffletch OH "Every distributed system begets a centralized system created to make the distributed system useful" (or something like that) #iiw 7:13 PM May 18th via Twitter for iPhone
  • RT @paulmadsen Put 20 non-techies in a room and its only matter of time before somebody says 'its not a technical problem'. You never hear reverse #iiw [May 19th] via Twee
  • RT @xmlgrrl Rights and obligations of membership are nontechnical but tech may enable them (e.g. can you "unsay" something in a thread?) #iiw [May 19th] via Twitter for iPhone
  • RT @xmlgrrl =JeffH suggests looking at "operational transformation" work to solve the tech problems here. http://is.gd/cgoN5 #iiw [May 10th] via Twitter for iPhone
  • RT @rolfvb Thankyou Kaliya, thankyou #iiw - just fantastic! These seeds will lead to wonderful fruit. /cc @IdentityWoman #identity #data #privacy [May 19th] via Twitter for iPhone

I highlighted, above, the introductory presentation by Kaliya Hamlin to the workshop. Well worth a look.

For my take on IIW9 held last November, take a look at Data Identity & Supply Chains in this blog.

And for even more comments and discussion about the IIW and the "identity movement", check out the Data Ownership in the Cloud networking group on LinkedIn.