
Image from Beth Zimmerman – Pain

Are niche social media networks the future?  This was a question in a recent #SWChat that I attended.  Niche networks, it was explained, meant either private or bespoke networks using Twitter- or Yammer-like platforms, although niche could be applied to any functional clone of the current social platforms.  While the chat concluded that this is not the face of the future, most participants expected niche alternatives to be part of it.

The reasons for this were twofold.  First, the general preference across all industries is to keep corporate communications private, other than PR and marketing.  Most companies today are gradually enabling social communications within their firewalls and seeing the benefits.  However, they are reluctant to extend that capability outside the firewall unless a Virtual Private Network (VPN) has been established to connect the external parties.  VPNs have overheads and rapidly become difficult to scale when the number of parties being serviced reaches into the tens of thousands.

Mass connectivity means public connectivity and so limiting exposure can only be achieved by either no connectivity or by using smaller community platforms or niche solutions.

The second reason has more to do with application access to large social platforms such as Facebook, LinkedIn, Google+ and especially Twitter.  In August Twitter announced significant changes to its Application Programming Interface (API v1.1), through which applications like Hootsuite access the Twitter stream.  In the view of many developers the changes were restrictive to the point where they considered alternatives, including one that originally offered Twitter-like capabilities for a flat annual fee of $50.

Both arguments drive fragmentation, one for reasons of security and the other to avoid control and restriction by the third-party platform.  Fragmentation will meet some of these perceived expectations, but many of the offshoots are likely to encounter similar challenges of scale and security, possibly even imposing similar or harsher constraints on usage.  Any communication with a member of the public can find its way onto any one of the social platforms.  That is the magic of digitization; scanning, OCR, cut and paste allow anything said, signed or written to be copied.  And any social platform, niche or otherwise, that offers an API will impose rules and constraints.

The biggest detriment, however, is not that niche alternatives cannot fully satisfy the needs of either group.  It is that fragmentation separates and dilutes the social stream.  Additional fragmentation, possibly caused by further experimentation with security and flexibility options among others, separates and dilutes it further.  Instead of access to large and global communities, niche solutions will restrict social participation to those communities in which we are most comfortable.  The value of the social network is diversity, immediacy and the pulse on our collective thoughts and actions.  Niches can only provide a window onto the communities they serve, and these become increasingly homogenized as membership and contribution are limited to a smaller set of like-minded or similarly cultured participants.

There are alternative approaches that may reach a higher level of satisfaction for the disaffected parties.

On the enterprise side: a more comprehensive and informative set of policies around information and communication.  An education program that helps internal and external participants understand the appropriate tone, content and behavior; not just the do’s and don’ts but the rationale for why certain information is private and should remain so, or why good standards of behavior improve the quality and value of interactions.  Establish guidelines for how to conduct research, collaboration and networking.  Technology may be able to check any dialogue against policy, which is a boon for regulated industries, but for others it is far better to have employees who are aware of and well-practiced in good social interaction.
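As a minimal sketch of the policy-checking idea above: the function and term list below are hypothetical illustrations, not any real product's API; a real deployment would draw its restricted terms from the enterprise policies just described.

```python
import re

# Hypothetical policy: terms that must never appear in outbound public posts.
# In practice these would come from the enterprise's information policy.
RESTRICTED_TERMS = ["project-alpha", "client-list", "unreleased"]

def check_message(message: str) -> list:
    """Return the restricted terms found in an outbound message."""
    found = []
    lowered = message.lower()
    for term in RESTRICTED_TERMS:
        # Whole-word match, case-insensitive via the lowered copy.
        if re.search(r"\b" + re.escape(term) + r"\b", lowered):
            found.append(term)
    return found
```

A gateway could block or flag a post when `check_message` returns a non-empty list, while letting ordinary dialogue through untouched.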

Eventually enterprises might learn that applying control and security to every asset is not scalable. As digital information increases exponentially it is more effective to identify core private information and ensure security for that domain. For everything else publish in the cloud according to the comprehensive policies mentioned earlier.

On the unconstrained platform side, and Twitter in particular: consider a proactive dialogue with your peers and Twitter representatives.  The August announcements could have been phrased differently; they certainly did not evoke a sense of partnership between the platform and the development community.  However, there is little in the new requirements that isn’t reasonable, other than the style in which they were delivered.  Polishing the guidelines and making them requirements ensures quality and consistency.  Authentication is a valid requirement to prevent easy abuse.  Endpoint rate limits and user counts are reasonable statistics around which Twitter and application development businesses could conduct a dialogue, even though the communication did not phrase it that way, providing instead hard limits with a hint of future discussion but no promise of expansion.
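Twitter's API v1.1 reports each endpoint's remaining budget in `x-rate-limit-*` response headers.  As a hedged sketch of how an application might respect those limits (the helper name and fallback defaults are my own assumptions, not part of Twitter's documentation):

```python
def seconds_to_wait(headers: dict, now: float) -> float:
    """Given rate-limit response headers in the style of Twitter API v1.1,
    return how long a client should pause before its next call.

    x-rate-limit-remaining: calls left in the current window.
    x-rate-limit-reset: epoch time (seconds) when the window resets.
    """
    remaining = int(headers.get("x-rate-limit-remaining", 1))
    reset = float(headers.get("x-rate-limit-reset", now))
    if remaining > 0:
        return 0.0  # budget left in this window, no need to pause
    # Exhausted: sleep until the window resets (never a negative wait).
    return max(0.0, reset - now)
```

A client would call this after each response and `time.sleep()` for the returned duration, turning a hard limit into predictable pacing rather than failed requests.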

Support those requirements you agree with, and for those you have concerns about find a way to modify the requirements to something more acceptable to both parties.  This is public innovation and one of the main charms and promises of Twitter.  Find others who agree and can further modify the requirements.  With community support and a viable approach you could engage Dick Costolo, Twitter’s CEO, to encourage progress and improvement. We could even call it the API spring.

I want Twitter to continue providing the simplest and best social media dialogue platform.  It is not in my interest for niche platforms to dilute and detract from the stream that Twitter offers.  Do what you can to educate, promote and support what is good about open communications; help build a set of policies and standards that improve communications, and the API requirements for the platform that hosts them.  If you don’t, Twitter will be well and truly forked.

Image: nuttakit /

How does one define Big Data, and is “big” the best adjective to describe it?  Many voices are trying to answer this topical question.  Gartner and Forrester both agree that a better word would be “extreme”.  Between them, the two major consulting firms have identified four characteristics that “extreme” can qualify.  They agree on three: volume, velocity and variety.  On the fourth they diverge: Forrester postulates variability while Gartner prefers complexity.  These are reasonable contributions and may form the foundation for the definition of big data that the Open Methodology Group is seeking to create within their open architecture Mike 2.0.

However the definition still falls short of the mark, as any combination of these characteristics can be found in many of today’s large data warehouses and parallel databases operating in outsourced or in-house data centers.  No matter how extreme the data, Moore’s Law* and technology will asymptotically accommodate and govern it.  I could suggest that the missing attribute is volatility, or the rate of change, but that too can be applied to currently serviced capabilities.  Another important attribute, all too often missed by analysts, is that Big Data is world data: data in many formats and many languages, contributed by almost every nationality and culture, together with the noise generated by the systems and devices they employ.

Yet the characteristic that seems to address this definition shortfall best is openness, where openness means accessible (addressable or through API), shareable and unrestricted.  This may be controversial as it raises some key issues around privacy, property  and rights, but these problems for big data still need to be resolved independent of any definition.  Why openness?  Here are six observations:

  1. Any data that is not open, i.e. that is private, covert or obscured, is by default protected and confined to the private architecture and data model(s) of that closed system.  While it shares many of the attributes of “big data”, and possibly the same data sources, at best it can represent only a subset of big data as a whole.
  2. Big data does not and cannot have a single owner, supplier or agent (heed well ye walled gardens); it is the sum of many parts, including among others social media streams, communication channels and complex signal networks.
  3. There will never be a single Big Data Analytic Application/Engine, but there will be a multitude of them, each working on different or slightly different subsets of the whole.
  4. Big Data analysis will demand multi-pass processing including some form of abstract notation, private systems will develop their own notation but public notation standards will evolve, and open notation standards will improve the speed and consistency of analysis.
  5. Big Data volumes are not just expanding, they are accelerating, especially as visual/graphic data communications become established (currently trending).  Cloning and copying of Big Data will expand global storage requirements exponentially.  Enterprises will recognize the impractical economics of this model and support industry standards that provide a robust and accessible information environment.
  6. As enterprises cross into crowd-sourcing and collaboration in the public domains it will be increasingly difficult and expensive to maintain private information and integrate or cross reference with public Big Data.  The need to go open to survive will be accompanied by the recognition that contributing private data and potentially intellectual property is more economic and supportive of rapid open innovation.
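Point 4’s multi-pass idea can be sketched in a few lines.  In this illustration the record formats, field names and sample data are invented for the example: a first pass normalizes heterogeneous sources into one abstract notation, and a second pass analyzes only that notation, never the original formats.

```python
from collections import Counter

# Hypothetical inputs: similar events expressed in the formats of two
# different platforms -- the "variety" the definitions above describe.
tweets = [{"user": "ana", "text": "big data is open"}]
log_lines = ["2012-10-01 bob posted: open standards win"]

# Pass 1: normalize every source into one abstract notation (actor, tokens).
def normalize(tweets, log_lines):
    records = []
    for t in tweets:
        records.append((t["user"], t["text"].lower().split()))
    for line in log_lines:
        _, actor, _, *words = line.split()
        records.append((actor, [w.lower() for w in words]))
    return records

# Pass 2: aggregate over the common notation, oblivious to original formats.
def term_counts(records):
    counts = Counter()
    for _, tokens in records:
        counts.update(tokens)
    return counts
```

If the notation produced by pass 1 were an open standard rather than private to each analytic engine, any number of second-pass analyses could share the same normalized stream, which is the speed and consistency gain point 4 argues for.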

The conclusion remains that one of the intrinsic attributes of Big Data is that it is and must be maintained as “open”.

Related Links

  1. Gartner and Forrester “Nearly” Agree on Extreme / Big Data
  2. Single-atom transistor is ‘end of Moore’s Law’ and ‘beginning of quantum computing’.
