

I have been following a number of Healthcare sites in and around the internet, looking for examples or case studies of how social media are used.  In particular I wanted to find examples of social collaboration or communities that are focused on helping other members of society to better understand and contribute to the public wealth and health.

I attend #CPHC (Carpool Health Community), which has already garnered a strong and eager following; it meets weekly on a Tweetchat and has recently established a Google+ community to progress its ideas into actions and achievable care.  Furthering its mission, it is about to launch its own web community site, where Communities of Practice (CoP) can focus on specific conditions, diseases, traumas and behaviors.  The biggest attraction of these CoPs is that the community comprises more than just patients; it also includes experts from the medical side as well as family members with knowledge and experience of the specific topic.

Dr Steven Eisenberg is an oncologist and one of the principal contributors to the Cancer Community of Practice.  His contributions to the Tweetchat are listed below.  Please feel free to engage with Dr Eisenberg through these embedded tweets, whether you need clarification or simply want to extend the conversation or contribute to the value of his content.

He starts by defining his community of practice (CoP).

He believes that it is this comprehensive family that holds the key to increased knowledge, understanding, improved care even to the point of opening up new areas of research and discovery. When asked how this would be effected, Dr Eisenberg provided a 12 step guide to engagement, including a prologue for the journey that needs to be taken.

Please feel free to seek clarification on any of these steps directly with Dr Eisenberg, I am sure he will be more than delighted to help extend the discussion and bring further clarity to his vision and goals.



First published in Internet Media Labs Blog – 27th October 2012

We are amassing data at an unprecedented rate.  In the course of a day the internet handles more than 1,000 petabytes of data (2011 figures), a volume projected to double in less than three years.  That's a million terabytes, or a billion gigabytes, on the public internet alone.  Granted, there is a lot of duplication, and the amount of image and video content greatly contributes to the accelerated growth. Furthermore, our growing dependency on mobility demands even greater participation and production, further magnifying digital traffic.

That is a lot of data and a very large amount of noise carrying a decreasing ratio of signal.  How do we operate in such an environment and meet our objectives for education, career, parenting, healthcare, community participation, consumerism and entertainment? How do we locate and recognize the availability and qualities of resources that will help us live our lives productively and successfully?

A complex question no doubt, but one that highlights the current capabilities and shortcomings of the network today.

The short and most common answer would be search engines.  To a degree that is a reasonable response, but given the immensity of available data it is woefully short of satisfying anything but the last two on my list of objectives (consumerism and entertainment).

The issue starts with search engines and the demands of commercialism.  Commerce sustains our civilization and provides the impetus for innovation and discovery.  But it also dominates the way we create and prepare content, and the way we search for information.  We are also largely dependent on a single search engine, which is still evolving though firmly rooted in textual analysis. Yes there are other search options but the majority of us use Google.

Search technology is beginning to branch out, as witnessed by Google's goal of producing a knowledge graph. Currently it has the ability to determine sentiment, which is the first step in semantic analysis.  Yet there is a long way to go before search can provide an accurate return on how, what and who we are searching for.

Google spends a lot of capital on developing and improving search algorithms, which are obscured to prevent gaming the system. Those algorithms perform a large number of calculations that include the analysis and synthesis of web content, structure and performance.

Providers of content and information are aware that they can improve the ranking of their published material by optimizing their web site  through Search Engine Optimization (SEO), Conversion Rate Optimization (CRO) or improving the quality and attractiveness of their content. In addition the search engine vendor(s) provide consulting services to assist content providers in achieving approved “white hat” SEO status as opposed to “black hat” SEO which is risky, unapproved, and has the potential to be banned.

Any search results in an index of entries ranked by how well they have been produced and optimized.  The more content humankind produces the more commercial entities will spend in order to ensure high ranking so that we consume their products or services, after all few consumers go beyond the first page of search results.  Hence my assertion above that consumerism and entertainment (which for sake of argument includes news and events) are the principal beneficiaries of the current solutions. And that’s great if you are catching up on news, wish to be entertained or shopping either actively or casually.  The ranking system will give you the most up to date, the most popular and the most advertised consumables.

However the ranking system doesn't scale down for the individual, the community or small businesses and enterprises, unless predetermined keywords are used in the content and the search.  A small voice cannot be heard where shouting is encouraged, even demanded.  The more we use search engines, the louder that shouting becomes.  Furthermore, the ranking system doesn't really scale economically for SEO content, as globalization will introduce more competition for the coveted top-ranked entries, demanding increased effort and optimization.

But this post is not about search engines and the optimization of content.  It's about locating resources and identifying the quality and relevancy that will help in collaboration: finding people, ideas, material, skills and availability so the other objectives on my list can be fulfilled.

We need something more than simple signposts or lists, valuable as they are.  We need a capability that will not only locate a resource, but one that will also provide us with much needed information about the resource, its properties, location, status, history and relationships to other resources. In short we need directories, repositories of resources and their attributes that are easily accessible and extensible.

Directory databases have been around for a long time and are currently in operation in most large enterprises,  most commonly behind corporate firewalls.  They meet many of the requirements outlined above, although their use has been necessarily constrained to a management and security function. In most implementations they perform that function well.  That style of directory is also appropriate beyond the firewall, especially when authentication amongst diverse communities and populations needs to be supported.

Yet we can do so much more with directories, especially if we liberate their extensibility and open them up to collaborative contributions and housekeeping.  Today we keep our own lists and collaborate on those in communities of interest. There are several listing applications on Social Media such as list.ly, Twitchimp or the late lamented Formulists.  These are great applications and no social media maven can exist without one.  But they are only lists and they only carry a small number of entries and attributes.

Open collaborative directories will be able to scale to support large numbers of entries and attributes, including attributes that are determined by the participants and their communities. In other words directories will carry the hard facts about a resource as well as attributes that are determined by those who use and collaborate with those resources.
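As a rough illustration of the idea, the sketch below models one entry in such a directory: a record that separates steward-maintained hard facts from community-contributed attributes.  It is a minimal sketch only; every name in it (DirectoryEntry, the contributors, the attribute keys) is hypothetical rather than taken from any existing directory system.

```python
from dataclasses import dataclass, field

@dataclass
class DirectoryEntry:
    """One resource in a hypothetical open collaborative directory."""
    name: str
    location: str
    # Hard facts maintained by the directory's stewards.
    facts: dict = field(default_factory=dict)
    # Free-form attributes contributed by the community, each
    # recorded alongside the contributor who supplied it.
    community_attrs: dict = field(default_factory=dict)

    def contribute(self, contributor: str, attribute: str, value):
        """Record a community-supplied attribute and its contributor."""
        self.community_attrs.setdefault(attribute, []).append((contributor, value))

# A community adds its own assessments to an entry.
entry = DirectoryEntry(name="Cancer CoP", location="example.org/cop/cancer")
entry.facts["founded"] = 2012
entry.contribute("dr_eisenberg", "expertise", "oncology")
entry.contribute("patient_group", "responsiveness", "high")
```

The point of the split is that the hard facts stay authoritative while the community attributes can grow without limit, which is exactly the extensibility the paragraph above asks of an open directory.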

This is very similar to Facebook’s like, (and imaginary don’t like), but applied to the performance or quality of resource as experienced in collaboration.  Such peer review and measurement lies at the heart of Open Source development, a meritocracy where your contributions are evaluated by peers to determine your value and position within the group.  Such information will prove invaluable to those seeking knowledge and the resources to get things done.

And why stop at people? Open Collaborative Directories can support any resource, be it a curated knowledge base, dictionary, almanac or compendium.

As long as they are open and accessible they will serve and be served by the communities that need them. Because directory searches have little need for ranking they will be the first port of call for those who want more than the latest news or consumable.

Data image via Tom Woodward in Flickr Creative Commons


New arrivals at Ellis Island. Photo courtesy of the Library of Congress

Changing social platforms is like moving to live in a new country.

How do I know?  Because I have done the latter three times and met the same hurdles to a settled existence as I now detect in moving to a new platform on social media.

The largest of those hurdles is collateral.  When I came to live in the US, for example, I had no credit rating, because there was no record stateside of my economic conduct.  I had no guarantors other than my employer because friends and family lived in Europe.  Slowly I established myself, connecting with the economy and communities until my rating facilitated the more desirable loan rates.

The second of the major hurdles is equity or net worth.  Equity comprises assets, liquid and fixed.  Liquidity or cash is necessary for every day living, the small transactions that allow us to commute, feed ourselves and be entertained. Fixed assets are a little more problematic, because they are usually hard to convert to liquid status.  Furthermore they tend to be anchored in the environment from which you have departed, and have little value in the new environment.  Owning a house in Europe has no weight when trying to buy a house in the US, and vice versa.

The same holds true when one considers investing effort in an additional or alternate social platform.  While you may have a generic social score aggregated across active platforms, your credit rating on a new or seldom-used platform is non-existent.  Collateral in this case is not about your financial credit rating; it is your trustworthiness as a social participant.  Just as in immigration, that rating has to be built gradually and cannot be transferred from the old to the new.

The analogy is consistent for equity as well.  Equity in social terms is the value of contributions.  These most commonly are the status updates, messages, tweets, replies, mentions that make up the social media conversations of each second, hour and day of our lives.  It is also the knowledge base and territorial familiarity of that platform, knowing who does or knows what, where expertise lies, or when particular events occur, or what time is best to capture the attention of your networking collaborators.

All this is platform equity.  Not surprisingly very little, if any, of that equity is transferable.  Those contacts, the followers and those followed, like the friends and relations in the old world, belong and remain on that platform.  Those contributions and the manner in which you supplied them is also tied to the platform.  Unlike property or disposable assets these cannot be liquidated into cash.

The new platform requires new equity and collateral; it cannot easily be bought, at least not without compromising trustworthiness. The only alternative is to invest a similar amount of time and effort in building equity on the new platform, thus forcing a decision on whether to build and then maintain equity and collateral on multiple platforms.  That factorial investment might be too high a price to pay, especially for those individuals whose roles do not include 100% social engagement.

There is one positive to this situation, and it is somewhat paradoxical: the fixed social equity is more versatile than the liquid.  I refer to blogs, the platform that best supports communicating complexity, rationale and clarification.  Blog posts like this one allow ideas and insights to be expanded, formatted and packaged for distribution through any social media platform.  However, they offer only the foundational piece; the interactions, connections and short communications still have to be performed.

There are several implications of the above, especially as we consider scale:

Consistency: Equity and collateral are both affected by inconsistency.  And we all know that consistency is more than desirable in social media; it is almost obligatory.  However, context can vary, and what might be considered consistent on one platform could be seen as inconsistent, even contradictory, on another.  Furthermore, maintaining dialogues and connections across multiple platforms can easily foster miscommunication, especially if the connections themselves participate on multiple platforms.  Since we cannot easily store our contributions, we cannot easily reference our interlocutors' or our own previous conversations.  The more platforms we engage with, the higher the likelihood of miscommunication and inconsistency.

Social Marketing Investment: It would be fair to assume that few social-media-active consumers will engage heavily on a large number of platforms; most will inhabit and contribute on a manageable handful (2-4).  It is also unlikely that consumers of specific brands will all inhabit the same platforms.  This is not dissimilar to the position industry faced with the proliferation of television channels in the latter part of last century.  The answer then, as now, is to promote on the most popular channels or platforms. Unlike television, however, marketing organizations would be cautioned against abandoning platforms that drop in popularity, since their collateral and equity will remain, albeit diminished over time.  The danger is of course that the least attended platform then becomes the greatest liability.  Such platforms are more prone to negative activity that could fester unaddressed.

Social Collaboration:  Perhaps the biggest challenge for industry will be in the requirements for and selection of collaborative services, especially if the components and resources have preferred social platforms of participation that are different. Ideally a common platform solves this problem, one where context integrity is assured.  Multiple platforms dilute that integrity unless all contributors and contributions are consistent across all platforms, though such purity would inevitably be  strained by diversity of geography and culture.  This suggests that established collaborative groups and activities will be more conservative and less exploratory of new platforms.  It also suggests that new collaborative groups and activities can explore new platforms, especially those that offer better functionality or efficiencies.  But these organizations also warrant caution in deciding for a new platform, for it may well exclude them from collaborating  with resources and communities on the older platforms.

I am sure there are many other points to consider, but one thing is certain: adding or moving to a new social platform is a non-trivial event, and one that demands a lot of adjustment and effort.  This post is my attempt to bridge the increasing number of platforms to which I contribute as I will distribute it on all. Hopefully it will spark further discussions on the challenges as well as progress on removing the walled garden barriers to the preferred open environment.


Image from Beth Zimmerman – Pain

Are Niche Social Media networks the future?  This was a question in a recent #SWChat that I attended.  Niche networks, it was explained, meant either private or bespoke networks using twitter or yammer-like platforms, although niche could be applied to any functional clone of current social platforms.  While the chat concluded that this is not the face of the future, most participants expected niche alternatives to be part of it.

The reasons for this were twofold.  Firstly, the general preference across all industries is to maintain corporate privacy in communications other than PR and marketing.  Most companies today are gradually enabling social communications within their firewalls and seeing the benefits.  However, they are also reluctant to extend that capability outside the firewall unless a Virtual Private Network (VPN) has been established for connecting external parties.  VPNs have overheads and rapidly become difficult to scale when the number of parties being serviced reaches into the tens of thousands.

Mass connectivity means public connectivity and so limiting exposure can only be achieved by either no connectivity or by using smaller community platforms or niche solutions.

The second reason has more to do with application access to large social platforms such as Facebook, LinkedIn, Google+ and especially Twitter.  In August Twitter announced significant changes to its Application Programming Interface (API v1.1), through which other applications like Hootsuite, Kred.ly and Sees.aw access the Twitter stream.  In the view of many the changes were restrictive to the point where they considered alternatives such as app.net, which originally offered Twitter-like capabilities for a flat annual fee of $50.

Both arguments drive fragmentation, one for reasons of security and the other to avoid control and restriction by the third-party platform. Fragmentation will meet some of these perceived expectations, but it is also likely that many of the offshoots will encounter similar challenges of scale and security, possibly even invoking similar or harsher constraints on usage.  Any communication with a member of the public can find its way onto any one of the social platforms. That is the magic of digitization: scanning, OCR, and cut and paste allow anything said, signed or written to be copied.  And any social platform, niche or otherwise, that offers an API will provide rules and constraints.

The biggest detriment, however, is not that niche alternatives can't fully satisfy the needs of either group.  Fragmentation separates and dilutes the social stream.  Additional fragmentation, possibly caused by further experimentation with security and flexibility options amongst others, separates and dilutes it further.  Instead of access to large and global communities, niche solutions will restrict social participation to those communities in which we are most comfortable.  The value of the social network is diversity, immediacy and the pulse on our collective thoughts and actions.  Niches can only provide a window onto the communities they serve, and these become increasingly homogenized as membership and contribution are limited to a smaller set of like-minded or similarly cultured participants.

There are alternative approaches that may reach a higher level of satisfaction for the disaffected parties.

On the enterprise side: a more comprehensive and informative set of policies around information and communication.  An education program that will help internal and external participants understand the appropriate tone, content and behavior; not just the do’s and don’ts but the rationale and reasons why certain information is private and should remain so, or why good standards of behavior improve the quality and value of interactions.  Establish guidelines for how to conduct research, collaboration and networking.  Technology may be able to check any dialogue against policy, which is a boon for regulated industries, but for others it is far better to have employed resources aware and well-practiced at good social interaction.

Eventually enterprises might learn that applying control and security to every asset is not scalable. As digital information increases exponentially it is more effective to identify core private information and ensure security for that domain. For everything else publish in the cloud according to the comprehensive policies mentioned earlier.

On the unconstrained platform side, and in particular with Twitter, consider a proactive dialogue with your peers and Twitter representatives.  The August announcements could have been phrased differently; they certainly did not evoke a sense of synergy between the platform and the development community.  However, there is little in the new requirements that isn't reasonable other than the style in which they were delivered.  Polishing the guidelines and making them requirements ensures quality and consistency.  Authentication is a valid requirement to prevent easy abuse.  Endpoint rate limits and user counts are reasonable statistics around which Twitter and application development businesses can conduct a dialogue, even though the communication did not phrase it that way, providing instead hard limits with an inference of future discussion but not necessarily expansion.

Support those requirements you agree with, and for those you have concerns about find a way to modify the requirements to something more acceptable to both parties.  This is public innovation and one of the main charms and promises of Twitter.  Find others who agree and can further modify the requirements.  With community support and a viable approach you could engage Dick Costolo, Twitter’s CEO, to encourage progress and improvement. We could even call it the API spring.

I want Twitter to continue providing the simplest and best social media dialogue platform.  It is not in my interest for niche platforms to dilute and detract from the stream that Twitter offers. Do what you can to educate, promote and support what is good about open communications; help build a set of policies and standards that improve communications, and the API requirements for the platform that hosts them.  If you don't, Twitter will be well and truly forked.


“Who wants yesterday’s papers?

Who wants yesterday’s girl?

Who wants yesterday’s paper ?

Nobody in the world” 

The Rolling Stones, 1967

We are besieged by information, knee deep and beyond.  If you have a smart mobile device it comes at you from all directions, in almost all circumstances.  Like the Sorcerer’s Apprentice we are drowning in a flood of communications freed by the spell of inexpensive ubiquitous technology, and try as we might we know of no counter spell to stem the tide.


This image was selected as a picture of the week on the Farsi Wikipedia for the 13th week, 2011. (Photo credit: Wikipedia)

The consequences of this growing tsunami are manifold, many as yet unsuspected or undetected, but the one sure thing is that life now is very different from what it was before.

“The Shallows – What the Internet Is Doing to Our Brains” by Nicholas Carr provides an insightful examination of some of the effects. One of his most important points is that we are becoming increasingly distracted.  Our attention span is decreasing, as is our ability to digest information and commit it to long-term memory.  Since the well of known information, the internet, is always available, recovering information that we have consumed but not digested is only a simple search string away.

There is an argument that suggests that this frees up our brains for different and possibly more productive activities, and there is some evidence that this may be the case.

But the issue remains that we are constantly encouraged to deal more and more with the present and less and less with the future and past.  Brevity is key, as any person on Twitter will attest; yet it seems to apply to all of our communications.  Short, pithy soundbites or images, moving or otherwise, are the order of the day. Content is king, or so I am told; and those that excel at amplifying these messages, whether their own or others', are quickly harnessed by marketeers to prime the pump for their brands' content.

Over the last year there has been a sea change in this approach, and while content seemingly remains supreme, some are beginning to recognize the value of context.  Now it's not just content, but related content that brings value.  Sites that "curate" content, that is, collect and display additional content that augments the value of the original, are seeing factorial increases in year-on-year traffic; see Greg Bardwell's post on the Content Curation Sweetspot.  Content remains king, and though context is queen, curation has become a pawn close to being promoted to queen as well.

But that is not exactly the way I see it.  I have a slightly different perspective:

Firstly information has value beyond the present, depending on its relevancy.  Over time that value can and will change according to the quality of the information; the lower the quality the lower the value. Information created in the past can be critical to knowledge and understanding in both the present and future.  At the same time ephemeral information will only have transient value, usually its 15 seconds in the spotlight.

Secondly the role of curation is not just to assemble topical and stylish content.  While that may be the purpose and goals of stimulating appetites for fashion and consumables, greater depth is required by those in search of deeper knowledge, usually provided by a context made wider with the dimension of time.  My definition of curation more closely resembles the profession as practiced in museums and galleries.  It requires a knowledge of history and an understanding of influences, qualities and intentions that produced the thoughts and artifacts under custody.

We have a duty to future generations to ensure that quality content is preserved, including the context that contributed to and proceeded from its publication.  In the face of the rising flood we need to curate responsibly, identifying the quality contributions and marking the relationships to authors and content that define their contexts.  And we have to do this in a uniform and open manner so that we have common access to the riches of the past that help us navigate our present and future.

Open Linked Data might be one of the more viable approaches afforded by technology; however, it is in our interest to collaborate on the framework and standards that will enable us to preserve contextual relationships in content. Making curated content consistent and optimally shareable helps us all.
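The Linked Data model expresses exactly these contextual relationships as subject-predicate-object triples.  The fragment below sketches the idea in Python; the identifiers are invented for illustration, and the predicate names are only loosely modeled on vocabularies like Dublin Core, not taken verbatim from any standard.

```python
# Contextual relationships between content, authors and influences,
# expressed as subject-predicate-object triples (the Linked Data model).
# All identifiers below are hypothetical examples.
triples = [
    ("article:42", "dc:creator", "person:curie"),
    ("article:42", "dc:date", "1903"),
    ("article:99", "cito:extends", "article:42"),
    ("article:99", "dc:creator", "person:stopes"),
]

def context_of(subject, triples):
    """Collect everything directly asserted about one piece of content."""
    return {(p, o) for s, p, o in triples if s == subject}

def influences(subject, triples):
    """Follow 'extends' links back to the earlier content a piece builds on."""
    return [o for s, p, o in triples if s == subject and p == "cito:extends"]

print(influences("article:99", triples))  # ['article:42']
```

Because every relationship is an explicit, uniform statement, any community can merge its triples with another's and the combined graph remains queryable, which is the "uniform and open manner" the preceding paragraph calls for.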

Nobody wants yesterday’s papers, but yesterday’s girls grew up to become Joan of Arc, Hidlegard of Bingen, Marie Curie and Marie Stopes, and the world would be a poorer place without them.


Originally posted on Internet Media Labs Blog – September 6th, 2012

Sometimes it is the little things that are the most useful in life: using a paperclip to retrieve a disc locked in a computer, or as an emergency backup when the hems on your clothing are in disrepair.

One virtual paperclip that has huge potential for Social Media is versioning.

Versioning, or more accurately a Version Control System (VCS), is the secret sauce that keeps agile development agile and multi-threaded tasks in sync.  Versioning maintains content and context for any given artifact and is most commonly used in software development, in particular for maintaining code bases or code trees.

Versioning is much more than a way to ensure edits and changes are not lost and can be tracked.

Version Control Systems have evolved to enable a protected, searchable environment, allowing individuals to create separate branches and then merge their modifications or augmentations back into the base.  Each version can be searched and reconstructed, providing both stability and maintainability.

The quality of code is improved as bugs can be traced back to the time of their introduction.  Quality can be further improved by including relevant comments and logs, all of which help provide richer history and valuable context when revisions or replacements for the code base are being considered.

While this is all very useful – I would suggest essential – for application development, versioning has even greater potential to support and improve the quality of most, if not all, collaborative projects.

Like the paperclip, a VCS can be applied to any creative activity where content changes frequently, particularly where multiple contributors are involved.  A VCS allows contributors to create and evolve their own branches, which can then be merged back to become the latest version.  Using a VCS is much simpler than using "track changes" in an office productivity document, which neither supports multiple branches nor keeps every saved change.
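To make the branch-and-merge idea concrete outside of software, here is a deliberately tiny toy model in Python.  It is not how a real VCS such as Git works internally (real systems reconcile changes line by line and store full history graphs); it only illustrates that branches start from a shared version, evolve independently, merge back, and leave every saved version reconstructable.

```python
class TinyVCS:
    """A toy model of branch-and-merge version control (illustration only)."""

    def __init__(self, content=""):
        self.branches = {"main": [content]}   # branch name -> list of versions

    def branch(self, name, source="main"):
        # A new branch starts from the latest version of its source.
        self.branches[name] = [self.branches[source][-1]]

    def commit(self, branch, content):
        # Every commit is kept; nothing is overwritten.
        self.branches[branch].append(content)

    def merge(self, source, target="main"):
        # Simplistic merge: the source branch's latest version becomes
        # the target's newest version.
        self.branches[target].append(self.branches[source][-1])

    def version(self, branch, n):
        # Any saved version remains reconstructable.
        return self.branches[branch][n]

repo = TinyVCS("draft v1")
repo.branch("alice")                           # Alice works on her own branch
repo.commit("alice", "draft v1 + Alice's edits")
repo.merge("alice")                            # her work becomes the latest main version
print(repo.version("main", 0))                 # the original draft is still recoverable
```

The same pattern applies whether the "content" is code, a dictionary entry or a risk assessment: contributors work in parallel, the merged result becomes current, and the history stays intact.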

Reconstructing the history of an office productivity document in such cases tests the patience of even the most tolerant of individuals.

Let’s look at a few cases  where the approach would be integral to effective effort and overall success of a collaboration.

Case 1: Collaboration Dictionary

Standard Definitions and Terms are easy to establish when co-workers are part of a specific group.   Common vocabularies usually develop in most communities, but writing down the words and their definitions is critical to ensuring that there are no ambiguities or misinterpretations.

When co-workers belong to different groups with their own vocabularies, the challenge becomes larger and the value of a dictionary rises.

As the group’s reach continues to expand, so too does the potential for miscommunication and misunderstanding.  Authoring and maintaining the dictionary can  be onerous, especially where it is approached from within a hierarchy, where one group or individual controls the content and holds the sole authority to augment, modify and publish.

Opening up the effort to joint collaboration is both expedient and efficient, providing there is sufficient control to ensure integrity and maintainability.  A version control system will allow co-workers to define their respective  areas of the dictionary, treating each term or collection of terms as a branch of the information base.

The VCS will facilitate the merging of the branches, as well as the ability to roll back to any version should it be required.
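A minimal sketch of that arrangement: each term carries its full revision history, so a disputed definition can be rolled back without losing the record of who proposed what.  All names below are illustrative, not drawn from any real system.

```python
class VersionedDictionary:
    """A collaboration dictionary where every term keeps its full history."""

    def __init__(self):
        self.history = {}   # term -> list of (contributor, definition)

    def define(self, term, contributor, definition):
        # Each definition is appended, never overwritten.
        self.history.setdefault(term, []).append((contributor, definition))

    def current(self, term):
        # The latest revision is the working definition.
        return self.history[term][-1][1]

    def rollback(self, term):
        # Discard the latest revision and restore the previous definition.
        self.history[term].pop()
        return self.current(term)

d = VersionedDictionary()
d.define("CoP", "team_a", "Community of Practice")
d.define("CoP", "team_b", "Community of Purpose")   # a disputed revision
d.rollback("CoP")   # the earlier definition is restored, history of the dispute intact
```

Treating each term's revisions as its own branch-like history is what keeps joint authorship both expedient and maintainable.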
Case 2: Risk Assessment

Risk assessment is another key part of planning and demands copious amounts of input, discussion, review and revision.  As with the dictionary case above, risk assessment is relatively easy when performed in a small, discrete group.  Again, when the scope of the project extends to other groups, the complexity and effort required increase factorially.

Collaboration can ameliorate these difficulties,  dependent on good governance and control.  In this case VCS offers a bonus benefit, which is a full context of the discussions and determinations made during the lifetime of the risk that is being assessed.

Before VCS, risk assessment documents were static and usually represented the final summary of an assessment.  But a VCS allows that assessment to continue as a living artifact, providing historical context when new events and conditions demand a fresh analysis of the solution and its environment.

Case 3: Curation

I have often stressed the need to treat curation, and especially organizational curation, as a form of Information Lifecycle Management.

Organizational curation means that information is not just a publication, with fresh content for every issue.  Information needs to be cultivated, nurtured, refreshed and made available when and where it may be needed.

Old information never dies; it waits to inform future consumers of ideas and knowledge. So content is more than the data presented visually or verbally: it is augmented by meaning and context, both of which can be accommodated in a versioning approach.

External Collaboration

The cases above are fairly common, but are usually contained within a particular organization or enterprise –  in other words behind the corporate firewall.

Generally, in these cases the individuals are part of the same organization (at least for the project at hand) and, in efficient companies, share a common purpose, culture, and set of standards and policies.

The ever-increasing possibility of external collaboration on projects raises the value of a Version Control System to the level it holds within software development – i.e. essential.

Moving version control to the cloud and enabling a distributed model makes the “essential” desirable. A DVCS (Distributed Version Control System) removes the need for centralized management, and with it the dilemma of either supporting every known platform and stack or limiting contributors to those who comply with corporate standards.

Distributed Version Control opens the door to wider communities, unrestricted by culture, location or time.

It provides the paperclip that keeps collaborative efforts organized, manageable and crowdsourced.

Photo by Tyler Howarth via Flickr Creative Commons
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Enhanced by Zemanta

In a very short time curation has evolved from a minor supporting role to a major or even leading role in Social Media engagement.  It is no longer sufficient to just share items of interest, breaking news and opinion, not if you want to be regarded as authentic and taken seriously.

Information Filter

Knowledge Condenser

Curation has many definitions, including my own: “Curation is the acquisition, evaluation, augmentation, exhibition, disposition and maintenance of digital information, usually centered around a specific topic or theme”. The Digital Curation Centre (DCC) in the United Kingdom puts it more succinctly:

Digital curation, broadly interpreted, is about maintaining and adding value to a trusted body of digital information for current and future use. (DCC)

Both definitions imply an information lifecycle process that manages digital objects from creation to deletion. Both suggest that capturing and adding value, whether by commentary or related material, is vital to the end product: knowledge or information that can be referenced now and in the future.

Message Amplification

However, the evolution of digital curation is experiencing some fragmentation. Not that this is bad, but it does suggest the differences should be understood, as curation tools will differ in features and capabilities as each tries to satisfy its target customer base. So far I have identified 3 major distinctions in curation:

  1. Marketing Content: comes in several forms as marketeers move away from landing pages on Facebook and web sites, and seek to amplify brand presence through curated content.
  2. Information (or Knowledge Content): More focused on collecting and condensing information to support a topic or subject. Most commonly a reference site usually set up for either internal or external collaboration
  3. Personal Content – less dependent on content management features and capabilities: can either be used for amplification (self-branding) or condensing (information).

The question I would like to pose is: who visits these curated sites, and what are their preferences? The following poll offers choices in the style and content of curated sites. Please let me know which sites you prefer to access for either information or shareable content. I have made a further distinction for sites that are the result of either employee or community collaboration, as they possibly differ from information sites in the degree of social participation (i.e. more social).

Photo from New Exhibit! Native American Cultural Objects at the CHP – Contributed by Francisca Ugalde and Cathy Faye.

A recent post by Brian Solis, “The Curation Economy and the 3 C’s of Information Commerce”, neatly deconstructed the information flow within the Social Network. The 3 C’s are creation, curation and consumption, and while consumption remains the largest activity, he correctly identified curation as a vital part of the social information chain, as it is the intermediary and often the principal connecting service between the authors and readers of content.

There are many curation tools available (@williampearl Shirley Williams’ blog post references 40).  Most serious Social Media participants use one or several of them to save interesting content discovered or referenced in their daily pursuit of engagement.

Though the name curation is applied to tools such as scoop.it, list.ly, Pinterest and others, all too often these tools act as nothing more than scrapbooks, with photos and articles appended to pages because they caught our imagination, piqued our interest or satisfied our desire to be seen as a member of a community of interest.

It is true that many curating users perform a rudimentary evaluation to classify the curated content and to position it within a relevant category; an even smaller number provide some commentary on the content. But like a scrapbook, these collections remain static, with a last-in first-presented view of whatever has been assembled. Content that was collected first generally remains buried under more recent entries, and interactive commentary is almost non-existent. As a result the value of such collections is greatly diminished, and the prime activity of social media curators appears to be browsing the curated pages of others in search of new content to display on their own.

This observation may be harsh, and I believe there are many curators who do far more than I have indicated here; however, the current tools have limitations. Furthermore, to raise curation to the level required to act as the intermediary between creation and consumption, as indicated by Brian Solis, we need to bring the disciplines and processes of Information Lifecycle Management to bear on the problem. In a previous post on the network weaver I identified curation as one of the 5 major components of the social networking architecture. It is notable that it takes up to 2 years for a postgraduate to obtain an MFA in curatorial studies or a Curation Diploma from the British Museum. I have used the British Museum course curriculum as a basis for identifying the sub-components of Social Media Information Curation.

Information Lifecycle Management concept applied to Social Media Curation

  1. Attribution – The first step on receiving any new content is to examine its provenance, determining its source and history (journey) to the curation site. Part of this is validation, in social media terms checking that it is not spam or spoofing, and part of it is ensuring the links and references are still active and, if not, refreshing them or marking them inactive. Once validated it is important to attribute the content to the author (direct) or those who have shared the content (indirect). The reason for doing this extends beyond mere politeness, as it promotes the contributors and increases their relevance as possible collaborators in this or any related collection.
  2. Evaluation – the analytical step in the process and one that should not be embarked upon lightly, as it takes a high level of expertise to properly evaluate content.  It is not just determining classification and category, it involves going several layers deeper to ascertain the nature and value of the content.  Is the content authoritative, supportive, contrary, derivative, anecdotal or coincidental for example and, as a lead in to the next step, what is the etiology of the content and how is it related to other content entities?
  3. Organization – as with any information repository the key to consistent value is the way the content is organized, and the flexibility of the structures that support it.  The value of content is greatly increased if the relationships between entities can be indicated and that links are flexible enough to be easily orchestrated when new content or understanding modifies the relationship.
  4. Commentary – Curators are also creators of content, a slight divergence from the Solis model which limits the curation role to an intermediary who is not part of the digirati (his description of the authoring elite).  Commentary is an essential part of curation as it explains and amplifies the content and the relationships of content in any collection.  However in an open collaborative environment commentary is not limited to just the curator or curation team.  It can and should be as interactive as comment sections on blogs or message boards, with the curator as the default moderator.  This is the activity that augments the content and extends the knowledge and value of the information.
  5. Exhibition – First and foremost, the purpose of curation is to care for and promote the collected content and bring it to the attention of the consuming public. This is more than just broadcast and communication; it is preparing and mounting a rich and informative display of connected artifacts, which illustrate the themes, dimensions and complexities of the subject at hand. Successful exhibitions are compelling, relevant and often topical. They also do not last forever, but can be dismantled and recreated with fresh insight and perspective at a later date.
  6. Disposition – unlike transactional data, which needs to be aged and archived, social data is more like the objects in a museum: never destroyed or deleted, and rarely put into forgotten repositories. Objects are stored and maintained with variable value and potential future reuse; they are out of immediate sight but always available for reference or inclusion in other contemporary collections.

As can be seen from the diagram, the information lifecycle has no end. Disposed (i.e. stored) information still needs to be maintained and re-evaluated, a task I have described as Collaborative Husbandry, or collective farming. This is equivalent to the constant re-examination of requirements in The Open Group Architecture Framework (TOGAF), as current and new information can change the curated landscape very quickly, and skilled curators should be able to adjust the curated content to accommodate this. The more sophisticated and comprehensive the collection, the more curating resources are needed to maintain the information quality, which leads me to believe that enterprises will seek and appoint skilled curators, and possibly even a Chief Curation Officer, as they become increasingly dependent on external information and resources.
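The endless lifecycle described above can be modelled as an ordered sequence of steps that simply wraps around. This minimal Python sketch (the names are my own, purely illustrative) enforces the order of the six steps for each curated item and records a note at each one:

```python
# The six curation steps, in order; the cycle wraps so it never ends.
CURATION_STEPS = [
    "attribution", "evaluation", "organization",
    "commentary", "exhibition", "disposition",
]

class CuratedItem:
    """A piece of content moving through the curation lifecycle."""

    def __init__(self, content, source):
        self.content = content
        self.source = source
        self.log = []          # (step, note) pairs, in the order applied

    def advance(self, step, note):
        """Apply the next lifecycle step; reject out-of-order steps.
        The modulo makes the cycle endless: after disposition,
        attribution is expected again on re-evaluation."""
        expected = CURATION_STEPS[len(self.log) % len(CURATION_STEPS)]
        if step != expected:
            raise ValueError(f"expected {expected!r}, got {step!r}")
        self.log.append((step, note))

item = CuratedItem("article on data lifecycles", source="@some_author")
item.advance("attribution", "links verified, author credited")
item.advance("evaluation", "authoritative; relates to the ILM theme")
```

The modulo in `advance` is the design point: disposition is not an exit, just the step before the next round of re-evaluation, which is the Collaborative Husbandry described above.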

I would be interested to hear of additional requirements for Social Media Curation, as I believe we are still in discovery mode on what is needed to better identify, collect, discuss and exhibit the knowledge that is cascading  through the global Social Media.

Image: nuttakit / FreeDigitalPhotos.net

How does one define Big Data, and is “big” the best adjective to describe it? There are many voices trying to answer this topical question. Gartner and Forrester both agree that a better word would be “extreme”. Between them, the two major consulting firms have determined four characteristics that “extreme” can qualify; they agree on three: volume, velocity and variety. On the fourth they diverge: Forrester postulates variability while Gartner prefers complexity. These are reasonable contributions and may form the foundation for the definition of big data that the Open Methodology Group is seeking to create within their open architecture MIKE2.0.

However, the definition still falls short of the mark, as any combination of these characteristics can be found in many of today’s large data warehouses and parallel databases operating in outsourced or in-house data centers. No matter how extreme the data, Moore’s Law* and technology will eventually, asymptotically, accommodate and govern it. I could suggest that the missing attribute is volatility, or the rate of change, but that too can be applied to currently serviced capabilities. Another important attribute that is all too often missed by analysts is that Big Data is world data: data in many formats and many languages, contributed by almost every nationality and culture, together with the noise generated by the systems and devices they employ.

Yet the characteristic that seems to address this definition shortfall best is openness, where openness means accessible (addressable or through API), shareable and unrestricted. This may be controversial, as it raises some key issues around privacy, property and rights, but these problems still need to be resolved for big data independent of any definition. Why openness? Here are six observations:

  1. Any data that is not open, i.e. that is private, covert or obscured, is by default protected and confined to the private architecture and data model(s) of that closed system. While sharing many of the attributes of “big data”, and possibly the same data sources, at best this can only represent a subset of big data as a whole.
  2. Big data does not and cannot have a single owner, supplier or agent (heed well, ye walled gardens), and is the sum of many parts, including amongst others social media streams, communication channels and complex signal networks.
  3. There will never be a single Big Data Analytic Application/Engine, but there will be a multitude of them, each working on different or slightly different subsets of the whole.
  4. Big Data analysis will demand multi-pass processing including some form of abstract notation, private systems will develop their own notation but public notation standards will evolve, and open notation standards will improve the speed and consistency of analysis.
  5. Big Data volumes are not just expanding, they are accelerating especially as visual/graphic data communications becomes established (currently trending).  Cloning and copying of Big Data will expand global storage requirements exponentially.  Enterprises will recognize the impractical economy of this model and support industry standards that provide a robust and accessible information environment.
  6. As enterprises cross into crowd-sourcing and collaboration in the public domains, it will be increasingly difficult and expensive to maintain private information and to integrate or cross-reference it with public Big Data. The need to go open to survive will be accompanied by the recognition that contributing private data, and potentially intellectual property, is more economic and supportive of rapid open innovation.

The conclusion remains that one of the intrinsic attributes of Big Data is that it is and must be maintained as “open”.

Related Links

  1. Gartner and Forrester “Nearly” Agree on Extreme / Big Data
  2. Single-atom transistor is ‘end of Moore’s Law’ and ‘beginning of quantum computing’.

The cycle of network weaving activities – the larger the scale the more skilled the practitioner

June Holley, author of “The Network Weaver Handbook”, was the guest on a recent #ideachat, hosted by @blogbrevity, where she conducted a spirited and vigorous discussion on the role of the network connector and collaborator, whom she describes as a network weaver. June believes that this is something we all do, often without realizing it. The skills can be learned and improved; it is all about how we are aware of and relate to each other. Ultimately we should be able to transform the world we live in. To a large degree this is true, especially in small to medium-sized communities. However, scaling to the immensity of the Social Media Universe requires those skills to be refined, amplified and extended to the point where the role is highly specialized and potentially very much in demand.

The chart above is an attempt to summarize the collective input from the participants in #ideachat, none of whom contested the notion that network weaving was learnable, necessary or transformative. Indeed the flow of positive thinking provided a tsunami of skills and activities that were deemed necessary network weaver attributes.

Acquisition

The receptor phase of weaving represents the intake of content, context and resources.  This includes searches and information gathered from multiple sources in monitoring and participating in community conversations and chats.  Acquisition is equivalent to sourcing in a supply chain and represents the raw intelligence needed to fuel productivity.

Review

The review phase is the first stage of refining the raw intelligence.  Analysis is the primary activity and is applied to understanding the meaning, authenticity and importance of content and resource.

Curation

The second stage of refinement is curation: taking the analyzed information and making it transparent to the served communities and the world at large. The refinement includes categorization (i.e. topics), classification (e.g. value and relevancy) and commentary.

Association

The third stage of refinement is associating resources with communities, content or, most importantly, with each other: understanding how to apply the relevancy of information and resources to each other. The third stage is also the mapping stage of the process, and is vital to the success of the network weaver. As the weaver’s reach extends to national or even global scale, other maps from trusted weavers can be incorporated into the weaver’s sphere of connectedness.

Construction

Construction is the implementation phase of network weaving. It is establishing connections based on the refinement process, closing the triangle, as June Holley describes it, between resources, communities and other network weavers. Here the weaver is more than just a connector; they are a catalyst for action and innovation, whether directly contributing or standing back and monitoring the resulting activity.

Central to all these activities and processes is the governing principle of Cultivation.  This is the set of nurturing skills that separates the good network weavers from the great ones.  Cultivation is farming or husbandry in its highest form,  not just building connections but feeding them, nurturing them, strengthening them and understanding their needs.  It means being endlessly curious, constantly vigilant and forever questioning to ensure that the woven networks are as efficient and healthy as possible.

It is within nearly everyone’s reach to acquire, analyze and curate information on the social network; billions of Tweets, blogs, circles and walls are testimony to these skills being learned and practiced on a daily basis.  Everyone has the capability to imitate the African Weaver bird  and weave their own network of resource and content.  But it takes special skills to associate, construct and maintain vast networks of content and resource.  It takes a proficient weaver to connect each nest in the tree, and a master weaver to connect all trees within a region, all regions within a country and so on across the language and geographic divides that impede global connectivity.
