
Retrospective: The Past and Future of Information Programs

Over the course of the last three months I’ve talked about a wide variety of things in my search for some kind of internal understanding of information programs and services. Other topics and subjects have found their way onto this blog in that time, and while they’re all still highly relevant, in the interest of good bookkeeping, here’s an all-you-can-eat buffet of the INN333 highlights from the semester that was:

Cue a wavy distortion filter and tinkling music for a walk down memory lane.

Week 1 Primer on Personal Learning Networks

Week 2 Reflection on Information Overload 

Week 3 Reflection on Social Media & Micro Blogging 

Week 4 Reflection on Content Curation, Aggregation, and Web 2.0 

Play for Week 4 in which I tried my hand at some content curation of my own 

Week 5 Reflection on Library VoIP services 

Play for Week 5 in which Ben Harkin and I played silly buggers with Skype

Week 6 Reflection on The Failure of Google+ 

Week 7 Reflection on The Intersection of Creative Commons and 3D Fabrication

Play for Week 7 in which I finally joined the filter brigade on Instagram

Week 9 Reflection on Device Agnostic Applications 

Play for Week 9 in which I declare my love for the humble QR code 

And last but not least, some playful reflections for Week 10 in which I discuss games, gaming, and gamification in a two-part after-school special. (Part 1) & (Part 2)

Now that we’re done wiping away the tears from all the good times we’ve shared, I ask that you endure one last soapboxing for the semester as I reflect on what this all meant to me.

As a lifelong technology hobbyist I have always had an insatiable desire to tinker with gadgets, play with the latest software and games, and test out the newest platforms and services. Back in 2010—when the iPad first launched—I became increasingly interested in how technology was being woven into day-to-day life, and in how profoundly it was transforming the way content was authored, published, and consumed.

As information consumers, we all live in an age of unimaginable abundance, uninhibited by traditional notions of material scarcity. The infinite shelf space of the internet, paired with zero-cost reproduction of content, has created a world inundated with increasingly complex, almost magical platforms and experiences that even a scant few decades ago would have been nothing more than science fiction. And here I was trying to make sense of it all.

At the time I was finishing a master’s degree in writing, editing, and publishing at the University of Queensland, and I began obsessing over how the commoditisation of information and digital publishing were converging and, by extension, devaluing digital works. I became convinced that the devices and services that were simultaneously enabling these transformative, futuristic experiences in our lives also represented the greatest danger to the quality of those experiences. Tapping into social media and RSS feeds, and inviting ‘push’ notifications into my life, seemed to be nothing more than inviting a never-ending supply of indiscriminate garbage information to bother me.

My fleeting dalliance as a neo-Luddite evaporated fairly quickly, but it left me with a healthy scepticism about the efficacy and capabilities of the digital products that had so quickly come to dominate our lives.

Fast-forward to July 2013, and I found myself facing down a semester entirely concerned with examining the practical application of these technologies. I welcomed the chance to explore many of the themes that had fascinated me, and eagerly got stuck right in.

As I have a fairly high level of comfort with complicated IT systems, I doubted I would get anything worthwhile out of the practical side of the subject, which—at face value—appeared to be an odd assortment of activities designed to familiarise students with crucial information services in the wild rather than do anything particularly complicated or revelatory with them.

I could not have been more wrong!

Being pushed to engage in—and, more importantly, reflect on—these experiences made me actually take the time to examine the many-faceted uses of these services, and to better appreciate how they might be utilised by others. I found the process of discovery and investigation genuinely meaningful, and I enjoyed playing with how I approached and interacted with each activity.

Paradoxically, although this subject concerned itself primarily with social technologies, I couldn’t shake the diminishing effect of lacking more direct collaboration with my peers. Despite the ongoing conversations on Twitter and Facebook, and despite regularly touring my colleagues’ blogs, I felt displaced and distanced from the cohort even in our shared engagement. The handful of face-to-face workshops were undeniable highlights of the semester, but they just drove home how much I regretted that more face time was not available.

Ultimately, this subject helped me appreciate and articulate a deeper understanding not only of how and why certain services function the way they do, but also how crucially important it is to take the time to interrogate and examine these functions fully.

Although the heady season of INN333 has come to a close, I’m not going anywhere. My journey of professional development is far from over, and there are scores of posts unwritten just waiting in the wings.

So stick around; the best is yet to come.

 

Gamification – Part 2: A Plan for Play

So, I’ve soapboxed about what’s wrong with gamification, but how would I actually do something good? What would *I* do to design library games for a school that actually work?

Anyone who’s listened to me talk about this for any length of time will know I’m a big fan of Amazon’s ecosystem. And their Kindle FreeTime initiative—pictured here—is an amazing leap forward in integrating gamification in a meaningful way with kids’ reading practices.

There’s no panacea for making gamification work across different contexts. But the FreeTime idea of allowing parents or teachers to set individually customised goals, and to reward progress appropriately, really resonates with me.

Rewarding students for borrowing and returning books simply invites cheating, or gaming, the system—a delicious irony, I realise. Integrating meaningful tracking metrics into a digital-reading experience is a far more robust approach to fusing play and engagement with ordinary reading activities. There’s an element of mastery to the experience too! Tracking and improving reading speed in a session or over time gives readers goals to meet and surpass. Tackling longer, harder books allows them to see their growth over time, and the achievement is a real, measurable thing.
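
To make that concrete, here’s a minimal sketch of what session-based reading metrics might look like. Everything in it—the ReadingSession record, the words-per-minute measure, the teacher-set target—is my own invention for illustration, not anything FreeTime actually exposes:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical structures for tracking reading sessions; these names
# and fields are illustrative only, not a real FreeTime API.

@dataclass
class ReadingSession:
    book_id: str
    words_read: int
    minutes: float
    finished_at: datetime

def words_per_minute(session: ReadingSession) -> float:
    """Reading speed for a single sitting."""
    return session.words_read / session.minutes if session.minutes else 0.0

def goal_met(sessions: list[ReadingSession], target_wpm: float) -> bool:
    """Has the reader's average speed passed a teacher-set target?"""
    speeds = [words_per_minute(s) for s in sessions]
    return bool(speeds) and sum(speeds) / len(speeds) >= target_wpm

history = [
    ReadingSession("matilda", words_read=2400, minutes=20, finished_at=datetime.now()),
    ReadingSession("matilda", words_read=3000, minutes=22, finished_at=datetime.now()),
]
print(goal_met(history, target_wpm=120))  # True: roughly 128 wpm on average
```

The nice thing is that the same session history gives you the longer-term mastery measures—rising speed, longer books—essentially for free.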

This is, of course, entirely dependent on using eReading devices. But devices are becoming so ubiquitous among children these days that there’s no reason to hold back on this idea. Moreover, issues of attention and engagement are critical in young students, and reaching out to them on the platforms and devices they already use is key to getting them on board with literacy. If they see reading, and the rewards for reading, as just another thing they do on the devices they already own, then the traditional reluctance to pick up a book may diminish.

I’m not in favour of ‘tricking’ students into reading or getting involved in the library. But, coming to them on their own terms and saying “Hey, I get that you like your devices; I get that you like games; did you know we offer a way for you to access library content on your device in a way that acknowledges and tests your reading skills?” seems like a reasonable approach that is low-key enough to at least be worth a shot.

To launch a pilot program like this, I’d be in favour of custom-developing unique apps that reflect the character of a given school environment. Every school, and every student, is different, and there’s no one-size-fits-all approach to designing activities that work for every library.

Failing all that, I’m keeping an eye on what Amazon do next with the rollout of FreeTime. When the new Paperwhite Kindles launch in October I’m certainly expecting a game-changer!

 

Games, Gaming, and Gamification – Part 1

This is the first in a series of posts about Games, Gamification, and Libraries.

There’s a certain stubborn snobbery that associates gaming with children, and that therefore assumes adults who engage in meaningful play-based activities must be in some way juvenile too. It’s a huge generalisation, but there’s a pervasive unpleasantness demonstrated by a certain narrow-minded breed of professionals who ‘objectively’ rule out games and gamification in a completely arbitrary way that, to them, doesn’t need to be explained, defended, or evaluated.

And that’s hugely frustrating!

To derail from the broader discussion for a moment, I’d just like to contextualise my soapboxing a little. I’m pretty savvy when it comes to games and gaming—in fact, if Gladwell’s litmus test that ’10,000 hours of an activity make you an expert’ holds, then I’ve certainly covered the requirements a few times over. Just this past week I participated in the Indie Speed Run 48-hour game jam—an excellent idea in the middle of a solid block of university assessment. This game jam tasked my team and me with the complete design and development of a game in just 48 hours, with a random set of elements to include and adhere to.


Immersing myself in a rapid-fire development environment for a whole weekend left me asking some of the high-level design questions that are fundamental to all sorts of manifestations of ‘games’.

These core principles of play design—whether they appear in a 2D side-scroller or a library catalogue—ask the same sorts of questions about attention, motivation, and engagement. In taking on the mantle of a game designer I had to ask myself these questions in order to find meaningful verbs to describe play-based activities beyond the mundane tasks being performed.

Take Trove’s text-correction initiative, for example. Here we have a dull set of actions: laborious information parsing and data entry. But it’s packaged in such a way that we see it as exploration, discovery, and competition. Checking and correcting OCR’d articles becomes a quixotic race to the top: who can demonstrate the highest level of commitment? Sure, it’s a highly insular community. But within the space provided by Trove, an entire currency of reputation has grown up around the dubious honour of correcting the most articles.
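
As a toy model—my own, and certainly not how Trove is actually implemented—that entire reputation economy reduces to a tally and a ranking:

```python
from collections import Counter

# A toy version of the correction-count reputation loop: each
# correction increments a user's tally, and rank emerges purely
# from demonstrated commitment.

corrections = Counter()

def record_correction(user: str) -> None:
    corrections[user] += 1

def leaderboard(top_n: int = 5) -> list[tuple[str, int]]:
    """The 'race to the top', ordered by sheer volume of corrections."""
    return corrections.most_common(top_n)

for user in ["maryb", "jnorth", "maryb", "maryb", "jnorth", "kwills"]:
    record_correction(user)

print(leaderboard())  # [('maryb', 3), ('jnorth', 2), ('kwills', 1)]
```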

This isn’t profoundly high-level game design, but it works. The layers of progression and incentives create a positive feedback loop that rewards commitment and engagement. Bootstrapping game systems onto existing mechanics, though, is always going to be inherently flawed.

This is no ‘ground-up’ design. It’s simply re-contextualising something that already works and trying to make it fun and engaging. It’s really just expressing an existing system through new mechanics.

Play shouldn’t feel like work, and there is a wealth of work that needs to be contributed to libraries. Social data such as tagging, reviewing, recommending, relating, and classifying records enriches a catalogue in an amazing way. Slapping the veneer of a game onto the catalogue to ‘trick’ people into doing this work feels disingenuous, and smacks of exploitation.

Really, it’s a fundamental question of putting content into a game, rather than bootstrapping a game onto content.

Serious games have an element of mastery: your engagement and fun come from a progressive, skill-based unlocking of content. Gamification without meaningful mechanics might as well be a workplace KPI that just tracks how quickly you fill arbitrary quotas to achieve recognition.

 

The Age of Agnostic Applications

So many features of the social web rely on the idea of an ‘always on’, interconnected set of experiences. Location data, check-ins, running commentaries on social media, and ‘smart’ data are all dependent on making things universally accessible and always providing what is needed when it’s needed.

The monolithic power of a service like Facebook is possible only because it is ubiquitous. Having Facebook locked to a single proprietary ‘Facebook’ device would fragment users at the device level, in addition to the divisions at the level of the platform or ecosystem they’re tapping into. Fragmentation and interoperability are already among the chief problems online, and dividing already fractured user bases ultimately benefits nobody.

Device-agnostic platforms, such as Amazon’s Kindle reading application, demonstrate not only the value of cross-platform interoperability but the amazing potential of apps that transcend the narrow boundaries of single-function devices. Amazon realised earlier than most that tying content to their platform, rather than to a device, not only extended their potential user base but retained that user base through migrations to the newest technologies and gadgets. Users could confidently build their content libraries through Amazon without feeling ‘locked in’ or trapped by a particular hardware provider. Furthermore, Amazon intelligently leveraged the multi-function nature of many of the devices that Kindle content appears on to create experiences that exceed the capabilities of any device on its own. Audible audiobooks synchronise seamlessly with progress through the text of an eBook, and regardless of which device you have with you—be it an Android phone, an iPad, or an Amazon eReader—your reading progress is matched by the Kindle app across platforms and media formats. Allowing users to move freely from listening to an audiobook during their commute to a dedicated reading device at home provides the sort of everyday, user-focused experience that separates the merely functional devices from those that make our hearts sing.
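
The simplest rule I can imagine behind that experience is ‘furthest position wins’. The sketch below is an assumption about the approach, not Amazon’s actual sync code; positions are stored as a fraction of the book so audiobook and eBook progress stay comparable:

```python
from dataclasses import dataclass

# Hypothetical "furthest position wins" sync; not Amazon's real
# implementation. Progress is a fraction of the whole book so that
# audio and text positions can be compared directly.

@dataclass
class DeviceState:
    device: str       # e.g. "android-phone", "paperwhite"
    position: float   # 0.0 .. 1.0 through the book

def sync(states: list[DeviceState]) -> float:
    """Return the furthest reading position seen on any device."""
    return max(s.position for s in states)

states = [
    DeviceState("android-phone (audiobook)", 0.42),
    DeviceState("paperwhite (ebook)", 0.37),
]
furthest = sync(states)
for s in states:
    s.position = furthest  # every device resumes from the furthest point

print(f"Resume at {furthest:.0%} on all devices")
```

A real system would obviously need timestamps and conflict handling for re-reading, but the core contract—content tied to an account, position tied to the content—fits in a dozen lines.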

There’s plenty of scope to see how this sort of interoperability is truly transformative. So long as portable content adheres to some sort of extensible, flexible standard that can be interpreted and parsed by a variety of devices, any number of asymmetrical interactions could be possible.

As the ubiquity of smart devices grows, creating mature workflows for harmonising content between devices enables not just portability but a level of potential we are only beginning to scratch the surface of.

You Probably Would Download a Car

The issue of copyright and ownership of media is one of the most widely discussed on the Internet. A proliferation of free and pilfered material has opened the floodgates to an epidemic where ‘digital’ is synonymous with ‘free’ and theft is no longer perceived as a crime. The entrenched indifference of today’s youth to copyright has created a grim situation for content creators and distributors alike.

Clearly, consumers love free. Personal preferences fade away at zero cost. The psychology of free makes perfect sense because of the certainty that free affords: there is no risk to free. But if the collective trend of opinion dictates that digital goods are too cheap to matter, then no cost equates to no value; and no one can be expected to do quality work if it is simply going to be devalued, stolen, remixed, and re-purposed by others.

However, it’s not all so bleak! Grassroots movements like Creative Commons licensing are affording content producers some measure of control over the proliferation of the worthwhile work they are doing.

Equitable access to media has never been greater, and libraries are able to assess and include vast amounts of new media under the umbrella of their collections every day, thanks to the robust flexibility of CC. But there’s a ticking time bomb lurking beneath this unchecked exuberance.

3D printing and open-source fabrication—sometimes referred to as ‘making’ or ‘maker’ pursuits—are already paving the way to profoundly transformative uses for technology. Thingiverse is a hub for capturing the raw creations of users from around the globe and feeding the blueprints back into a community of amateur and professional printers, tinkerers, and makers. Content is posted, traded, and printed under a shared understanding of the terms of use that allow users to distribute their work under Creative Commons licensing.

This could have an amazing effect on libraries and other information repositories that choose to build the necessary infrastructure for physically realising the potential of making. Artwork could be downloaded and fabricated at the press of a button. Replacement or custom parts for repairing devices would become universally accessible. Physical trinkets, ephemera, and miniatures could become as commonly shared and distributed online as songs, stories, and paintings.

And then somebody has to go and ruin it for everyone by uploading the blueprints for copyrighted materials such as Disney action figures or Matchbox cars.

Taken one step further, there are printable models for complex machines available online, including fully functional automobiles and guns. This isn’t a pipe dream either. A cursory trip to Google will pull up more stories than I’d care to link here about open-source firearms, 3D-printed cars, and the unfortunate precedents they’ve set.

And there’s the rub: the robust flexibility that allows Creative Commons to encapsulate all sorts of content means it must also cover designs that can be weaponised. What library is going to risk giving a minor access to amazing technology that can also be used to print a firearm?

And so the powerful tool that is CC–when applied to physical fabrication–has been placed in the firing line. Government intervention and regulation of content is anathema to what CC represents, but it’s the only solution currently on the table for stemming the tide of objectionable objects. Regulating CC would impose rigidity on a fundamentally fluid system, and erode the pillars on which the Internet’s maturing approach to copyright has been built.

But what other choice is there?

Facing the Facts

Google+ failed to dislodge Facebook in any meaningful way. And that’s not for lack of buy-in on my part: Google is fully integrated into my workflow, I make use of a wide variety of Google online services, I manage multiple accounts, and I use Google’s MX records to manage my domain’s email.

I use Google for a huge portion of my online activities, but I don’t use Google+.

The service is amazingly slick. It’s attractive and engaging. It’s more logically laid out than Facebook, and it harnesses the tremendous power of Google’s backend to do amazing, magical things like recognising, identifying, and tagging images automatically, and to offer best-in-class features like real-time video chat via Hangouts.

Sure, it doesn’t do quite as many things as Facebook does, but what it does do it does well. In fact, I’d be hard pressed to pick any single category where Facebook offers a superior experience.

So, why is Google+ a ghost town?

Because social networks are intrinsically valueless: their entire purpose is generated and propped up by the connections you have within the system, and nobody I interact with is invested in the Google+ ecosystem.

Being the best at something doesn’t matter if the audience is entrenched elsewhere. Everyone would probably prefer to use Google+: it’s one less account to juggle, it’s better integrated into your devices, and the forthcoming Google Glass will hook it directly to your face. But we’d be crazy to move to Google+ on our own if none of the people in our lives moved too.

But, in a Heller-esque case of circular logic, everybody likes Google+, everybody agrees it’s great, and nobody moves.

Shouting into the void

Instant messaging, ‘chat with a librarian’, and VoIP reference services all suggest a great paradigm shift in information services. Reaching out and engaging with clients through modern communication tools and directly answering queries sounds fantastic. What’s not to love?

My own support experiences (on the receiving end of queries) have shown me that communicating through these channels can improve customer experiences, provide timely and accurate resolutions, and facilitate a culture of complacency, belligerence, apathy, and exploitation among lazy and petulant users.

That went off the rails quickly.

Don’t get me wrong, I undeniably see the benefit of these services. But the ubiquitous availability of helpdesks, instant chat services, and support lines has only fed the growing sense of entitlement that pervades today’s information-seeking behaviours. Clients, customers, and library patrons expect to get exactly what they want with minimal effort expended. A certain subset of these users will simply expect that any complicated work is done for them—and often become belligerent when they aren’t catered to immediately.

This isn’t exactly a new problem. Unruly clients and surly customers have always had problems with customer service structures, and users with a certain predisposition are always going to cause issues. But the anonymity afforded by IM chat services adds a layer of abstraction that suddenly ‘permits’ abandoning social norms, and it quickly becomes acceptable to act like a petulant child or a raving lunatic.

I realise I’m being hyperbolic here, but technology is not a panacea. Creating a chat or IM service to support users will certainly open new avenues of support. But time and time again I encounter people who have no sense of online etiquette or decorum; people who treat anything that happens online as a free-for-all where they are entitled to act however they please, and to exploit and take advantage of anyone willing to put up with it.

Chat and IM services are great for the conscientious user; they often provide that magical experience of meeting and exceeding expectations. For the lazy, angry, or mindlessly indulgent user they’re nothing more than another service that implicitly owes them something or can be exploited.

And for the diligent operator? These foul exploiters do nothing more than ruin it for everyone else.

Can you Digg it?

Curation, Aggregation, and Web 2.0

Tools for curating, sorting, and managing web content usually take the form of social aggregators such as Digg or Reddit. Curation here is not careful selection by a trained expert, but rather the weighted consensus of the masses promoting or up-voting content they find notable.

Web 2.0—nay, the entire information profession—has a problem: the barriers to information creation and storage have fallen in recent years. This has resulted in the amount of information on the Web proliferating beyond all expectations. Finding the right information among the endless supply of trivial and irrelevant data has become almost impossible. The rational response would be to trust our curation to trained professionals, able to sift through this wealth of information and categorise it on the merits of accuracy and quality.

Instead, popular aggregators and the wisdom of crowds have emerged as the arbiters of qualitative merit on the Web.

There is a very real risk that the Web—the most powerful source of knowledge available—is mislabelling, misrepresenting, and misplacing important data, unable to distinguish it from the unfiltered noise of the masses. We have trusted the most important resource in human history to the collective rule of enthusiastic amateurs.

This pollution of data threatens to erode and fragment any real information stored on the Web. Users have come to rely on the anonymous and amorphous ‘rest of the Web’ as their authoritative filter. Content aggregators remix information drawn from multiple sources and republish it free of context or editorial control. These aggregated opinions of the masses are vulnerable to misinformation, as users have too much control and too little accountability. The risk of aggregating information is the risk of privileging the inaccurate, banal, and trivial over the truth.

Digg.com, founded in 2004, was the first notable aggregator of Web 2.0 content. Voting content up or down is the core of the site: ‘digging’ and ‘burying’ material, respectively, based on contributors’ input. This supposedly democratic system allows content of merit to be promoted and displayed. But it assumes that all opinions and user ratings are equally valuable and relevant in determining merit.
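
In miniature, the mechanic is nothing more than summing anonymous votes, with no weighting for expertise. The data below is invented, and Digg’s real ranking formula was never public:

```python
# Toy Digg-style scoring: rank is the raw sum of +1 ("digg") and
# -1 ("bury") votes. Invented data; no expertise weighting anywhere.

stories = {
    "obscure but accurate report": [+1, +1, -1],
    "sensational rumour": [+1] * 40 + [-1] * 5,
}

front_page = sorted(stories, key=lambda s: sum(stories[s]), reverse=True)
print(front_page[0])  # the rumour wins on raw enthusiasm
```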

The collective judgement of a group—the clichéd ‘wisdom of the crowds’—can be an effective measure of certain types of quantitative data. Called upon to guess the number of jellybeans in a jar, the aggregated guesses of a thousand contributors will produce a relatively accurate figure. However, if that same group were called upon to assess the value of a news story, their opinions would not represent a collective truth about the value or merits of the piece. The voting process of Digg or Reddit is transparent and instant, and causes contributors to cluster around popular opinions, promoting sensationalism and misinformation. Content that grabs the attention of users is quickly promoted and rises to be seen by more users, regardless of its accuracy.
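
The jellybean half of that claim is easy to check in a few lines. With invented figures, a thousand noisy individual guesses average out remarkably close to the truth:

```python
import random

# Simulated jellybean jar: individual guesses scatter widely around
# the true count, but their mean lands near it. Figures are made up.

random.seed(1)
true_count = 750
guesses = [random.gauss(true_count, 200) for _ in range(1000)]

crowd_estimate = sum(guesses) / len(guesses)
print(round(crowd_estimate))  # close to 750, though most individuals are far off
```

No such convergence exists for the news-story case, because there is no underlying quantity for the individual errors to cancel around.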

The momentum of a popular story is exponential: the more users see something, the more popular it becomes—exposing it to even more users. The infinite shelf space and shelf life of the Web mean that once a piece of information has seen any exposure it is almost impossible to control. A lie can instantly spread across the Web on the zeal of its promoters, and be cross-referenced by a dozen news aggregators. Lies become widespread and pollute enough aggregation sites that they become the valid—supposedly authoritative—result of any Google search on the topic. The wisdom of the crowds is fickle, and closer to a mob mentality; it is impossible to aggregate the crowd’s wisdom without aggregating its madness as well. After all, there is a fine line between the wisdom of crowds and the ignorance of mobs.
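
That compounding exposure can be modelled as a simple ‘rich get richer’ loop, where each new reader clicks a story with probability proportional to its existing views. This is a purely illustrative toy, not a model of any real aggregator:

```python
import random

# Preferential-attachment toy: an early lead compounds because
# visibility begets clicks, regardless of accuracy. Illustrative only.

random.seed(7)
views = [5, 1, 1, 1]  # story 0 starts with a small head start

for _ in range(10_000):
    total = sum(views)
    r = random.uniform(0, total)
    cumulative = 0
    for story, v in enumerate(views):
        cumulative += v
        if r <= cumulative:
            views[story] += 1
            break

print(views)  # the early leader usually ends up with the lion's share
```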

However, non-trivial and important content is still being created, promoted, and viewed on the Web, and aggregated information services do capture these notable pieces of data in their trawling. In practice an old problem remains: time and effort must be manually expended to sort the real information from the useless noise. Exactly the sort of time and effort that professional curators, librarians, and information professionals were traditionally employed to expend.

Digital media theorist Andrew Keen, in his book The Cult of the Amateur (2007), likens the community of Web 2.0 to evolutionary biologist T.H. Huxley’s humorous theory that infinite monkeys at infinite typewriters would eventually create a masterpiece such as Shakespeare’s. Keen sees this infinite community of empowered amateurs as undermining expertise and destroying content control on the Web. He argues that their questionable knowledge, credentials, biases, and agendas mean they are incapable of guiding the public discourse of the Web with any authority at all.

Another perspective comes from the 1986 book Amusing Ourselves to Death, in which media critic Neil Postman theorised that public discourse was being eroded by the onslaught of the media. He frames the media in terms of the dystopian scenario offered by T.H. Huxley’s grandson—the novelist Aldous Huxley—in Brave New World, and compares it to the similar dystopia of George Orwell’s 1984:

 

‘There are two ways by which the spirit of a culture may be shrivelled. In the first—the Orwellian—culture becomes a prison. In the second—the Huxleyan—culture becomes a burlesque’ (Postman, 1986, p.155).

 

In one dystopia, Orwell feared those who would deliberately deprive us of information; in another, Huxley feared those who would give us so much information that the truth would be drowned in a sea of irrelevance.

And the culture of Web 2.0 is essentially realising Huxley’s dystopia. It is cannibalising the content it was designed to promote, and making expert opinions indistinguishable from those of amateurs.

User-generated content is creating an endless digital wasteland of mediocrity: uninformed political commentary; trivial home videos; indistinguishable amateur music; and unreadable poems, essays, and novels. This unchecked explosion of poor content is devaluing the work of librarians, knowledge managers, professional editors, and content gatekeepers. As Keen suggests, ‘What is free is actually costing us a fortune. By stealing away our eyeballs, the blogs and wikis are decimating the publishing, music, and news-gathering industries that created the original content these Websites “aggregate”’ (Keen, 2007, p. 32).

In a world with fewer and fewer professional editors or curators, knowing what and whom to believe is impossible. Because so much of the Web’s user-generated content is posted anonymously, or under pseudonyms, nobody knows who its real authors are.

No one is being paid to check their credentials or evaluate their material on wikis, aggregators, and collaboratively edited websites. The equal voice afforded to amateurs and experts alike has devalued the role of experts in controlling the quality and merit of information. So long as information is aggregated and recompiled anonymously, everyone is afforded an equal voice. As Keen dramatically states, ‘the words of wise men count for no more than the mutterings of a fool’ (2007, p. 36).

We need professional curation of the internet now more than ever. We need libraries and information organisations to embrace the idea of developing collections that include carefully evaluated and selected web resources that have been subject to rigorous investigation. Once upon a time we relied on publishers, booksellers, and news editors to do the sorting for us. Now we leave it to an anonymous user who could be a marketing agency hired to plant corporate promotions; an intellectual kleptomaniac, copy-pasting others’ work together and claiming it as their own; or, as Keen fears, a monkey.

Without professional intervention, the future is a digital library where all the great works of human history sit side-by-side with the trivial and banal under a single, aggregated category labelled ‘things’. And we would have no-one to blame but ourselves.

Microblogging and Me

There is no such thing as a tool that is good even if used without consideration. Social media, microblogging, and corporate communications platforms are no exception to this. That being said, they are a powerful way to flatten hierarchies and open up the conversation within an organisation.

My previous employer, a major web-hosting company, was heavily invested in creating an open, integrated communications system within the company. With offices in a number of locations around Australia and the world, there was often a significant disconnect between all but the most closely integrated departments. To combat this isolation, the organisation rolled out Yammer across the company. Yammer, for those unfamiliar with the platform, is more or less a Facebook news-feed clone for closed, internal use. Much like familiar social media platforms, Yammer invites users to post, comment, follow discussions, and share links and such like.

Because the organisation had not followed through with a comprehensive internal communications policy for Yammer, the results were mixed. The posting quickly turned into inane, trivial, and mundane minutiae: ‘The coffee pot on level 5 is empty’, ‘Lol who turned out the lights’, and ‘Woo! Go accounts. More sales!’… you get the idea. There were some flashes of inspired thinking on the service, such as the CFO opening up a forum for discussing summer reading titles relevant to business and technology, which invited a rare opportunity to speak candidly (about books!) with the managing directors of a multinational corporation. These opportunities to make my voice heard were few and far between, but I welcomed the fact that such a conversation would not have been possible without a tool like Yammer.

Ultimately, the problem with corporate microblogging and social feeds is one of restraint and management. Unchecked, they become yet another source of information bloat and distraction. Over-regulated, they become a cork-board for posting internal PR releases.

Organisations, take note! You should start hiring internal social media moderators and curators to better direct, manage, and encourage the use of these platforms in your organisation. I’m certain that my peers and I would welcome the challenge!

 

Too Much Information

Web 2.0 pundit and theorist Andrew Keen writes in his book Digital Vertigo (2012):

Instead of making us happier and more connected, social media’s siren song—the incessant calls to digitally connect, the cultural obsession with transparency and openness, the never-ending demand to share everything about ourselves with everyone else—is, in fact, both a significant cause and effect of the increasingly vertiginous nature of twenty-first-century life.

The inconvenient truth is that social media, for all its communitarian promises, is dividing us, rather than bringing us together (p. 67).

There’s a great deal of wisdom in what Keen is saying. The overwhelming wealth of information available online lends itself to a perverse culture of obsessive over-sharing and digital exhibitionism. Yet the ideals of transparency and openness have to be weighed against the alternative: constructing a carefully limited, curated persona online that is completely disingenuous.

Ultimately, either end of the spectrum is still driving us towards an online culture that is divided, fragmented, and essentially at odds with itself.

So what’s the middle ground? What balance can there be between honestly engaging in a rich, participatory culture online and protecting our individual privacy and identity?

For my own part, I choose to present myself online fully and absolutely as a professional. Anything relevant to my professional development, career aspirations, and written work is funnelled into the same set of linked channels. I keep a unified identity across media platforms (@mjjfeeney on Twitter; www.mjjfeeney.com on this, my blogging domain; /mjjfeeney/ as my Facebook username, etc.). Since our online identities span so many platforms today, I feel that presenting a consistent set of values and sharing limits across each platform is vital. I would hate for someone who follows me on Twitter to discover this blog and be disoriented by an overabundance of personal content.

Keeping this consistency about what we’re sharing, and where, is vital. What you put online will be found, no matter where you think it’s hidden away. So make sure it’s something you’d be willing to share in *any* of your other channels of communication.