Tag Archives: reflection

Gamification Part 2: A Plan for Play

So, I’ve soapboxed about what’s wrong with gamification, but how would I actually do something good? What would *I* do to design library games for a school that actually work?

Anyone who’s listened to me talk about this for any length of time will know I’m a big fan of Amazon’s ecosystem. And their Kindle FreeTime initiative—pictured here—is an amazing leap forward in integrating gamification in a meaningful way with kids’ reading practices.

There’s no universal panacea for getting gamification to work across different contexts. But the FreeTime idea of allowing parents or teachers to set individually customised goals, and to reward them appropriately, really resonates with me.

Rewarding students for borrowing and returning books simply invites cheating, or gaming, the system—a delicious irony I realise. Integrating meaningful tracking metrics into a digital-reading experience is a far more robust approach to fusing play and engagement with ordinary reading activities. There’s an element of mastery to the experience too! Tracking and improving reading speed in a session or over time gives readers goals to meet and surpass. Tackling longer, harder books allows them to see their growth over time, and the achievement is a real, measurable thing.
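To make ‘measurable’ concrete, here’s a minimal sketch (in Python, with invented names and numbers purely for illustration—no real eReading platform’s API is being described) of the kind of reading-speed metric such a system could track across sessions:

```python
from dataclasses import dataclass

@dataclass
class ReadingSession:
    """One sitting with a book: how much was read, and for how long.
    A hypothetical record type, not any real platform's data model."""
    words_read: int
    minutes: float

def words_per_minute(session: ReadingSession) -> float:
    """Reading speed for a single session."""
    return session.words_read / session.minutes

def improvement(sessions: list[ReadingSession]) -> float:
    """Change in speed from the first session to the latest:
    a simple, measurable goal a reader can try to beat."""
    return words_per_minute(sessions[-1]) - words_per_minute(sessions[0])

history = [ReadingSession(1800, 10.0), ReadingSession(2400, 10.0)]
print(words_per_minute(history[-1]))  # 240.0
print(improvement(history))          # 60.0
```

The point is that the reward hooks into something the reader actually did—words genuinely read over time—rather than a gameable proxy like a borrowing count.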

This of course is entirely dependent on using eReading devices. But devices are becoming so ubiquitous among children these days that there’s no reason to hold back on this idea. Moreover, issues of attention and engagement are critical in young students, and reaching out to them on the platforms and devices they already use is key to getting them on board with literacy. If they see reading—and the rewards for reading—as just another thing they do on the devices they already use, then traditional reluctance to pick up a book may diminish.

I’m not in favour of ‘tricking’ students into reading or getting involved in the library. But, coming to them on their own terms and saying “Hey, I get that you like your devices; I get that you like games; did you know we offer a way for you to access library content on your device in a way that acknowledges and tests your reading skills?” seems like a reasonable approach that is low-key enough to at least be worth a shot.

To launch a pilot program like this, I’d be in favour of custom-developing unique apps that reflect the character of a given school environment. Every school—and every student—is different, and there’s not going to be a one-size-fits-all approach to designing activities that work for every library.

Failing all that, I’m keeping an eye on what Amazon do next with the rollout of FreeTime. When the new Paperwhite Kindles launch in October I’m certainly expecting a game-changer!


Camera Obscura

My reticence to jump on the filter bandwagon has finally crumbled.

Rejoice, readers! After a brief struggle with the arcane vagaries of WordPress shortcode and some bootstrapped PHP, I managed to wrangle together an effective way to pull my Instagram feed into this blog (and only broke everything completely once). There’s a lesson in here somewhere about keeping your plugins up to date and remembering to switch from visual layouts to manual text entry…but who has time for these things?

My wacky japes and capers surviving post-graduate study are now available in full colour (well, mostly sepia).

I’ve since removed the gallery from the core of this blog, but I’m still out there on Instagram!

You Probably Would Download a Car

The issue of copyright and ownership of media is one of the most widely disseminated issues on the Internet. A proliferation of free and pilfered material has opened the floodgates to an epidemic where digital is synonymous with free and theft is no longer perceived as a crime. The entrenched indifference of today’s youth to copyright has created a grim situation for content creators and distributors alike.

Clearly, consumers love free. Personal preference fades away at zero cost. The psychology of free makes perfect sense because of the certainty that free affords: there is no risk to free. But, if the collective trend of opinion dictates that digital goods are too cheap to matter, then no cost equates to no value; and no one can be expected to do quality work if it is simply going to be devalued, stolen, remixed, and re-purposed by others.

However, it’s not all so bleak! Grassroots movements like the Creative Commons License are affording producers of content some measure of control over the proliferation of the worthwhile work that they are doing.

Equitable access to media has never been greater, and libraries are able to assess and include vast amounts of new media every day under the umbrella of their collections thanks to the robust flexibility of CC. But, there’s a ticking time-bomb lurking beneath this unchecked exuberance.

3D printing and open-source fabrication—sometimes referred to as ‘making’ or ‘maker’ pursuits—are already paving the way to profoundly transformative uses for technology. Thingiverse is a hub for capturing the raw creations of users from around the globe, and feeding the blueprints back into a community of amateur and professional printers, tinkerers, and makers. Content is posted, traded, and printed under a shared understanding of the terms of use that allow users to distribute their work under Creative Commons licensing.

This could have an amazing effect on libraries and other information repositories that choose to build the necessary infrastructure for physically realising the potential of making. Artwork could be downloaded and fabricated at the press of a button. Replacement or custom parts for repairing devices would become universally accessible. Physical trinkets, ephemera, and miniatures could become as commonly shared and distributed online as songs, stories, and paintings.

And then somebody has to go and ruin it for everyone by uploading the blueprints for copyrighted materials such as Disney action figures or Matchbox cars.

Taken one step further, there are printable models for complex machines available online, including fully-functional automobiles and guns. This isn’t a pipe dream either. A cursory trip to Google will pull up more stories than I’d care to link here about open-source firearms, 3D-printed cars, and the unfortunate precedents they’ve set.

And there’s the rub: the robust flexibility of Creative Commons to encapsulate all sorts of content means it has to include content that can be weaponised. What library is going to risk allowing a minor access to amazing technology that can be used to print a firearm?

And so the powerful tool that is CC–when applied to physical fabrication–has been placed in the firing line. Government intervention and regulation of content is anathema to what CC represents, but it’s the only solution currently on the table for stemming the tide of objectionable objects. Regulating CC would impose rigidity on a fundamentally fluid system, and erode the pillars on which the Internet’s maturing approach to copyright has been built.

But what other choice is there?

Facing the Facts

Google+ failed to dislodge Facebook in any meaningful way. Google is fully integrated into my workflow, I make use of a wide variety of Google online services, I manage multiple accounts, and use Google’s MX records to manage my domain’s email.

I use Google for a huge portion of my online activities, but I don’t use Google+.

The service is amazingly slick. It’s attractive and engaging. It’s more logically laid out than Facebook and harnesses the tremendous power of Google’s backend to do amazing, magical things like recognising, identifying, and tagging images automatically, and offering best-in-class features like realtime video chat via Hangouts.

Sure, it doesn’t do quite as many things as Facebook does, but what it does do it does well. In fact, I’d be hard pressed to pick any single category where Facebook offers a superior experience.

So, why is Google+ a ghost town?

Because social networks are intrinsically valueless. The entire value of these networks is generated and propped up by the connections you have within the system—and nobody I interact with is invested in the Google+ ecosystem.

Being the best at something doesn’t matter if the audience is entrenched elsewhere. Everyone would probably prefer to use Google+: it’s one less account you need to juggle, it’s better integrated into your devices, and the forthcoming Google Glass will hook it directly to your face. But we’d be crazy to move to Google+ on our own if none of the people in our lives moved too.

But, in a Heller-esque case of circular logic, everybody likes Google+, everybody agrees it’s great, and nobody moves.

The Map Defines the Territory

Information architecture (IA) is a fascinating and compelling part of information organisation. Search is not a solved problem. Indeed, searchability and findability are among the most nagging problems facing information professionals. Findability—or lack thereof—is one of the defining elements of user experience, and a source of endless frustration: poor findability is among the worst usability problems on the web. User needs and goals are obstructed by systemic failures in findability, and a website that is not easy to navigate is functionally useless.

My own experiences with technology have taught me that developers do not necessarily have the perspective to design usable systems. What they may think is an effective system is often designed to support only an aspirational, best-case scenario for navigation. What looks elegant from a coder’s perspective may not provide an optimal—or even acceptable—experience.

Examining the council websites of the cities of Melbourne, Hobart, and Perth highlighted some of these elements. Melbourne and Hobart both demonstrated integrated, sensible designs that kept the same standard layout for the overall navigation of the website, and loaded pertinent content into a dynamically updated frame. Perth, however, insisted on having new subsets of menus and navigation for different sections, many of which had no visible ties to the overall structure and obfuscated access to content. Intuitive architecture should provide universal navigation with no ‘orphaned’ avenues of browsing. Every page of the website should be able to function as a landing page, with a completely accessible hierarchy of menus demonstrating where the user is, how they got there, and where they can go. As with physical architecture, information architecture relies on firm foundations to build anything stable. Logically relevant parent categories, proper meta-tagging, and well-represented taxonomies form the bedrock on which any information-rich website should be built.
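The ‘where am I, how did I get here’ idea can be sketched in a few lines of code. Assuming a hypothetical site taxonomy stored as a simple parent-pointer map (the page names here are invented for illustration), every page can derive its own breadcrumb trail from the same single hierarchy—no orphaned sections possible:

```python
# Hypothetical site taxonomy: each page maps to its parent category.
# A real CMS would store this in its database; these names are invented.
TAXONOMY = {
    "home": None,
    "services": "home",
    "waste-collection": "services",
    "green-waste": "waste-collection",
}

def breadcrumb(page: str) -> list[str]:
    """Walk parent pointers from a page back to the root, producing
    the 'where am I / how did I get here' trail for that page."""
    trail = []
    while page is not None:
        trail.append(page)
        page = TAXONOMY[page]
    return list(reversed(trail))

print(breadcrumb("green-waste"))
# → ['home', 'services', 'waste-collection', 'green-waste']
```

Because the trail is derived from one shared taxonomy rather than hand-built per section, every page automatically lands the user inside the same navigable hierarchy.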

It is unforgivable for large entities such as a city council to build sloppy websites. All websites—especially those in the public sector—should be doing the right thing by default. Ease-of-use should be rigorously tested through prototyping and wire-framing, and consistency of style and standards should be paramount. Coders hate redundancy, but having logical replication of data facilitates how users really make use of websites—often in a meandering, scattershot approach.

The core thing I have taken away from information architecture is that it has to be—like a building—designed from the ground up. Every element must have a strong foundation of accessibility and functionality that relates to how people are actually using something, rather than designing with your own plan in mind and then instructing people how to use it. IA relies wholly on taking an objective approach to design that isn’t self-interested. Developing good IA habits demands stepping outside your own perspective and engaging with the subjective experience of the user.

Shouting into the void

Instant messaging, ‘chat with a librarian’, and VoIP reference services all suggest a great paradigm shift in information services. Reaching out and engaging with clients through modern communication tools and directly answering queries sounds fantastic. What’s not to love?

My own support experiences (on the receiving end of queries) have shown me that communicating through these channels can be a boon to customer experience, provide timely and accurate resolutions, and facilitate a culture of complacency, belligerence, apathy, and exploitation among lazy and petulant users.

That went off the rails quickly.

Don’t get me wrong, I undeniably see the benefit of these services. But the ubiquitous availability of helpdesks, instant chat services, and support lines has only fed a growing sense of entitlement that is pervasive in today’s information seeking behaviours. Clients, customers, or library patrons expect to get exactly what they want with minimal effort expended. A certain subset of these users will simply expect that any complicated work is done for them—and often become belligerent when they aren’t catered to immediately.

This isn’t exactly a new problem. Unruly clients and surly customers have always had problems with customer service structures. Users with a certain pre-disposition are always going to cause issues. But the anonymity afforded by IM chat services gives a layer of abstraction that suddenly ‘permits’ abandoning social norms, and it quickly becomes acceptable to act like a petulant child or a raving lunatic.

I realise I’m being hyperbolic here, but technology is not a universal panacea. Creating a chat or IM service to support users will certainly allow new avenues of support. But, time and time again I encounter people who have no sense of online etiquette or decorum. People who treat anything that happens online as a free-for-all where they are entitled to act however they please, and exploit and take advantage of anyone willing to put up with it.

Chat and IM services are great for the conscientious user; they often provide that magical experience of meeting and exceeding your expectations. For the lazy, angry, or mindlessly indulgent user they’re nothing more than another service that implicitly owes them something or can be exploited.

And for the diligent operator? These foul exploiters do nothing more than ruin it for everyone else.

Can you Digg it?

Curation, Aggregation, and Web 2.0

Tools for curating, sorting, and managing web content usually take the form of social aggregators such as Digg or Reddit. The act of curating is not one of careful selection by a trained expert, but rather the weighted consensus of the masses promoting or up-voting content they find notable.

Web 2.0—nay, the entire information profession—has a problem: the barriers to information creation and storage have fallen in recent years. This has resulted in the amount of information on the Web proliferating beyond all expectations. Finding the right information among the endless supply of trivial and irrelevant data has become almost impossible. The rational response would be to trust our curation to trained professionals, able to sift through this wealth of information and categorise it based on merits of accuracy and quality.

Instead, popular aggregators and the wisdom of crowds have emerged as the determining values of qualitative merit on the Web.

There is a very real risk that the Web—the most powerful source of knowledge available—is mislabelling, misrepresenting, and misplacing important data, and being unable to distinguish it from the unfiltered noise of the masses. We have trusted the most important resource in human history to the collective rule of enthusiastic amateurs.

This pollution of data poses a threat of eroding and fragmenting any real information stored on the Web. Users have come to rely on the anonymous and amorphous ‘rest of the Web’ as their authoritative filter. Content aggregators remix information drawn from multiple sources and republish them free of context or editorial control. These aggregated opinions of the masses are vulnerable to misinformation as users have too much control and too little accountability. The risk of aggregating information is the risk of privileging the inaccurate, banal, and trivial over the truth.

Digg.com, founded in 2004, was the first notable aggregator of Web 2.0 content. Voting content up or down is the core of the site: respectively ‘digging’ and ‘burying’ material based on contributors’ input. This supposedly democratic system allows content of merit to be promoted and displayed. But this assumes that all opinions and user-generated ratings are equally valuable and relevant in determining merit.

The collective judgements of a group—the clichéd ‘wisdom of the crowds’—can be an effective measure of certain types of quantitative data. Called upon to guess the number of jellybeans in a jar, the aggregated guesses of a thousand contributors would provide a relatively accurate figure. However, if that same group were called upon to assess the value of a news story, their opinions would not represent a collective truth about the value or merits of the piece. The voting process of Digg or Reddit is transparent and instant, and causes contributors to cluster around popular opinions—promoting sensationalism and misinformation. Content that grabs the attention of users will quickly be promoted and rise to be seen by more users, regardless of its accuracy.
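The jellybean case is easy to demonstrate with a toy simulation. Assuming (and it is an assumption of the model, not a fact about any real crowd) that individual guesses are independent and centred on the truth, the average of a thousand noisy guesses lands far closer to the true count than a typical individual does:

```python
import random

random.seed(1)  # fixed seed so the toy run is reproducible

TRUE_COUNT = 1000  # jellybeans actually in the jar (invented figure)

# Model each guess as the truth plus large, unbiased noise.
guesses = [TRUE_COUNT + random.gauss(0, 300) for _ in range(1000)]

crowd_estimate = sum(guesses) / len(guesses)
average_individual_error = sum(abs(g - TRUE_COUNT) for g in guesses) / len(guesses)

print(round(crowd_estimate))            # close to the true count
print(round(average_individual_error))  # much larger than the crowd's error
```

The trick only works because the errors are independent and point in no particular direction—exactly the condition that breaks down when voters can see each other’s votes and cluster around popular opinions.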

The momentum of a popular story is exponential: the more users see something, the more popular it becomes—exposing it to even more users. The infinite shelf-space and shelf-life of the Web means that once a piece of information has seen any exposure it is almost impossible to control. Instantly a lie can spread across the Web by the zeal of its promoters, and be cross-referenced by a dozen news aggregators. Lies become widespread and pollute enough aggregation sites that they become the valid—supposedly authoritative—result of any Google search on the topic. The wisdom of the crowds is fickle and closer to a mob mentality; it is impossible to aggregate their wisdom without aggregating their madness as well. After all, there is a fine line between the wisdom of crowds and the ignorance of mobs.
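That feedback loop is also easy to caricature in code. In this deliberately simplified model (a rich-get-richer sketch, not a description of any real aggregator’s algorithm), each new reader votes for a story with probability proportional to the votes it already has—and early leads snowball:

```python
import random

random.seed(0)  # fixed seed for a reproducible toy run

votes = [1] * 10  # ten stories start perfectly level

# Each of 10,000 arriving readers picks a story to up-vote,
# weighted by its current vote count: visibility breeds visibility.
for _ in range(10_000):
    total = sum(votes)
    r = random.uniform(0, total)
    cumulative = 0
    for i, v in enumerate(votes):
        cumulative += v
        if r <= cumulative:
            votes[i] += 1
            break

votes.sort(reverse=True)
print(votes)  # typically a few runaway leaders and many stragglers
```

Nothing about a story’s accuracy appears anywhere in the model—which is precisely the point: popularity compounds on popularity alone.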

However, non-trivial and important content is still being created, promoted, and viewed on the Web; aggregated information services do capture these notable pieces of data in their trawling. In practice an old problem remains: time and effort must be manually expended to sort out the real information from the useless noise. Exactly the sort of time and effort that professional curators, librarians, and information professionals were traditionally employed to expend.

Digital media theorist Andrew Keen, in his book The Cult of the Amateur (2007), likens the community of Web 2.0 to evolutionary biologist T.H. Huxley’s humorous theory that infinite monkeys on infinite typewriters would eventually create a masterpiece such as Shakespeare. Keen sees this infinite community of empowered amateurs as undermining expertise and destroying content control on the Web. He argues that their questionable knowledge, credentials, biases, and agendas mean they are incapable of guiding the public discourse of the Web with any authority at all.

Another perspective on this comes from the 1986 book Amusing Ourselves to Death, wherein media critic Neil Postman theorised about the erosion of public discourse by the onslaught of the media. He frames the media in terms of the dystopian scenario offered by Huxley’s grandson—science fiction author Aldous Huxley—in the novel Brave New World, and compares it to the similar dystopia of George Orwell’s 1984:

 

‘There are two ways by which the spirit of a culture may be shrivelled. In the first—the Orwellian—culture becomes a prison. In the second—the Huxleyan—culture becomes a burlesque’ (Postman, 1986, p.155).

 

In one dystopia, Orwell feared those who would deliberately deprive us of information; in another, Huxley feared those who would give us so much information that the truth would be drowned in a sea of irrelevance.

And the culture of Web 2.0 is essentially realising Huxley’s dystopia. It is cannibalising the content it was designed to promote, and making expert opinions indistinguishable from those of amateurs.

User-generated content is creating an endless digital wasteland of mediocrity: uninformed political commentary; trivial home videos; indistinguishable amateur music; and unreadable poems, essays, and novels. This unchecked explosion of poor content is devaluing the work of librarians, knowledge managers, professional editors and content gatekeepers. As Keen suggests, ‘What is free is actually costing us a fortune. By stealing away our eyeballs, the blogs and wikis are decimating the publishing, music, and news-gathering industries that created the original content these Websites “aggregate”’ (Keen, 2007, p.32).

In a world with fewer and fewer professional editors or curators, knowing what and whom to believe is impossible. Because much of the user-generated content of the Web is posted anonymously—or under pseudonyms—nobody knows who the real author of much of this self-generated content is.

No one is being paid to check their credentials or evaluate their material on wikis, aggregators, and collaboratively edited websites. The equal voice afforded to amateurs and experts alike has devalued the role of experts in controlling the quality and merit of information. So long as information is aggregated and recompiled anonymously, everyone is afforded an equal voice. As Keen dramatically states, ‘the words of wise men count for no more than the mutterings of a fool’ (2007, p.36).

We need professional curation of the internet now more than ever. We need libraries and information organisations to embrace the idea of developing collections that include carefully evaluated and selected web resources that have been subject to rigorous investigation. Once upon a time we relied on publishers, booksellers, and news editors to do the sorting for us. Now, we leave it to anonymous users: it could be a marketing agency hired to plant corporate promotions; it could be an intellectual kleptomaniac, copy-pasting others’ work together and claiming it as their own; or it could be, as Keen fears, a monkey.

Without professional intervention, the future is a digital library where all the great works of human history sit side-by-side with the trivial and banal under a single, aggregated category labelled ‘things’. And we would have no-one to blame but ourselves.

Microblogging and Me

There is no such thing as a tool that is good even if used without consideration. Social media, microblogging, and corporate communications platforms are no exception to this. That being said, they are a powerful way to flatten hierarchies and open up the conversation within an organisation.

My previous employer—a major web-hosting company—was heavily invested in creating an open, integrated communications system within the company. With offices in a number of locations around Australia and the world, there was often a significant disconnect between all but the most closely integrated departments. To combat this isolation, the organisation rolled out Yammer across the company. Yammer, for those unfamiliar with the platform, is more-or-less a Facebook news-feed clone for closed, internal use. Much like familiar social media platforms, Yammer invites users to post, comment, follow discussions, and share links and such like.

Because the organisation had not followed through with a comprehensive internal communications policy for Yammer, the results were mixed. The posting quickly turned into inane, trivial, and mundane minutiae such as: ‘The coffee pot on level 5 is empty’, ‘Lol who turned out the lights,’ and ‘Woo! Go accounts. More sales!’…you get the idea. There were some flashes of inspired thinking on the service, such as the CFO opening up a forum for discussing summer reading titles relevant to business and technology, which offered a rare opportunity to speak candidly (about books!) with the managing directors of a multi-national corporation. These opportunities to make my voice heard were few and far between, but I welcomed the fact that such a conversation would not have been possible without a tool like Yammer.

Ultimately, the problem with corporate microblogging and social feeds is one of restraint and management. Unchecked, these platforms become yet another source of information bloat and distraction. Too regulated, and they become a cork-board for posting internal PR releases.

Organisations take note! You should start hiring internal social media moderators and curators to better direct, manage, and engage the use of these platforms in your organisation. I’m certain that my peers and I would welcome the challenge!


Too Much Information

Web 2.0 pundit and theorist Andrew Keen writes in his book Digital Vertigo (2012):

Instead of making us happier and more connected, social media’s siren song—the incessant calls to digitally connect, the cultural obsession with transparency and openness, the never-ending demand to share everything about ourselves with everyone else—is, in fact, both a significant cause and effect of the increasingly vertiginous nature of twenty-first-century life.

The inconvenient truth is that social media, for all its communitarian promises, is dividing us, rather than bringing us together (p. 67).

There’s a great deal of wisdom in what Keen is saying: the overwhelming wealth of information available online lends itself to a perverse culture of obsessive over-sharing and digital exhibitionism. Ideals of transparency and openness have to be weighed against the alternative—constructing a carefully limited, curated persona online that is completely disingenuous.

Ultimately, either end of the spectrum is still driving us towards an online culture that is divided, fragmented, and essentially at odds with itself.

So what’s the middle ground? What balance can there be between honestly engaging in a rich, participatory culture online, and protecting our individual privacy and identity?

For my own part, I choose to present myself as a professional fully and absolutely online. Anything relevant to my professional development, career aspirations, and written work is funnelled into the same set of linked channels. I keep a unified identity across media platforms (@mjjfeeney on Twitter; www.mjjfeeney.com on this, my blogging domain; /mjjfeeney/ as my Facebook username, etc.). Since our online identities span so many platforms today, I feel that presenting a consistent set of values and sharing limits across each platform is vital. I would hate for someone who follows me on Twitter to discover this blog and be disoriented by an overabundance of personal content.

I feel that keeping this consistency about what we’re sharing—and where—is essential. What you put online will be found, no matter where you think it’s hidden away. Make sure it’s something you’d be willing to share in *any* of your other channels of communication.