Tag Archives: library

Games, Gaming, and Gamification – Part 1

This is the first in a series of posts about Games, Gamification, and Libraries.

There’s a certain stubborn snobbery that associates gaming with children, and therefore assumes that adults who engage in meaningful play-based activities must be somehow juvenile too. It’s a huge generalisation, but there’s a pervasive unpleasantness demonstrated by a certain narrow-minded breed of professional who ‘objectively’ rules out games and gamification in a completely arbitrary way that is never explained, defended, or evaluated.

And that’s hugely frustrating!

To derail from the broader discussion for a moment, I’d just like to contextualise my soap boxing a little. I’m pretty savvy when it comes to games and gaming—in fact, if Gladwell’s litmus test that ‘10,000 hours of an activity makes you an expert’ holds, then I’ve certainly covered the requirement a few times over. Just this past week I participated in the Indie Speed Run 48-hour game jam—an excellent idea in the middle of a solid block of university assessment. The jam tasked my team and me with the complete design and development of a game in just 48 hours, with a random set of elements to include and adhere to.


Immersing myself in a rapid-fire development environment for a whole weekend left me asking some of the high-level design questions that are fundamental to all sorts of manifestations of ‘games’.

These core principles of play design—whether they appear in a 2D side-scroller or a library catalogue—ask the same sorts of questions about attention, motivation, and engagement. In taking on the mantle of a game designer I had to ask myself these questions in order to find meaningful verbs that describe play-based activities beyond the mundane tasks being performed.

Take Trove’s text correction initiative for example. Here we have a dull set of actions such as laborious information parsing and data entry. But, it’s packaged in such a way that we see it as exploration, discovery, and competition. Checking and correcting OCR’d articles becomes a quixotic race-to-the-top of who can demonstrate the highest level of commitment. Sure, it’s a highly insular community. But, within the space provided by Trove an entire currency of reputation has grown up around the dubious honour of correcting the most articles.

This isn’t profoundly high-level game design; but it works. The layers of progression and incentives give a positive feedback loop that rewards commitment and engagement. But, bootstrapping game systems onto existing mechanics is always going to be inherently flawed.
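The mechanic is simple enough to sketch. Here’s a minimal, hypothetical model of a Trove-style correction leaderboard—the tier names and thresholds below are entirely my own invention, not Trove’s actual system:

```python
from collections import defaultdict

# Hypothetical progression tiers -- these names and thresholds
# are my own invention, not Trove's actual system.
TIERS = [(0, "Novice"), (100, "Corrector"), (1000, "Veteran"), (10000, "Legend")]

class Leaderboard:
    """A toy reputation system: corrections earn rank and visibility."""

    def __init__(self):
        self.corrections = defaultdict(int)

    def record_correction(self, user, lines_fixed=1):
        # Every corrected line feeds the loop: more corrections,
        # higher rank, more visible reputation, more motivation.
        self.corrections[user] += lines_fixed

    def tier(self, user):
        # A user holds the highest tier whose threshold they have passed.
        score = self.corrections[user]
        current = TIERS[0][1]
        for threshold, name in TIERS:
            if score >= threshold:
                current = name
        return current

    def top(self, n=3):
        # The race-to-the-top: rank users by total corrections.
        return sorted(self.corrections.items(), key=lambda kv: -kv[1])[:n]
```

Notice that nothing here measures the *quality* of a correction—the loop rewards volume and visibility, which is exactly why bolting it onto existing mechanics is flawed.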

This is no ‘ground-up’ design. It’s simply re-contextualising something that already works and trying to make it fun and engaging. It’s really just expressing an existing system through new mechanics.

Play shouldn’t feel like work, and there is a wealth of work that needs to be contributed to libraries. Social data such as tagging, reviewing, recommending, relating, and classifying records enriches a catalogue in an amazing way. But slapping the veneer of a game onto the catalogue to ‘trick’ people into doing this work feels disingenuous, and smacks of exploitation.

Really, it’s a fundamental question of putting content into a game, rather than bootstrapping a game onto content.

Serious games have an element of mastery: your engagement and fun come from a progressive, skill-based unlocking of content. Gamification without meaningful mechanics might as well be a workplace KPI that just tracks your threshold for filling arbitrary quotas to achieve recognition.


A Delicious Mess

Following on from my soap-boxing about the state of content curation online, I’ve set up a Delicious feed, available here and embedded in this blog under the ‘What’s Interesting?’ tab at the top.

I’ll be collating and curating interesting articles about disruptive innovation, information futures, and just plain good writing in this feed, so stay tuned for more of that!

Can you Digg it?

Curation, Aggregation, and Web 2.0

Tools for curating, sorting, and managing web content usually take the form of social aggregators such as Digg or Reddit. The act of curating is not one of careful selection by a trained expert, but rather the weighted consensus of the masses promoting or up-voting content they find notable.

Web 2.0—nay, the entire information profession—has a problem: the barriers to information creation and storage have fallen in recent years, and the amount of information on the Web has proliferated beyond all expectations. Finding the right information among the endless supply of trivial and irrelevant data has become almost impossible. The rational response would be to entrust our curation to trained professionals, able to sift through this wealth of information and categorise it on its merits of accuracy and quality.

Instead, popular aggregators and the wisdom of crowds have emerged as the determining values of qualitative merit on the Web.

There is a very real risk that the Web—the most powerful source of knowledge available—is mislabelling, misrepresenting, and misplacing important data, and being unable to distinguish it from the unfiltered noise of the masses. We have trusted the most important resource in human history to the collective rule of enthusiastic amateurs.

This pollution of data threatens to erode and fragment any real information stored on the Web. Users have come to rely on the anonymous and amorphous ‘rest of the Web’ as their authoritative filter. Content aggregators remix information drawn from multiple sources and republish it free of context or editorial control. These aggregated opinions of the masses are vulnerable to misinformation, as users have too much control and too little accountability. The risk of aggregating information is the risk of privileging the inaccurate, banal, and trivial over the truth.

Digg.com, founded in 2004, was the first notable aggregator of Web 2.0 content. Voting content up or down is the core of the site: respectively ‘digging’ and ‘burying’ material based on contributors’ input. This supposedly democratic system allows content of merit to be promoted and displayed. But it assumes that all opinions and user-generated regulations are equally valuable and relevant in determining merit.

The collective judgements of a group—the clichéd ‘wisdom of the crowds’—can be an effective measure of certain types of quantitative data. Called upon to guess the number of jellybeans in a jar, a thousand contributors’ aggregated guesses would produce a relatively accurate figure. However, if that same group were called upon to assess the value of a news story, their opinions would not represent a collective truth about the merits of the piece. The voting process of Digg or Reddit is transparent and instant, and causes contributors to cluster around popular opinions—promoting sensationalism and misinformation. Content that grabs the attention of users will quickly be promoted and rise to be seen by more users, regardless of its accuracy.
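The jellybean case is easy to demonstrate: independent numeric errors cancel out when averaged. A quick illustrative simulation (the jar size and error spread here are arbitrary numbers of my own choosing):

```python
import random

random.seed(42)
TRUE_COUNT = 750  # the actual number of jellybeans in the jar

# A thousand independent guesses: each one noisy, but unbiased.
guesses = [TRUE_COUNT + random.gauss(0, 200) for _ in range(1000)]
crowd_estimate = sum(guesses) / len(guesses)

# Individual guesses are off by hundreds, yet the aggregate
# lands within a few percent of the truth.
print(round(crowd_estimate))
```

The trick only works because the guesses are independent. The moment voters can see each other’s votes—as on Digg or Reddit—opinions become correlated, and averaging amplifies the early consensus rather than cancelling the noise.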

The momentum of a popular story is exponential: the more users see something, the more popular it becomes—exposing it to even more users. The infinite shelf-space and shelf-life of the Web means that once a piece of information has seen any exposure it is almost impossible to control. A lie can instantly spread across the Web, driven by the zeal of its promoters, and be cross-referenced by a dozen news aggregators. Lies become widespread and pollute enough aggregation sites that they become the valid—supposedly authoritative—result of any Google search on the topic. The wisdom of the crowds is fickle and closer to a mob mentality; it is impossible to aggregate their wisdom without aggregating their madness as well. After all, there is a fine line between the wisdom of crowds and the ignorance of mobs.
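That compounding momentum can be modelled as simple preferential attachment. This toy simulation is entirely illustrative—real aggregator ranking algorithms are far more complex—but it shows how popularity begets visibility even when merit plays no part:

```python
import random

random.seed(1)

# Ten stories of identical merit start with one vote each.
votes = [1] * 10

# Each new user votes for a story with probability proportional
# to its current vote count: the more visible a story is,
# the more likely it is to attract the next vote.
for _ in range(10000):
    pick = random.choices(range(10), weights=votes)[0]
    votes[pick] += 1

# A handful of early leaders absorb most of the attention.
print(sorted(votes, reverse=True))
```

Run it and the vote counts diverge wildly, despite every story being equally ‘good’—early exposure, not accuracy, decides the winners.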

However, non-trivial and important content is still being created, promoted, and viewed on the Web; aggregated information services do capture these notable pieces of data in their trawling. In practice an old problem remains: time and effort must be manually expended to sort out the real information from the useless noise. Exactly the sort of time and effort that professional curators, librarians, and information professionals were traditionally employed to expend.

Digital media theorist Andrew Keen, in his book The Cult of the Amateur (2007), likens the community of Web 2.0 to evolutionary biologist T.H. Huxley’s humorous theory that infinite monkeys on infinite typewriters would eventually create a masterpiece such as Shakespeare. Keen sees this infinite community of empowered amateurs as undermining expertise and destroying content control on the Web. He argues that their questionable knowledge, credentials, biases, and agendas mean they are incapable of guiding the public discourse of the Web with any authority at all.

Another perspective comes from the 1986 book Amusing Ourselves to Death, in which media critic Neil Postman theorised about the erosion of public discourse by the onslaught of the media. He frames the media in terms of the dystopian scenario offered by Huxley’s grandson—science fiction author Aldous Huxley—in the novel Brave New World, and compares it to the similar dystopia of George Orwell’s 1984:


‘There are two ways by which the spirit of a culture may be shrivelled. In the first—the Orwellian—culture becomes a prison. In the second—the Huxleyan—culture becomes a burlesque’ (Postman, 1986, p.155).


In one dystopia, Orwell feared those who would deliberately deprive us of information; in another, Huxley feared those who would give us so much information that the truth would be drowned in a sea of irrelevance.

And, the culture of Web 2.0 is essentially realising Huxley’s dystopia. It is cannibalising the content it was designed to promote, and making expert opinions indistinguishable from that of amateurs.

User-generated content is creating an endless digital wasteland of mediocrity: uninformed political commentary; trivial home videos; indistinguishable amateur music; and unreadable poems, essays, and novels. This unchecked explosion of poor content is devaluing the work of librarians, knowledge managers, professional editors, and content gatekeepers. As Keen suggests, ‘What is free is actually costing us a fortune. By stealing away our eyeballs, the blogs and wikis are decimating the publishing, music, and news-gathering industries that created the original content these Websites “aggregate”’ (Keen, 2007, p.32).

In a world with fewer and fewer professional editors or curators, knowing what and whom to believe is impossible. Because much of the user-generated content of the Web is posted anonymously—or under pseudonyms—nobody knows who the real author of much of this self-generated content is.

No one is being paid to check their credentials or evaluate their material on wikis, aggregators, and collaboratively edited websites. The equal voice afforded to amateurs and experts alike has devalued the role of experts in controlling the quality and merit of information. So long as information is aggregated and recompiled anonymously, everyone is afforded an equal voice. As Keen dramatically states, ‘the words of wise men count for no more than the mutterings of a fool’ (2007, p.36).

We need professional curation of the internet now more than ever. We need libraries and information organisations to embrace the idea of developing collections that include carefully evaluated and selected web resources, subjected to rigorous investigation. Once upon a time we relied on publishers, booksellers, and news editors to do the sorting for us. Now we leave it to anonymous users: it could be a marketing agency hired to plant corporate promotions; it could be an intellectual kleptomaniac, copy-pasting others’ work together and claiming it as their own; or it could be, as Keen fears, a monkey.

Without professional intervention, the future is a digital library where all the great works of human history sit side-by-side with the trivial and banal under a single, aggregated category labelled ‘things’. And we would have no-one to blame but ourselves.

Giving Back: Personal Learning Networks

At the first Hackers Conference in 1984—convened in the wake of Steven Levy’s Hackers: Heroes of the Computer Revolution—Stewart Brand articulated the unique ethos of the hacking subculture with his now-famous claim that ‘information wants to be free’:

On the one hand information wants to be expensive, because it’s so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. (Brand, 1985, p. 49) 

But, information is not made solely of ephemeral ideas; it is made of ideas and work. Sadly, I’m guilty of taking advantage of the altruism of others and exploiting that good work selfishly. Having recently explicitly examined the idea of a ‘Personal Learning Network’ (PLN) I realised that I’m a ‘drain’ on my localised PLN: I take more than I put back.

I have embedded myself in a community of people with like interests, whom I use as a sort of social filter to surface the most relevant information. I actively scour blogs, Twitter feeds, and other social data to skim off the cream of the crop of trends coming down in the LIS sector. But even when I have something to contribute, I remain largely silent. This isn’t a particularly admirable state of affairs, and I aim to rectify it in the coming months.

First things first, I’m going to get some fresh, original content up on this blog. I’m really fascinated by social aggregation and the transformation of controlled taxonomies into organic folksonomies, so stay tuned for some of that in the near future.

Also, I’ve started repurposing some of my writing from 2010 onwards on the evolution of digital publishing, price, and piracy into snazzy blog-sized chunks.

So, I come hat in hand to my PLN, offering these small morsels of content to repay the free-ride I’ve been taking so far. It’s not much, but it’s a start.

Enlightened self-interest

Workshop 7: Moving out into the profession

Quite the capstone to a long, challenging, and illuminating semester!

Socialising and mingling with my peers in class one final time this semester reminded me that we were strangers only a few months ago. Over the course of the semester we had already taken meaningful steps towards building our personal networks and professional relationships. Chatting with the guests over cheese and wine and listening to their presentations drove home for me just how vital communication and sociability are to the success and vitality of my career.

Being a responsible, capable, and proficient information professional is not something I can manage on my own. My learning and development isn’t taking place in a vacuum; it is being guided, shaped, and informed by the people around me. Finding my place in this profession was always going to be a by-product of connecting with people and building a meaningful understanding of how I fit into the larger context of this community.

Although it is unnerving to face the challenges of finding meaningful employment in a field that is so dynamic, I am confident that I have started to develop enough self-knowledge to understand how to thrive and prosper in the face of these challenges. I feel like I have moved beyond my initial embarrassment and reticence of not knowing or understanding some things, and I have embraced the fact that I am still a beginner in this expansive field. I feel I am finally comfortable with letting go of some perfect ideal of my future employment, and embracing change as it comes.

Documenting these first steps I’ve taken into a larger, professional world has contributed to my own understanding of who I am, and what I want to do. It has illuminated for me what sort of jobs, environments, and types of work will make me happiest and helped me to develop career goals that reflect what I value most.

On reflection, the entire program of INN634 instilled in me the guiding principle of loyalty to my own professional goals. My own personal integrity and commitment to moving forward is not about finding an employer willing to take me on, but rather about developing a practical set of skills and capabilities that guarantee I will be employable for life.

I’m going to cap this off with a quote that really evokes what I felt was the core theme of this program:

“It is not the strongest of the species who survive, nor the most intelligent, but the ones most responsive to change.”

— attributed to Charles Darwin

Seeing the Gorilla

Workshop 5: Evidence Based Practice – Being research led

(This reflection was originally written on April 28, 2013)

The challenge of being more open and receptive to the situation around us is that sometimes we just don’t see the gorilla. When it came to understanding evidence based practice, I certainly missed what was right in front of me.

Ann Gillespie spoke to us at length about how we can fundamentally adopt a way of thinking about the profession that interrogates data to provide meaningful evidence.

Evidence based practice in librarianship is not something I had any experience with prior to this workshop. Wrangling concrete data from a more holistic, qualitative process was eye-opening and gave me pause. I had never considered that there were ways of empirically measuring or evaluating success that didn’t just draw on dry statistics and analytics.

Calling on intuition and reflection to approach and measure library practices is fascinating. I was really drawn in by the theoretical frameworks espoused by Andrew Booth and Jonathan Eldredge in the literature, and was able to contextualise how a practice cribbed from medicine and science could be applicable to LIS decision making.

I was genuinely surprised by the dissent in the classroom about the ‘woolly’ lack of value that EBP has, as it seemed self-evident to me almost immediately how useful and valuable this sort of approach could be. Identifying and iterating on best-practices is always going to be the way forward, and qualitative data can provide a wealth of context for interpreting and applying these practices.

I tend towards more quantitative, theoretical, research-oriented projects, and adopt a healthy level of pragmatism and detachment about the process. But, a more holistic approach to evidence based practice seems to present a much more dynamic, practical way of addressing the day-to-day challenges of the LIS profession.

If anything, this workshop highlighted for me that intuition and reflection are too valuable as professional tools to be ignored. Evidence based practice is certainly something I intend to adopt in my further studies.