All website audiences are different. Web publishers need to segment that audience first by how often they visit the site. Engaging your fans -- those who visit your site the most -- will be far more productive than simple, generic advertising.
Counting stats only go so far. We have to add value for our customers by providing context to our statistics and packaging information in ways people can understand. This helps people make decisions.
Bleeding Edge, Thomas Pynchon, page 322:
...Maxine recalls that Heidi has a collection of decorated dollar bills, which she regards as the public toilet wall of the U.S. monetary system, carrying jokes, insults, slogans, phone numbers, George Washington in blackface, strange hats, Afros and dreadlocks and Marge Simpson hair, lit joints in his mouth, and speech-balloon remarks ranging from witty to stupid.
"No matter how the official narrative of [Sept. 11, 2001] turns out," it seemed to Heidi, "these are the places we should be looking, not in newspapers or television but at the margins, graffiti, uncontrolled utterances, bad dreamers who sleep in public and scream in their sleep."
"Products and services that don't solve people's problems, or don't solve them at a competitive cost, fail," writes Abbie Griffin in her chapter Obtaining Customer Needs for Product Development [pdf] in the book The PDMA Handbook of New Product Development.
To build these products or services, organizations must understand exactly what people need and make a product to match those needs, Griffin writes. There's a second way: give people something they didn't know they needed; create a new technology that solves problems people may or may not know they have. "Although teams can be successful this way, it is a far riskier path to success."
The real goal is learning how to talk to customers: asking about their needs and adapting your products or services to meet those needs. This is far harder than it sounds. First, Griffin says, many firms speak only generally to their customers, so they come back with only general needs. "The key is to talk to customers using appropriate methods and asking questions customers can answer and that can provide information useful for developing new products," she writes.
Another important lesson: Only ask customers to provide information they can, in fact, provide. A whole list exists of things customers cannot help you with:
1. They can't tell you what exactly you should develop. Features, looks, etc. should not come from customers, but from your team.
2. Customers also cannot provide reliable information on what they have not experienced or do not know firsthand. If they aren't ebook users, don't ask them to weigh in on a new reader.
On the other hand:
Customers can provide reliable information about the things with which they are familiar and knowledgeable or that they directly have experienced. A customer can provide the subset of the needs information that is relevant to them in an overall category of customer problems. They can articulate the problems and needs they have. They can indicate the products and features they currently use to meet their needs, where these products fall short of solving their problems, and where they excel. The only way that a full set of customer needs for a product area can be obtained is by coming to understand the detailed needs of a number of customers, each of whom contributes a piece of the needs information.
The bottom line: Understand customer problems and how these problems impact how they perform their job or live their lives.
This means asking customers very in-depth questions about how they obtain or acquire and use products and services to fulfill these particular needs. Ask them why they use something that way. Get as much of the context as you can. You need to ask patrons why they did something, what worked well and what did not work well.
"Please tell me about the last time you searched for a book to borrow."
A 2010 study investigated the differences between for-profit and nonprofit journalism models, news practices and citizen knowledge. Is there a link between less-informed people and for-profit news?
In a study of how news stories become popular through two social networking sites, researchers compare the spread of trending stories to a contagious disease moving through the general population.
Written in 2010, the study [pdf] by Kristina Lerman and Rumi Ghosh at the USC Information Sciences Institute investigated Twitter and Digg, which both allow people to follow others and receive their updates in a moving status feed. On Twitter, users can post 'updates' on anything (up to 140 characters), but the social network has become a great tool for the curation of news stories.
Digg is more centered on news stories: users can vote for (or 'digg') stories and allow other users to view them.
Like other media on the web -- music, videos, Wikipedia pages -- news stories, the researchers argue, are heavily long-tailed: a very few items become extremely popular, a few are moderately popular, and the vast majority go largely unnoticed. To me, this is the study's golden kernel: evidence that these social networks are important tools for forwarding popular media stories, but less suited to illuminating lesser-known ones.
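That long-tail pattern falls naturally out of a simple rich-get-richer process. Here is a minimal sketch -- my own illustration (a Polya urn), not Lerman and Ghosh's model:

```python
import random

def simulate_votes(n_stories=1000, n_votes=50000, seed=42):
    """Rich-get-richer sketch: each new vote picks a story with probability
    proportional to (its current votes + 1), implemented as a Polya urn.
    Illustrates a long-tailed popularity distribution."""
    rng = random.Random(seed)
    votes = [0] * n_stories
    urn = list(range(n_stories))  # one token per story, plus one per past vote
    for _ in range(n_votes):
        story = rng.choice(urn)
        votes[story] += 1
        urn.append(story)  # future votes now favor already-popular stories
    return sorted(votes, reverse=True)

votes = simulate_votes()
# The top 1% of stories grabs a share of votes far above the 1% a uniform
# split would give, while the median story sits below the mean of 50 votes.
print(sum(votes[:10]) / sum(votes))
```

The urn trick keeps each vote O(1): a story's chance of being picked grows with the number of tokens (past votes) it has accumulated, which is exactly the feedback loop that concentrates attention on a handful of stories.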
The mean number of votes per Digg story runs about 50, while a few stories get 400 votes and a very small minority reach as many as 1,500. Stories move through Digg much faster than through Twitter -- largely because the network of friends is so dense. Also, once a story receives many Diggs, it moves to the front page, allowing unsubscribed participants to vote it up. Because Digg received (in 2010) about 20,000 story submissions each day, only fast-rising stories can remain on the front page.
Twitter, a much larger network than Digg, nevertheless allows stories to spread -- though not at Digg's speed. Twitter has no voting function, so the researchers counted each time a story was retweeted by a user and thus seen by all of his or her followers, letting the potential contagion spread deeper into the network. The researchers also found that stories take longer to catch on in Twitter, but keep spreading steadily as they age.
From the researchers:
In spite of the similarities, there are quantitative differences in the structure and function of social networks on Digg and Twitter. Digg networks are dense and highly interconnected. A story posted on Digg initially spreads quickly through the network, with users who are following the submitter also likely to follow other voters. After the story is promoted to Digg's front page, however, it is exposed to a large number of unconnected users. The spread of the story on the network slows significantly, though the story may still generate a large response from the Digg audience. The Twitter social network is less dense than Digg's, and stories spread through the network slower than Digg stories do initially, but they continue spreading at this rate as the story ages and generally penetrate the network farther than Digg stories.
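The density effect in that passage can be illustrated with a toy independent-cascade simulation. The graph model and parameters below are my own assumptions, not the paper's, but they show how a denser follower network lets a story reach far more users:

```python
import random

def cascade_size(n, avg_degree, p_share, seed):
    """Toy independent-cascade model: a story starts at node 0, and each
    user who sees it shares it with each follower with probability p_share."""
    rng = random.Random(seed)
    # Random follower graph: each user's activity is seen by `avg_degree` others.
    followers = {u: rng.sample(range(n), avg_degree) for u in range(n)}
    seen, frontier = {0}, [0]
    while frontier:
        nxt = []
        for user in frontier:
            for f in followers[user]:
                if f not in seen and rng.random() < p_share:
                    seen.add(f)
                    nxt.append(f)
        frontier = nxt
    return len(seen)

# Average over several seeds: a 'Digg-like' dense graph vs a 'Twitter-like' sparse one.
dense = sum(cascade_size(2000, 20, 0.1, s) for s in range(20)) / 20
sparse = sum(cascade_size(2000, 5, 0.1, s) for s in range(20)) / 20
print(dense, sparse)
```

With 20 followers and a 10 percent share probability, each viewer passes the story to about two others (supercritical), so cascades routinely sweep the graph; with 5 followers the expected branching is 0.5 and most cascades die within a few hops.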
Information Contagion: An Empirical Study of the Spread of News on Digg and Twitter Social Networks
Kristina Lerman and Rumi Ghosh
Websites earn revenue in three basic ways: charge a subscription, sell advertising, or do both.
Higher subscription prices mean fewer viewers. However, those viewers may be the most attractive to advertisers because they have the income to pay the high subscription price. But they may also have the lowest tolerance for adverts.
That's from a seminal 2003 paper [pdf] on the potential impacts of various advertising and subscription models for electronic media.
The authors investigate the efficacy of four subscription and advertising strategies:
- Limited access, which generally drives away low-income (or minimally interested) users because they refuse to pay the subscription price.
- Free access, which generally drives away high-income users because advertisements are too common.
- Pooling, where all customers are offered the same price and advertising level.
- Separating, where customers segment themselves by self-selecting their preferred subscription price.
The authors argue that a pure advertiser-supported or a pure pay-per-view model will be successful only under certain parameters -- mostly if the site appeals only to the high- or low-income audience segments.
Media sites will most likely pick up advertisers with higher-end products -- those that produce a higher margin. But if these advertisers are fighting for the lower-income (or low-interest) segment, there will be a lower probability of successful advertising. Advertising is a consumer annoyance, the authors write, but it can be used to market different products to different audience segments "if different consumer segments are willing to pay different prices for the opportunity to watch the program at different levels of advertising."
Thus, the authors lean heavily toward separating, allowing the site to segment the audience. For one, providing users with a choice to opt in or opt out of different amounts of advertising is easy to do on electronic sites. Secondly, advertisers will pay a higher price when they can better target different sets of users.
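Why separating can dominate either pure strategy is easy to see with back-of-the-envelope numbers -- mine, not the paper's:

```python
# Hypothetical audience: 100 high-income users will pay $10/month for an
# ad-free tier; 900 low-income users won't subscribe but will tolerate ads
# worth $2/month each to advertisers. (Illustrative numbers only.)
high_n, high_fee = 100, 10.0
low_n, ad_value = 900, 2.0

pure_subscription = high_n * high_fee      # limited access: low-income users leave
pure_ads = low_n * ad_value                # free access: ads drive high-income users away
separating = pure_subscription + pure_ads  # each segment self-selects its tier

print(pure_subscription, pure_ads, separating)  # 1000.0 1800.0 2800.0
```

The separating menu simply collects both revenue streams at once, which is why it works only if each segment actually prefers its own tier (high-income users must dislike ads at least as much as the $10 fee).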
They conclude by looking toward a future where content and advertising are married:
It is likely that the separating pricing strategies that we discuss here will become more pervasive with time since contemporary electronic media make it possible to implement them much more efficiently than was possible in the past. The implications are higher profits for media providers, more choices for customers, and more targeted advertising for advertisers.
Advertising versus pay-per-view in electronic media
Ashutosh Prasad, Vijay Mahajan and Bart Bronnenberg
International Journal of Research in Marketing 20 (2003): 13-30
If publishers that run advertising on their sites allow consumers to pay to opt out of it, how much does this impact (1) publisher profits and (2) user annoyance?
A quick ground rule: consumers don't like online advertisements. Tåg cites studies from DIGDIA claiming that 44 percent of consumers would pay $3.99 per program to watch television without ads, while 17 percent would pay $2.99 for the same show with advertisements.
Thus, media firms have a balancing act: they must weigh revenues from advertisers against the wishes (and potential revenue) of their consumers. If those consumers are relatively profitable to advertisers, the media firm should not allow people to opt out of adverts. If the site has very high-quality (or hard-to-find) content, allowing users to opt out of adverts would "cannibalize sales from the fee-based alternative," Tåg writes.
When the media firm is fee-based, he writes, a higher-quality product implies higher profits. A higher number of patrons has the same effect: if the user base grows, media firms can charge more for ad space. However, on sites that allow opt-out (at a fee), those who don't pay the fee often see more advertising, increasing advertising annoyance.
In the end, Tåg argues:
... [A] business model of allowing consumers to pay to remove advertisements is more likely to be optimal when the quality of the media firm’s product is low, the annoyance of advertisements is high, and advertisers’ profit margins are low. Further, the media firm may benefit from an increase in the annoyance of advertisements. Advertising quantity is higher when consumers can pay to remove advertisements compared to when they can’t and advertising quantity may be increasing in the annoyance of advertisements (this offers a testable implication of the model). Shifting to a business model of allowing consumers to pay to remove advertisements harms consumers but benefits advertisers and the media firm. The impact on total welfare is ambiguous.
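Tåg's testable implication -- that the firm may benefit from more annoying ads once consumers can pay to remove them -- can be sketched numerically. The uniform annoyance distribution, the grid search and the parameter values are my assumptions, not the paper's model:

```python
def best_profit_per_user(annoyance_cap, ad_revenue, n=1000):
    """Consumers' ad annoyance is spread uniformly on (0, cap]. A consumer
    pays the removal fee whenever her annoyance exceeds it; otherwise she
    watches ads worth `ad_revenue` to the firm. Grid-search the best fee."""
    annoyances = [(i + 1) * annoyance_cap / n for i in range(n)]
    best = 0.0
    for step in range(1, 201):
        fee = step * annoyance_cap / 200
        payers = sum(1 for a in annoyances if a > fee)
        best = max(best, fee * payers + ad_revenue * (n - payers))
    return best / n

mild = best_profit_per_user(annoyance_cap=1.0, ad_revenue=0.5)    # ads mildly annoying
harsh = best_profit_per_user(annoyance_cap=5.0, ad_revenue=0.5)   # ads very annoying
print(mild, harsh)
```

Without the opt-out option, profit per user is just the ad revenue of 0.5 regardless of annoyance; with it, the optimal fee rises with annoyance and so does profit, matching the direction of Tåg's result.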
Another day of disasters and social media. (Here is the first.)
Per a recent paper [pdf] by three researchers, crowdsourcing or groupsourcing information after disasters is important, mostly because coordination plays such a crucial role. While social media -- especially platforms like Ushahidi -- offers a lot of people an easy way to provide information from a variety of channels (phones, email, Twitter, etc.), it does not have an "inherent coordination capability" to share information and resources among many different groups.
This lack of collaborative spaces is social media's greatest drawback, the authors argue. "Microblogs and crisis maps do not provide a mechanism for apportioning response resources, so multiple organizations might respond to an individual request at the same time," they write.
Second, crowdsourcing applications do not provide all the information relief efforts need: geo-tag accuracy can be questionable, duplicate reports remain a problem and so does fraudulent reporting, they say.
However, crowdsourcing applications are very quick, providing near real-time information. (An Ushahidi platform was running from servers in the United States just two hours after the Haitian earthquake.) Report verification is still a work in progress. While groups can automatically filter through reports -- photos, videos and comments -- to verify their veracity, some platforms only provide small-scale verification for their maps.
The authors look forward:
Crowdsourcing integrated with crisis maps has been a powerful tool in humanitarian assistance and disaster relief. Future crowdsourcing applications must provide capabilities to better manage unstructured messages and enhance streaming data...Furthermore, metrics that gauge the success of crowdsourcing and coordination systems for disaster relief will be designed and leveraged for system evaluation and improvement.
Harnessing the Crowdsourcing Power of Social Media for Disaster Relief [pdf]
Huiji Gao and Geoffrey Barbier, Arizona State University
Rebecca Goolsby, US Office of Naval Research
In a 2005 paper [pdf], Karrie Peterson and James A. Jacobs point out that 84 percent of government information can only be accessed through web servers managed by federal agencies. In the digital age, only 14 percent of federal information is now placed in depository libraries.
The authors point out this is part of an effort -- sometimes concerted, sometimes not -- of the government to keep some of its information solely stored on its servers:
- To remain out of competition with commercial interests like publishers or other repackagers of government information;
- Or, to hold it back from wide distribution (via depository libraries) because it is cheaper to do so.
In light of the 16-day government shutdown in October, Crystal Vicente poses an interesting question:
If [Peterson's and Jacobs'] figures are correct, and eighty-four percent of government information is only available through government controlled websites, then what of the access to information during situations such as the recent government shutdown, when government databases were completely inaccessible?
Vicente's paper is from LLRX.com: Law & technology resources for legal professionals:
The problem with having most of the government's information available only online, as the shutdown harshly demonstrated, is the lack of access to information when controlled by a single entity. The convenience of an electronic government has shifted the control of information from the numerous institutions formerly charged with providing the information to the public – the depository libraries – to a single entity: the federal government.
A growing number of cases exist of people and governments using social media during disasters to help coordinate relief and provide information: Haiti's 2010 earthquake, the 2011 Queensland floods in Australia and the 2011 Japan earthquake, to name a few.
Neil Dufty, in a paper, offers several ways social media could better prepare communities for disaster response.
For example, using social media to:
- Inform the community of risks, and how agencies and organizations are planning to manage them
- Engage people to help prepare for disasters
- Crowdsource information for emergency managers, which can be done before, during or after an event
- Communicate warnings and other information
- Coordinate responses and recovery
- Conduct post-event learning
A 2011 Congressional Research Service report investigates how social media may increase the public's ability to communicate with the government during a disaster, an important step in receiving and providing information. One drawback: during Hurricane Irene, residents experienced power outages lasting at least 48 hours. "[O]verreliance on the technology could be problematic under prolonged power outages," the report states. "Thus emergency managers and officials might consider alternative or backup options during extended power outages, or other occurrences that could prevent the use of social media."
With this in mind, CRS points to lessons learned and best practices for governments and other agencies using social media during disasters:
- Identify target audiences for the applications, such as civilians, nongovernmental organizations, volunteers, and participating governments;
- Determine appropriate types of information for dissemination;
- Disseminate information the public is interested in (e.g. what phase the incident is in, etc.);
- Identify any negative consequences arising from the application—such as the potential spread of faulty information—and work to eliminate or reduce such consequences.
Katy Kelly and Gwen Glazer, librarians with experience in media (Glazer, a former reporter and Kelly, a film producer), find themselves creating outreach materials for Cornell Libraries using social media platforms: Twitter, Facebook, FourSquare, etc.
They've collected their thoughts in a chapter of the book "The New Academic Librarian: Essays on Changing Roles and Responsibilities."
Regarding those social media accounts, here's a portion of their best practices, which can relate to any library or organization pushing messages via social media:
- Talk only when you have something to say
- Consider your (diverse) target audience. When students, professors, alumni, administrators and the like are all your audience, you must think about what is too granular and what is too broad to post.
- Don't obsess over assessment. "It's impossible to measure how private experiences with Facebook or any other social-media tool impact the future."
- Find substantive content that speaks to a larger goal
There's more, and it can be found here.
Chatbots are virtual agents that answer questions using a heavy dose of artificial intelligence, which allows them to understand natural language. These virtual agents are popular on websites with heavy customer service needs.
For academic libraries facing budget and staff cuts, chatbots present an interesting proposition. A chatbot could help direct users who:
- May not be inclined to ask a human for help
- Are navigating complicated websites or information networks
This comes from a paper by Michele L. McNeal and David Newyear in Library Technology Reports.
They provide an explanation of how chatbots could work:
The process of searching databases or catalogs usually requires the user to compose a search for the information needed, conforming to the structures and language defined by the target data source. A chatbot using NLP [Natural Language Processing], on the other hand, allows users to pose a question as they would to another human being. The responsibility of locating the needed information shifts from the user to the programmer of the chatbot. The chatbot designer creates a structure that leads the user through a question-and-answer dialogue to discover the information needed and to provide it. This process can also address the problems created by library terminology or jargon with which the user may not be familiar. In addition, regular review of the chatbot’s conversation logs allows the designer to monitor the types of questions and the terminology used to pose them and to update the responses provided by the chatbot and the language it recognizes. This is why the chatbot can be particularly convenient and helpful to those patrons who are least familiar with the library and its services.
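The question-and-answer structure the authors describe can be sketched with simple keyword rules. The patterns and canned answers below are hypothetical and a far cry from full NLP, but they show where the designer's responsibility lives:

```python
import re

# Hypothetical patterns and answers -- a sketch of the structure McNeal
# and Newyear describe, not their actual implementation.
RULES = [
    (r"\b(borrow|check ?out|loan)\b",
     "You can borrow items at the circulation desk or through your account page."),
    (r"\b(hours|open|close)\b",
     "The library is open 8am-10pm on weekdays."),
    (r"\b(database|journal|article)\b",
     "Try the A-Z database list; tell me a subject and I can narrow it down."),
]
FALLBACK = "I'm not sure -- would you like to chat with a librarian?"

def reply(utterance: str) -> str:
    """Match a natural-language question against keyword rules, shifting
    the work of translating library jargon onto the bot's designer."""
    text = utterance.lower()
    for pattern, answer in RULES:
        if re.search(pattern, text):
            return answer
    return FALLBACK

print(reply("Where can I check out a book?"))
```

Reviewing which utterances fall through to the fallback is the log-monitoring loop the quote mentions: each unmatched question suggests a new pattern or synonym to add.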
The writers lay out the advantages: Chatbots can personalize user service; they simplify patron access to library sites; they don't get flustered when people swear at them; and, they are anonymous.
German libraries seem to be at the forefront of this movement, employing chatbots at a few websites for nearly ten years.
At the Bibliothekssystem Universität Hamburg, Stella has been answering questions since 2004.
Askademicus has been at the Technische Universität Dortmund since about that time.
Since 2006, INA has been working on the Bücherhallen Hamburg website.
In the US, the University of Nebraska-Lincoln Libraries have been testing Pixel since 2010.
Here is that page: http://pixel.unl.edu
Introducing Chatbots in Libraries
Michele L. McNeal and David Newyear
Library Technology Reports
Volume 49, Number 8 / November/December 2013
"The academic library sits at the intersection of university instruction, services, and resources," writes Eric Ackerman in a recent paper. But the traditional methods to assess academic libraries are no longer relevant. Stakeholders openly question the relationship between libraries, student learning and research. And, libraries must also prove they provide a return on investment during troubled budget times.
Assessment was often handled in-house at libraries, so its results were relevant only to librarians (and their bosses). Ackerman suggests that academic libraries must measure in ways meaningful to these outside stakeholders -- especially those who write checks or provide accreditation.
From the paper, Program Assessment in Academic Libraries: An Introduction for Assessment Practitioners:
[M]ost library assessment is developed in relative isolation from the larger higher education community. It has been driven mainly by internal library needs, and has resulted in metrics and reporting protocols that are meaningful primarily to other librarians. Instead, these measures need to be meaningful not only to librarians but also to other stakeholders, both on and off campus.
Ackerman provides a list of problem areas.
- Information literacy: can someone please define it -- and tell us how it helps students?
- Services: customer service instruments are easy to understand. However, Ackerman points out, customer services change nowhere more often than in an online, digital environment.
- Reference services: similar to the information literacy problem above, good luck trying to quantify reference questions and how much their answers helped patrons.
- Web stats: vendor-supplied web logs don't provide much granularity, nor do they capture the "why" -- the purpose -- of the use, Ackerman writes.
Here is a link to the paper again: Program Assessment in Academic Libraries: An Introduction for Assessment Practitioners