Friday, 27 June 2008

Project failure

Article on the BCS site citing reasons for project failure.

Thursday, 26 June 2008

ISKO event on information retrieval

Went along to some of the ISKO event on information retrieval today...

Brian Vickery was up first but unfortunately, I missed most of his talk. I did catch the last few minutes though where he asked some very pertinent questions:

  • What is the case for building classifications, thesauri and taxonomies? How does this relate to the needs of Communities of Practice?
  • Are the benefits of controlled retrieval languages strong enough to justify the effort and cost of creating/maintaining/using them?
  • Is there a growing need to harmonise or match terminologies?
  • What is the future for "universal" controlled languages and general classifications/ ontologies?

Next up was Stephen Robertson, giving a researcher's perspective. He pointed out that although web search engines have been very successful, other systems cannot say the same - perhaps because the extensive machine learning available to Google et al. just isn't feasible for a smaller setup. Robertson mentioned some useful sources of evidence for evaluating retrieval - notably click-throughs and "dwell time" (how long a user spends somewhere before returning to the search results). There is some rich data out there but it is also "noisy".

Last up was Ian Rowlands, who talked about the implications of the Google Generation report. He started with some context - insecurity around the power of the Google and Yahoo brands; devaluing of the "library" brand; the hypothesis that the younger generation is somehow different. He referred to various pieces of research including Carol Tenopir's long-standing survey of academics. The bottom line of the Google Generation report is that it is a myth - yes, there is a type of user behaviour which is comfortable online, but the Google Generation is not a homogeneous mass of people - "silver surfers" (another irritating term!) display the same characteristics, and there are also "digital dissidents" among the younger generations who are shunning technology. So the general message is to stop thinking of our users as fixed targets who fit some kind of stereotype. We need to understand user behaviour much better - in particular, online reading - but then, how much do we really understand about how people read/absorb information in print? How can we be sure that what we learn about online reading is peculiar to an online environment and isn't just typical of reading in whatever format?

Rowlands also suggested that we need to help users form "mental maps" of information - typically, when you walk into a library for a print resource, you have a reasonably good image of what you are expecting to find - the same can't be said of the web. There is a message for librarians here to help create easier access to information for users e.g. through less confusing terminology. Information literacy is key but research seems to suggest that unless individuals learn from a young age, the changes possible in user behaviour are more limited. There have been studies demonstrating a correlation between information literacy and academic grades.

Rowlands finished with a plea to understand our users better - stop thinking of them as one big mass which can be served by a one-size-fits-all solution, and learn from the commercial world, where customers are segmented and can follow a number of routes to information - though, I have to say, the commercial world doesn't always get it right either, and it has greater resources at its disposal.

Tuesday, 24 June 2008

Back in the library world

Have rejoined CILIP and had various bumph through today. The LMS study seems to have stirred things up a bit with some uncertainty about the way forward, but everyone seems to agree that sitting still isn't an option. Need to set aside time to read the study through again...

Friday, 20 June 2008

Is the web changing the way we think?

A nice story on BBC by Bill Thompson suggesting that the availability of small chunks of information on the Web is limiting our reading and thinking - now, I'll be the first to admit to a short attention span and it'd be lovely to use this as my excuse, but I'm not so sure...I think that a lot of people (maybe not the younger people so much) still print out anything which takes more than a couple of minutes to read, so I don't think we're doing all our reading on screen...and in a way, the Web has made it easier to discuss issues and be exposed to other people's opinions. But maybe there is something in the idea that we accept information from others without thinking too hard about its quality and validity...? Something to think about...

Thursday, 19 June 2008

Yet more snippets...

Computing, 19 June 08:
- news that the OECD has organised a meeting of Internet experts this week in Seoul. Topics for discussion include net neutrality and adoption of IPv6 (which would enable an almost limitless supply of IP addresses - a growing concern given the ubiquity of mobile devices)
- Janie Davies writes about the green agenda in academic IT services - no mention of JISC work here but does mention HEFCW's shared services initiative. Gloucestershire makes it into the Green League :-)
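As an aside, the scale difference between IPv4 and IPv6 is easy to check with a quick back-of-the-envelope calculation - a sketch in Python, where the numbers follow directly from the 32-bit and 128-bit address lengths:

```python
# IPv4 uses 32-bit addresses; IPv6 uses 128-bit addresses.
ipv4_total = 2 ** 32
ipv6_total = 2 ** 128

print(f"IPv4: {ipv4_total:,} addresses")            # ~4.3 billion
print(f"IPv6: {ipv6_total:.2e} addresses")          # ~3.4e38
print(f"IPv6 is {ipv6_total // ipv4_total:.1e} times larger")
```

Roughly 4.3 billion addresses versus 3.4 x 10^38 - which puts the "almost limitless" claim in perspective.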

Various snippets

Research Information - April/May 08:
- article by Sian Harris on peer review, referring to a recent report from Mark Ware Consulting on behalf of the Publishing Research Consortium - quotes a finding that 93% of academics disagreed with the statement that peer review is unnecessary. However, the report does note criticisms of the current approach to peer review, e.g. overloading of reviewers, time taken, methods used, bias of the single-blind method, and lack of guidance from editors. Open review is an alternative, but apparently not a popular one.
- article by Nadya Anscombe on changes to the peer review process across a number of neuroscience journals - the Neuroscience Peer Review Consortium (NPRC). The journals (22 currently) have agreed to share reviewers' comments thereby reducing the number of times a manuscript might be reviewed.
- article by John Murphy on Google Book Search - mentions the Partner Programme where Google works with publishers and the Library Programme where Google has worked with the Bodleian as well as Cornell, Princeton and Harvard. About 10,000 publishers are involved and 28 large libraries are supplying material. IPR is obviously an issue and lawsuits are underway - one area of uncertainty is orphan works although Google is tackling this by publishing only snippets.
- article by Tom Wilkie and Sian Harris on e-books. We've all been waiting a while now for e-books to really take off and the authors suggest that "despite this enthusiasm amongst researchers, however, there are formidable barriers to the wider acceptance of e-books" including file format (with XML emerging as the preferred standard); legacy file formats; effective multimedia support; archiving and preservation; standardising e-book information; pricing models; and understanding user behaviour. E-books have a lot of potential - we can do more with the content (e.g. translations) and enable users to build their own personal libraries - but like other types of content, our thinking still seems restricted by what we could achieve with paper. One concern is what the role of the librarian will be if they are no longer seen as the intermediary/gatekeeper for accessing books.
Research Information - June/July 08
- article by Nash Pal on multi-product platforms for e-products - as opposed to the current model where e-books and e-journals have developed along separate paths resulting in silos. Benefits to the user include uniform online experience; seamless search; unified access control; potentially lower management/maintenance costs. "... what is needed is an integrated front end supported by a single, comprehensive, content-agnostic set of admin tools to manage all content types".
- article by Jay Katzen on "collective intelligence" as a solution to the volume of information/data facing researchers. Katzen quotes recent research from Carol Tenopir - "Scientists now read 25% more articles from almost twice as many journals than they did 6 years ago". Essentially (although very much from a vendor perspective) the author proposes a combination of quality corpora, user-focused tools and collaborative space.
Information World Review - April 08
- Tracey Caldwell reports on Pfizer's attempt to make JAMA reveal confidential peer review documents as part of its legal case concerning its arthritis drugs Bextra and Celebrex - again, raises the question of open review
- ALPSP (Assoc Learned and Professional Society Publishers) agrees platform deal with MyiLibrary
- Peter Williams in his editorial: "Information professionals should put themselves at the heart of the current debate over payment models for information and content. As the information gatekeeper for their organisations, they exercise a major responsibility on a daily basis in deciding what information is paid for, the value of that information, and the subsequent return on investment"
- article by Tracey Caldwell on ebooks - noting that business models are still at an experimental stage. Quotes Mark Carden, senior VP at MyiLibrary: "paper and shipping account for only 5-10% of the cost of a book". Refers in some detail to JISC's eBooks Observatory project and CIBER's SuperBook project. Ebooks have potential in helping librarians provide access to knowledge free at the point of use - they can incorporate Web2.0 technologies such as social networking and tagging; they are easily updated; online chats with authors could add an interesting dimension; integration into workflow; and the idea of iChapters, whereby content can be purchased in chunks rather than as an entire monograph or collection. Also quotes Jay Katzen, from Science Direct: "...there needs to be a publisher paradigm shift so that more information is put in at the creation of content such as better tags". Mentions the Automated Content Access Protocol which will enable publishers to make content machine readable (semantic web?). Chris Armstrong is quoted: "Journals are more granular; access is to the article, which has an abstract, while access to and abstracts for e-books tend to be at the book level. Journals are also serials, so an access habit can be built up". A key early challenge is to tackle the issue of monitoring usage to inform future purchasing decisions.
- article by Michelle Perry on new business models for publishers. Mentions O'Reilly which looked at how tutors were using their titles online and came up with the idea of an online model that allowed them to design their own books for their courses. Apparently, Elsevier has developed a product to enable medics to search for diagnoses (???). David Worlock, from Outsell, highlights 3 areas publishers must grapple with to avoid being left behind: workflow, community, and vertical search.
Information World Review - May 08
- article by Kim Thomas on grey literature, reporting that regulations to mandate deposit of electronic material are in hand but unlikely to be implemented before Autumn 09. There is a hope that this regulation will allow the BL to harvest websites for grey literature. Refers to 2 projects part-funded by JISC: Manchester Uni's repository of Access Grid events; and King's repository of documents relating to committee meetings.
Information World Review - June 08
- news that OCLC members participating in Google Book Search will now be able to share their MARC records with Google, the idea being that if an individual finds a book through Google Book Search, they'll be able to drill down to find where the book is physically located
- article on open access in social sciences and humanities, reporting on the EU promoting OA through something called Action32 of the STM-based COST programme (Co-operation in the field of Scientific and Technical Research). There is increasing pressure from users to link to source data - it has been suggested that a useful first step might be to open up access to research already in the public domain.


March 08 issue of ITNow from BCS includes an article by John Tabeart, "Child's play", on innovation:
"Innovation occurs when two or more ideas, components, capabilities, or technologies are combined together in a novel way".

Tabeart recommends working with the following principles:
  • Establish broad rules
  • Provide raw materials
  • Lead by example
  • Keep an open mind
  • Encourage experiments
  • Learn from experience
  • Challenge conventional wisdom
  • Encourage collaboration
  • Celebrate success
  • Accept and understand failure
  • Liberate from the constraints of business as usual

Information literacy

FreePint includes a review of the LIS Show by Adrian Janes. He very neatly sums up two main themes to emerge from this year's event:

User empowerment

using Web2.0 technologies; wifi access; RFID

Information literacy

A very interesting overview of some work already underway (notably Sheffield and Bedfordshire) to improve the quality of discovery and to counter the seemingly widespread belief that "if it isn't on Google, it doesn't exist" (also refers to the recent RIN report on use of academic libraries and the Google Generation report). Peter Godwin (co-author of a CILIP book on information literacy and Library 2.0) is quoted as saying that in a digital world in which 'content has left the container', we as professionals have to adapt. Godwin refers to key principles he set out for 'Library 2.0':

  • Find out your users' changing needs
  • Believe in your users
  • Be rid of the culture of perfect
  • Become aware of emerging technologies.

There's also a reference to SCONUL's 7 Pillars of Information Literacy which I will take a look at when I have time.

FreePint has a related article in the same issue by Derek Law on digital natives, covering the issue of information literacy as well as the provenance of digital information and the role of the librarian:

"It is all too easy to see the prospect of an alliterate world in apocalyptic professional terms. Much better to recognise that repurposing our skills, particularly in the areas of building collections of born digital materials, providing trust metrics and kitemarking and teaching information literacy skills will be more prized than ever. The trick will be to ensure that our profession responds to this, rather than abandoning the field to others while we guard the gates of our paper based storehouses of knowledge."

JISC away day : part 2

Oh dear, it's taken me a while to finish writing up the away day ... I blame it on the email backlog which was waiting for me when our away day finished.

Anyway, the most useful session (for me) was on the 2nd day - on the new JISC IPR policy. I understand this is going to appear on the JISC web site soon. It's been developed as part of the IPR consultancy. Professor Charles Oppenheim talked us through the background and the key principles behind the policy.

It was also a useful refresher on some of the issues around IPR and the implications for JISC and its funded projects. Charles referred to the 4 reports produced as part of the consultancy.

Monday, 16 June 2008

JISC Away Day part 1

Today was the first day of the annual JISC Away Day. Here are my very quickly typed up notes...

First up, Ron Cooke, the JISC Chair, gave an overview of some recent achievements and looked towards the future and JISC's role in the sector. Malcolm Read gave an overview of key challenges facing JISC and referred to recent market research (e.g. 100% of Russell Group unis have led on JISC projects but figures are lower for other institutions).

Particularly useful to hear from JISC Collections - have noted down the following to look up later: NESLI2SMP; Knowledge Exchange joint licensing; eBooks observatory; CASPER; extending licensing beyond HE (study ongoing); deals with Scottish HEIs. Also noted: JISC Publishers Action Group; paper ebook; Repositories UK; Flourish/TICTOCs as examples of the U&I programme; Emerge community; Web2Rights.

Attended a session on increasing the impact of JISC in the sector. The group discussed who we are trying to reach (funding bodies; institutions; change agents); what messages we need to get across (value for money, influencing strategy/policy); and how. I think an additional question might be when we engage with different stakeholders, depending on what we hope to achieve. Branding was a key topic, along with the need for brand management. It was agreed JISC also needs to work on improving understanding of JISC activities within the community, enabling feedback, and finding the right metrics to measure impact. Kerry mentioned that they are currently working on audience analysis to improve the web site - i.e. providing secondary routes to information. It was acknowledged that much of our information is written for experts - there needs to be a more basic level which is more contextual.

The group also discussed what is meant by impact. We need to distinguish between reach (e.g. hit on Google) and impact (affecting behaviour in the sector). What can we learn from service reviews? What can we learn from the Top Concerns work? What value does JISC add to the sector? Methods discussed included institutional visits; networks of moles/champions.

Tuesday, 10 June 2008

BL Direct

BL Direct now has over 7000 journal titles available for full text purchase.

Data librarians

Interesting article in CILIP Update, which quotes:
"Recent research carried out by the Australian Department of Education, Science and Training has indicated that the amount of data generated in the next five years will surpass the volume of data ever created, and in a recent IDC White Paper it was reported that, between 2006 and 2010, the information added annually to the digital universe will increase more than six-fold from 161 exabytes to 988 exabytes."

JISC preservation of web resources project

Dangers of the cloud

Yep, still reading back through Bloglines (having a bit of a spring clean!) and came across a piece from Bill Thompson on the dangers of the cloud - funnily enough, I had a similar conversation at a meeting last week...

Image search on the web

Reading through old posts in my Bloglines account, came across this BBC story about attempts to address the limitations of image searching on the web:

Search engine paradigm shift

Interesting post on Geospatial Semantic Web blog:

Thursday, 5 June 2008

BCS at Cheltenham Science Festival

I went across to the Cheltenham Science Festival today, for the BCS sponsored talk "Computer Whizz: The Best is Yet to Come" given by Professor Dave Cliff from the University of Bristol. It was a really enjoyable talk...

Dave Cliff started off by covering some of the big things to happen over the last 50 years. He talked about the idea that one major thing happens every decade and how Moore's Law (giving examples relating to processors, hard drives, digital cameras) is being proved right and has indeed become a self-fulfilling prophecy. He showed the progression from mainframe - minicomputer - PC - LAN/distributed networks - Internet/Web - utility/service computing.
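The doubling Cliff described can be sketched with a little arithmetic - a Python sketch assuming the common two-year doubling period, and starting from the Intel 4004's transistor count (my own choice of baseline, not a figure from the talk):

```python
# Rough illustration of Moore's Law: counts doubling every two years
# (one common statement of the law).
def moores_law(start_count: float, years: float, doubling_period: float = 2.0) -> float:
    """Projected count after `years`, doubling every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

# From ~2,300 transistors (Intel 4004, 1971), project 37 years on to 2008:
projected = moores_law(2_300, 37)
print(f"{projected:,.0f}")  # on the order of hundreds of millions
```

Which lands in the same ballpark as a 2008 desktop processor - a neat illustration of why the "self-fulfilling prophecy" framing rings true.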

He also talked for a while about utility computing, sharing some of the thinking from HP. He showed the design for a centre with 50,000 blade servers which was interesting to see, especially to learn that around 350 are replaced a day and new kit arrives in shipping containers. In fact, Sun/Google have patented the shipping container which has it all ready to go and just needs "plugging in". The cloud (HP called it utility computing, IBM on-demand computing, Sun N-1) is the future business model offering real-time processing (drug design, real-time translation, simulation, gaming worlds). And in fact, there has been work done on market-based control so computers can effectively bid for work, and the user can determine the price they are willing to pay for remote processing.
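The market-based control idea can be illustrated with a toy matching of bids to asks - entirely my own sketch in Python; the job names, prices and greedy mechanism are invented for illustration, not HP's actual scheme:

```python
# Toy sketch of market-based control: jobs name the price they are willing
# to pay, servers name the price they will accept, and work is matched
# greedily (highest-paying job to cheapest server).
def match_jobs(bids: dict[str, float], asks: dict[str, float]) -> list[tuple[str, str]]:
    """Pair jobs with servers, keeping a pair only when bid >= ask."""
    jobs = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    servers = sorted(asks.items(), key=lambda kv: kv[1])
    matches = []
    for (job, bid), (server, ask) in zip(jobs, servers):
        if bid >= ask:  # the job can afford this server
            matches.append((job, server))
    return matches

print(match_jobs({"drug-sim": 9.0, "render": 3.0, "batch": 1.0},
                 {"s1": 2.0, "s2": 4.0, "s3": 8.0}))
# -> [('drug-sim', 's1')]
```

Here only the highest bidder gets served - the other jobs priced themselves below the remaining servers' asks, which is exactly the "you determine the price you are willing to pay" behaviour Cliff described.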

Cliff also explained a little about how computing is learning from nature - e.g. genetic networks, superorganisms, ecosystems - and from socioeconomic systems - e.g. marketplaces, languages, ontologies.

And of course, being a science festival, a talk on computing wouldn't be complete without a reference to robots! There has been a lot of work on humanoid robots, but there have also been many successful commercial applications of non-humanoid robots. Cliff also shared some thoughts on how the lines between human and robot may be becoming blurred - through, for example, the use of intelligent prosthetics for amputees and cochlear implants. Might there be a future for storing our memories increasingly on devices and not in our heads?

Lastly, he touched very briefly on two new-ish areas: amorphous computation and quantum computing. Apparently, Bristol Uni is a Centre for Excellence for quantum computing. Though this is where it started getting a little rushed and possibly too technical to cover neatly in a few minutes so will have to look into these a bit more...

All in all, a really enjoyable presentation :-)