Wednesday 13 August 2008

Moving...

New job, new blog address...! I'm moving my blog to ... if anyone is out there reading this, then I hope you'll take a look...

http://alisworldofwork.blogspot.com/

Thursday 7 August 2008

LHC goes live

Computing has a front-page story on the LHC going live tomorrow, and the implications for data management:
http://www.computing.co.uk/computing/news/2223424/grid-awaits-secrets-universe-4158895

Friday 1 August 2008

Project failure in NHS

Interesting article on BCS about reasons for project failure - based on IT projects at University College London Hospitals (UCLH) over the last three years... much of it is common sense really, though interesting to see that projects following PRINCE2 are more likely to succeed than those which don't... plus some useful recommendations
http://www.bcs.org/server.php?show=ConWebDoc.20341

Wednesday 30 July 2008

"Semantic Medline"

Interesting story in Information Today...

Cognition launches Semantic Medline
http://newsbreaks.infotoday.com/wndReader.asp?ArticleId=50075

"...enables complex health and life science material to be rapidly and efficiently discovered with greater precision and completeness using natural language processing (NLP) technology"

I tried a quick search for "exercise and depression" just to see it working - results are mostly relevant on the first couple of pages. It does prompt you to select the correct meaning, e.g. of depression (feeling of sadness/hopelessness), but it still seems to bring up records referring to other meanings (e.g. ST segment depression) - although I guess that's impossible to avoid entirely - and the definitions might be more useful if sourced from a medical dictionary, which they don't appear to be. It would be interesting to compare results using MeSH.

Given that my search retrieved over 7000 results, it would also be useful to have some options for narrowing the search - for example, suggesting additional search terms (are you interested in a particular population, such as postnatal?).

http://www.semanticmedline.com
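
As a rough way of making that MeSH comparison, here's a minimal sketch (in Python, against PubMed's own E-utilities esearch endpoint, nothing to do with Semantic Medline's internals) contrasting the result count for a free-text query with a MeSH-qualified one. The particular MeSH headings are just my illustrative choices, not a validated search strategy.

```python
# Compare a free-text PubMed query with a MeSH-qualified one via NCBI E-utilities.
# The MeSH headings below are illustrative choices, not a validated search strategy.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(term):
    """Return the number of PubMed records matching the query."""
    url = ESEARCH + "?" + urllib.parse.urlencode({"db": "pubmed", "term": term})
    with urllib.request.urlopen(url) as resp:
        return int(ET.fromstring(resp.read()).findtext("Count"))

free_text = "exercise and depression"
mesh_qualified = '"Exercise"[MeSH] AND "Depressive Disorder"[MeSH]'

print(free_text, "->", pubmed_count(free_text), "records")
print(mesh_qualified, "->", pubmed_count(mesh_qualified), "records")
```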

Monday 28 July 2008

Mobile web

From BBC: Mobile web reaches critical mass

"The mobile web has reached a "critical mass" of users this year, according to a report by analysts Nielsen Mobile.

The US is the most tech savvy nation with nearly 40 million Americans - 16% of all US mobile users - using their handset to browse on the move.

The UK and then Italy come a close second and third in the 16 countries surveyed by the analyst firm.

[...]

'PC internet users visit more than 100 domains per month, on average,' the report said.

'By contrast, the average mobile internet user in the US visited 6.4 individual websites per month.' UK use was slightly less at 5.5 per month."

http://news.bbc.co.uk/1/hi/technology/7499340.stm

Clearly, this has implications for how to deliver content effectively ... it could be a good way of delivering alerts, prompts, small chunks of quality content, bitesize e-learning...

Friday 25 July 2008

Open Web Foundation

"an organization that will help the creation and acceptance of Open Web"

"The Open Web Foundation's goal it to provide a home for community created specs. with mentorship, resources and infrastructure. Hopefully this will help teams spend time on making the spec."

http://radar.oreilly.com/2008/07/open-web-foundation.html

PS Thanks to Ian for pointing this out.

Thursday 24 July 2008

More..various news

  • Google launched Knol this week, taking on Wikipedia although it does take a different approach, making authors more visible than on Wikipedia, with more emphasis on authority and reputation. Individuals can contribute but I'm not clear how contributions are validated - it recommends contributors write a bio to establish credentials and you can set permissions for others to edit your "knol" - but essentially it seems to be up to the reader to judge based on the writer's credentials. It also lets writers select IPR options, defaulting to Creative Commons. A lot of the knols there now relate to health so I'd be interested to know more about their quality framework.
  • Steve Prentice from Gartner tells the BBC that the days of interacting with your computer via your mouse are numbered
  • New Scientist reports "UK to get superfast broadband by 2012" (speeds of up to 100 megabits per second)
  • CILIP Gazette 11-24 July includes a feature on the latest TFPL Connect event, exploring implications of a recent CMI report on the world of work in 2018. Delegates discussed the move towards portfolio working; the role of knowledge managers; flexible working; increasing emphasis on "alliance-building", strategic planning and political skills.
  • Central Office for Information releases guidelines on inclusion for public sector websites
  • Interesting article reporting on James Evans' research in Science, "Great minds think (too much) alike", suggesting that access to more journal literature is actually resulting in fewer citations
  • Article in Times Higher reporting on the suggestion by Bahram Bekhradnia, director of the Higher Education Policy Institute, that HEFCE's new Research Excellence Framework should be based on peer review, not solely data metrics
  • IWR reports: Nearly £10 million has been awarded to preserve low use journals for those in UK Higher Education. The new initiative, UK Research Reserve (UKRR) aims to improve access to the journal information for researchers as well as better preserve the body of work.

Tuesday 22 July 2008

More bits and pieces of news and stuff

Friday 18 July 2008

Various news

I'm starting to catch up with reading - here's some of the news to hit recently (ish!):
  • Microsoft buys up Powerset, in its attempt to take on Google
  • HEFCE announces 22 pilot institutions to test the new REF (http://www.timeshighereducation.co.uk/story.asp?sectioncode=26&storycode=402609)
  • NHS Choices selects Capita as preferred bidder
  • Google is experimenting with a Digg-like interface
  • Amazon S3 experienced service outage on 20 July - one of the risks of relying on the cloud, I guess
  • Encyclopaedia Britannica goes wiki
  • Proquest to acquire Dialog business from Thomson Reuters
Some interesting articles came my way too...
  • Information: lifeblood or pollution? has some interesting thoughts about when information has value and when there is so much information that it loses its value. Jakob Nielsen is quoted: 'Information pollution is information overload taken to the extreme. It is where it stops being a burden and becomes an impediment to your ability to get your work done.' Possible solutions are rating the integrity of information and clearer provenance.
  • International initiative licenses resources across 4 European countries reports on a deal negotiated via the Knowledge Exchange with Multi-Science, ALPSP, BioOne, ScientificWorldJournal, and Wiley-Blackwell.
  • A fun way of describing the amount of data Google handles

Thursday 17 July 2008

JISC Innovation Forum

Earlier this week, the JISC Innovation Forum took place, with the aim of getting projects and programmes together to discuss cross-cutting themes and share experiences. I attended the theme on research data - three sessions in all, each focusing on a different aspect:

Session 1 - Legal and policy issues
This session followed the format of a debate, with Prof Charles Oppenheim arguing for the motion that institutions retain IPR and Mags McGinley arguing that IPR should be waived (with the disclaimer that the positions argued were not necessarily the presenters' own or their institutions' views).

Charles argued that institutional ownership encourages data sharing. Curation should be done by those with the necessary skills - curation involves copying and can only be done effectively where the curator knows they are not infringing copyright, therefore the IPR needs to be owned "nearby". He also explained how publishers are developing an interest in raw data repositories and wish to own the IPR on raw as well as published data. There is a real need to discourage authors from blindly handing over the IPR on raw data. He suggested a model where the author is licensed to use and manipulate the data (e.g. deposit it in a repository) and retains the right to intervene should they feel their reputation is under threat. The main argument focused on preventing unthinking assignment of rights to commercial publishers.

Mags suggested that curation is best done when no one asserts IPR. There may in fact be no IPR to assert, and she explained that there is often over-assertion of rights. There is in general a lot of confusion and uncertainty around IPR, which leads to poor curation - Mags suggested the only way to prevent this confusion is to waive IPR altogether. Data is now, more than ever, the result of collaboration relying on multiple (and often international) sources, so unravelling the rights can be very difficult - there could be many, even hundreds of owners across many jurisdictions. Mags concluded with the argument that it is easier to share data which is unencumbered by IPR issues and quoted the examples of Science Commons and CC0.

A vote at this point resulted in: 5 for the motion supporting institutional ownership; 10 against; 7 abstaining.

A lively discussion followed - here are the highlights:
  • it's important to resolve IPR issues early
  • NERC model - researchers own IPR and NERC licenses it (grant T&Cs)
  • in order to waive your right, you have to assert it first
  • curation is more than just preservation - the whole point is reuse
  • funders have a greater interest in reuse than individual researchers - also have the resources to develop skills and negotiate T&Cs/contracts
  • not just a question of rights but responsibilities too
  • issues of long-term sustainability e.g. AHDS closure
  • incentives to curate - is attribution enough?
  • what is data? covered range of data including primary data collected by researcher, derived data, published results
  • are disciplines too different?
  • duty to place publicly funded research in the public domain? use of embargoes?
  • can we rely on researchers and institutions to curate?
  • "value" of data?
  • curation doesn't necessarily follow ownership - may outsource
  • proposal to change EU law on reuse of publicly funded research - HE now exempt - focuses on ability to commercially exploit - HEIs may have to hand over research data??
And finally, we voted again: this time, 6 for the motion; 14 against; 3 abstaining.

Session 2 - Capacity and skills issues
This session looked at 4 questions:
  1. What are the current data management skills deficits and capacity building possibilities?
  2. What are the longer term requirements and implications for the research community?
  3. What is the value of and possibilities for accrediting data management training programmes?
  4. How might formal education for data management be progressed?
Highlights of discussion:
  • who are we trying to train? How do we reach them? The need for training has to appear on their "radar" - the best way to reach researchers is via the lab, Vice-Chancellor, Head of School or funding source.
  • training should be badged e.g. "NERC data management training"
  • "JISC" and "DCC" less meaningful to researchers
  • a need to raise awareness of the problem first
  • domain specific vs generic training
  • need to target postgrads and even undergrads to embed good practice early on
  • need to cover entire research lifecycle in training materials
  • how is info literacy delivered in institutions now? can we use this as a vehicle for raising awareness or making early steps?
  • School of Chemistry in Southampton has accredited courses which postgrads must complete - these include an element of data management
  • lack of a career path for "data scientists" is a problem
  • employers increasingly looking for Masters graduates as perceived to be better at info handling
  • new generation of students - have a sharing ethic (web2.0) but not necessarily a sense of structured data management
  • small JISC-funded study to start soon on benefits of data management/sharing
  • can we tap into records management training? a role here for InfoNet?
  • can we learn from museums sector? libraries sector?
  • the Centre for e-Research at King's is developing a "Digital Asset Management" course, to run in Autumn 09
  • UK Council of Research Repositories has a resource of job descriptions
  • role of data curators in knowledge transfer - amassing an evidence base for commercial exploitation
  • also a need for marketing data resources

Session 3 - Technical and infrastructure issues

This session explored the following questions:

  • what are the main infrastructure challenges in your area?
  • who is addressing them?
  • why are these bodies involved? might others do better?
  • what should be prioritised over the next 5 years?
One of the drivers for addressing technical and infrastructure issues is the sheer volume of data – instruments are generating more and more data, and the volume is growing exponentially. It must be remembered that this isn't just a problem for big science – small datasets need to be managed too, although the problem there is more to do with the variety of data (heterogeneous) than the volume. It was argued that big science has always had the problem of too much data and has to plan experiments to deal with this, e.g. the LHC at CERN disposes of a large percentage of data collected during experiments. In some areas, e.g. geospatial, data standards have emerged, but it may be a while before other areas develop their own or until existing standards become de facto standards.

Other areas touched on included:
  • the role of the academic and research library
  • roles and responsibilities for data curation
  • how can we anticipate which data will be useful in the future?
  • What is ‘just the right amount of effort’?
  • What are the selection criteria – what value might this data have in the future (who owns it, who's going to pay for it), and how much effort and money would it take to regenerate this data (e.g. do you have the equipment and skills to replicate it)?
  • not all disciplines are the same therefore one size doesn't fit all
  • what should be kept? data, methodology, workflow, protocol, background info on researcher? How much context is needed?
  • how much of this context metadata can be sourced directly e.g. from proposal?
  • issues of ownership determine what is stored and how
  • what is the purpose of retaining data - reuse or long-term storage? Should a nearline/offline storage model be used? The infrastructure for reuse may be different from that for long-term storage
  • should we be supporting publication of open notebook science (and publishing of failed experiments)? What about reuse/sharing if there are commercial gains?
The summing up at the end identified 4 main priority areas for JISC:
  1. within a research environment – can we facilitate data curation using the carrot of sharing systems? (IT systems in the lab)
  2. additional context beyond the metadata
  3. how do we help institutions understand their infrastructural needs?
  4. what has to happen with the various dataset systems (Fedora etc.) to help them link with library and institutional systems?

Tuesday 8 July 2008

Data, Information, Knowledge, Wisdom

I was pointed to an article by Martin Frické (via the BCS KIDDM list) which argues that the Data-Information-Knowledge-Wisdom hierarchy is methodologically unsound. It makes for fairly heavy reading at times but has some interesting discussion about the strength of data/information/knowledge in relation to "truth":

"Information is both more extensive than data and many instances of it are logically stronger than data. Information is irreducible to data. [...] This makes knowledge and information synonymous. Knowledge and information collapse into each other"

"And the wise person must not only have wide appropriate knowledge, but they must act in accordance with the knowledge they have."

The article also mentions evidence, but in a different context to the "evidence-based practice" use - this is more related to knowledge (some discussion of whether this means "know-that" or "know-how") and wisdom.

http://dlist.sir.arizona.edu/2327/01/The_Knowledge_Pyramid_DList.pdf

Wednesday 2 July 2008

Agile documentation

Interesting post on BCS blog on agile management led me to the concept of TAGRI - They Aren't Gonna Read It: http://www.agilemodeling.com/essays/tagri.htm.

Some interesting thoughts on how documentation should be produced - working with the customer, providing communication rather than documentation for documentation's sake, and writing to a good-enough standard.
http://www.agilemodeling.com/essays/agileDocumentation.htm

Some of the questions posed could apply to anyone writing any kind of documentation. Interestingly, they advise against using templates, as each system is different and so will require different documentation - the thinking is that a template is resource-intensive to create; it will ask for detail which isn't always relevant, yet people will attempt to write something anyway; and thus reviews take longer because there's so much more information to read through.

Friday 27 June 2008

Project failure

Article on BCS site citing reasons for project failure:
http://www.bcs.org/server.php?show=ConWebDoc.19584

Thursday 26 June 2008

ISKO event on information retrieval

Went along to some of the ISKO event on information retrieval today...

Brian Vickery was up first but unfortunately, I missed most of his talk. I did catch the last few minutes though where he asked some very pertinent questions:

  • What is the case for building classifications, thesauri and taxonomies? How does this relate to the needs of Communities of Practice?
  • Are the benefits of controlled retrieval languages strong enough to justify the effort and cost of creating/maintaining/using them?
  • Is there a growing need to harmonise or match terminologies?
  • What is the future for "universal" controlled languages and general classifications/ontologies?

Next up was Stephen Robertson, giving a researcher's perspective. He pointed out that although web search engines have been very successful, other systems cannot say the same - perhaps because the extensive machine learning available to Google et al just isn't feasible for a smaller setup. Robertson mentioned some useful sources of evidence in evaluating retrieval - notably click-throughs and "dwell time" (how long a user spends somewhere before returning to the search results). There is some rich data out there but it is also "noisy".
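
As a small illustration of the dwell-time idea, here's a minimal sketch; the log format (query, clicked URL, click and return timestamps) is invented for the example rather than taken from any particular system.

```python
# Minimal sketch: deriving dwell time from a search click log. A long dwell
# suggests the result satisfied the query; a quick bounce back to the results
# page suggests it didn't. The log format here is invented for illustration.
from datetime import datetime
from statistics import mean

FMT = "%Y-%m-%d %H:%M:%S"

log = [
    # (query, clicked_url, clicked_at, returned_to_results_at)
    ("open access", "http://example.org/a", "2008-06-26 10:00:05", "2008-06-26 10:03:40"),
    ("open access", "http://example.org/b", "2008-06-26 10:03:55", "2008-06-26 10:04:02"),
]

def dwell_seconds(clicked_at, returned_at):
    return (datetime.strptime(returned_at, FMT) - datetime.strptime(clicked_at, FMT)).total_seconds()

# Aggregate dwell time per clicked result
per_url = {}
for query, url, clicked, returned in log:
    per_url.setdefault(url, []).append(dwell_seconds(clicked, returned))

for url, times in per_url.items():
    print(url, "mean dwell:", round(mean(times)), "s")
```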

Last up was Ian Rowlands, who talked about the implications of the Google Generation report. He started with some context - insecurity around the power of Google and Yahoo branding; devaluing of the "library" brand; the hypothesis that the younger generation is somehow different. He referred to various pieces of research including Carol Tenopir's long-standing survey of academics. The bottom line of the Google Generation report is that it is a myth - yes, there is a type of user behaviour which is comfortable online, but the Google Generation is not a homogeneous mass of people - "silver surfers" (another irritating term!) demonstrate the same characteristics too, and there are also "digital dissidents" among younger generations who are shunning technology. So the general message is to stop thinking of our users as fixed targets who fit some kind of stereotype. We need to understand user behaviour much better, in particular online reading - but then, how much do we really understand about how people read/absorb information in print? How can we be sure what we learn about online reading is peculiar to an online environment and isn't just typical of reading in whatever format?

Rowlands also suggested that we need to help users form "mental maps" of information - typically, when you walk into a library for a print resource, you have a reasonably good image of what you are expecting to find - the same can't be said of the web. There is a message for librarians here to help create easier access to information for users e.g. through less confusing terminology. Information literacy is key but research seems to suggest that unless individuals learn from a young age, the changes possible in user behaviour are more limited. There have been studies demonstrating a correlation between information literacy and academic grades.

Rowlands finished with a plea to understand our users better - to stop thinking of them as one big mass which can be served by a one-size-fits-all solution and to learn from the commercial world, where customers are segmented and can follow a number of routes to information - though, I have to say, the commercial world doesn't always get it right either, and it has greater resources at its disposal.

Tuesday 24 June 2008

Back in the library world

Have rejoined CILIP and had various bumph through today. The LMS study seems to have stirred things up a bit with some uncertainty about the way forward, but everyone seems to agree that sitting still isn't an option. Need to set aside time to read the study through again...

Friday 20 June 2008

Is the web changing the way we think?

A nice story on BBC by Bill Thompson (http://news.bbc.co.uk/1/hi/technology/7459182.stm) suggesting that the availability of small chunks of information on the Web is limiting our reading and thinking - now, I'll be the first to admit to a short attention span and it'd be lovely to use this as my excuse but I'm not so sure...I think that a lot of people (maybe not the younger people so much) still print out anything which takes more than a couple of minutes to read, so I don't think we're doing all our reading on screen...and in a way, the Web has made it easier to discuss issues and be exposed to other people's opinions. But maybe there is something in the idea that we maybe accept information from others without thinking too hard about the quality, validity...? Something to think about...

Thursday 19 June 2008

Yet more snippets...

Computing, 19 June 08:
- news that the OECD has organised a meeting of Internet experts this week in Seoul. Topics for discussion include net neutrality and adoption of IPv6 (which would provide an almost limitless supply of IP addresses - the availability of addresses is a growing concern given the ubiquity of mobile devices; see the quick size comparison below)
- Janie Davies writes about the green agenda in academic IT services - no mention of JISC work here but does mention HEFCW's shared services initiative. Gloucestershire makes it into the Green League :-)
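
For a sense of what "almost limitless" means here, a quick back-of-the-envelope comparison of the IPv4 and IPv6 address spaces (my own illustration, not from the Computing piece):

```python
# Back-of-the-envelope comparison of IPv4 (32-bit) and IPv6 (128-bit) address spaces.
ipv4 = 2 ** 32
ipv6 = 2 ** 128

print(f"IPv4: {ipv4:,} addresses")              # ~4.3 billion
print(f"IPv6: {ipv6:.3e} addresses")            # ~3.4 x 10^38
print(f"IPv6 is {ipv6 // ipv4:.3e} times larger")
```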

Various snippets

Research Information - April/May 08:
- article by Sian Harris on peer review, referring to a recent report from Mark Ware Consulting on behalf of the Publishing Research Consortium - it quotes a finding that 93% of academics disagreed with the statement that peer review is unnecessary. However, the report does note criticism of the current approach to peer review, e.g. overloading of reviewers, time taken, methods used, bias of the single-blind method, lack of guidance from editors. Open review is an alternative, but apparently not a popular one.
- article by Nadya Anscombe on changes to the peer review process across a number of neuroscience journals - the Neuroscience Peer Review Consortium (NPRC). The journals (22 currently) have agreed to share reviewers' comments thereby reducing the number of times a manuscript might be reviewed.
- article by John Murphy on Google Book Search - mentions the Partner Programme where Google works with publishers and the Library Programme where Google has worked with the Bodleian as well as Cornell, Princeton and Harvard. About 10,000 publishers are involved and 28 large libraries are supplying material. IPR is obviously an issue and lawsuits are underway - one area of uncertainty is orphan works although Google is tackling this by publishing only snippets.
- article by Tom Wilkie and Sian Harris on e-books. We've all been waiting a while now for e-books to really take off and the authors suggest that "despite this enthusiasm amongst researchers, however, there are formidable barriers to the wider acceptance of e-books" including file format (with XML emerging as the preferred standard); legacy file formats; effective multimedia support; archiving and preservation; standardising e-book information; pricing models; understanding user behaviour. Ebooks have a lot of potential - we can do more with the content (e.g. translations) and enable users to build their own personal libraries but like other types of content, our thinking still seems restricted by what we could achieve with paper. One concern is what the role of the librarian will be if they are no longer seen as the intermediary/gatekeeper for accessing books.
----------------------------------------------------------------------------------------------
Research Information - June/July 08
- article by Nash Pal on multi-product platforms for e-products - as opposed to the current model where e-books and e-journals have developed along separate paths resulting in silos. Benefits to the user include uniform online experience; seamless search; unified access control; potentially lower management/maintenance costs. "... what is needed is an integrated front end supported by a single, comprehensive, content-agnostic set of admin tools to manage all content types".
- article by Jay Katzen on "collective intelligence" as a solution to the volume of information/data facing researchers. Katzen quotes recent research from Carol Tenopir - "Scientists now read 25% more articles from almost twice as many journals than they did 6 years ago". Essentially (although very much from a vendor perspective) the author proposes a combination of quality corpora, user-focused tools and collaborative space.
----------------------------------------------------------------------------------------------
Information World Review - April 08
- Tracey Caldwell reports on Pfizer's attempt to make JAMA reveal confidential peer review documents as part of its legal case concerning its arthritis drugs Bextra and Celebrex - again, raises the question of open review
- ALPSP (Assoc Learned and Professional Society Publishers) agrees platform deal with MyiLibrary
- Peter Williams in his editorial: "Information professionals should put themselves at the heart of the current debate over payment models for information and content. As the information gatekeeper for their organisations, they exercise a major responsibility on a daily basis in deciding what information is paid for, the value of that information, and the subsequent return on investment"
- article by Tracey Caldwell on ebooks - noting that business models are still at an experimental stage. Quotes Mark Carden, senior VP at MyiLibrary: "paper and shipping account for only 5-10% of the cost of a book". Refers in some detail to JISC's eBooks Observatory project and CIBER's SuperBook project. Ebooks have potential in helping librarians provide access to knowledge free at the point of use - they can incorporate Web 2.0 technologies such as social networking and tagging; they are easily updated; online chats with authors could add an interesting dimension; they can integrate into workflow; and, with the idea of iChapters, content can be purchased in chunks rather than as an entire monograph or collection. Also quotes Jay Katzen, from Science Direct: "...there needs to be a publisher paradigm shift so that more information is put in at the creation of content such as better tags". Mentions the Automated Content Access Protocol which will enable publishers to make content machine readable (semantic web?). Chris Armstrong is quoted: "Journals are more granular; access is to the article, which has an abstract, while access to and abstracts for e-books tend to be at the book level. Journals are also serials, so an access habit can be built up". A key early challenge is to tackle the issue of monitoring usage to inform future purchasing decisions.
- article by Michelle Perry on new business models for publishers. Mentions O'Reilly which looked at how tutors were using their titles online and came up with the idea of an online model that allowed them to design their own books for their courses. Apparently, Elsevier has developed a product to enable medics to search for diagnoses (???). David Worlock, from Outsell, highlights 3 areas publishers must grapple with to avoid being left behind: workflow, community, and vertical search.
---------------------------------------------------------------------------------------------
Information World Review - May 08
- article by Kim Thomas on grey literature, reporting that regulations to mandate deposit of electronic material are in hand but unlikely to be implemented before Autumn 09. There is a hope that this regulation will allow the BL to harvest websites for grey literature. Refers to 2 projects part-funded by JISC: Manchester Uni's repository of Access Grid events; and King's repository of documents relating to committee meetings.
---------------------------------------------------------------------------------------------
Information World Review - June 08
- news that OCLC members participating in Google Book Search will now be able to share their MARC records with Google, the idea being that if an individual finds a book through Google Book Search, they'll be able to drill down to find where the book is physically located
- article on open access in social sciences and humanities, reporting on the EU promoting OA through something called Action32 of the STM-based COST programme (Co-operation in the field of Scientific and Technical Research). There is increasing pressure from users to link to source data - it has been suggested that a useful first step might be to open up access to research already in the public domain.

Innovation

March 08 issue of ITNow from BCS includes an article by John Tabeart, "Child's play", on innovation:
"Innovation occurs when two or more ideas, components, capabilities, or technologies are combined together in a novel way".

Tabeart recommends working with the following principles:
  • Establish broad rules
  • Provide raw materials
  • Lead by example
  • Keep an open mind
  • Encourage experiments
  • Learn from experience
  • Challenge conventional wisdom
  • Encourage collaboration
  • Celebrate success
  • Accept and understand failure
  • Liberate from the constraints of business as usual

Information literacy

FreePint includes a review of the LIS Show by Adrian Janes. He very neatly sums up two main themes to emerge from this year's event:

User empowerment

using Web2.0 technologies; wifi access; RFID

Information literacy

A very interesting overview of some work already underway (notably Sheffield and Bedfordshire) to improve the quality of discovery and to counter the seemingly widespread belief that "if it isn't on Google, it doesn't exist" (also refers to the recent RIN report on use of academic libraries and the Google Generation report). Peter Godwin (co-author of a CILIP book on information literacy and Library 2.0) is quoted as saying that in a digital world in which ‘content has left the container', we as professionals have to adapt. Godwin refers to key principles he set out for ‘Library 2.0':

  • Find out your users' changing needs
  • Believe in your users
  • Be rid of the culture of perfect
  • Become aware of emerging technologies.

There's also a reference to SCONUL's 7 Pillars of Information Literacy which I will take a look at when I have time.

Free Pint has a related article in the same issue by Derek Law on digital natives covering the issue of information literacy as well as provenance of digital information and the role of the librarian:

"It is all too easy to see the prospect of an alliterate world in apocalyptic professional terms. Much better to recognise that repurposing our skills, particularly in the areas of building collections of born digital materials, providing trust metrics and kitemarking and teaching information literacy skills will be more prized than ever. The trick will be to ensure that our profession responds to this, rather than abandoning the field to others while we guard the gates of our paper based storehouses of knowledge."

JISC away day : part 2

Oh dear, it's taken me a while to finish writing up the away day ... I blame it on the email backlog which was waiting for me when our away day finished.

Anyway, the most useful session (for me) was on the 2nd day - on the new JISC IPR policy. I understand this is going to appear on the JISC web site soon. It's been developed as part of the IPR consultancy. Professor Charles Oppenheim talked us through the background and the key principles behind the policy.

It was also a useful refresher on some of the issues around IPR and the implications for JISC and its funded projects. Charles referred to the four reports produced as part of the consultancy.

Monday 16 June 2008

JISC Away Day part 1

Today was the first day of the annual JISC Away Day. Here are my very quickly typed up notes...

First up, Ron Cooke, the JISC Chair, gave an overview of some recent achievements and looked towards the future and JISC's role in the sector. Malcolm Read gave an overview of key challenges facing JISC and referred to recent market research (e.g. 100% of Russell Group unis have led on JISC projects but figures are lower for other institutions).

Particularly useful to hear from JISC Collections - have noted down the following to look up later: NESLI2SMP; Knowledge Exchange joint licensing; eBooks observatory; CASPER; extending licensing beyond HE (study ongoing); deals with Scottish HEIs. Also noted: JISC Publishers Action Group; paper ebook; Repositories UK; Flourish/TICTOCs as examples of the U&I programme; Emerge community; Web2Rights.

Attended a session on increasing the impact of JISC in the sector. The group discussed who we are trying to reach (funding bodies; institutions; change agents); what messages we need to get across (value for money, influencing strategy/policy); and how. I think an additional question might be when we engage with different stakeholders, depending on what we hope to achieve. Branding was a key topic, as was the need for brand management. It was agreed JISC also needs to work on improving understanding of JISC activities within the community, enabling feedback, and finding the right metrics to measure impact. Kerry mentioned that they are currently working on audience analysis to improve the web site - i.e. providing secondary routes to information. It was acknowledged that much of our information is written for experts - there needs to be a more basic level which is more contextual.

The group also discussed what is meant by impact. We need to distinguish between reach (e.g. hit on Google) and impact (affecting behaviour in the sector). What can we learn from service reviews? What can we learn from the Top Concerns work? What value does JISC add to the sector? Methods discussed included institutional visits; networks of moles/champions.

Tuesday 10 June 2008

BL Direct

BL Direct now has over 7000 journal titles available for full text purchase:
http://www.bl.uk/news/2008/pressrelease20080527.html

Data librarians

Interesting article in CILIP Update:
http://www.cilip.org.uk/publications/updatemagazine/archive/archive2008/june/Interview+with+Macdonald+and+Martinez-Uribe.htm
which quotes:
"‘Recent research carried out by the Australian Department of Education, Science and Training3 has indicated that the amount of data generated in the next five years will surpass the volume of data ever created, and in a recent IDC White Paper4 it was reported that, between 2006 and 2010, the information added annually to the digital universe will increase more than six fold from 161 exabytes to 988 exabytes.’ "

JISC preservation of web resources project

http://jiscpowr.jiscinvolve.org/

Dangers of the cloud

Yep, still reading back through Bloglines (having a bit of a spring clean!) and came across a piece from Bill Thompson on the dangers of the cloud - funnily enough, I had a similar conversation at a meeting last week...
http://news.bbc.co.uk/1/hi/technology/7421099.stm

Image search on the web

Reading through old posts in my Bloglines account, came across this BBC story about attempts to address the limitations of image searching on the web:
http://news.bbc.co.uk/1/hi/technology/7395751.stm

Search engine paradigm shift

Interesting post on Geospatial Semantic Web blog:
http://www.geospatialsemanticweb.com/2008/06/06/search-engine-paradigm-shift

Thursday 5 June 2008

BCS at Cheltenham Science Festival

I went across to the Cheltenham Science Festival today, for the BCS-sponsored talk "Computer Whizz: The Best is Yet to Come" given by Professor Dave Cliff, from the University of Bristol. It was a really enjoyable talk...

Dave Cliff started off by covering some of the big things to happen over the last 50 years. He talked about the idea that one major thing happens every decade and how Moore's Law (giving examples relating to processors, hard drives, digital cameras) is being proved right and has indeed become a self-fulfilling prophecy. He showed the progression from mainframe - minicomputer - PC - LAN/distributed networks - Internet/Web - utility/service computing.

He also talked for a while about utility computing, sharing some of the thinking from HP. He showed the design for a centre with 50,000 blade servers, which was interesting to see, especially to learn that around 350 servers are replaced a day and new kit arrives in shipping containers. In fact, Sun and Google have patented shipping containers which come with everything ready to go and just need "plugging in". The cloud (HP called it utility computing, IBM on-demand computing, Sun N1) is the future business model, offering real-time processing (drug design, real-time translation, simulation, gaming worlds). And in fact, there has been work done on market-based control so that computers can effectively bid for work, and the user can determine the price they are willing to pay for remote processing.
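
To make the market-based control idea concrete, here is a toy sketch of my own (not HP's or anyone else's actual mechanism): each job states the price its user is willing to pay, servers bid what they would charge, and each job is allocated to the cheapest bidder within its budget.

```python
# Toy sketch of market-based control: jobs carry a maximum price their user
# will pay for remote processing, servers bid a charge, and each job goes to
# the cheapest server within budget. Illustrative only - not any vendor's scheme.
jobs = [
    {"name": "drug-design run", "max_price": 12.0},
    {"name": "real-time translation", "max_price": 3.0},
    {"name": "simulation batch", "max_price": 7.5},
]
server_bids = {"server-a": 5.0, "server-b": 2.5, "server-c": 9.0}

def allocate(jobs, bids):
    """Greedy allocation: tightest budgets choose first; each picks the cheapest affordable server."""
    available = dict(bids)
    allocation = {}
    for job in sorted(jobs, key=lambda j: j["max_price"]):
        affordable = {s: p for s, p in available.items() if p <= job["max_price"]}
        if affordable:
            server = min(affordable, key=affordable.get)
            allocation[job["name"]] = (server, available.pop(server))
        else:
            allocation[job["name"]] = None  # no server within budget
    return allocation

for name, result in allocate(jobs, server_bids).items():
    print(name, "->", result)
```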

Cliff also explained a little about how computing is learning from nature - e.g. genetic networks, superorganisms, ecosystems - and from socioeconomic systems - e.g. marketplaces, languages, ontologies.

And of course, being a science festival, a talk on computing wouldn't be complete without a reference to robots! There has been a lot of work on humanoid robots but there have been many successful commercial applications of non-humanoid robots e.g. irobot.com. Cliff also shared some thoughts on how the lines between human and robot may be becoming blurred, through for example, the use of intelligent prosthetics for amputees; cochlear implants. Might there be a future for storing our memories increasingly on devices and not in our heads?

Lastly, he touched very briefly on two new-ish areas: amorphous computation and quantum computing. Apparently, Bristol Uni is a Centre for Excellence for quantum computing. Though this is where it started getting a little rushed and possibly too technical to cover neatly in a few minutes so will have to look into these a bit more...

All in all, a really enjoyable presentation :-)

Tuesday 27 May 2008

RSC virtual library

FreePint features a story from RSC on setting up their virtual library: http://web.fumsi.com/go/article/share/2818

Interesting discussion of some of the barriers with publishers and how they addressed them. Also interesting to note the physical space previously occupied by the library is being reconfigured to include a new conference/meeting space.

Presentation on VREs/MREs

Thanks to Rachel for pointing this out: Interesting presentation from the Eduserv Symposium...by David Harrison of Cardiff Uni...
http://www.eduserv.org.uk/foundation/symposium/2008/presentations/davidharrison

Medecins sans frontieres adopt Open Repository

Press release from 15 May:

"Today, Médecins Sans Frontières (MSF) adopts 'Open Repository' - the service from BioMed Central, which allows institutes to build, launch, host, and maintain their own repositories.
Through the implementation of the Open Repository system, MSF is now able to provide a personalized in-house repository that maximises the distribution of their research at a fraction of the cost of other commercial systems.
[...]
Médecins Sans Frontières is just one of 15 organizations who have adopted the Open Repository solution since its inception.
[...]
Open Repository is built upon the latest version of DSpace, an open-source solution for accessing, managing and preserving scholarly works. Customers of Open Repository benefit from updated system features not only from DSpace themselves, but also from BioMed Central's team who are continually working to enhance their repository service. "

Future of the Internet

BCS are hosting a debate next week - sold out :-( - featuring Jonathan Zittrain and Bill Thompson, looking at appliances (e.g. iPhones, XBox) and the impact they're having. Should we be concerned that appliances stifle the ability to create new things on the Internet, or should we be more concerned about safety and security? Some discussion on one of the BCS blogs - http://www.bcs.org/server.php?show=ConBlogEntry.441

Web usage

BBC reports on Jakob Nielsen's annual report into web usage:

"Instead of dawdling on websites many users want simply to reach a site quickly, complete a task and leave. Most ignore efforts to make them linger and are suspicious of promotions designed to hold their attention.
Instead, many are "hot potato" driven and just want to get a specific task completed.
[...]
"The designs have become better but also users have become accustomed to that interactive environment," Dr Nielsen told BBC News.
Now, when people go online they know what they want and how to do it, he said.
[...]
"Web users have always been ruthless and now are even more so," said Dr Nielsen.
"People want sites to get to the point, they have very little patience," he said.
"I do not think sites appreciate that yet," he added. "They still feel that their site is interesting and special and people will be happy about what they are throwing at them."
Web users were also getting very frustrated with all the extras, such as widgets and applications, being added to sites to make them more friendly.
Such extras are only serving to make pages take longer to load, said Dr Nielsen.
There has also been a big change in the way that people get to the places where they can complete pressing tasks, he said.
In 2004, about 40% of people visited a homepage and then drilled down to where they wanted to go and 60% use a deep link that took them directly to a page or destination inside a site. In 2008, said Dr Nielsen, only 25% of people travel via a homepage. The rest search and get straight there.
"Basically search engines rule the web," he said.
But, he added, this did not mean that the search engines were doing a perfect job.
"When you watch people search we often find that people fail and do not get the results they were looking for," he said.
"In the long run anyone who wants to beat Google just has to make a better search," said Dr Nielsen.
http://news.bbc.co.uk/1/hi/technology/7417496.stm

http://www.useit.com/

Friday 23 May 2008

Google Health launched

"Google Health allows you to store and manage all of your health information in one central place. And it's completely free. All you need to get started is a Google username and password. Google believes that you own your medical records and should have easy access to them. The way we see it, it's your information; why shouldn't you control it?

  • Keep your doctors up-to-date
  • Stop filling out the same paperwork every time you see a new doctor
  • Avoid getting the same lab tests done over and over again because your doctor cannot get copies of your latest results
  • Don't lose your medical records because of a move, change in jobs or health insurance"

It'll be interesting to see if they promote this over here in the UK. Given that the NHS is going to be promoting HealthSpace, is there as much of a market here?

From FAQs "Google Health is mostly about helping you collect, store, manage, and share your medical records and health information. There is a search box at the top of every page in Google Health, and if you enter a search query there, you go to the Google.com search results page that you are used to. There is also useful health information built into Google Health, but Google Health is not a new health-specific search engine."

Still, it'd be interesting to see their quality criteria for the information they DO point to.

Presence technology

BCS has an interesting feature on presence technology:
http://www.bcs.org/server.php?show=ConWebDoc.19229

"Being able to see individuals over the network provides organisations with the ability reach people almost anywhere when they are available, and importantly it gives the individual user the flexibility to control how they want to be reached. Communications, and by extension, the workforce, can stop being desktop centric, and start to incorporate the use of mobile internet devices and PDAs much more effectively.

[...]

For example third-party enterprises involved in a project could be given presence access to a particular folder of work for a specified length of time. This could help businesses to work more collaboratively and, importantly, to build stronger relationships, both of which ultimately can only help the bottom line. "

Geospatial resources use in tertiary education: shaping the future

Last week, I attended a workshop organised and run by EDINA, as part of the eFramework workpackage of the SEE-GEO project. The aim of the workshop was to inform future planning and to begin thinking about how geospatial resources might work in a future world. We were asked to look ahead around 5 years - the general consensus was that we would be seeing an evolution rather than a revolution in that time e.g. ubiquity of geo info.

Opportunities and Challenges

Social/political/economic:
  • economics of information - IPR; FoI; access and exploitation
  • what about the knowledge that doesn't lend itself to a digital format?
  • how to handle digital persona - virtual communities and alternative economies
  • divisive nature of technology - a new division of class according to access to technology? does it disenfranchise or empower?
Technological
  • standards and interoperability - impact of Google/Microsoft/Yahoo?
  • how to manage fast paced change and multiple devices
  • still a need to teach and train experts - geo experts will be needed, deeper learning for experts
  • domination of Google/Microsoft/Yahoo - driving technology but have also helped put GI in mainstream
  • data deluge
  • protection/privacy/access/reuse
  • embedding (what does embedding really mean?)

Research

  • need an underlying basic IT infrastructure (e.g. grid, visualisation, mobile) with a spatial infrastructure (e.g. spatial ontologies) overlaid on top
  • Google/Microsoft/Yahoo challenge - raises expectations; discourages sharing?; how well does it transfer to academia?
  • methodologies - lack of skills here - mashups are not research; need to develop more analytical skills in young researchers
  • data - integrity; interoperability; creation (new, repurposed); sharing
  • policy - IPR; funding; publication; RAE/REF; tracking development of information
  • collaboration - technological, social, learning with industry

Enablers

Data/Content
  • Data is currently in layers and "all over the place"
  • What will INSPIRE achieve?
  • funding for infrastructure: interoperability; storage; distribution
  • role of community generated data
  • quality and validation
  • semantic enrichment
  • where does Google/Yahoo/Microsoft fit?
  • Research Council mandates are not enforced
  • how does a researcher deposit a dataset/database?
  • depth/breadth tension
Tools/Technology
  • there is a disconnect between creator and dataset - need provenance info - data/process broker, intelligent catalogue
  • (web) services lead to fundamental changes in models of use e.g. do you need processing power alongside the data - remote processing
  • "handy" mobile needed - portable, light, multiple ports, GPS, wearable
  • sensor networks and notion of central storage
  • tools/portals enable virtual world immersion - deeper sense of telepresence
  • can we learn from games technology?
  • consolidated and converged technologies
  • collaboration and sharing - less travel?
  • different publication needs - raw data; code; published papers
Skills, knowledge, people
  • wider promotion of geo info
  • compulsory GI education
  • funders to encourage outputs to be disseminated
  • policy framework
  • repositories, portals, databases
  • need for academic level specialist support
  • career development
  • professional development
  • networks and communities of practice
Legal/policy
  • funding for methodological development e.g. spatial methods for Grid
  • copyright and intellectual property - derived data, watermarking, commercialisation
  • training - cross-disciplinary; quality
  • data and standards development - involving user communities
  • ethics - code of practice; awareness of issues; data integrity; monitoring
  • support - policy to encourage networking
  • data access policy - feasibility and extent of info in public domain
  • access/usage permissions - who has the right to grant permissions? authentication in a global context
  • collaborative support - policy to enable multi-centre, multidisciplinary, multisector, multinational activity
Social/institutional/economic
  • social software/networking tools
  • wider dissemination of metadata beyond traditional subject boundaries
  • cultural change to cite datasets
  • links between universities and schools
  • changing demography e.g. more adult learners
  • funding - different streams - staffing, content, experimentation
  • benefits - clear roles/responsibilities
  • free or pay to view infrastructure
  • alternative (i.e. to OS) providers now available
  • entrepreneurial drivers
  • REF/RAE should effectively recognise complex and hybrid digital outputs
  • institutional or subject repositories
  • nervousness about depositing material
  • support to clear confusion re IPR especially in relation to derived data
There was some discussion about the role of JISC and its Geospatial Working Group so some messages to feed back.

Also, as an aside, I talked with Dr Douglas Cawthorne from De Montfort Uni in Leicester - they are involved in a large project to map Leicester - the result will be a multilayered map, showing the current city, the Roman city, social maps, emotive maps etc and will incorporate user generated content e.g. photos. Something to watch out for...

Thursday 22 May 2008

IT delivering value

Another interesting article on delivering value in Computing (15 May 08) - "Harnessing IT value", reporting on research at Cranfield which came up with 6 competencies that organisations should have if they want to deliver value through IT:
  1. creating strategy
  2. defining the information system contribution: translating strategy into processes, information and systems investments
  3. exploiting information: to maximise benefits
  4. defining the capability: long-term planning of architecture and infrastructure
  5. implementing solutions
  6. delivering IT supply

IT contribution to the green agenda

Computing (15 May 08) runs with "Talks begin on cutting Europe's IT energy use" on the cover. A consultation is looking at how IT can enable a 20% cut in EU energy use by 2020. This will look at hardware efficiency as well as delivering online services and remote working.

Wednesday 14 May 2008

Provenance theme at NeSC

A nice intro article to the new theme on provenance in the latest NeSC newsletter...
http://www.nesc.ac.uk/news/newsletter/May08.pdf
Also a helpful report from the "Marriage of Mercury and Philology" event including a summary of the CLELIA project, which is looking at how to mark up and structure manuscripts to include all components of the text.

Successful IT projects in the public sector

http://www.bcs.org/upload/pdf/success-public-sector-projects.pdf

BCS blog entry on geo DRM

http://www.bcs.org/server.php?show=ConBlogEntry.427

Tuesday 13 May 2008

Govt websites ... cont

And another useful link from National Archives
http://www.nationalarchives.gov.uk/webcontinuity/

Monday 12 May 2008

Govt web sites

Interesting article in Computing about the plethora of Government web sites, which doesn't really help anyone find the information they need when they need it:
http://www.computing.co.uk/computing/news/2215735/mps-call-intervention-number-3978850

I also came across the National Archives' Web Rationalisation project recently:
http://www.nationalarchives.gov.uk/preservation/webarchive/web-rationalisation.htm

Tuesday 6 May 2008

Project tips

A useful article from BCS on project triage (http://www.bcs.org/server.php?show=ConWebDoc.18968), recommending that projects are monitored using a small number of key metrics to help find a way through the mass of information often presented by project managers:
  • milestone slippage
  • delivery trends over time, identified from the slippage data
This only works if you select the most significant milestones.
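
As a minimal sketch of what tracking those two metrics might look like (the milestone baseline and forecast dates below are invented examples):

```python
# Minimal sketch: milestone slippage per reporting period and the trend over time.
# The baseline and forecast dates are invented examples.
from datetime import date

baseline = date(2008, 9, 1)            # originally agreed milestone date
forecasts = {                          # forecast date reported each period
    "May report": date(2008, 9, 1),
    "June report": date(2008, 9, 15),
    "July report": date(2008, 10, 6),
}

previous_slip = None
for period, forecast in forecasts.items():
    slip = (forecast - baseline).days
    trend = "" if previous_slip is None else f" (change since last report: {slip - previous_slip:+d} days)"
    print(f"{period}: slippage {slip} days{trend}")
    previous_slip = slip
```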

Thursday 1 May 2008

Shared services in HE

Thanks James for pointing out this article in Computing:
http://www.computing.co.uk/computing/analysis/2215635/advanced-lessons-teamwork-3968914
Mentions UKRDS, though not by name, and highlights JANET as an exemplary model of a shared service.

Wednesday 30 April 2008

Learning from the NHS

It's been a while since I blogged - I've been busy preparing for my PRINCE2 re-registration exam on Friday. Can't believe it's 4 years since I did the Practitioner exam!

Anyway, trying to catch up with what's been happening in the world for the last couple of weeks (whilst my head has been stuck in the PRINCE2 manual) and very very slowly catching up on emails!

This story came out of the BCS Newsletter today - a report on the Thought Leadership Debate on transforming health services. http://www.bcs.org/server.php?show=ConWebDoc.18922

The key points made are that:
  • given that the pace of change is faster than our ability to learn, centralised solutions aren't appropriate and modularity is a more realistic approach, especially as the NHS itself is modular in nature. Prof Eddie Obeng pointed out that policy makers should use this to their advantage rather than work against it. It's an interesting perspective - centralised solutions can be costly and overly generic, but having said that, without some kind of central drive, what happens to interoperability?

  • there is currently an over-focus on processes. I wonder if this is because of the need to use the PRINCE2 methodology for managing projects, which is process-driven. Although PRINCE2 does acknowledge the need to engage people, that perhaps isn't accorded the status it deserves - MSP does address stakeholder engagement to an extent, but even that probably isn't enough (though I have to admit I haven't read the latest version of the MSP manual).

  • another point made was that the end goal is improving performance - why isn't the NHS achieving this? The elements (e.g. hard working workforce, political drivers, funding) are there but there doesn't appear to be a holistic approach to pull it all together.

  • some of the points made in the ensuing debate include:
  1. there aren't sufficient incentives to drive change
  2. there is no shared vision across the NHS - the NHS is not one organisation and doesn't have a single culture
  3. one size fits all solutions won't work - some things are best done locally, some nationally - but even national solutions may need to build in the facility for local personalisation, otherwise people will develop workarounds
  4. the NHS needs early adopters to help coach others (this model was used in the ESR programme)
  5. we shouldn't underestimate incremental change

Wednesday 16 April 2008

JISC conference

Yesterday, the annual JISC conference took place in Birmingham - as usual, a very busy day and although I caught up with lots of people, I still managed to miss some of the people I was hoping to catch up with.

3 of my projects gave demos - 3DVisA, NaCTeM and ASSERT - and it was great to see the interest from the people attending. I went along to two parallel sessions: one on the Strategic eContent Alliance and one on rapid community building. Here are my notes from both...

The Strategic eContent Alliance aims to build a common information environment, a UK Content Framework and to gather case studies and exemplars. The UK Content Framework will be launched in March 2009 and will incorporate:
  • standards and good practice
  • advice, support, embedding
  • policy, procedures
  • service convergence modeling
  • audit and register
  • audience analysis and modeling
  • exchange (interoperability) model development
  • business models and sustainability strategies
There are a number of change agents to achieve the vision of the SCA...
  • common licensing platforms
  • common middleware
  • digital repositories
  • digitisation
  • devolved administrations
  • service convergence
  • UK government policy review
  • funding

Globally, there are other incentives e.g.
  • service oriented architecture
  • EU initiatives
  • Google and Microsoft initiatives
  • Open Content Alliance etc
The SCA has also engaged an IPR consultancy and Naomi Korn gave a brief overview of the issues of working in such a content-rich world. Naomi pointed out that it has never been easier to access content and referred to a number of key developments and standards to be aware of:
  • Science Commons
  • Digital Libraries i2010
  • PLUS
  • ACAP
  • SPECTRUM (collections management)
  • JISC registry of electronic licences
  • Open Access Licensing initiatives
Simon Delafond from the BBC talked about the Memoryshare project, which enables user-generated content to be recorded against a timeframe to create a national living archive. They plan to build on this project with the SCA to create Centuryshare, which will aggregate content and augment it with user-generated content - this will be a proof-of-concept project due to deliver in March 2009.

Meredith Quinn talked about the recent Ithaka report on sustainability. The paper tackles some of the cultural issues to be resolved to create the right environment for sustainability. Meredith outlined the 4 key lessons from this work:
  1. rapid cycles of innovation are needed - i.e. don't be afraid to try new ideas and to drop ideas which aren't working
  2. seek economies of scale - e.g. Time Inc required all their magazines to use the same platform - not such an easy task given the distributed nature of HE, but maybe this is where shared services come in
  3. understand your unique value to your user
  4. implement layered revenue streams
The rapid community building workshop focused on the Users and Innovations programme and the Emerge community which has been set up to support the programme. Given the nature of the Web 2.0 and next-generation technologies this programme is dealing with, it was decided early on to adopt an agile and community-led approach. It was important to avoid imposing an understanding on the community and instead build a shared understanding across it. So 80 institutions were brought together (some 200 individuals) face to face to start to build a community of practice - from there, the community developed further in an online environment set up using Elgg.

The programme shared the success factors for community building:
  • bounded openness
  • heterogeneous homophily
  • mutable stability
  • sustainable development
  • adaptable model
  • structured freedom
  • multimodal identity
  • shared personal repertoires
  • serious fun
some of which are oxymorons! This is explained a little more at https://e-framework.usq.edu.au/users/wiki/UserCentredDevelopment. The approach is based on "appreciative enquiry", a term coined by Cooperrider and Srivastva in 1987.

It was interesting to hear their thoughts on benefits realisation which focuses on 3 strands:
  • synthesis (of learning etc)
  • capacity building
  • increased uptake
The programme is also planning to create an Emerge Bazaar where projects can "share their wares" and offer services. This will also promote a kind of IdeasForge to encourage new activities which might lead to new funded projects. The Emerge Online conference is next week from 23 to 25 April.

As for the keynote sessions, the key points from Lord Puttnam's speech were that we shouldn't try to solve problems with the same kind of thinking that caused them, and that we are only scratching the surface of what we can achieve with technologies - we should therefore be more ambitious and keep innovation high on the agenda.

It was good to hear Ron Cooke highlight the data problem: "...my nightmare is the “challenge of super-abundant data” - not just its life cycle, but its superfluity with the new, unprecedented increases of data through Web 2.0 and user-generated content, including academic publishing in real time, blogging without control, and the quality and reliability of data. I am also concerned about the demands of skills it places on us - critical assessment is needed to deal with this data."

I missed Angela Beesley from Wikia but am pleased to see someone has summarised the talk http://librariesofthefuture.jiscinvolve.org/2008/04/15/jisc-conference-closing-keynote-speech-angela-beesley/ :-)

The SCA team have blogged the conference (far better than I have!) which you can read at http://sca.jiscinvolve.org/2008/04/15/.

The conference also saw the launch of the Libraries of the Future campaign (http://www.jisc.ac.uk/whatwedo/campaigns/librariesofthefuture.aspx).

Friday 11 April 2008

Google's grid

Thanks to Matthew for pointing this out - a preview release of Google's App Engine:

http://code.google.com/appengine/
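The preview is Python-only, so out of curiosity here's a rough sketch of what a minimal App Engine handler looks like using the webapp framework bundled with the preview SDK - treat it as an illustration of the general shape rather than anything lifted from Google's documentation.

```python
# Minimal sketch of an App Engine request handler, assuming the Python-only
# preview runtime and its bundled "webapp" framework - an illustration of
# the general shape, not a definitive example from Google's docs.

from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app


class MainPage(webapp.RequestHandler):
    def get(self):
        # Respond to GET / with a plain-text greeting
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.out.write('Hello from App Engine')


# Map the root URL to the handler above
application = webapp.WSGIApplication([('/', MainPage)], debug=True)


def main():
    run_wsgi_app(application)


if __name__ == '__main__':
    main()
```

You'd also need an app.yaml file mapping URLs to the script before deploying, but the handler above is the core of it.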

Stirling Uni mandates open access

From a press release earlier this week...

STIRLING RESEARCH GOES GLOBAL

The University of Stirling has become the first academic institution in the UK to oblige staff to make all their published research available online.

Stirling is leading the way in open access to its research work, after the University’s Academic Council issued an institutional mandate which requires self-archiving of all theses and journal articles.

Professor Ian Simpson, Deputy Principal (Research and Knowledge Transfer) said: “We believe that the outcomes of all publicly funded research should be made available as widely as possible. By ensuring free online access to all our research output, we will maximise the visibility and impact of the University’s work to researchers worldwide.”

The four year project to create STORRE (Stirling Online Research Repository) has been brought to fruition by information technology specialists Clare Allan and Michael White.

Clare Allan said: “The University now requires all published journal articles to be deposited by authors, as soon as possible after they are accepted for publication, and in compliance with the publishers' copyright agreements.

“It is an important landmark in our archival development and marks the conclusion of a process that started in 2004 when Stirling was one of 20 academic institutions which signed up to the OATS (Open Access Team for Scotland) declaration. The repository project initially focused on electronic theses and in session 2006/07 we became one of the first universities to require these to be submitted electronically.

“The next stage was a pilot scheme for self-archiving of journal articles by some researchers, and this has now become mandatory. We are also building up a retrospective archive.”

Thursday 10 April 2008

Reuters and semantic web

The Economist reports on the semantic web, referring to Reuters' new service Calais, which is free. It also mentions others working in this area: Twine, Powerset, Metaweb, Hakia, Adaptive Blue, Yahoo and Qitera.

SEE-GEO gets mention in latest OGF newsletter

http://www.ogf.org/News/newscal_newsletter.php#link2

Transformational government

Computing (10 April) reports on the government's plans to engage with the public, using social networking and other initiatives. Interestingly, it mentions that a "Whitehall taskforce spearheaded by [Tom] Watson will look at wider moves into information sharing between government and the public". It reports on the success of Netmums and NHS Choices as a spur for doing more in this vein. It also mentions work already underway including "opening up Ordnance Survey data for mash-ups" and the role of Land Registry, OS and the Hydrographic Office in improving public access to information.

Monday 7 April 2008

RCUK to review fEC

This was in the latest RCUK News...

"The Review's terms of reference are:

  • To review the impact of the revised funding arrangements for research on the sustainability of research in Higher Education Institutions;
  • To advise on changes that would enhance the delivery of sustainability;
  • To consider, and propose if necessary, changes in the operation of full economic costs in the funding of research;
  • To report to the Research Councils UK Executive Group and Universities UK by December 2008."
http://www.rcuk.ac.uk/news/080313.htm

Friday 4 April 2008

Visualisations for geosciences

Just came across this on a post by ResourceShelf last month:
http://www.free.ed.gov/resource.cfm?resource_id=2030

Trust and collaboration

Interesting report from Economist Intelligence Unit, sponsored by Cisco:
http://www.eiuresources.com/mediadir/default.asp?PR=2008033101

"Despite rise of virtual interaction, face-to-face collaborations still have the best chance of success"

Thursday 3 April 2008

NaCTeM developments

Really good to see the latest Mental Health demonstrator from the ASSERT project: http://nactem3.mc.man.ac.uk:8080/ASSERT_Refactored/. I particularly like the visualisation - I think it will help social scientists get to grips with text mining and has the potential to facilitate and speed up the systematic review process.

And in a nice join up between two of my projects (NaCTeM and CO-ODE), NaCTeM has released its TerMine Plugin for Protégé. The plugin "uses text mining tools to extract candidate terms from a corpus of text and provides an interface for rapidly bringing these terms into an OWL ontology. It uses the TerMine term extraction tool provided by NaCTeM to extract concepts from text. The plugin accesses TerMine via a Web Service over the Internet."
The plugin can be downloaded from http://www.co-ode.org/downloads/protege-x/plugins/
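As an aside, the general pattern behind the plugin - send a chunk of text to a remote term-extraction service, get back ranked candidate terms - is easy to picture in a few lines. The sketch below is purely illustrative: the endpoint URL and the JSON response shape are hypothetical placeholders, not the actual TerMine web service interface.

```python
# Rough sketch of the "extract candidate terms via a web service" pattern
# the TerMine plugin uses. The endpoint URL and the response format are
# HYPOTHETICAL placeholders, not the real NaCTeM/TerMine API.

import json
import urllib.request

TERM_SERVICE_URL = "http://example.org/termine/extract"  # placeholder endpoint


def extract_terms(text):
    """POST raw text to the (hypothetical) term service and return
    a list of (term, score) pairs, highest-scoring first."""
    payload = json.dumps({"text": text}).encode("utf-8")
    request = urllib.request.Request(
        TERM_SERVICE_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        # Assumed response shape: [{"term": "...", "score": ...}, ...]
        results = json.load(response)
    return sorted(
        ((r["term"], r["score"]) for r in results),
        key=lambda pair: pair[1],
        reverse=True,
    )


if __name__ == "__main__":
    sample = "Cognitive behavioural therapy reduced depressive symptoms in the trial group."
    for term, score in extract_terms(sample)[:10]:
        print(f"{score:6.2f}  {term}")
```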

Also worth mentioning the Kleio demonstrator http://nactem4.mc.man.ac.uk:8080/Kleio/ developed from Phase 1 of NaCTeM.

BT using social networking internally

Computing (3 Apr 08) features a story on how BT is using social networking to encourage more team working and collaboration across the organisation. One area where they are seeing benefit is in collaboration across industry sectors internationally e.g. security experts around the world are able to share knowledge much more easily; and it's now easier to see how different sectors may learn from one another. BT is also using a wiki on a major project (21CN - 21st Century Network) to share understanding.

New classification scheme for research in Australia and New Zealand

"The Australian Bureau of Statistics (ABS) and Statistics New Zealand (Statistics NZ) have
jointly developed ANZSRC to serve as a standard research classification for both
countries. It will improve the comparability of research and development statistics
between the two countries and the rest of the world. For the ABS and Australian
stakeholders, ANZSRC replaces the Australian Standard Research Classification (ASRC
1998) and for Statistics NZ and New Zealand stakeholders ANZSRC introduces a new
framework to measure R&D activity."
http://www.ausstats.abs.gov.au/ausstats/subscriber.nsf/0/2A3A6DB3F4180D03CA25741A000E25F3/$File/12970_2008.pdf

Ocean science projects using OGC standards

The latest OGC newsletter features a short overview of ocean science projects using OGC standards: http://www.opengeospatial.org/pressroom/newsletters/200803. The list of projects includes MOTIIVE on which the new COMPASS project is built.

Wednesday 2 April 2008

web2.0 and impact on science

Just tidying up Bloglines and came across this in Scientific American on web2.0 and science
http://www.sciam.com/article.cfm?id=science-2-point-0-great-new-tool-or-great-risk

Tuesday 1 April 2008

Tips on avoiding scope creep

More tips from the BCS ... on scope creep, including involving users in process mapping; involving stakeholders in project planning; and managing risks, assumptions and issues.
http://www.bcs.org/server.php?show=ConWebDoc.17185

Presentation of project information

Some nice tips from BCS on giving ad hoc project presentations and thinking of different and interesting ways of presenting updates....
http://www.bcs.org/server.php?show=nav.8884

Friday 28 March 2008

Projects addressing issues around research data

Yesterday, we had a meeting here at JISC to bring together current projects working in the field of research data. There's a lot happening and it's going to be really interesting to see what comes out of these studies.

I already mentioned an article earlier this year in Inform (http://ali-stuff.blogspot.com/2008/02/jisc-and-research-data.html). Of course, much of the work stems from Liz Lyon's report from last year, Dealing with Data (see earlier post at http://ali-stuff.blogspot.com/2007/11/data-sharing.html)

Tuesday 18 March 2008

NGS - case study and ENGAGE project

Latest NGS newsletter (http://www.grid-support.ac.uk/files/Newsletter/March_NGS_News_2008.pdf) features:

- article on ENGAGE: "Eleven groups, with research interests that include Oceanography, Biology and Chemistry, have already been interviewed. The results of the interviews will be reviewed during ENGAGE’s second phase. This phase will identify and publicise the ‘big issues’ that are hindering e-Research adoption and the ‘big wins’ that could help it. Solutions to some of the big issues will be developed and made freely available so that the entire research community will benefit. The solutions may involve the development of new software, which will make use of OMII-UK’s expertise, or may simply require the provision of more information and training. Any software that is developed will be deployed and evaluated by the community on the NGS."

- case study of using the NGS in Integrative Biology to understand defibrillation of the heart: "Using the NGS does not give time improvements when you are using sequential code, but it does give definite performance improvements," says Dr Rodriguez. "Once you get started, using the NGS is very easy to use."

Building Effective Virtual Organisations

Webcasts from a recent NSF event:
http://www.ci.uchicago.edu/events/VirtOrg2008/index.php?pg=main

DCC Curation Lifecycle Model

This recently went out to consultation - not sure when the results will come out or how much the model will change as a result. But in the meantime, I want to keep track of the links:

model : http://www.dcc.ac.uk/events/dcc-2007/posters/DCC_Curation_Lifecycle_Model.pdf
background info : http://www.ijdc.net/ijdc/article/view/45/52

"Innovation Nation" white paper

Thanks to James for pointing this out:

"Innovation Nation sets out the Government's aim to make the UK the best place in the world to run an innovative business or public service. It argues that innovation is essential to the UK's future prosperity and the ability to tackle major challenges like climate change.

The paper considers how Government and society respond to changes in innovation across the public, private and third sectors. Other key themes are further supporting innovative businesses and research; increasing exchanges of knowledge; boosting the supply of skilled people; supporting innovative towns and regions and promoting innovation in the public sector.

Headline commitments include [...]:
* Doubling the number of Knowledge Transfer Partnerships between businesses, universities and colleges to boost competitiveness and productivity alongside a greater exchange of innovation expertise between the private sector and Government led by DIUS and the TSB;
* Piloting of a new Specialisation and Innovation Fund to boost the capacity of further education colleges to unlock workforce talent and to support businesses in raising innovation potential;
* Expanding the network of National Skills Academies with one academy for every major sector of the economy;
* Sponsoring new Partnerships for Innovation bringing together venture capital with universities, business and other local partners to jointly develop innovative solutions to local and regional challenges. DIUS will publish a prospectus in the autumn;
* Establishing an Innovation Research Centre in partnership with the Economic and Social Research Council (ESRC), NESTA and the TSB;
* A new Annual Innovation Review to provide a comprehensive annual assessment of promoting innovation in the public and private sectors. The first of these will be published this autumn."

http://www.wired-gov.net/wg/wg-news-1.nsf/lfi/158862

OGC and OASIS collaborating on standards

Press release from OGC (http://www.opengeospatial.org/pressroom/pressreleases/849) about their collaboration with OASIS:
"The groups point to Web services as a key area of their cooperation. With the existing OGC Web Services (OWS) standards, most of the standards needed to publish, discover and use Web-resident geospatial data and services on the Web are in place. However, OWS must work in concert with other Web services standards. That's why OGC members approved the ebRIM (electronic business Registry Information Model) OASIS Standard as the preferred cataloging meta-model foundation for future application profiles of the OpenGIS® Catalog Service Web (CS-W) Standard. "

Monday 17 March 2008

New guide to geospatial resources in humanities

AHESSC has produced a very readable guide to geospatial resources and services in the humanities:
http://www.ahessc.ac.uk/geospatial-resources

Semantic web - various

Some discussion recently about the vision of the semantic web...

Tim Berners-Lee features in The Times Online (http://technology.timesonline.co.uk/tol/news/tech_and_web/article3532832.ece) talking about the potential of the semantic web

Discussion on the BCS-KIDDM list referred to an earlier talk by Prof Ian Horrocks (http://www.epsg.org.uk/pub/needham2005/Horrocks_needham2005.pdf) which is a very readable intro to the concepts behind the semantic web and also refers to Manchester's work with Protégé, including the pizza demo.

Yahoo have also been talking about semantic web, in particular its application in web searching (http://www.ysearchblog.com/archives/000527.html):
"In the coming weeks, we'll be releasing more detailed specifications that will describe our support of semantic web standards. Initially, we plan to support a number of microformats, including hCard, hCalendar, hReview, hAtom, and XFN. Yahoo! Search will work with the web community to evolve the vocabulary framework for embedding structured data. For starters, we plan to support vocabulary components from Dublin Core, Creative Commons, FOAF, GeoRSS, MediaRSS, and others based on feedback. And, we will support RDFa and eRDF markup to embed these into existing HTML pages. Finally, we are announcing support for the OpenSearch specification, with extensions for structured queries to deep web data sources."