From the JISC IPR Consultancy newsletter:
http://www.jisc.ac.uk/whatwedo/projects/ipr/iprconsultancy/newsletter28.aspx#progress
"Her Majesty's Revenue and Customs' and the Charity Commission's lawyers have issued an opinion that when the purpose of a particular piece of research carried out in a university is to enhance the profitability of a third party, then in order to avoid breaking the law, that research must be run by a spin-off company of the HEI. The spin-off company can then covenant its profits to the parent HEI. However, if the "commercial" research is not undertaken through the spin-out company, then the HEI is breaking the Charities Act. On the other hand, if the work can be shown primarily to further the HEI, and benefits to any third party can be shown to be incidental, or only a very small sum is involved, there isn't a problem. The wording the lawyers used was: "Charity trustees would be in breach of their trusts if they decided to carry out activities that were supposed to directly further the charity's objects, but which resulted in private benefits that were not legitimately incidental. If the commercial advantage was one of the main purposes of the arrangement, then the commercial research would have to be undertaken as taxable 'non-primary purpose' trading... In case of doubt, HMRC recommends carrying out the research through a trading company." The Charity Commission will shortly be issuing advice on this matter on its website."
Thursday, 13 March 2008
Google and data storage
Back in January, Wired reported (http://blog.wired.com/wiredscience/2008/01/google-to-provi.html) on Google's plans for open access to research data:
"Two planned datasets are all 120 terabytes of Hubble Space Telescope data and the images from the Archimedes Palimpsest, the 10th century manuscript that inspired the Google dataset storage project"
The piece also refers to an earlier article by Thomas Goetz (http://www.wired.com/science/discoveries/magazine/15-10/st_essay) on freeing "dark data", i.e. negative results, to get around the publication bias problem.
UKDA-Store
"The UK Data Archive (UKDA) is launching UKDA-store, a new research output management tool, later this year. Used to submit data deposits into the UK Data Archive, UKDA-store is to be initially released to the social science research community with the intention of extending the system to other researchers. UKDA-store will enable researchers to submit a range of digital outputs to the self-archiving repository with the right to set permissions for individual and group access, so that data can remain private (on embargo) although metadata continues to be searchable. Furthermore, data that is judged to meet the UKDA’s acquisition criteria can be formally lodged for long-term central system preservation within the UK Data Archive. [...]
UKDA-store will be formally launched at the National Centre for Research Methods Festival on 30 June 2008 in Oxford."
http://www.jisc.ac.uk/news/stories/2008/02/ukdastore.aspx
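The embargo model described above (data stays private while metadata remains searchable) can be illustrated with a minimal sketch. This is a generic illustration, not UKDA-store's actual interface; the class and field names are hypothetical.

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional


@dataclass
class Deposit:
    """A hypothetical self-archived deposit: its metadata is always
    searchable, but the data payload stays private until any embargo
    date has passed."""
    title: str
    keywords: List[str]
    data: bytes
    embargo_until: Optional[date] = None

    def searchable_metadata(self) -> dict:
        # Metadata is exposed regardless of embargo status.
        return {"title": self.title, "keywords": self.keywords}

    def fetch_data(self, today: date) -> bytes:
        # Data access is refused while the embargo is in force.
        if self.embargo_until and today < self.embargo_until:
            raise PermissionError("deposit is under embargo")
        return self.data
```

For example, a deposit embargoed until 1 January 2009 would still turn up in metadata searches during 2008, but `fetch_data` would raise `PermissionError` until the embargo lifts.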
Cloud tools used in NY Times digitisation
A short blog item (pointed out by Bill St Arnaud) on the New York Times's use of Apache Hadoop and Amazon's S3 and EC2 in its archive digitisation: cheaper infrastructure costs, though the comments seem to suggest equal or higher coding costs. Interesting.
http://zzzoot.blogspot.com/2008/02/hadoop-ec2-s3-super-alternatives-for.html
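The pattern behind that kind of bulk job is MapReduce: map each scanned page to a (document-id, page) pair, then reduce each group of pages into one assembled document. Here is a minimal in-process sketch of that pattern in plain Python; the data shapes and function names are hypothetical, and real Hadoop jobs would run the phases across a cluster.

```python
from itertools import groupby

# Sketch of the map/reduce pattern behind a bulk digitisation job:
# map emits keyed intermediate records, reduce assembles each key's
# records (here, pages of an article) into one output document.


def map_phase(pages):
    # pages: iterable of (article_id, page_number, page_content)
    for article_id, page_no, content in pages:
        yield article_id, (page_no, content)


def reduce_phase(mapped):
    # Sort by key (the "shuffle"), then assemble each article from
    # its pages in page order.
    keyed = sorted(mapped, key=lambda kv: kv[0])
    for article_id, group in groupby(keyed, key=lambda kv: kv[0]):
        ordered = sorted(v for _, v in group)
        document = "".join(content for _, content in ordered)
        yield article_id, document


pages = [("a1", 2, "world"), ("a1", 1, "hello "), ("a2", 1, "solo")]
results = dict(reduce_phase(map_phase(pages)))
```

The appeal for a one-off archive conversion is that neither phase needs shared state, so the same logic can be spread over as many rented EC2 nodes as the budget allows.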
NSF partnership with Google and IBM
The Cluster Exploratory (CluE) relationship will enable the academic research community to conduct experiments and test new theories and ideas using a large-scale, massively distributed computing cluster. NSF anticipates being able to support 10 to 15 research projects in the first year of the program, and will likely expand the number of projects in the future.
http://www.nsf.gov/news/news_summ.jsp?cntn_id=111186&govDel=USNSF_51
http://www.google.com/intl/en/press/pressrel/20071008_ibm_univ.html
Back on my blog!
Well, I haven't been doing too well at keeping up with blogging lately: a combination of being poorly, being on leave and just plain busy. Hopefully I'm getting back on track now...
Wednesday, 5 March 2008
myExperiment in Nature
Thanks to Judy for pointing this out: myExperiment gets a mention in Nature. Shame it doesn't mention that it's funded by JISC, but hey, we can't have everything!
http://www.nature.com/naturejobs/2008/080221/full/nj7181-1024a.html
"MyExperiment.org, funded by the UK government, lets users share workflows: the customary protocols for standardizing data, running simulations or conducting statistical analysis on large data sets. Standardized protocols for manipulating large data sets can be tweaked for specific purposes. Users can comment on their usefulness and link to other work-flows of interest. Bioinformaticians and geneticists are among those who stand to benefit most. For example, sharing a workflow for identifying biological pathways implicated in Trypanosomiasis resistance in cattle allowed another investigator to find pathways involved in sex dependence in the mouse model, says myExperiment project leader David De Roure, a computer scientist at the University of Southampton, UK. Done independently, this type of study could take two years. Such streamlining allows scientists to focus on discovery rather than drudgery, he says."