WS6: Benchmarking Incentives and Best Practices Across Scientific Disciplines – Abstracting from Established Evaluations

Date: Tuesday, 12 September 2017
Venue: Kollegienhaus, University of Basel

Organizers:

Salvador Capella-Gutierrez & Josep Ll. Gelpi (Spanish National Bioinformatics Institute (INB) & Barcelona Supercomputing Center (BSC), Barcelona, Spain)
Diana de la Iglesia (Spanish National Cancer Research Centre (CNIO), Madrid, Spain)
Jürgen Haas (Biozentrum, University of Basel & Swiss Institute of Bioinformatics (SIB), Basel, Switzerland)

Workshop Website

Call for Abstracts

This workshop brings together experts in benchmarking working in the Life Sciences domain and integrates experiences from various community-driven benchmarking efforts (CASP, BeCalm, CAMEO, GA4GH, among others), illustrating common key elements. The objective is to formulate ideas on how to leverage open infrastructures and continuous approaches for the scientific, technical and functional assessment of bioinformatics tools. These examples will illustrate how benchmarking activities foster new interactions and developments and pave the way to establishing a general-purpose data warehouse for benchmarking across communities.

The organizing committee welcomes your abstracts for oral presentations and scientific posters. This is a great opportunity to share benchmarking experiences in bioinformatics, to pick up ideas from others, and to start future collaborations in the area. Abstracts can be submitted for oral or poster presentation; abstracts not accepted for oral presentation are offered a slot in the poster session on an opt-in basis.

All submissions should be made online by filling in the following form.

Deadlines:
Friday, September 1: Late poster submission deadline
Monday, September 3: Decisions announced
Friday, September 8: Slides/poster submission by email

Workshop Summary

Background
It is common practice to benchmark bioinformatics methods, algorithms and tools during their development, either from scratch or against previous releases. This is typically done with a particular composition of data sets and a particular selection of methods to benchmark against. Such internal benchmarks often suffer from incompleteness with respect to both the test data and the competing methods. The ELIXIR-EXCELERATE Framework for Scientific Benchmark and Technical Monitoring aims to alleviate these shortcomings by focusing on key aspects of modern benchmarking efforts and promoting them across a broad range of bioinformatics communities. By establishing a transparently operated benchmarking data warehouse, the ELIXIR-EXCELERATE Framework allows researchers

  1. to access constantly updated benchmarking data sets and
  2. to efficiently analyze the performance of a particular tool.

Benchmarking thrives on bringing together method and infrastructure developers as well as researchers, with the aim of providing the scientific community with novel scientific tools and cutting-edge research results at the same time. Independent and community-driven blind benchmarks address these aspects and have been shown to be crucial for objectively assessing scientific methods in various communities, including with substantial commitment from industry.

CASP is the prime example of community-driven benchmarking, with biennial editions since 1994. Since then, many other communities have come together to design and run their own benchmarking activities, e.g. the CAFA and Quest for Orthologs efforts. The main benefit of recurring independent assessments is that the task composition differs from the data originally used to develop the methods. As the tasks evolve over time, new aspects become crucial for evaluation, reflecting current developments in the community. Comparing various methods on the same data set at the same point in time lets new methods document superior performance much more transparently, whereas past individual benchmarks inherently suffer from being limited to the methods available at a particular time.

This workshop integrates experiences from various community-driven benchmarking efforts, illustrating key common elements. More importantly, these examples illustrate how benchmarking activities bring their members together and foster new interactions and developments.

Goals

  1. Exchange benchmarking experiences across communities
  2. Derive best practices and methodical approaches
  3. Define requirements to successfully establish community-driven (continuous) benchmarking in emerging communities
  4. Provide practical examples about how to integrate existing benchmark efforts into the ELIXIR-EXCELERATE framework
  5. Invite existing and emerging community-driven benchmark initiatives to make use of the ELIXIR-EXCELERATE framework via a hands-on session

Target Audience
Members of scientific communities already conducting community-driven benchmarks and scientists interested in establishing benchmarking within their respective community.

Workshop Agenda and Speakers:

Please check the workshop website regularly for updates.

Workshop speakers include:

  • Adrian Altenhoff (ETH Zurich; Swiss Institute of Bioinformatics, Switzerland)
  • Salvador Capella-Gutierrez (Spanish National Cancer Research Centre, Spanish National Bioinformatics Institute, Madrid, Spain)
  • Josep Lluís Gelpí (Barcelona Supercomputing Centre, University of Barcelona, Spanish National Bioinformatics Institute, Spain)
  • Jürgen Haas (Swiss Institute of Bioinformatics; University of Basel, Switzerland)
  • Peter Krusche (Illumina)
  • Núria López-Bigas (Institute for Research in Biomedicine, Barcelona, Spain)
  • Anália Lourenço (University of Vigo, Spain)
  • Javier Luri (University of Barcelona, Barcelona, Spain)
  • John Moult (IBBR, University of Maryland, Maryland, USA)
  • Cedrik Magis (Centre for Genomic Regulation, Barcelona, Spain)