Published: Dec 20 2013 / 09:26

The Large Hadron Collider experiments manage tens of petabytes of data spread across hundreds of data centres. Managing and processing data at this scale required significant infrastructure and novel software systems, involving years of R&D and extensive commissioning ahead of the LHC's first data. The evolution of this global computing infrastructure, and the specialisations made by the experiments, hold lessons for many commercial "big data" users. This talk examines the data and workflow management systems of one of the LHC experiments and draws out the successes, weaknesses and interesting organisational issues that have parallels in a commercial setting. Filmed at JAX London 2013.
