Monday, 28 July 2008
One week ago we had the ESOF08 conference in Barcelona. This was a BIG event devoted to science communication. More than 3000 people attended the "scientific programme" during the weekend. Not bad, taking into account that the weather in Barcelona was just lovely, and the beach only a few metro stations away...
There were several presentations about the LHC. On Saturday morning the physics motivation and experiment status presentations were given by several people from CERN. Especially interesting and funny, as usual, was the presentation by Álvaro de Rújula. Unfortunately for those who were not there, he is still using "analogue transparencies" (made by hand), so there is no way to download a copy to your PC.
We organised a session on the LHC data processing and analysis challenge on Sunday, and invited Pere Mato and Tony Cass from CERN as speakers. Pere first gave a talk on the challenge of the TDAQ systems in the LHC: to filter out and reduce the number of events from the 40 MHz collision rate down to the 100 Hz that can be permanently stored. Then Tony Cass presented the main challenges that the CERN computing centre is facing as the Tier-0 of the LHC Grid. Finally I presented the LHC Computing Grid and the key role of this huge distributed infrastructure in making the LHC data analysis feasible.
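The scale of that trigger reduction is easy to appreciate with a back-of-envelope calculation. The 40 MHz and 100 Hz rates come from the numbers above; the per-event size used for the storage bandwidth is an illustrative assumption, not an official figure. A minimal Python sketch:

```python
# Back-of-envelope sketch of the TDAQ reduction described above.
collision_rate_hz = 40e6   # LHC bunch-crossing rate (from the text)
storage_rate_hz = 100.0    # events/s that can be permanently stored (from the text)

rejection_factor = collision_rate_hz / storage_rate_hz
print(f"Only 1 in every {rejection_factor:.0f} events survives the trigger chain")
# → Only 1 in every 400000 events survives the trigger chain

event_size_mb = 1.5        # assumed raw event size, purely illustrative
print(f"Implied storage bandwidth: ~{storage_rate_hz * event_size_mb:.0f} MB/s")
```

In other words, the TDAQ systems must throw away roughly 400,000 events for every one that is kept.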
There were quite a number of questions at the end of the session (not bad for a Sunday-after-lunch one). Besides the most repeated one ("when exactly will the LHC start, and how many days later will you discover new physics?"), there was an interesting question about the similarities and differences between our LHC Grid and the now-so-famous Cloud Computing. We answered that, as of today, the LHC Grid and the Clouds available out there (like Amazon's) are quite different. The LHC data processing, besides huge computing and storage capacities, needs very high bandwidth between the two. Tier-1s are data centres specialised in storing Petabytes of data and mining through all of it using thousands of processors in a very efficient way. Trying to use the commercial Clouds for this today, besides being too expensive, would most probably not meet the performance targets.
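To make the bandwidth argument concrete, here is a rough estimate under assumed numbers. Only the "thousands of processors" scale comes from the discussion above; the job count and per-job read rate below are invented purely for illustration:

```python
# Why compute-to-storage bandwidth dominates at a Tier-1 (illustrative only).
n_jobs = 2000             # concurrent analysis jobs (assumption: "thousands")
read_rate_mb_s = 5.0      # sustained read rate per job (assumption)

aggregate_gb_s = n_jobs * read_rate_mb_s / 1000.0
print(f"Aggregate storage-to-CPU bandwidth: ~{aggregate_gb_s:.0f} GB/s")
# Sustaining anything like this over a WAN link to a remote commercial
# cloud would be far harder (and costlier) than serving it locally.
```

Even with these modest per-job figures, the aggregate reaches tens of GB/s, which is why the storage and the processors must sit close together.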
That said, we should all keep an eye on this new hype word, "the Cloud", as it will surely evolve in the coming years and I am afraid our paths are poised to meet at some point. The LHC is not today a target customer for these Clouds, but what these giant companies are doing in order to sell "resources as a service" is indeed very interesting and, as Wladawsky-Berger notes, is driving an "industrialisation" of IT data centres, in a similar way as some companies like Toyota industrialised the manufacturing process 25 years ago.
So more productive, efficient and high-quality computing centres are coming out of the Clouds. We will definitely have to look up to the sky every now and then, just to be prepared.
Thursday, 10 July 2008
Wednesday, 9 July 2008
The fraction of CMS sub-detectors participating in CRUZET3 has steadily increased and for the first time includes all its components: the DT muon system, the RPC barrel, the CSC endcap, the HCAL and barrel ECAL calorimeters, and the recently installed silicon strip tracker (the biggest silicon tracking detector ever built!).
Last night (9.7.2008), over 1 million cosmic ray events were reconstructed in the tracker system. This is the first time we have seen triggered cosmic ray tracks in both the TIB (Tracker Inner Barrel) and the TOB (Tracker Outer Barrel) at the Tracker level.
Primary datasets are created using the new Tier-0 "repacker" in almost real time and transferred to the CAF and Tier-1 sites for prompt analyses. The IN2P3 Tier-1 has the custodial responsibility to hold the CRUZET3 data, although all Tier-1 sites are constantly receiving the cosmic data. During these first two days, ~3 TB of data have landed at PIC, making it the best Tier-1 site from the rate and quality point of view (curiously, yesterday we spent the whole day in scheduled downtime!).
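As a rough cross-check of those transfer numbers (the ~3 TB over two days comes from the figures above; the rest is just arithmetic):

```python
# Average inbound rate implied by ~3 TB landing at PIC in two days.
data_mb = 3.0e6            # ~3 TB, from the text
seconds = 2 * 24 * 3600    # two days of data taking

avg_mb_s = data_mb / seconds
print(f"Average transfer rate into PIC: ~{avg_mb_s:.0f} MB/s")
# → Average transfer rate into PIC: ~17 MB/s
```

That is only the average; the instantaneous rate during active transfers is of course higher, especially with a whole day lost to scheduled downtime.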