Tuesday 30 March 2010

CMS Statement for the 7 TeV collisions

Today the Large Hadron Collider (LHC) at CERN has, for the first time, collided two beams of 3.5 TeV protons – a new world record energy. The CMS experiment successfully detected these collisions, signifying the beginning of the “First Physics” at the LHC.

At 12:58:34 the LHC Control Centre declared stable colliding beams: the collisions were immediately detected in CMS. Moments later the full processing power of the detector had analysed the data and produced the first images of particles created in the 7 TeV collisions traversing the CMS detector.

CMS was fully operational and observed around 200,000 collisions in the first hour. The data were quickly stored and processed by a huge farm of computers at CERN before being distributed to collaborating particle physicists all over the world for further detailed analysis.

The first step for CMS was to measure precisely the position of the collisions in order to fine-tune the settings of both the collider and the experiment. This calculation was performed in real time and showed that the collisions were occurring within 3 millimetres of the exact centre of the 15 m diameter CMS detector. This measurement already demonstrates the impressive accuracy of the 27 km long LHC machine and the operational readiness of the CMS detector. Indeed, all parts of CMS are functioning excellently – from the detector itself, through the trigger and data acquisition systems that select and record the most interesting collisions, to the software and computing Grids that process and distribute the data.

“This is the moment for which we have been waiting and preparing for many years. We are standing at the threshold of a new, unexplored territory that could contain the answer to some of the major questions of modern physics,” said CMS Spokesperson Guido Tonelli. “Why does the Universe have any substance at all? What, in fact, is 95% of our Universe actually made of? Can the known forces be explained by a single Grand Unified force?” Answers may rely on the production and detection in the laboratory of particles that have so far eluded physicists. “We’ll soon start a systematic search for the Higgs boson, as well as for particles predicted by new theories such as Supersymmetry, which could explain the presence of abundant dark matter in our Universe. If they exist, and the LHC produces them, we are confident that CMS will be able to detect them.” But prior to these searches it is imperative to understand fully the complex CMS detector. “We are already starting to study the known particles of the Standard Model in great detail, to perform a precise evaluation of our detector’s response and to measure accurately all possible backgrounds to new physics. Exciting times are definitely ahead.”

Images and animations of some of the first collisions in CMS can be found on the CMS public web site http://cms.cern.ch

CMS is one of two general-purpose experiments at the LHC that have been built to search for new physics. It is designed to detect a wide range of particles and phenomena produced in the LHC’s high-energy proton-proton collisions and will help to answer questions such as: What is the Universe really made of, and what forces act within it? And what gives everything substance? It will also measure the properties of well-known particles with unprecedented precision and be on the lookout for completely new, unpredicted phenomena. Such research not only increases our understanding of the way the Universe works, but may eventually spark new technologies that change the world in which we live. The current run of the LHC is expected to last eighteen months. This should enable the LHC experiments to accumulate enough data to explore new territory in all areas where new physics can be expected.

The conceptual design of the CMS experiment dates back to 1992. The construction of the gigantic detector (15 m in diameter, 21 m long, weighing 12,500 tonnes) took 16 years of effort from one of the largest international scientific collaborations ever assembled: more than 3,600 scientists and engineers from 182 institutions and research laboratories in 39 countries all over the world.

Monday 29 March 2010


Last week we finally started receiving ATLAS TAG data through Oracle Streams, so we are now keeping an eye on how users are going to consume such a "fancy" service. Selecting events by directly querying an Oracle DB sounds fancy... at least to me :-)
I think in the end we allocated around 4 TB of space for this DB, so it will also be the largest DB at PIC.
All in all, an interesting exercise for sure. I hope users will now come in droves to query the TAGs like mad... there we go.
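For the curious, here is roughly what "selecting events by querying the DB" looks like from the user side. This is only a minimal sketch: the connection string, table and column names are made up for illustration and are not the real ATLAS TAG schema.

```python
# Minimal sketch of an event selection against a TAG-style Oracle table.
# All identifiers here (DSN, table, columns) are illustrative placeholders,
# not the real ATLAS TAG schema.
import cx_Oracle  # Oracle client bindings for Python


def select_events(dsn, user, password, min_met_gev=50.0):
    """Return (run, event) pairs passing a simple TAG-level cut."""
    conn = cx_Oracle.connect(user, password, dsn)
    try:
        cur = conn.cursor()
        cur.execute(
            """
            SELECT run_number, event_number
              FROM tag_events              -- hypothetical TAG table
             WHERE missing_et > :met       -- event-level quantity stored in the TAG
            """,
            met=min_met_gev,
        )
        return cur.fetchall()
    finally:
        conn.close()


if __name__ == "__main__":
    for run, evt in select_events("pic_tagdb_example", "reader", "secret"):
        print("run %s event %s" % (run, evt))
```

The point of the service is exactly this: the heavy event-level selection runs inside the database, and only the short list of (run, event) identifiers comes back to the user.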

Friday 5 March 2010

Hammered!

Fridays are normally interesting days, aren't they? No interventions or new actions should be scheduled for Fridays, so that people can enjoy a quiet weekend. But quite often Fridays come with a surprise. This morning's surprise was this monitoring plot on the Ganglia PBS page: the CPU farm at PIC was being invaded by a growing red blob of very CPU-inefficient jobs. The plot at the bottom pointed us to the originator: ATLAS pilot jobs.
The ATLAS PanDA web page is quite cool, indeed, but not extremely useful for an outsider to dig into.
It took us quite some time to realise that the source of these extremely inefficient jobs was just at the end of the corridor: our ATLAS Tier2 colleagues were submitting HammerCloud tests and verifying that very low READ_AHEAD settings for dCache remote access can be very inefficient. Next time we will ask them to keep the wave a bit smaller.
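Just to illustrate why a tiny read-ahead buffer hurts so much, here is a back-of-the-envelope toy model (this is not dCache or dcap code, and the latency and CPU-cost numbers are invented for the sake of the example): every remote read pays a network round trip, so reading a file in many small chunks leaves the CPU idle most of the time.

```python
# Toy model only: how the read-ahead buffer size drives CPU efficiency
# for a job streaming a file over the network. All numbers are made up.

def cpu_efficiency(file_size_mb, read_ahead_kb, rtt_ms=5.0, cpu_per_mb_ms=50.0):
    """Estimate cpu_time / wall_time for a job reading a remote file."""
    n_reads = (file_size_mb * 1024.0) / read_ahead_kb  # network round trips
    wait_ms = n_reads * rtt_ms                         # time blocked on the network
    cpu_ms = file_size_mb * cpu_per_mb_ms              # useful processing time
    return cpu_ms / (cpu_ms + wait_ms)


for ra_kb in (4, 64, 1024, 8192):
    eff = cpu_efficiency(file_size_mb=2000, read_ahead_kb=ra_kb)
    print("read-ahead %5d kB -> CPU efficiency ~ %.0f%%" % (ra_kb, eff * 100))
```

With these invented numbers a 4 kB read-ahead leaves the job below 5% CPU efficiency, while a few MB pushes it above 90%, which matches, qualitatively at least, the red blob we saw in Ganglia.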

Monday 1 March 2010

LHC is back!

On 7th February, the CMS collaboration received the final positive referee report and acceptance for publication of its very first physics results paper. The paper reports the first measurements of hadron production in proton-proton collisions recorded during the December 2009 LHC commissioning period. The successful operation and fast data analysis impressed the editors, the entire collaboration was congratulated... and a party followed afterwards at CERN! ;)

This paper is being published in JHEP, and others will follow. CMS underwent a major water-leak repair during the winter shutdown, and now we are ready for more data. In fact, the LHC restarted operations this weekend, and a few splash events have already been recorded by CMS.


After twenty years of design, tests, construction and commissioning, it is now time for CMS collaborators to enjoy the long LHC run. LHC, we are prepared for the beams!

January availability report


We started 2010 with a number of issues affecting our two main Tier1 services: Computing and Storage. They were not bad enough to make us miss the availability/reliability target (we still scored 98%), but there are certainly lessons to learn.
The first issue affected ATLAS and showed up in the evening of 2nd January, when the ATLASMCDISK space token completely filled up: no free space left! This is a disk-only token, so it is the experiment's responsibility to manage it. ATLAS acknowledged it had had some issues with its data distribution over Christmas: apparently, data that should have gone to tape were being sent to this disk-only token. Anyway, it was still quite impressive to see ATLAS store 80 TB of data in about 3 days, a sustained rate of roughly 300 MB/s. Quite a busy Christmas!
The second issue appeared on 25th January and was more worrisome. The symptom was an overload of the dCache SRM service. After some investigation, the cause was traced to the PNFS namespace being hammered simultaneously by some inefficient MAGIC jobs and by inefficient ATLAS bulk deletions. This issue puzzled our storage experts for two or three days. I hope we now have the monitoring in place to help us the next time we see something similar. One might try to patch PNFS, but I believe we can live with its poor scalability until we migrate to Chimera.
The last issue of the month affected the Computing Service and sadly had a rather common cause: a badly configured WN acting as a black hole. This time it was apparently a corrupted /dev/null on the box (we never quite understood how that came about). We have strengthened our blackhole detection tools after this incident, so that it does not happen again.
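The kind of check we mean is simple. Here is a minimal sketch of the idea (illustration only, not our actual tooling): before a WN accepts jobs, verify that /dev/null really is a character device that swallows writes.

```python
#!/usr/bin/env python
# Minimal sketch of a worker-node sanity check for a corrupted /dev/null.
# Illustration only; this is not PIC's actual blackhole-detection tooling.
import os
import stat
import sys


def dev_null_is_sane(path="/dev/null"):
    """A healthy /dev/null is a character device that accepts writes."""
    try:
        st = os.stat(path)
    except OSError:
        return False
    if not stat.S_ISCHR(st.st_mode):
        # e.g. recreated as a plain file: a classic blackhole symptom
        return False
    try:
        sink = open(path, "w")
        try:
            sink.write("probe")
        finally:
            sink.close()
    except IOError:
        return False
    return True


if __name__ == "__main__":
    if not dev_null_is_sane():
        sys.stderr.write("WN sanity check failed: /dev/null is broken\n")
        sys.exit(1)  # the batch wrapper could then drain or offline the node
    print("WN sanity check passed")
```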