The LHC at PIC

When the LHC proton collider switches on at CERN later this year, it will generate an unprecedented amount of scientific data. To process and analyse this data, the largest Grid infrastructure in the world has been built: the LHC Computing Grid, joining more than 100 sites in more than 30 countries. PIC is one of the eleven Tier-1 centres of this Grid. Follow the adventure of real-time LHC data taking in this blog.


CMS Dark Data (2010-07-27, by Gonzalo)

[Image: http://3.bp.blogspot.com/_yB7fJkFSyIo/TE7kNF-67zI/AAAAAAAAC3U/1Koy_3bBzY0/s1600/Lego+Darth+Vader.jpg]

Last month it was ATLAS who was checking the consistency of their catalogs against the actual contents of our Storage. The ultimate goal is to get rid of what has been called "dark" or uncatalogued data, which fills up the disks with unusable files. Recall that at the time ATLAS found that 10% of their data at PIC was dark...
Now it has been CMS that has carried out this consistency check on the Storage at PIC. Fortunately, they also have quite automated machinery for this (https://twiki.cern.ch/twiki/bin/view/CMS/StorageConsistencyCheck), so we got the results pretty fast.
Out of the almost 1 PB they have at PIC, CMS found a mere 15 TB of "dark data", i.e. files that were not present in their catalog. Most of them came from pretty recent (January 2010) productions that were known to have failed.
So, for the moment the CMS data seems to be around one order of magnitude "brighter" than the ATLAS data... another significant difference between two quite similar detectors.
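At its core, such a consistency check is just a set comparison between a dump of the storage namespace and a dump of the experiment catalog. A minimal sketch of the idea in Python, assuming plain-text dumps with one logical file path per line (the file names are hypothetical placeholders, not the actual tool):

    # Minimal consistency-check sketch: compare what is on disk against
    # what the experiment catalog believes is there.
    def load_paths(filename):
        with open(filename) as f:
            return {line.strip() for line in f if line.strip()}

    storage = load_paths("storage_dump.txt")   # namespace dump: files on disk
    catalog = load_paths("catalog_dump.txt")   # catalog dump: registered files

    dark = storage - catalog   # on disk but not in the catalog ("dark data")
    lost = catalog - storage   # in the catalog but missing on disk (worse!)

    print(f"{len(dark)} dark files, {len(lost)} catalog entries with no replica")

The two differences are not symmetric in severity: dark files only waste space, while catalog entries with no file behind them mean users get errors when they try to read.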
ATLAS pilot analysis stressing LAN (2010-07-23, by Gonzalo)

[Plot: http://1.bp.blogspot.com/_yB7fJkFSyIo/TElAECmtobI/AAAAAAAAC3M/BxNc6SjybUA/s1600/4gbytes_lan.png]

These days a big physics conference (http://www.ichep2010.fr/) is starting in Paris. Maybe this is the reason behind the ATLAS "I/O storm" analysis jobs we saw running at PIC yesterday... if so, I hope whoever sent them got a nice plot to show to the audience.
The first two plots on the left show the last 24h of monitoring for the number of jobs in the farm and the total bandwidth in the Storage system, respectively. We see two nice peaks around 17h and 22h which actually got very near to 4 GB/s of total bandwidth being read from dCache. As far as I remember we had never seen this before at PIC, so we have another record for our picture album.
Looking at the pools that took the load, we can deduce that it was ATLAS generating it. The good news is that the Storage and LAN systems at PIC coped with the load with no problems. Unfortunately, there is not much more we can learn from this: were these bytes actually generating useful information, or were they just the artifact of some suboptimal ROOT (http://root.cern.ch) cache configuration?


LHCb token full: game over, insert coin? (2010-07-05, by Gonzalo)

[Plot: http://1.bp.blogspot.com/_yB7fJkFSyIo/TDHse1hS1FI/AAAAAAAAC3A/ULP-j87cEao/s1600/20100705_lhcb_mc-m_token.png]

This is what happened last 23rd June. The MC-M-DST space token of the LHCb experiment at PIC got full and, according to the monitoring, we have been stuck since then.
PIC is probably the smallest LHCb Tier-1, smaller than the average, and this probably creates some issues for the LHCb data distribution model. To first order, they consider all Tier-1s to be the same size, so essentially all DST data should go everywhere.
PIC cannot pledge 16% of the LHCb needs for various reasons, which is why some months ago we agreed with the experiment that, in order to still make efficient use of the space we could provide, the data stored should be somehow "managed". In particular, we agreed that we would keep only the "two last versions" of the reprocessed data at PIC instead of keeping a longer history. It looked like a fair compromise.
Now we have our token full and it looks like we are stuck. It is time to check whether that nice idea of "keeping only the two most recent versions" can actually be implemented.


Gridftpv2, the doors relief (2010-06-22, by Gonzalo)

[Plot: http://1.bp.blogspot.com/_yB7fJkFSyIo/TCCOu7y_4WI/AAAAAAAAC24/XPlvn2BF3es/s1600/gridftpv2.png]

Yesterday around 14:30 there was an interesting configuration change in the WNs at PIC. It looks like just an innocent environment variable, but setting GLOBUS_FTP_CLIENT_GRIDFTP2 to true tells the applications to use version 2 of the gridftp protocol instead of the old version 1. One of the most interesting features of the new version is that data streams are opened directly against the disk pools, so the traffic does not flow through the gridftp doors. This effect can be clearly seen in the plot on the left, where the graph at the bottom shows the aggregated network traffic through the gridftp doors at PIC: it essentially went to zero after the change.
So, good news for the gridftp doors at PIC. We have less risk of a bottleneck there, and we can also plan for having fewer of them to do the job.
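For reference, the change really is as small as it sounds: one variable exported in the environment that the job wrapper sets up before any Globus FTP client code runs. A minimal sketch of such a wrapper fragment in Python; the globus-url-copy invocation, host name and file paths are illustrative placeholders, not the actual PIC configuration:

    import os
    import subprocess

    # Ask the Globus FTP client library for GridFTP v2: with v2, data channels
    # are opened directly against the dCache disk pools, bypassing the doors.
    os.environ["GLOBUS_FTP_CLIENT_GRIDFTP2"] = "true"

    # Hypothetical example transfer; any tool built on the Globus FTP client
    # that is launched from this environment inherits the setting.
    subprocess.run([
        "globus-url-copy",
        "gsiftp://door.example.org/pnfs/example/input.root",
        "file:///tmp/input.root",
    ])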
CMS reprocessing in 1st gear at PIC (2010-06-18, by Gonzalo)

[Plot: http://3.bp.blogspot.com/_yB7fJkFSyIo/TBuPZqavc-I/AAAAAAAAC2o/43ICfJBDnRs/s1600/200100618_week_cms_reprocessing.png]

We have seen a quite puzzling effect in the last week. After several weeks of low CMS activity, around one week ago we happily saw reprocessing jobs start arriving at PIC in the hundreds.
A few days later, our happiness turned into... what's going on?
As the days passed, we saw that the CPU efficiency of CMS reconstruction jobs at PIC was consistently very low (30-40%!!)... with no apparent reason: there was no CPU iowait in the WNs, nor did the disk servers show contention effects.
We still do not understand the origin of this problem, but we have identified two possible sources:

1) The jobs themselves. We observed that most of the jobs with the lowest CPU efficiency printed a "fast copy disabled" message at the start of their output logfile. The CMSSW experts told us that this means that "for some reason the input file has events which are not ordered as the framework wants, and thus the framework will read from the input out-of-order (which indeed can wreck the I/O performance and result in low cpu/wall times)". Interesting, indeed. We still need to confirm whether the 40% CPU efficiency was caused by these out-of-order input events...

2) Due to our "default configuration", plus the CMSSW one, those jobs were writing their output files to dCache using the gridftp v1 protocol. This means a) the traffic was passing through the gridftp doors, and b) it was using the "wan" mover queues in the dCache pools, which eventually reached their "max active" limit (set at 100 up to now), so movers were queued. This is always bad.

So, we still do not have a clue what the actual problem was, but it looks like an interesting investigation, so I felt like posting it here :-)
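To test the first hypothesis we would want to correlate the "fast copy disabled" message with the measured efficiency of each job. A rough sketch in Python, assuming one logfile per job plus cputime/walltime figures from batch accounting; the directory layout, file naming and the numbers in the dict are hypothetical:

    import glob
    import os

    def cpu_efficiency(cputime_s, walltime_s):
        # CPU efficiency as reported for batch jobs: CPU time over elapsed time.
        return cputime_s / walltime_s if walltime_s else 0.0

    # Hypothetical accounting data: jobid -> (cputime, walltime) in seconds.
    accounting = {"1234": (4100.0, 11800.0), "1235": (10900.0, 11500.0)}

    flagged, clean = [], []
    for path in glob.glob("logs/*.log"):
        jobid = os.path.basename(path).rsplit(".", 1)[0]
        with open(path) as f:
            head = f.read(4096)  # the message appears at the start of the log
        eff = cpu_efficiency(*accounting.get(jobid, (0.0, 0.0)))
        (flagged if "fast copy disabled" in head else clean).append(eff)

    for name, effs in (("fast copy disabled", flagged), ("others", clean)):
        if effs:
            print(f"{name}: {len(effs)} jobs, mean efficiency {sum(effs)/len(effs):.0%}")

If the "fast copy disabled" population sits clearly below the rest, the out-of-order reads are the prime suspect.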
ATLAS dark data (2010-06-08, by Gonzalo)

[Plot: http://1.bp.blogspot.com/_yB7fJkFSyIo/TA35ux06-NI/AAAAAAAACvM/UPn-Hlyi_qc/s1600/atlas-dark-data.png]

It had been quite a while since we last took the broom and did a bit of cleaning of our disks. One week ago we performed a Storage consistency check for the ATLAS data at PIC. Luckily, the tools and scripts to automate this task (https://twiki.cern.ch/twiki/bin/view/Atlas/DDMOperationsScripts) have evolved quite a lot since we last tried this, so the whole procedure is now quite smooth.
In the process we found that we host almost 4 million ATLAS files at PIC, and about 10% of them appeared to be "dark", i.e. sitting on the disk but not registered in the LFC Catalog. Another 3.5% were also darkish, but of another kind: they were registered in our local Catalog but not in the central DDM one.
The plots on the left show the effect of this cleaning campaign. Now the blue line (what ATLAS thinks there is at PIC) and the red line (what we actually have on disk) match better.
So, this goes into the "efficiency" of the experiments' use of the disks, which we have quantified to be of the order of 90%: substantially higher than the 70% that is generally used for WLCG capacity planning.


ATLAS torrent (2010-05-20, by Gonzalo)

[Plot: http://3.bp.blogspot.com/_yB7fJkFSyIo/S_U5XI0Xk1I/AAAAAAAACvA/2uS3OM7DvLs/s1600/atlas.png]

It is true that this is starting to become routine, but I still cannot help opening my eyes wide when I see ATLAS moving data at almost 10 GB/s.
The plot shows the last 24h as displayed in the DDM dashboard (http://dashb-atlas-data.cern.ch/dashboard/request.py/site?statsInterval=24) right now. Incoming traffic to PIC is shown in the 2nd plot: almost half a Gig sustained, not bad. Half to DATADISK and half to MCDISK.
Last but not least, the 3rd plot shows the traffic we are exporting to the Tier-2s, also about half a Gig sustained overall.
There is a noticeable feature in the last two plots: the dip around last midnight. This is due to an incident we had with one of the DDN (http://www.ddn.com/) controllers. For some still unknown reason, the second controller did not take over transparently. Something to understand with the vendor support in the next days.
Stay tuned.
Taking into account the severity of the incident, it is nice to see that the service was only affected for a few hours. The Manager on Duty fire brigade took corrective action in a very efficient manner (well done Gerard!).
Now, let the vendors explain to us why the super-duper HA mechanisms are only there when you test them, but not when you need them.


Welcome home, LHCb pilot! (2010-05-20, by Gonzalo)

[Plot: http://2.bp.blogspot.com/_yB7fJkFSyIo/S_U2Jkksp5I/AAAAAAAACu4/MwOwngDkh3o/s1600/lhcb.png]

There is not much to say besides that we are happy to see the sometimes shy LHCb pilot jobs back running at PIC since last midnight.
It had been quite a while since we last saw these guys consuming CPU cycles at PIC, so they started with their full Fair Share budget. It is interesting to see that under these conditions they peaked at 400 jobs quite fast, and in about 6 hours they had already crossed their Fair Share red line.
I hear that ATLAS is about to launch another reprocessing campaign, so they will be asking for their Fair Share shortly... I hope to see the LHCb load stabilising at their share at some point, otherwise I will start suspecting they have some problem with us :-)


DDN nightmare and muscle (2010-05-06, by Gonzalo)

[Plot: http://2.bp.blogspot.com/_yB7fJkFSyIo/S-LGLYOAG7I/AAAAAAAACtg/Whc5Mq6L-XM/s1600/graph.php.png]

We got a notable fright last weekend. The Barça match against Inter in the Champions League semifinal was about to start when suddenly... crash. One of our flashy dCache pools serving a DDN (http://www.datadirectnet.com/) fatty partition (125 TB, almost full of ATLAS data) went bananas.
The ghost of "data loss" was there, coming for us. Luckily, after a somewhat "hero mode" weekend for our MoD and experts (thanks Marc and Gerard!), and following the indications of Sun Support, the problem was solved with zero data loss (phew!). The recipe looks quite innocent from a distance: upgrade the OS to the latest version, Solaris 10u8.
We quite often find that a solution comes with a new problem, and this time was no exception. The updated OS rapidly solved the unmountable ZFS partition problem, but it completely screwed up the networking of the server.
We have not been able to solve this second problem yet, which is why the 125 TB of data of the upgraded server (dc012) were reconfigured to be served by its "twin" server (dc004). This is a nice configuration that the DDN SAN deployment enables.
This is, I think, the first time we have tried this feature in production, and there we have the picture: dc004 serving 250 TB of ATLAS data with peaks of up to 600 MB/s... and no problem.
It looks like, OS version issues aside, the DDN hardware is delivering.


Pilot has decided to kill looping job... strikes back! (2010-05-06, by Gonzalo)

[Plot: http://2.bp.blogspot.com/_yB7fJkFSyIo/S-LABzEOCWI/AAAAAAAACtY/GOyrMiYQIPc/s1600/atpilot.png]

Some days ago we noticed a somewhat curious pattern in our dashboards. Here we go again: yesterday and today we can see the same behaviour. A bunch of jobs in the batch system showing near-zero CPU efficiency (red in the upper plot). Looking for the smoking gun... we easily find a correlation with "atpilot" jobs (blue in the bottom plots). These atpilot jobs are nothing more than ATLAS user analysis jobs submitted through the Panda framework (http://panda.cern.ch:25980/server/pandamon/query).
For various reasons, which we are still in the process of elucidating, these atpilot jobs tend to get "stuck" reading input files, and they sit idle in the WN slot until the Panda pilot wrapper kills them. Luckily, it implements a 12-hour timeout for jobs detected as stalled.
So, this is the picture of today: 12h x 200 cores going to the bin. I hope we will eventually find the ultimate reason why these atpilots are so reluctant to swallow our data.
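The pilot-side protection that limited the damage here is essentially a stall watchdog: if the payload makes no progress for longer than a timeout, kill it and free the slot. A minimal sketch of the idea in Python; the 12-hour figure is the one mentioned above, but using output-file growth as the progress signal, and the payload and file names, are assumptions for illustration (the real Panda pilot has its own heuristics):

    import os
    import signal
    import subprocess
    import time

    STALL_TIMEOUT = 12 * 3600   # seconds without progress before killing
    CHECK_EVERY = 300           # polling interval, arbitrary choice

    def output_size(path):
        """Size of the payload's output file, used as a crude progress signal."""
        try:
            return os.path.getsize(path)
        except OSError:
            return 0

    proc = subprocess.Popen(["./payload.sh"])   # hypothetical payload command
    watched = "payload_output.root"             # hypothetical output file
    last_size, last_progress = output_size(watched), time.time()

    while proc.poll() is None:
        time.sleep(CHECK_EVERY)
        size = output_size(watched)
        if size != last_size:
            last_size, last_progress = size, time.time()
        elif time.time() - last_progress > STALL_TIMEOUT:
            proc.send_signal(signal.SIGKILL)    # job looks stalled: free the slot
            break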
Oops! A would-be transparent operation (2010-04-27, by Gonzalo)

[Plot: http://1.bp.blogspot.com/_yB7fJkFSyIo/S9aaZ77FW7I/AAAAAAAACs0/stkbaChHLQg/s1600/20100427-site-stats.png]

If you look at the ATLAS data transfers dashboard right now, you will find PIC easily, since our efficiency over the last 24 hours barely reaches 50%. The reason is the transfer failure peak (orange in the plot) that we experienced yesterday between 10h and 14h: up to 4000 transfers to PIC were failing per hour during a couple of hours.
These transfers were failing with "permission denied" errors at the PIC destination, and the reason was us trying to implement an improved dCache configuration for ATLAS: different uid/gid mappings for the "user" and "production" roles so that, for instance, one cannot delete the other's files by mistake.
The recursive chown and chmod commands on the full ATLAS namespace were more expensive operations than we expected, so the operation was in the end not transparent. It took around 11 hours for these recursive commands to finish (we hope this will get better with Chimera), but thanks to our storage expert MoD manually helping in the background, most of the errors were only visible for 4 hours.


Scheduled intervention, in sync with LHC technical stop (2010-04-26, by Gonzalo)

[Plot: http://3.bp.blogspot.com/_yB7fJkFSyIo/S9VbtfseFkI/AAAAAAAACss/lhqDMP_Dn5E/s1600/site-stats.png]

We are right now draining PIC in preparation for a scheduled intervention tomorrow. This is the first time we have tried to schedule an intervention in sync with the LHC operational schedule; let's see how the experience works out. In principle, it should be good for sites to synchronise their stops with the accelerator, but on the other hand we should make sure we do not all stop at the same time! A communication challenge... our favourite kind :-)
One of our main interventions tomorrow will be the upgrade of the firmware of a bunch of 3Com switches we use to interconnect many of our disk and CPU servers. In the last days we have had quite a number of issues (tickets 57623 (https://gus.fzk.de/ws/ticket_info.php?ticket=57623), 57617 (https://gus.fzk.de/ws/ticket_info.php?ticket=57617), 57177 (https://gus.fzk.de/ws/ticket_info.php?ticket=57177)), reported mainly by ATLAS. We believe these are caused by the old firmware in these switches. However, this is just a theory, of course... we will see after the intervention whether these network failures disappear.
We always think that, having dozens of disk servers as we do for ATLAS, the temporary failure of one of them would not be much of an issue. But this is not quite so. The attached plot shows how, in the night from 23rd to 24th April, transfers from PIC to the Tier-2s failed at rates of up to 800 failed transfers per hour. The problematic disk pool was in fact detected by ATLAS before we detected it ourselves.


CMS Statement for the 7 TeV collisions (2010-03-30, by joseflix)

Today the Large Hadron Collider (LHC) at CERN has, for the first time, collided two beams of 3.5 TeV protons: a new world record energy. The CMS experiment successfully detected these collisions, signifying the beginning of "First Physics" at the LHC.
At 12:58:34 the LHC Control Centre declared stable colliding beams: the collisions were immediately detected in CMS. Moments later, the full processing power of the detector had analysed the data and produced the first images of particles created in the 7 TeV collisions traversing the CMS detector.
CMS was fully operational and observed around 200,000 collisions in the first hour.
The data were quickly stored and processed by a huge farm of computers at CERN before being transported to collaborating particle physicists all over the world for further detailed analysis.
The first step for CMS was to measure precisely the position of the collisions in order to fine-tune the settings of both the collider and the experiment. This calculation was performed in real time and showed that the collisions were occurring within 3 millimetres of the exact centre of the 15 m diameter CMS detector. This measurement already demonstrates the impressive accuracy of the 27 km long LHC machine and the operational readiness of the CMS detector. Indeed, all parts of CMS are functioning excellently: from the detector itself, through the trigger and data acquisition systems that select and record the most interesting collisions, to the software and computing Grids that process and distribute the data.
"This is the moment for which we have been waiting and preparing for many years. We are standing at the threshold of a new, unexplored territory that could contain the answer to some of the major questions of modern physics," said CMS Spokesperson Guido Tonelli. "Why does the Universe have any substance at all? What, in fact, is 95% of our Universe actually made of? Can the known forces be explained by a single Grand-Unified force?" Answers may rely on the production and detection in the laboratory of particles that have so far eluded physicists. "We'll soon start a systematic search for the Higgs boson, as well as particles predicted by new theories such as 'Supersymmetry', that could explain the presence of abundant dark matter in our universe. If they exist, and the LHC will produce them, we are confident that CMS will be able to detect them." But prior to these searches it is imperative to understand fully the complex CMS detector. "We are already starting to study the known particles of the Standard Model in great detail, to perform a precise evaluation of our detector's response and to measure accurately all possible backgrounds to new physics.
Exciting times are definitely ahead."
Images and animations of some of the first collisions in CMS can be found on the CMS public web site http://cms.cern.ch

[Image: http://3.bp.blogspot.com/_toUYvpxSJE8/S7IEWahRPbI/AAAAAAAAA-I/SkunzaY34i0/s1600/1003058_01-A5-at-72-dpi.jpg]

CMS is one of two general-purpose experiments at the LHC that have been built to search for new physics. It is designed to detect a wide range of particles and phenomena produced in the LHC's high-energy proton-proton collisions and will help to answer questions such as: What is the Universe really made of and what forces act within it? And what gives everything substance? It will also measure the properties of well-known particles with unprecedented precision and be on the lookout for completely new, unpredicted phenomena. Such research not only increases our understanding of the way the Universe works, but may eventually spark new technologies that change the world in which we live. The current run of the LHC is expected to last eighteen months. This should enable the LHC experiments to accumulate enough data to explore new territory in all areas where new physics can be expected.
The conceptual design of the CMS experiment dates back to 1992.
The construction of the gigantic detector (15 m diameter by 21 m long, weighing 12,500 tonnes) took 16 years of effort from one of the largest international scientific collaborations ever assembled: more than 3600 scientists and engineers from 182 institutions and research laboratories distributed over 39 countries all over the world.


(Untitled post, 2010-03-29, by Gonzalo)

[Plot: http://4.bp.blogspot.com/_yB7fJkFSyIo/S7CVD4imBnI/AAAAAAAACrI/5fqC_p-lbYE/s1600/tags.png]

Last week we finally started receiving ATLAS TAG data through Oracle Streams, so we are now keeping an eye on how the users are going to consume such a "fancy" service. Selecting events by directly querying an Oracle DB sounds fancy... at least to me :-)
I think in the end we allocated around 4 TB of space for this DB, so it will also be the largest DB at PIC.
All in all, an interesting exercise for sure. I hope users will now come in herds to query the TAGs like mad... there we go.
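For the curious, event selection on TAGs boils down to an SQL query over a table of per-event summary quantities, returning pointers to the events that pass the cuts. A purely illustrative sketch in Python with cx_Oracle; the connection string, table and column names below are invented for the example, not the actual ATLAS TAG schema:

    import cx_Oracle  # Oracle client bindings for Python

    # Connection parameters and schema are hypothetical placeholders.
    conn = cx_Oracle.connect("reader/secret@pic-tagdb")
    cur = conn.cursor()

    # TAG-style selection: cut on a few per-event quantities and get back
    # the run/event references needed to retrieve the full events later.
    cur.execute("""
        SELECT run_number, event_number
          FROM atlas_tags
         WHERE n_muons >= 2
           AND missing_et > :met_cut
    """, met_cut=25.0)

    for run, event in cur.fetchmany(10):
        print(f"run {run}, event {event}")
    conn.close()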
Tape write performance and check_written_file (2010-03-18, by Gonzalo)

[Plot: http://1.bp.blogspot.com/_yB7fJkFSyIo/S6Ji1cbvfiI/AAAAAAAACqU/xU5bZjX_dWo/s1600/tapewrite.png]

To me this was quite a discovery. As usually happens, we had already had this information in our mailboxes for several months. The FNAL folks told us about this Enstore parameter, but we did not pay much attention at the time. Another effect of the "too much information to swallow daily" syndrome (at least for me).
Anyway, there is this funny parameter in Enstore called "check_written_file" which tells Enstore whether to check that files were correctly written to tape... by reading them back! So, quite an expensive check, indeed.
The bottom line is that we had it set to 10 without really realising it: on average, one in every 10 files written was read back for checking. A bit too much, isn't it?
Last Tuesday the 16th, in the evening, this parameter was increased by a factor of at least 50.
The good news is that the ATLAS performance we report to SLS clearly shows a 30% improvement at the expected moment (top plot). Good!
The not so good news is that the same plot for CMS (bottom plot) does not show any hint of improvement... one could even see a degradation! We believe (hope!) this is due to the fact that CMS is not writing many files in one go these days, so its performance is dominated by tape mounts.
We will keep an eye on this, but to me it looks like we saved some euros in tape drive throughput this week ;-)
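A back-of-the-envelope model makes the direction of the gain plausible. If reading a file back takes roughly as long as writing it, then verifying one in every N files stretches the drive time for N files to about N+1 file-times, plus some repositioning between write and read. A quick sketch (the equal read/write speed and the 20% repositioning penalty are assumptions, not measurements):

    def effective_write_fraction(check_every_n, reposition_penalty=0.2):
        # Writing N files costs N time units; verifying one of them adds
        # roughly one unit to read it back, plus assumed repositioning cost.
        n = check_every_n
        return n / (n + 1 + reposition_penalty)

    before = effective_write_fraction(10)   # read back 1 in 10 files
    after = effective_write_fraction(500)   # 1 in 500, after the factor-50 change

    print(f"before: {before:.0%} of drive time spent writing")  # ~89%
    print(f"after:  {after:.0%}")                               # ~100%
    print(f"throughput gain: {after / before - 1:.0%}")         # ~12%

This simple model predicts a smaller gain than the 30% we observed, which suggests each verification actually cost more than one write-equivalent: tape repositioning and remounting are expensive.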
Hammered! (2010-03-05, by Gonzalo)

[Plot: http://2.bp.blogspot.com/_yB7fJkFSyIo/S5Eeg_mT5KI/AAAAAAAACos/c82acAUIEfQ/s1600/atlas.png]

Fridays are normally interesting days, aren't they? No interventions or new actions should be scheduled for Fridays, so that people can enjoy a quiet weekend. But quite often a Friday comes with a surprise. This morning's surprise was this monitoring plot on the Ganglia PBS page: the CPU farm at PIC was being invaded by a growing red blob of very CPU-inefficient jobs. The plot at the bottom pointed us to the originator: atlas pilot jobs.
The ATLAS Panda web page is quite cool, indeed, but not extremely useful for a layman to dig into.
It took us quite some time to realise that the source of these extremely inefficient jobs was just at the end of the corridor: our ATLAS Tier-2 colleagues submitting Hammercloud tests (http://gangarobot.cern.ch/hc/1146/test/) and checking that very low READ_AHEAD parameters for dCache remote access can be very inefficient. Next time we will ask them to keep the wave a bit smaller.


LHC is back! (2010-03-01, by joseflix)

On February the 7th, the CMS collaboration received the final positive referee report and publication acceptance of their very first physics results publication. The paper reports the first measurements of hadron production in the proton-proton collisions that occurred during the LHC commissioning period of December 2009. The successful operation and fast data analysis impressed the editors, and the entire collaboration was congratulated... and a party followed afterwards at CERN! ;)
This paper is being published in JHEP, and others will follow. CMS went into a major water-leak repair during the winter shutdown, and now we are ready for more data. In fact, the LHC restarted operations this weekend, and a few splash events have already been recorded by CMS.

[Image: http://1.bp.blogspot.com/_toUYvpxSJE8/S4unUti909I/AAAAAAAAA9c/AzSrdsMGZ1c/s1600-h/Picture+542.png]

After twenty years of design, tests, construction and commissioning, now is the time for CMS collaborators to enjoy the long LHC run. LHC, we are prepared for the beams!


January availability report (2010-03-01, by Gonzalo)

[Plot: http://1.bp.blogspot.com/_yB7fJkFSyIo/S4uG2Y1WGsI/AAAAAAAACoI/Pfo0Rdd5KSk/s1600-h/atlasmcdisk.PNG]

We started 2010 with a number of issues affecting our two main Tier-1 services: Computing and Storage. They were not bad enough to make us miss the availability/reliability target (we still scored 98%), but there are certainly lessons to learn.
The first issue affected ATLAS and showed up on the evening of January 2nd, when the ATLASMCDISK token completely filled up: no free space! This is a disk-only token, so the experiment is supposed to manage it. ATLAS acknowledged it had had some issue with its data distribution during Christmas: apparently they were sending to this disk-only token some data that should have gone to tape. In any case, it was still quite impressive to see ATLAS storing 80 TB of data in just about 3 days. Quite busy Christmas days!
The second issue appeared on January 25th and was more worrisome. The symptom was an overload of the dCache SRM service. After some investigation, the cause was traced to the hammering of the PNFS carried out simultaneously by some inefficient MAGIC jobs plus some equally inefficient ATLAS bulk deletions. This issue puzzled our storage experts for 2 or 3 days. I hope we now have the monitoring in place to help us the next time we see something similar. One might try and patch the PNFS, but I believe we can live with its non-scalability until we migrate to Chimera.
The last issue of the month affected the Computing Service and sadly had a quite common cause: a badly configured WN acting as a black hole. This time it was apparently a corrupted /dev/null on the box (we never quite understood how that happened). We made our black-hole detection tools stronger after this incident, so that it will not happen again.
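A corrupted /dev/null is a classic way for a WN to eat jobs: if it has been replaced by a regular file, everything redirected there piles up or fails. A sanity check along these lines is easy to add to a WN health script; this is a minimal sketch of the idea, not our actual detection tool:

    import os
    import stat
    import sys

    def devnull_ok():
        """Check that /dev/null is still the character device it should be."""
        try:
            st = os.stat("/dev/null")
        except OSError:
            return False
        # A healthy /dev/null is a character device; a corrupted one
        # typically shows up as a plain file.
        return stat.S_ISCHR(st.st_mode)

    if not devnull_ok():
        # A real health check would take the WN offline in the batch system.
        sys.exit("WN health check failed: /dev/null is not a character device")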
PIC goes to IES Egara (outreach activity) (2010-02-18, by joseflix)

[Photo: http://4.bp.blogspot.com/_toUYvpxSJE8/S31rPINId1I/AAAAAAAAA9A/9xoFBXuhOp0/s1600-h/IES_Egara.jpg]

Last Tuesday, the 16th of February, Dr. Josep Flix went to IES Egara to give an overview of CERN, the LHC and the Grid to final-year high school students. It was not easy getting there: it was raining, and I was carrying around 100 CERN brochures, some PIC brochures and the laptop, all this... riding my bike! After getting lost in the town and asking several locals, I finally arrived at the high school. Wet, but on time. The students were really surprised to hear what we do at CERN: science and technology. In the end, I was lucky enough not to electrocute myself during the talk (remember the rain, and me being wet), and the students then asked very interesting questions indeed, well after the talk... Yes, the creation of black holes was also raised, which seems to be quite a general and widespread concern. From here, I want to congratulate physics professor Juan Luis Rubio for keeping his students interested in physics and with a very good knowledge of particle physics. After the talk, we also spent a good time in a nice restaurant in the town. By then, the rain was gone...


IES Sabadell visits PIC (outreach activity) (2010-01-29, by joseflix)

Yesterday we had a visit from around 80 students and 5 professors from IES Sabadell to the PIC installations. Their informatics background ("Cicles formatius de grau mig/superior", vocational training tracks) made the tour exciting and full of questions. The visit, conducted by Dr. Josep Flix, started with two talks held in the IFAE Seminar Room (next to PIC). The first talk, entitled "The LHC and its 4 experiments: a data stream to understand the Big Bang", was presented by Dra. Elisa Lanciotti, who is the LHCb contact at PIC. The students asked very interesting questions about the physics and the technology used at the LHC, both during and after the talk. The level of curiosity was amazing! Maybe this was partly thanks to the preparation sessions the professors ran before the visit, and to Elisa's comprehensive talk. Afterwards, Dr. Josep Flix presented "The use of Grid Computing by the LHC". He is currently the CMS contact at PIC and the CMS Facilities/Integration coordinator. This talk also drew questions from the attentive audience.

[Photo: http://1.bp.blogspot.com/_toUYvpxSJE8/S2K6j0RdtmI/AAAAAAAAA6o/5uy270SYpnI/s1600-h/IFAE_Seminar_Room.JPG]

After the talks we toured the PIC installations, so they could see how a computing centre is built and managed.
In groups of 15 people, we first showed them real-time views of what is actually happening on the Grid: the nice visualisation of WLCG grid activity on Google Earth, the ATLAS concurrent jobs running at all their Tiers, the CMS overall data transfer volumes, the LHCb job monitor display, and a few local monitoring plots, such as the batch system and LAN/WAN usage.

[Photo: http://3.bp.blogspot.com/_toUYvpxSJE8/S2K8HX9pO1I/AAAAAAAAA6w/Kw55TIsj2VA/s1600-h/PIC_TV.JPG]

Then the visit to the computing area itself started: we showed them the different kinds of disk pools we have installed, covering the Sun X4500 (we opened one, so they could see how the disks are installed and can be easily replaced) and the powerful new DDN system that offers 2 PB of disk space; our computational power, based on brand new HP blade systems; plus the two tape robots we have at PIC (around 3 PB of data stored), which tapes are available on the market, and how we use them. The students were also impressed by the WAN and LAN capabilities, the latter improved with the acquisition of two new 10 Gbps switches.

[Photo: http://2.bp.blogspot.com/_toUYvpxSJE8/S2K-ecHUM2I/AAAAAAAAA64/BLTeyR22P6k/s1600-h/DSC_0187.JPG]

All in all, the morning was extremely fruitful. From PIC we want to thank the professors (Gregorio, Fernando, Lino, Alberto, Alexandra) for the dedication and motivation they offer their students. They enjoyed the visit and want to repeat it with other students from the school two months from now. We will be happy to receive them again! ;)


Last day of LHC running this year (2009-12-16, by Gonzalo)

[Plot: http://3.bp.blogspot.com/_yB7fJkFSyIo/Syn71tbtERI/AAAAAAAACRw/_8Bi1RYkd7M/s1600-h/PastedGraphic-2.JPG]

After so much celebration of first days of LHC running, today it is time to celebrate the last day of LHC running... this year. In a few hours the LHC will be switched off and the accelerated protons will go on holiday until next year.
It has been a very nice and long-awaited time since the experiments started taking collision data last 23rd November. Today the LHC goes on holiday, but the WLCG does not.
This piece of distributed infrastructure that we have been building over the last six years has to stay up and running 24x7 so that the precious data taken can be processed, re-processed, re-re-processed and so on. Somebody said that "data can be equated with money that has value only if it is used and circulated". So this is what we will be doing in the next weeks: giving value to the LHC data. This will not yet be hunting the Higgs, but rather the less sexy minimum bias soft QCD events... still LHC physics, after all.
At the PIC Tier-1 we will watch the services carefully to ensure maximum availability and efficiency.
For the moment, what can we say about PIC's performance during "the month in which the LHC started" (aka November 2009)? We just received this Christmas gift from the official WLCG availability reports:

- PIC availability and reliability for the OPS VO = 100%
- For the ATLAS VO: 98% availability and 100% reliability (the only ATLAS Tier-1 with the maximum score)
- For the CMS VO: 100% availability and reliability (FZK also got the maximum score for CMS)
- For LHCb: 98% availability and 99% reliability (only CERN got 100% for LHCb)


Outreach in the school (2009-11-22, by Gonzalo)

[Photo: http://2.bp.blogspot.com/_yB7fJkFSyIo/SwkP9ZvNR2I/AAAAAAAACRQ/LXHJ6XYfcNU/s1600/satanasset.jpg]

Last week was the "week of science" (http://www.semanadelaciencia.es) in Spain. This happens every year around mid November and consists of one week full of activities aimed at explaining science to the general public. In Catalonia, one of the organised activities is talks by scientists in schools. Last Wednesday, 100 simultaneous talks were given in schools all around Catalonia. I visited a secondary school in Badalona where I had a great time talking about the LHC and the origin of the Universe to around 70 students. I see now that they even posted an entry on the school's blog (http://cienciesb7.blogspot.com/2009/11/xerrada-setmana-de-la-ciencia-2009.html)!
Nice to see that Catalan schools are in the blogosphere...
...and that they had a nice time listening to my LHC stories.


Real data flowing through PIC (2009-11-22, by Gonzalo)

[Plot: http://1.bp.blogspot.com/_yB7fJkFSyIo/SwkHHmErIXI/AAAAAAAACQw/-Bi4rnGiW60/s1600/Dibujo2.PNG]

ATLAS has provided a nice monitoring page (http://atladcops.cern.ch:8000/drmon/crmon_tier1s.html) where we can follow the progress of data distribution in these exciting moments of the first circulating beam in the LHC. These are not collisions yet, but they are real data indeed. After so many years of simulations, we are happy to see the first megabytes of real stuff. In the picture, I have just captured the current status of the dataset distribution to the Tier-1s and from there to the associated Tier-2s. The overall picture looks pretty green, which is good news. PIC received the subscribed data with no problems and promptly redistributed it to the Tier-2s. It looks like the data movement went mostly smoothly. Let's keep an eye on this; we will see the rates growing in the next days.


Circulating beam in the LHC (take two) (2009-11-22, by Gonzalo)

[Image: http://4.bp.blogspot.com/_yB7fJkFSyIo/Swj8z1tKQiI/AAAAAAAACQg/JRBLR8hSZLM/s1600/Dibujo.PNG]

So, there we go. Last Friday 20th November beams circulated inside the LHC again, after one long year of repairs. Everyone is happy, and bottles of champagne (or cava) are being opened in the control rooms. In the picture you can see, besides the party atmosphere in the LHC control room, the first event displays from ATLAS (http://atlas.web.cern.ch/Atlas/public/EVTDISPLAY/events.html), CMS (http://cms.web.cern.ch/cms/News/CirculatingBeam.html) and LHCb (http://lhcb-public.web.cern.ch/lhcb-public/). The hundreds of tracks coming from the collimators where the beams are splashed can be clearly seen in all of them. We are watching the first LHC data.
Commencing countdown, engines on...


LHC beam approaching CMS! (2009-11-10, by joseflix)

Last Saturday evening, the 7th of November 2009, at around 8 p.m., after passing through the LHCb detector, protons arrived at the doorstep of the CMS experiment for the first time since last year's incident, thus completing half the journey around the LHC's circumference.
Low-energy protons from the LHC were dumped in a collimator just upstream of the CMS cavern.
The calorimeters and the muon chambers of the experiment saw the tracks left by particles coming from the dumping point (a so-called "splash event", see images). During the rest of the weekend, bunches of protons were also sent in the clockwise direction, passing through the ALICE detector, and were dumped at point 3.
All detectors saw splash events on their monitoring pages. Castor and the Preshower detectors saw particles for the first time! Some beautiful pictures of the events seen:

[Image: http://1.bp.blogspot.com/_toUYvpxSJE8/SvlTMGURgrI/AAAAAAAAAkY/GnoF3DL6WSg/s1600-h/15440_186350363432_35328943432_2802209_1639862_n.jpg]
[Image: http://1.bp.blogspot.com/_toUYvpxSJE8/SvlTU_XaTAI/AAAAAAAAAkg/3MaoIcVugJY/s1600-h/15440_186350503432_35328943432_2802210_4914849_n.jpg]
[Image: http://2.bp.blogspot.com/_toUYvpxSJE8/SvlTh5F3RTI/AAAAAAAAAko/V_-nZW4tiYs/s1600-h/15440_186351888432_35328943432_2802216_4616467_n.jpg]
[Image: http://4.bp.blogspot.com/_toUYvpxSJE8/SvlT4BsoyjI/AAAAAAAAAkw/ILgE2TWsKCk/s1600-h/15440_186352083432_35328943432_2802222_564510_n.jpg]