From YouTube: Discovery of the θ13 Weak Mixing Angle at Daya Bay using NERSC & ESnet, Craig Tull, Berkeley Lab
Description
Presented at NUG 2013, the annual meeting of the NERSC Users Group.
I'll talk about another high-energy physics experiment, one that has been done in China by people here at Berkeley Lab, as well as at other institutions. Ian is from the physics department, so his talk was very heavily physics-oriented; I'm in the Computational Research Division, so I'll talk a little bit more about the computing, but I'll try and start out with the physics itself.
The Daya Bay experiment is the first major physics experiment that is a more or less equal partnership between the United States and China. It's very typical of high-energy physics experiments: although at a much, much smaller level than the ATLAS or LHC experiments, it's global in nature. It requires marshaling many people from many institutions. We're probably 10 percent or less of the size of an LHC experiment, but that's still quite a lot of people and resources, and an effort to coordinate.
The experiment itself is being conducted at a nuclear power plant about 50 or 60 kilometers northeast of Hong Kong. It's there for a variety of reasons, but one is that it's one of the most powerful sources of neutrinos on the planet. Another is that we couldn't get US nuclear power plants to cooperate with us; Chinese nuclear power plants seemed a little more willing to do so. This is a picture of the nuclear power plant.
You can see there are three pairs of reactors at two locations in the power plant. Our detectors are underneath the mountains here, in tunnels that we blasted. So why measure θ13? I guess it was in the late 60s or early 70s, when neutrinos from the Sun were first measured, that it was noticed, after a few false starts, that only about a third or a half of the neutrinos we expected were actually showing up.
It was postulated that this is due to neutrino oscillations: that is, electron neutrinos from nuclear reactors, and from reactions in the Sun, turning into other flavors of neutrinos. This is described by a unitary matrix that has three mixing angles: θ23, θ13, and θ12. Before about a year ago, last March, θ23 and θ12 had been measured, but not θ13. It's the smallest and last unobserved mixing angle of this lepton mixing matrix.
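That unitary matrix is the PMNS lepton mixing matrix; in its standard parameterization (δ is the CP-violating phase, not covered in this talk) it factors into the three rotation angles mentioned above:

```latex
U = \begin{pmatrix} 1 & 0 & 0 \\ 0 & c_{23} & s_{23} \\ 0 & -s_{23} & c_{23} \end{pmatrix}
    \begin{pmatrix} c_{13} & 0 & s_{13}e^{-i\delta} \\ 0 & 1 & 0 \\ -s_{13}e^{i\delta} & 0 & c_{13} \end{pmatrix}
    \begin{pmatrix} c_{12} & s_{12} & 0 \\ -s_{12} & c_{12} & 0 \\ 0 & 0 & 1 \end{pmatrix},
\qquad c_{ij} \equiv \cos\theta_{ij},\; s_{ij} \equiv \sin\theta_{ij}.
```

The middle factor is the one that vanishes if θ13 = 0, which is why measuring θ13 was the outstanding piece of the matrix.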
We measured it and published our first results in March of last year. It has a lot of physics implications which I won't try to go into, but here's a picture of our detector. As everyone knows, neutrinos are quite difficult to detect; they pass through matter basically unimpeded. They pass through the Earth quite regularly, in fact: you are being bathed by neutrinos from the Sun at the same rate at night as you are during the day. The detector itself is fairly large, of order 20 tons of doped liquid scintillator. An electron antineutrino comes in from the reactor. Not shown here is a water pool outside of the detector that vetoes cosmogenic muons and shields against background radiation. The antineutrino comes in, interacts, and produces a positron and a neutron; the positron, because it's antimatter, annihilates, producing some gammas.
So the signal that we're looking for is a very prompt signal from the positron and a delayed signal from the neutron: a very simple detector. It just turns out that the characteristic length for these neutrino oscillations has a maximum at about 2 kilometers for the θ13 contribution to the oscillation. So what you would like to do for your experiment is to have detectors very near the source of the neutrinos, and then another detector about two kilometers away, and that's in fact what we do.
We have this Y-shaped tunnel, where some of the detectors are near one set of nuclear reactors, another set of detectors is close to the other set of nuclear reactors, and then we have some detectors about two kilometers away. So this is the essence of the experiment: you measure the neutrino rate here and the neutrino rate there, and calculate the disappearance and the θ13 mixing.
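The disappearance measurement can be sketched numerically. This is a minimal two-flavor approximation, not the collaboration's full analysis; the parameter values (sin²2θ13 ≈ 0.092, the value Daya Bay published in March 2012, and a representative Δm² ≈ 2.5×10⁻³ eV²) are filled in for illustration:

```python
import math

def survival_probability(L_km, E_MeV, sin2_2theta13=0.092, dm2=2.5e-3):
    """Two-flavor anti-nu_e survival probability, theta13 term only.

    L_km: baseline in kilometers; E_MeV: antineutrino energy in MeV;
    dm2: mass-squared splitting in eV^2. The constant 1.267 converts
    eV^2 * m / MeV into radians of oscillation phase.
    """
    phase = 1.267 * dm2 * (L_km * 1000.0) / E_MeV
    return 1.0 - sin2_2theta13 * math.sin(phase) ** 2

# For a typical ~4 MeV reactor antineutrino, the deficit is maximal
# near 2 km -- which is why the far hall sits at that baseline.
near = survival_probability(0.4, 4.0)  # near-hall-scale baseline
far = survival_probability(2.0, 4.0)   # far-hall baseline
```

Comparing the near and far rates cancels much of the reactor-flux uncertainty, so the far/near ratio directly constrains sin²2θ13.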
So this is a NERSC talk. Let me just say that Ian described this sort of Tier 1 / Tier 2 model of LHC computing, and we adopted the same model at NERSC, because our Tier 1 in particular, PDSF, is where almost all of the processing in the United States for Daya Bay happens. All of our data is stored on HPSS, including the raw data and any processed data that is unreproducible, meaning that if we have a data set that is at the basis of one of our publications, we store it forever.
In addition to HPSS, there are on-site data servers, and we have a mix of national and international institutional networking. We provisioned OC-3 links for getting data out of the nuclear power plant, and then we go across Chinese international and US national networks. Now, we're producing of order 125 terabytes of raw data per year, and about the same amount of derived data per year as well, that we need to store.
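As a back-of-the-envelope sanity check on those numbers (assuming the 125 TB/year figure quoted above), the sustained transfer rate fits comfortably on an OC-3:

```python
TB = 1e12                       # bytes, decimal terabyte
bytes_per_year = 125 * TB       # raw data volume quoted in the talk
seconds_per_year = 365.25 * 24 * 3600

avg_rate_mbps = bytes_per_year * 8 / seconds_per_year / 1e6
# roughly 32 Mb/s sustained, well under an OC-3's ~155 Mb/s line rate
```

Of course the instantaneous rate is burstier than the average, and retransfers after outages need headroom, which is part of why alternate paths matter.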
We also have some alternatives where, if something happens, we send the data across an alternate trans-Pacific network, or we even have the ability to do manual sneakernet between the experiment and Hong Kong. And things do happen: we've had network outages because of earthquakes and because of super typhoons, and we've even had it happen twice because of a ship dragging an anchor across the trans-Pacific cable.
When the cable needs to be repaired, we're in constant contact with the ESnet folks and the CSTNET folks to try and recover from these kinds of problems, because it is a 24/7 experiment. The other part of the experiment, in computing terms, is the software. We operate in a reusable software ecosystem with a lot of data-driven workflow.
Ian showed you this picture, and that's because we're basing our framework on the same architecture that ATLAS and LHCb base theirs on. I won't go into any detail there, except to say that all the scientists write their code in this area here, in the algorithms, but there's a lot of code out in the core, and, as Ian mentioned, LBL is responsible for a lot of that on both experiments.
There are a couple of reasons to have these component architectures. One is just the sheer complexity of the code that you're trying to coordinate. Daya Bay is no ATLAS, but we've still got hundreds of thousands of lines of code in our system that have to be coordinated, and we have many, many components that have to operate together. We have adopted the very typical approach, which is to have common, shared, foundational analysis components that everybody uses and everybody contributes to, and then individual components that are specific to particular analyses.
This really helped us in doing the analysis and trying to get ready for our first publications, because people could share information and compare results between the different analyses; it really helped a lot. But that's NuWa; we have a lot of other pieces. We have SPADE and ODM, and also P-Squared, which runs on the NERSC machines to drive our analysis. I'll just briefly describe those. This is really a data-driven workflow system, so that as data files come in, they get processed through the system, analyzed, and presented as results back to the scientists.
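The core idea of a data-driven workflow can be sketched in a few lines. This is a hypothetical illustration, not SPADE's or P-Squared's actual interface; `scan_once` and the `.data` suffix are invented for the example:

```python
import pathlib

def scan_once(inbox: pathlib.Path, seen: set, process) -> list:
    """One polling pass of a data-driven pipeline: every file not yet
    seen is handed to `process` (analysis, transfer, ...) exactly once."""
    new_files = [f for f in sorted(inbox.glob("*.data")) if f.name not in seen]
    for f in new_files:
        seen.add(f.name)
        process(f)
    return new_files
```

A daemon would call `scan_once` in a loop; the key property is that the arrival of data, not a human operator, triggers each processing step.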
SPADE is our data transfer and management system. It was originally designed and written at IceCube, and we adopted it initially unaltered, but we've since basically redesigned it and reimplemented it from the ground up, with a lot of improvements, and it's really a very nice piece of code right now.
Data is produced on site, and it arrives in the PDSF data warehouse at NERSC about 20 minutes from when it was produced at the nuclear power plant. This is happening 24/7; Simon and I and several other people have to keep an eye on it, and there's a lot of monitoring involved in the SPADE system.
We set up this system and have been operating it now for several years. About a year ago in December, we actually started taking real data, and all of our preparations really paid off: we were able to get our first θ13 results within only 75 days of the start of data taking. It's almost unprecedented; it is unprecedented, in fact, and most people I meet are astonished that we were able to do this. We were able to see antineutrinos in the first filled detectors within 24 hours.
We were able to see that the far-hall antineutrino detectors had an antineutrino deficit within a week, and, as I say, about 20 days after we closed our last file we were able to get a high-quality θ13 result out for the March announcement and publication that many of you may have heard about. This was one of Science magazine's top-ten breakthroughs of 2012, and I just wanted to highlight that, of the top ten breakthroughs of last year, the Higgs boson and the neutrino mixing angle are two of them. The reason, of course, is that, as Ian said, the Standard Model of physics doesn't actually have predictive power for many of its parameters, maybe 18 or 20 of them; I guess it depends on how you count. Before a year ago, five of those parameters were still unknown. Now two of them have been measured by experiments that we have been involved in here at NERSC and LBNL.
We are still at it. Yes, well, this is actually true: from the physics point of view we got quite lucky, because θ13 was actually much larger than people had feared it might be. In a sense, a lot of our hard work paid off, and that showed up, because it was large. If θ13 had been very, very small, Daya Bay would have been the only experiment that would have been able to measure it. But there are competitors out there: there's RENO and T2K and some other experiments in Korea and Japan, and Double Chooz in France, that are trying to measure the same thing. The fact that it was large meant that those experiments could have measured it as well, and the fact that we were able to have everything set up, with the analysis and data transfer and NERSC resources lined up and working on day one, meant that we were able to get that result out ahead of our competitors, even though θ13 was large. So in a sense we were almost hoping that it was going to be small, because then we wouldn't have had any real competition from other experiments; but the fact that it was big means that we could get the result out faster.