From YouTube: 2019-08-28 :: Ceph Science User Group Meeting
Description
Agenda: https://pad.ceph.com/p/Ceph_Science_User_Group_Index
A: Some of us got together for a birds-of-a-feather round after the sessions and had a decent conversation, and there was some interest in keeping in touch and continuing that conversation from time to time. So we figured we'd give it a shot over BlueJeans and just go from there. We're all working in a fairly narrow topic area here, HTC, HPC and whatnot, so we're probably hitting a lot of the same problems while solving different problems.
A: There are a lot of similarities, and it could be worth it for people to stay in touch occasionally and share how they deal with those problems, since most of us are more or less understaffed. The idea is basically, as I mentioned in the email to the users list, that I'm more than happy to direct this, but as a user group I'd like to have input from as many people as possible.
A: So, going around for this first one, we should each just take a few minutes to say who you are, where you're from, what you're doing, what research is being done, and any thoughts you have about a medium like this and how we can keep it going. Feel free to pop in anytime and say anything.
B: We have two Ceph clusters that we use mainly for two use cases. One is OpenStack, so RBD, and the other is CephFS as a fast file system that we actually mount on the compute cluster we have running in-house. Size-wise I guess we are between a hundred and a thousand OSDs, depending on the cluster. My idea for a group like this was actually to share best practices and also to share the issues and problems, and maybe solutions, related to scaling.
A: So on your side you don't use the S3 / RADOS Gateway stuff really at all? It's all CephFS, home directories and scratch space? That's kind of surprising; usually everybody's mostly using S3.
B: Every compute node mounts CephFS directly. The main reason for doing that is that it gives you direct access to the data on the OSDs, and there are no extra bottlenecks when you have hundreds of compute nodes banging on the Ceph cluster at the same time.
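A minimal sketch of mounting CephFS directly on a compute node with the kernel client, as described above; the monitor address, client name and secret file are placeholders:

    mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs \
        -o name=hpcclient,secretfile=/etc/ceph/hpcclient.secret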
A: Every time we talk about what we do, if there's a Ceph developer in the audience we usually get some feedback, or complaints, that our objects are too big, because we dump five-gigabyte-plus objects into RADOS directly and things don't like that. But that's only a small amount of my data; most of it is more ordinary.
C: Since I'm next on the list, maybe I can continue. I'm Simon from SWITCH, which is the research network for the Swiss universities, and we're also based in Zurich, like Mattia. We run Ceph on behalf of the other universities, basically, and we adopted it, let's say five years ago, as the basis for an OpenStack cluster, so mainly for RBD, but we also installed the RADOS Gateway without really thinking much about it, and it got popular quite quickly, also for uses that were surprising to us.
C: Applications people hadn't thought of, for example distributing videos that are streamed directly from the RADOS Gateway to hundreds of browsers out there, which brought some funny performance issues initially, but now it's sort of well accepted as a product that lives on its own. Recently people have started to put backups, or backups of backups, from some commercial systems that now support S3 onto it, and for us that's a high-performance use case.
C: We don't really do HPC or HTC; it's more a "missing middle" kind of offering, and we have no ambitions to become a supercomputing centre; there are other people doing that better. But with people who want to copy these large, hundred-terabyte datasets, of course it becomes a performance issue just to do the logistics, and we find that some applications manage that really well by doing lots of parallel transfers. So I am pleasantly surprised.
C: You can get a lot out of this object-store idea when you have decent software over a good connection. I mean, it's not gigabytes per second, but we certainly exceed a gigabit per second for these tasks; so far that has been sufficient, and I think we can scale it further out. We have two clusters in two different sites in Switzerland, about 900 OSDs and 500 OSDs respectively, and in light of these new use cases we have started to do erasure coding lately.
C: The RBD stuff is all replicated three ways, as is traditional, and the newer object storage uses 8+3 erasure coding. Some of the buckets, due to the way the software works, have become quite big, in the hundreds of millions of objects, and they could grow into the billions, and that also poses some problems for some of the APIs; the S3 API doesn't deal with that in a great way. I think there are some improvements on the way there, and we're looking forward to that. Yeah, that's it for me.
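For reference, a minimal sketch of creating an 8+3 erasure-coded data pool like the one described, plus resharding an oversized bucket index; the profile, pool and bucket names are placeholders:

    ceph osd erasure-code-profile set ec-8-3 k=8 m=3 crush-failure-domain=host
    ceph osd pool create default.rgw.buckets.data 1024 1024 erasure ec-8-3
    # very large buckets can be resharded so each index shard stays a manageable size
    radosgw-admin bucket reshard --bucket=big-bucket --num-shards=101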
D: We're interested in the performance of a single MDS, and in using multiple MDSs to scale horizontally, trying to understand how the metadata balancing works, because in our experience it's not that effective; the way Ceph decides to move directories between MDSs is kind of unintelligible. If people want to get into that, we can elaborate on it.
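For readers following along, a small sketch of the usual knobs for scaling CephFS metadata: adding active MDS ranks and, as a common workaround for the balancer behaviour discussed here, pinning subtrees explicitly. The filesystem name and path are placeholders:

    ceph fs set cephfs max_mds 2                        # run two active MDS ranks
    setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/projects  # pin this subtree to rank 1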
D: On the S3 side, we haven't mentioned yet that we back up lots of home directory data to it, which is working pretty well, and we also want to add a second region. What's not quite understood is how to transparently add a second region to an S3 setup and then let users selectively mirror their buckets across to the second region on a case-by-case basis; it's not clear exactly how to do that. And lastly, most of our hardware is getting pretty old.
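A hedged sketch of one way to approach that with RGW multisite; the zonegroup, zone, endpoint and bucket names are placeholders, and per-bucket control is shown with the bucket sync toggle rather than full sync policies:

    # on the second site, after pulling the realm from the master zone (radosgw-admin realm pull)
    radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=site-b \
        --endpoints=http://rgw1.site-b.example:8080 \
        --access-key=SYSTEM_KEY --secret=SYSTEM_SECRET
    radosgw-admin period update --commit
    # let users opt buckets in or out of replication case by case
    radosgw-admin bucket sync disable --bucket=scratch-data
    radosgw-admin bucket sync enable --bucket=precious-data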
D: I'm just not sure that's going to work that well with Ceph, because all of our block storage would end up on probably 12 servers like that, and putting all the block storage on only 12 such dense servers doesn't sound that attractive, but it seems like we're going to have to do it anyway for cost reasons. So yeah, that's the main thing going on at home.
D: Okay, I mean, the issue we see is that the current hardware is something like 40 servers with 24 six-terabyte drives each, and I think we've run out of steam on this cluster. We measure the latency pretty closely, and it seems that even though it's only 60% full on capacity, the latency has increased to over 100 milliseconds per 4k write, which is pretty nasty. So anyway, we'll see.
G: Next on the list, and if you can hear me... oh, that's good news, okay, the technology works. Dave Holland at the Sanger Institute. We've been using Ceph for a few years now; the main motivation is OpenStack, so we've got it for RBD for volumes, but the winner that surprised us was S3. We expected a little bit of S3 to be used, but actually the users really love it; they can use it on and off the farm, and that's the way the majority use it.
G: Hardware-wise we've got 51 servers with 60 disks each, so 60 OSDs per server, 3060 altogether, with three RADOS gateways in front, and we're looking to expand to six. What we found recently, our most recent fire, is that there are people running science software that leaves S3 connections open and ties up the connections, so then other clients can't get in, so we're looking at maybe a proxy in front to mitigate that.
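One possible knob for that, as a hedged aside and not necessarily what was done here: idle S3 connections each hold an RGW worker, so besides a proxy with idle-connection timeouts the gateway's thread pool can be raised. The daemon name is a placeholder:

    ceph config set client.rgw.gw1 rgw_thread_pool_size 1024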
H: We also operate a very small installation at the Van Andel Institute in Grand Rapids, which is just a Ceph cache-tier setup. We have three replicated nodes with NVMe disks, and a cache pool that's tiered on top of their main data storage pool, and they primarily access that with S3 via a RADOS Gateway, again on their site.
H: For us, part of the advantage, or the reason, to start with Ceph in this context is that we could also have pools that don't replicate across sites. It depends on the use case and what makes more sense: we can have a pool whose CRUSH rule maps it to only one site, if that makes sense, or erasure-coded within one site, whatever.
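A minimal sketch of that kind of CRUSH arrangement, assuming a CRUSH tree with a per-site bucket named site-a (a placeholder):

    # replicated rule that only places data under the site-a bucket, host failure domain
    ceph osd crush rule create-replicated site-a-only site-a host
    ceph osd pool create local-only-pool 256 256 replicated site-a-only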
I: Hello, this is Stig from StackHPC in the UK. We don't use Ceph at a particularly large scale, but we've been using it a little bit differently, in that we've been experimenting with deploying CephFS as a home directory for Slurm clusters deployed in OpenStack bare-metal environments.
I: I also did some work, which I presented at Cephalocon, on our role with Ceph at the Human Brain Project at the Jülich Supercomputing Centre, so I'd say our use has been a little bit more experimental, but we also use it in an OpenStack context in the conventional sort of way as well. I think that's all for me.
E: We have three NVMe and three spinning-disk nodes; we have that serving CephFS right now, as well as our RBDs, which are consumed by oVirt for our virtualization environment.
E: So far it's been pretty awesome. We have a hundred-gig back-end network for everything, and we are starting to poke at RDMA.
E: We've actually been using it for long enough that we were there before the managed block device plugins existed, so we do have a Cinder bridge in place for it, with Keystone to handle the individual keys. We're in the process of rolling it over to CephFS, because we found that the performance hit is small enough.
K: This is Adam Hoffman at the Big Data Institute in Oxford. We have been using Ceph primarily with OpenStack, with RBD like other people have said, nothing particularly exciting, but we are actually looking at using Ceph for multi-site backup and object storage, so I'd definitely be interested in the other site that mentioned they were doing that; I think it was Michigan.
M: I can go next. My name is Douglas Fuller and I'm the CephFS engineering manager at Red Hat. In a previous life I worked in research computing at universities and later at Oak Ridge National Laboratory, so I'm definitely interested to see how research computing is pushing the bounds of CephFS, and I'm interested in making sure that CephFS moves in a direction that's helpful to everyone. I think that a lot of the time research is out on the forefront of file system usage.
E: Thinking of interesting use cases for CephFS, we have actually been in the process of rolling out some SMB data movers, with an Active Directory-based DFS namespace in front of them, to solve some of our very long-standing file-services issues. The benefit of having it be a clustered file system has been amazing in terms of our ability to scale out SMB. I know that's not necessarily an HPC or particularly scientific use case, but it's one we weren't really expecting to run into and it has been awesome so far.
E: On the back end we've had a lot of issues with that in our VMware environment and some other areas, so we figured, okay, what if we just take Windows out of the equation and have individual shares that are served by multiple SMB nodes; we call them our data movers. Then we use the DFS namespace in front to present a single entry point to end users, and the client machines can automatically load-balance between them.
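A hedged sketch of what one such data-mover share can look like with Samba's CephFS VFS module; the share name, path and Ceph user are placeholders, not the actual configuration described here:

    cat >> /etc/samba/smb.conf <<'EOF'
    [labdata]
        path = /labdata
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        kernel share modes = no
    EOF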
F: We've actually done it both ways. When we bought our bulk nodes we ended up buying the bare-bones chassis and the drives and the NVMes, everything separate; recently we've started a direct relationship with Supermicro, and now we just buy them fully provisioned. So it's: hey Supermicro, I want three of these nodes crammed with these NVMe drives and this much RAM and all this other stuff, go; and it just shows up.
C: We bought Quanta initially and lately more Supermicro for some reason, and we always bought a small chassis: initially two-U with large disks, and now it's one-U with disks. We're also thinking about moving to a larger chassis, but my colleagues will have to convince me that the savings are worth the risk; that's similar to what they mentioned. I hope we can get it cheap enough.
F: We just recently built out a cold storage tier to go with our bulk tier, as 24 one-U nodes, and per terabyte it was a very comparable cost; I ended up accidentally making a hot tier out of it, just from the sheer parallelism across all the OSDs in those 24 nodes. So if I had to do the whole thing over again, I would probably roll everything into one-U nodes instead of the four-U 36-bay bulk nodes that I have; the one-U nodes each have 12 drives right now.
J: Can you hear me? Okay, hi. I've got a couple of things about hardware. Up until recently I worked in the seismics industry for a company called CGG, and I've just recently started work with a startup applying deep learning to medical imaging and breast cancer diagnosis. We're not an OpenStack shop; we are principally Kubernetes.
J: We've got some interesting use cases in that we want to actually deploy Ceph systems into hospitals, essentially as appliances, so that the images stay in the hospital and we can run deep learning algorithms on them without them being sent outside of the hospital. I have two questions, really. The first is that people here seem to be using CephFS in production; in the past, people have recoiled in horror and said they've had problems with it. Is CephFS reliable?
J: That's good to hear, because we use S3 at the moment, but we would like to use actual CephFS. The other question is on hardware: has anyone looked at the ARM-based servers, principally from SoftIron? No? There's also something called the Mars 800, which is one small ARM processor attached to each drive, put in a chassis with a switch; I believe they were originally a design made for Alibaba. Has anyone used any of those?
D: So then you get this ping-pong situation where none of the MDSs have enough memory to start, and the only way we found to get out of it, short of finding a server with more memory, was to kill clients. By hard-killing clients during the CephFS reconnect phase, the MDS doesn't bother reloading all of the capabilities for the clients that it sees have disappeared. That was pretty scary, and now we monitor those capability counts very closely.
D: It's not nice when just a few tens of clients can break everything.
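A hedged sketch of how those numbers can be watched; the MDS name is a placeholder and the admin-socket commands have to run on the MDS host:

    ceph fs status                                    # active MDSs, cache sizes, client counts
    ceph daemon mds.mds1 session ls | grep num_caps   # capabilities held per client session
    ceph daemon mds.mds1 cache status                 # current MDS cache memory usage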
F: One of our ways to get out of jail, when the caps aren't going away and the memory is just ballooning up, is to remove all of the standby MDSs and say: okay, you have one remaining MDS server, here's a giant swap file, just go chew through it. It won't try to fail over, since there's no one to fail over to, and it'll eventually come back. It's painful, but you don't end up with the round-robin ping-pong effect.
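A rough sketch of that escape hatch, assuming systemd-managed daemons and placeholder host and file names; slow, as noted, but it avoids the failover loop:

    systemctl stop ceph-mds@standby-host          # take the standby MDSs out of play
    fallocate -l 64G /srv/mds-swap && chmod 600 /srv/mds-swap
    mkswap /srv/mds-swap && swapon /srv/mds-swap  # give the surviving MDS room to chew through recovery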
H: Okay, thank you. I'll mention one issue we had. A user was archiving data to us with some kind of homegrown rsync-based tool that created huge amounts of hard links, basically to account for older versions of files that hadn't changed, and it ended up completely overloading a certain hidden pool, or directory, that Ceph uses for stray inodes. We then had to keep increasing, I forget now which parameter we had to keep increasing, but I'm sure a developer or somebody who's familiar with it will know right away.
H: At some point, basically, when you hit that limit, nobody can write any data, or rather nobody can delete any data, because it can't create the stray inode objects in this hidden pool, hidden directory I guess. I'm probably fuzzing the description a little bit, but a lot of hard links are not good for CephFS, is the point.
D: Yeah, we had that too, sorry if I keep talking. The hard links issue is specifically if someone creates hard links like this: they create a file, then they hard link to that file, and then they delete the original file. In that case Ceph puts the original file into that stray directory, and there's a limit of one million stray files by default, right, yeah.
D: So there's a setting you can increase to raise that hard limit, but there are also a couple of tickets in the tracker where they're trying to handle this preemptively. The way you can fix it yourself is that you just have to stat the new inode.
D: The consequence of stat-ing it is that Ceph realizes internally that the hard link points at a stray entry, and then, it's called reintegration, it reintegrates that inode so that the new copy of the file is the primary and it's no longer in the stray directory. We had someone doing the same thing, creating millions of hard links and then deleting; all they did is, we asked them to add a little ls -l of their hard link after they do it, and that kind of fixed everything for that workload.
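A small sketch of that sequence and the workaround; the file names are placeholders, and the limit mentioned above is most likely mds_bal_fragment_size_max (an assumption, since the exact parameter wasn't recalled):

    ln current.dat snapshot.dat   # create the hard link
    rm current.dat                # the original inode now lands in the MDS stray directory
    ls -l snapshot.dat            # stat-ing the surviving link lets the MDS reintegrate it
    # if strays still pile up, the per-fragment limit can be raised (assumed knob)
    ceph config set mds mds_bal_fragment_size_max 200000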
A: Yes, I don't know that off the top of my head. One thing I want to talk about is the cadence for doing this sort of virtual thing. I put a poll at the bottom of the pad that people can put an X next to, or whatever: is monthly too much, is every two months good, what about quarterly? I've been part of some other virtual meetings before, and sometimes every month ends up feeling a little forced.
A: I'd like people's thoughts; I want this to be a community-driven event. Right now I'm not planning on scheduling presentations from people, unless somebody comes up and says, hey, I have an interesting thing I can share with everybody and talk about for half an hour, so it's kind of left up to the people.
A: Then somebody did mention an alternating time zone for APAC folks; yeah, I got a couple of responses from Australia and New Zealand saying they're interested, but this time slot is the middle of the night for them and there's no way they can make a call at this hour. So I don't really know how to deal with that.
I: There's activity around Ceph particularly in Australia. I don't know if you're going to the OpenStack Summit in Shanghai, but there's a presentation scheduled there about using CephFS with MPI workloads from the Pawsey Supercomputing Centre. We also know people from other sites across Australia and New Zealand where there are extensive Ceph deployments for research computing, so I think it's a significant group of people and expertise around this area.
I: But the other thing I'd say, from OpenStack experience, is that the best meetings are the face-to-face ones. So I think if there are people who regularly attend things like Cephalocon or Ceph Days, actually just getting together at those places and having birds-of-a-feather sessions is the very best way of sharing information and breaking down barriers around this kind of thing.
A: Yeah, we agree, in-person is best by far, but some people can only make it to a conference or two a year, and those are spread all over the world. So the idea was that, by doing these virtual meetings, we can at least loosely keep in touch, so that when you do see these people once a year it's not like meeting a whole bunch of strangers again every time you go.
D: Well, okay, are there any specific questions regarding logistics that anyone has? I recognize a lot of the names on there, lots of speakers. For the people that haven't been to CERN yet, is it clear how to get to CERN, and are the hotel arrangements working out? Is all of that working fine? Are there specific questions or concerns there?
D: We'll have coffee for when people arrive, and you can probably get a croissant and a coffee, in case you want to skip breakfast. The agenda looks super cool and I'm really excited about all the talks; we got lots of talks and we managed to fit in basically everyone that submitted, and I'm really happy about that.
D: Yeah, after the second day we'll have a little reception just outside the auditorium where we can mingle, but there's nothing currently planned for after that. So we're thinking about sending out a poll to see who would be interested in going into Geneva that night and then, I don't know, finding some typical Swiss chalet restaurant or something; I don't know if it's too late to book something like that.
D: The tram, yeah; your mic sounded a bit fuzzy, but yes, the tram can be crowded in the morning, that's true.
D: That will be Saturday and Sunday. You can go to opendays.cern (CERN is a TLD, so opendays.cern works) and there you can see the different sites. Some of the things require pre-registration; I don't know if it's too late, and I hope people have already pre-registered for some of the tours if they were interested, but otherwise have a look there.
D: If you are interested... I'm really sorry that the Ceph day is not on the Monday right afterwards; that obviously would have been our preference, but all of the conference rooms are completely booked up the Monday after, basically for clean-up, because on the Saturday and Sunday they'll have public lectures in the conference rooms and then on the Monday they want to clean up. So we couldn't do that.
M: They tell you that they don't do pre-registration for it because it is so popular, and it's first-come, first-served; it's exempted, I think, from that registration system. I was just wondering if there were any tips, like how early do I need to get there, for example.
D: Okay, I don't know what would be available, but we can try to arrange something dedicated if, say, there are ten of us or so; then it might be possible to get a tour of, I don't know, the CMS detector or something like that. So send me and Julian and Roberto a mail.
A: Yeah, sounds like it. I guess that wraps it up here. I'll send out a note to ceph-users; I figured this time slot worked for most people here, obviously, since we got on the call, so for the next one we'll probably stick with the last Wednesday of September. Other than that, thanks for joining, and hopefully we can stay in touch often enough to keep this going.