From YouTube: CDS G/H (Day 1) - CI & Teuthology Roadmap
Description
https://wiki.ceph.com/Planning/CDS/CDS_Giant_and_Hammer_(Jun_2014)
24 June 2014
Ceph Developer Summit G/H
Day 1
CI & Teuthology roadmap session
B
Zack, I guess you can take it away. I just want to give a brief update on what it is you're working on and then what the high-level roadmap looks like, and then we can sort of figure things out. I guess, from my perspective, there are two things I want to accomplish: one is to communicate broadly to everyone what the roadmap looks like for teuthology, and the other half would be to talk about any general changes.
D
First — you can hear me, right? So yeah, I can mostly just speak to what I've been working on with teuthology. I've got several kind of large balls in play here that are hopefully going to land soon.
D
One of the larger ones is splitting the teuthology tasks out from the framework, which, like a lot of things in teuthology, ended up being a larger task than you might think. It turns out that gated on a few other pieces I was working on; we've got this new — new-ish at this point, anyway — web service called paddles.
D
So some of that work depended on making a little more sense of the way we schedule jobs internally. Currently we've got a stack of several shell scripts, not actually in the teuthology tree, that we use to eventually bubble jobs through the queue. I have a pull request open right now, actually, that collapses some of those into one. I wanted to split it out and do it in a few different steps, so that it would be a little less likely to break things in the process.
D
It's just that the framework is very complex, as anyone who looks at it knows. Let's see — that's kind of a high-level view of what is happening right now. I know that for some of the third parties that have been asking for instructions and help setting up their own full-blown instances of teuthology, with queuing and scheduling and so on, it's been hard to give easy answers for how to set things up, because it's all kind of in flux. It's about to be much simpler.
D
So,
as
far
as
what
to
do
after
that,
I
mean
one
of
the
things
that
I'm
excited
to
have
working
which
to
be
clear,
has
not
been
started
yet
is
using
something
like
OpenStack
as
sort
of
a
back-end
to
provision
VMs
instead
of
the
downstream
based
system
that
we
have
right
now.
I
think
it'll
be
nice
to
use
tools
that
the
rest
of
the
community
is
using.
D
That's another part of the plan, and yeah, Alfredo and I were talking about that a bit last week. I had done some really initial work on it a while back, and we're going to go from there. Loïc has a question: how much of the teuthology QA suites depend on having bare-metal machines at the moment?
B
Right. So I think the idea is that once the tasks are split out from the running framework, and all the other scheduling stuff is sorted out, and we look at wiring in OpenStack, we'd have a nicer abstraction for the thing that's providing machines to test against — one that has API calls to, you know, lock, unlock, reboot, whatever, and also power cycle — so it could power cycle the VM, or do whatever, in the same way.
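A minimal sketch of what such a provisioning abstraction could look like — all names here are hypothetical, not actual teuthology interfaces, and the in-memory backend just stands in for a real bare-metal or OpenStack implementation:

```python
from abc import ABC, abstractmethod


class NodeProvider(ABC):
    """Hypothetical abstraction over whatever supplies test machines
    (bare metal, downburst VMs, OpenStack, ...)."""

    @abstractmethod
    def lock(self, name: str) -> None: ...

    @abstractmethod
    def unlock(self, name: str) -> None: ...

    @abstractmethod
    def power_cycle(self, name: str) -> None: ...


class InMemoryProvider(NodeProvider):
    """Toy backend: tracks lock state in a dict instead of calling a real API."""

    def __init__(self, names):
        self.locked = {n: False for n in names}

    def lock(self, name):
        if self.locked[name]:
            raise RuntimeError("%s is already locked" % name)
        self.locked[name] = True

    def unlock(self, name):
        self.locked[name] = False

    def power_cycle(self, name):
        # A real backend would hit an IPMI or cloud API here.
        pass
```

The point of the split is that the test framework only ever talks to the abstract interface, so swapping bare metal for OpenStack is just a different backend.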
B
Right now we run several of our test suites against a bunch of VMs that we have, and we do see sporadic failures there, just because they get low on RAM and start swapping. I think that's just a matter of right-sizing the VMs to the set of tests that we run.
B
Okay. Well, I think, in broad strokes, the goals are to make the installation and setup of teuthology much simpler, consolidating everything into paddles so that it's easy. I think the other guiding goal in all of this is to make teuthology sufficiently general that it's not Ceph-specific.
B
Well, the way we're using it right now, the way the tasks are structured, you can, you know, install Ceph, and then later add a Ceph client, and then layer on an NFS daemon, and then layer on an NFS client — you can sort of stack it all up like that. So it's already largely general in that sense, but yeah.
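That stacking shows up directly in how a teuthology job is described: tasks are listed in order and layered on top of each other. The fragment below is illustrative only — the role layout and task choices are not taken from a real suite file:

```yaml
roles:
- [mon.a, osd.0, osd.1, client.0]
tasks:
- install:                # install packages on all nodes
- ceph:                   # then bring up a Ceph cluster
- ceph-fuse: [client.0]   # then layer a client mount on top of it
```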
G
So we have two excellent examples of this from the last three days: two relatively innocuous patches completely broke a bit of the suite. And it didn't have to be those two patches — it could have been pretty much anything else that got merged in that time period, and I don't think there's anything suspicious about them. But the lesson needs to be that it doesn't matter how innocuous the patch is: if it didn't run through the suite, it still could break things. That warrants running every tiny patch through the suite, so I think we need something like this.
B
Yep, I think so. All right, we can start there, because there are a couple of different CI topics I want to cover. So the idea is that we have a limited number of machines to use for testing and lots of small patches going in, but we want everything to get tested before it goes into master, so that master stays unbroken — and we simply can't run the full test suite against all of them. So the idea, then, is to have a temporary integration branch.
G
Good. In some sense this reduces the need for running master runs quite so often, and in that sense it's probably okay, because the main reason we run master runs is that master is our integration branch today, right? The main difference between this and just merging into master is that you can remove patches from the integration branch when they're discovered to be broken; once you merge into master, you have to revert the patch. That's the difference.
B
We could make a git tree that is just the list, and you can edit it on GitHub, I think, right? Yeah — and commit. Then you give commit access to that repository to everybody who needs it, which would be all the core Ceph developers and then anybody else; it's pretty much the same set.
B
I think the other thing that's important is that if a branch is in the integration branch and then eventually does get merged into master, it needs to get dropped from the integration list. I mean, the merge will be a no-op, but it should get pruned. The script could just do a quick check that says: if the branch is already part of master, it does its own commit and removes it, and you have a history of that — it gets removed automatically.
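The pruning check described here fits in a few lines. This sketch models it purely in memory — a real script would ask git directly (e.g. `git merge-base --is-ancestor <sha> master`), and the function name is made up:

```python
def prune_merged(branches, master_ancestors):
    """Drop integration-list entries whose head is already reachable
    from master, i.e. the branch has been merged.

    branches: dict mapping branch name -> head sha
    master_ancestors: set of shas reachable from master
    Returns (kept, removed): entries to keep, and names to commit removals for.
    """
    kept, removed = {}, []
    for name, sha in branches.items():
        if sha in master_ancestors:
            removed.append(name)  # already merged: commit a removal, keep history
        else:
            kept[name] = sha
    return kept, removed
```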
K
Right, and we don't need to do it exactly the same way we're doing it for ceph-deploy — there, teuthology is our test suite, and ceph-deploy takes a minute to run; it's the same with teuthology there, or less. So we can do it; it's just configuration, saying these are the actions we want to take whenever there's a pull request. And the whitelisting and the permissions — if you don't know someone — all of that is taken care of interactively by posting comments on the pull request page.
B
And then it also has the pull request link on the same line, so it knows where to comment. So I think it'd be useful to be able to put in just what you need about the git source and the pull request — and you might not have a pull request; you might want to test a branch that just isn't in one.
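Commenting back on the pull request only needs the PR URL and a token; GitHub exposes PR comments through the issues comments endpoint. A sketch — the function name is made up, and the token in the usage note is a placeholder:

```python
import json
import urllib.request


def build_pr_comment_request(owner, repo, number, token, body):
    """Build (but don't send) a GitHub API request that posts a comment
    on a pull request.  PRs share the issues comment endpoint."""
    url = "https://api.github.com/repos/%s/%s/issues/%d/comments" % (
        owner, repo, number)
    data = json.dumps({"body": body}).encode()
    return urllib.request.Request(
        url,
        data=data,
        headers={"Authorization": "token " + token,
                 "Content-Type": "application/json"},
        method="POST",
    )
```

Sending is then just `urllib.request.urlopen(build_pr_comment_request(...))`, which a CI script can call after it has scheduled a run.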
K
You can make Jenkins say "oh, I just scheduled these pull requests for blah blah blah" — you can customize whatever you want there — so that comment coming from Jenkins might be a bit more informative than the one it posts on the pull request now, as opposed to "oh yeah, your build passed, everything's good", which is probably not what we want, right?
B
So
I
think
the,
but
that
means
that
it
at
like
reply,
the
cron
job
or
something
that
like
midnight
or
something
it
goes
in
a
building
integration
branch
and
pushes
it
out.
We
actually
kick
off
the
build
and
then
just
add
in
a
delay
like
two
hours
later,
we
schedule
teeth
all
g
which
hopefully
will
be
enough
time
for
all
the
builds
to
happen
or
an
hour
whatever
it
is
and
then
and
schedules
like
technology
runs.
So
I
think
the
hardest
bit.
D
Well
also,
I
mean
I
do
see
some
difficulty
in
you.
I
think
what
you
just
said
was
we
have
a
two
hour
delay
that
we
hope
is
going
to
be
long
enough
for
the
build
to
happen.
I
feel
like
any
time
we
just
cross
our
fingers.
You
know
with
regards
to
any
building
of
stuff.
We
get
disappointed,
yeah.
B
Yeah
well
in
your
worst
case,
it
fails,
and
so
it
just
doesn't
run
that
night,
so
I
think
it's
not
the
Alger
name
is
to
have
it.
Have
it
pulling
the
thicket
builders
that
it
cares
about,
like
the
precise,
the
trustee,
the
boss,
whatever
ones
we
are,
are
included
in
the
queue,
a
pool
and
make
sure
that
build
have
succeeded
on
all
those
and
once
that's
true,
then
it
does
it.
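Polling until the builders go green, rather than sleeping a fixed two hours, could look like the sketch below. Everything here is hypothetical: `fetch_status` stands in for scraping each gitbuilder's status, and `schedule` for kicking off the teuthology run.

```python
import time


def builds_ready(fetch_status, builders, ref):
    """True once every required gitbuilder reports success for `ref`.

    fetch_status(builder, ref) -> "pass" | "fail" | "pending"
    """
    return all(fetch_status(b, ref) == "pass" for b in builders)


def wait_then_schedule(fetch_status, builders, ref, schedule,
                       poll_interval=300, timeout=4 * 3600):
    """Poll the builders; schedule the run only when all builds are green."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if builds_ready(fetch_status, builders, ref):
            schedule(ref)
            return True
        time.sleep(poll_interval)
    return False  # builds never went green; skip tonight's run
```

The timeout replaces the crossed fingers: a build that never finishes just means no run that night, rather than a run against missing packages.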
D
And whatever does the polling of the builds can hold off on even scheduling the actual teuthology jobs, so that they're not sitting there in the queue. And then it can schedule them with a higher — a more urgent — priority, so that they can potentially preempt whatever junk is already running.
B
That's a mighty nice bikeshed. How much of the other random stuff that we wanted in general have we captured in all that? Because I think the other big gap we've been seeing is just gitbuilder: we'll push a branch and not notice that the gitbuilders are red, merge it, and then master's builders are red. So this captures that one at least, right?
K
So the thing with what we are using for teuthology and ceph-deploy to integrate with Jenkins is that, I believe, the plugin covers certain use cases. If we want way more flexibility, what we will have to do is write the glue that talks back to GitHub and does what we want separately, because otherwise we're going to be fighting the plugin, yeah.
K
Yeah, but the use case John is mentioning — my experience with Travis is that it will create a VM for you, and you can have an environment where your tests run as isolated as possible. I really don't think it has the capability of saying, "oh, I want to run a bunch of, say, five different VMs", or ten different things.
M
I think if we were to try and use it, the way it would work is that it would create a VM in which we ran our test script, and then our test script would call out to something like OpenStack to provision the cluster — which wouldn't actually be such an insane way of doing it. We just have to be aware that we'd still have to do the whole OpenStack piece ourselves.
B
So, just mirroring some of the stuff that the gitbuilders are doing — like a simple make check on whatever VM OS types they have — might not be a bad thing; and in addition to that, it also runs the thing that adds it to our integration list. I think the main thing is: as long as it can trigger something from a pull request, it can go back and comment on that same pull request.
B
So
it
knows
what
the
pull
request
URL
is
from
this
environment
or
whatever,
and
you
can
whatever.
The
script
is
that
it's
doing
can
do
arbitrary
stuff
like
adding
that
branch
to
the
integration
list,
then
it
seems
like
it's
not
really
any
different
and
Jenkins
exceeding
that
it
doesn't
make
your
eyes
bleed.
It.
H
I think it's worth pointing out that our existing usage of Jenkins could be improved significantly, too, by having some offline management of whatever Jenkins scripts we've got there and, like I said in the chat, auditing of what builds we've done and when, and their status, and that sort of thing. It may be that, yeah, we're reinventing Travis by doing that, but we can get a middle ground without too much effort.
B
Yep. So it's a bummer that Loïc isn't here, because I think he just did a bunch of this work with another project, and I noticed, when I was talking to him last week, that he had some weird build error on his laptop and had kicked off the same build in Travis. So he's familiar with it to some degree, and he's probably the person to talk to about all of this.
I
Yeah, one quick thing: we use Travis CI for personal projects, and we also use it for Swift-on-File. Swift-on-File runs all the Python unit tests on it, no problem, but when it deals with xattrs and stuff like that, it can't, because it's actually a container running in Travis CI.
A
Okay, are we just about wrapped up on the CI and teuthology stuff, then? Yeah, all right. So that brings us to the end of this session. We're going to have a short 15-minute break; I'm going to leave the call open. You can just mute yourself and go get a drink or something, and then we'll be back for the second half of the day, starting with some monitor discussion from Joao. So we'll see you guys in about 15 minutes.