Description
Sage Weil walks through the basics of how to get started with Ceph development, including:
* Cloning the source
* Building from source
* Running via vstart
* Making changes
* Testing patches
* Pushing a branch
* Submitting a pull request
* Using the Teuthology test framework
http://ceph.com/ceph-tech-talks/
A
If you made it this far, great. If not, definitely go check it out: we've got all of the videos, and the video channel is featured heavily on some of those pages, including all of our historic Ceph Tech Talks, which you can find from the Ceph Tech Talks page or some of the various video pages there. So if you're interested in finding out what has come before us, definitely go check it out. But this week we're going to be focused on getting started with Ceph development.
A
So those of you who are old hands at Ceph development and have decided to join us anyway, definitely feel free to use the Q&A period at the end of the call here too, to perhaps suggest ways in which we could better onboard new folks, or changes to the process that might improve things from your point of view, and we can have a little chat about that. But for now, Sage is going to discuss how to do things like cloning the repository, building, and running via vstart.
B
Thanks. Right, so I'm gonna do a walkthrough of a whole bunch of stuff: checking out the code, building it, fixing a bug from the tracker, testing it, making sure the tests run, pushing it, doing a pull request, and testing that. And then I'll show you a bunch of other random stuff that I think will hopefully make it a little bit easier to dive in and get started.
B
Yep, there's a Code section and that has all sorts of useful stuff for people who are actually going to develop. First and foremost, it links to stuff on GitHub, which is where all of our stuff is. The ceph repository is the main project, which has all of our source code, so this is the place to go. If you're not already a GitHub user, then you should just go sign up.
B
You probably already are, though. If you're new to Ceph, the first thing you want to do is fork the repository: you go here to the ceph repo and click Fork, and it'll have you pick which account you're gonna fork it to. In this case I already have a fork, so it's not really gonna do anything, but assuming that I didn't, it would tell me to wait a minute, and then under my account you'd see that you have your own copy of the repository, and so on.
B
So assuming you've done that and you've cloned the code, you're sort of ready to get started. I think the first thing we're gonna do is I'm going to show you the bug tracker, and we're going to pick out a bug that I sort of identified before. There's one that's pretty easy to fix that I'm not too nervous about doing on video. So click through to the tracker and click on Issues.
B
There are lots of open tickets here. We mostly use the tracker for bugs; there are a lot of feature tickets in here too, but it's sort of more geared towards bug tracking, and the feature stuff is a little bit more ad hoc. On the right side you'll see that there are a whole bunch of predefined queries that are useful if you're doing development. The most useful for me are the bug queues, which basically take all the bugs in all the projects and sort them by priority.
B
So Immediate is all the ones that are causing QA failures right now and are sort of the highest priority, and then it goes down from there. The one I'm gonna pick out is one that came in a couple of days ago. Somebody was trying out BlueStore on Kraken; they were testing it out on some small disks, and they filled it up, and BlueStore wouldn't start, and that's not supposed to happen. I already looked at this, so I already sort of know what the problem is.
B
BlueStore didn't realize that the OSD was basically full, and so it let it get all the way to full without stopping it, whereas normally Ceph will prevent you from getting there. That led to this crash: when it tries to start up, it tries to do some internal writes, and it fails because there's no space. So this is the bug that we're going to try to fix.
B
If I can find it... I think it's this one. Let me know if the font is too small or whatever. Looks good? Alright, so the first thing I want to do is clone the repository. If you've used GitHub before and your account has SSH keys set up, this is the syntax you use. I usually clone the main master ceph repository and then add my own repository as a separate remote.
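A minimal sketch of that clone-plus-remote layout. To keep it runnable offline, two local bare repositories stand in for github.com/ceph/ceph and your fork; with the real thing you would clone git@github.com:ceph/ceph.git and add git@github.com:YOUR_USERNAME/ceph.git, and the remote name "me" is an arbitrary choice used here for illustration.

```shell
# Local stand-ins for the upstream repo and your fork (illustrative only).
tmp=$(mktemp -d) && cd "$tmp"
git init -q --bare upstream.git
git init -q --bare fork.git

# Clone "upstream", then add the "fork" as a second remote named me.
git clone -q upstream.git ceph 2>/dev/null
cd ceph
git remote add me ../fork.git
git remote -v    # lists both origin (upstream) and me (your fork)
```

With both remotes in place you can pull from origin and push work branches to me, which is the arrangement described above.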
B
So we'll go ahead and do that. This takes a little while because it's a pretty big repository; in this case I already did it in another directory, so we don't have to wait for it. So this is a repo I cloned, with all the files checked out. You'll notice that origin is the master one. Usually you'll want to do something like git remote add and then add the remote for your actual clone of the repository.
B
That's at least how I manage it; it doesn't matter too much. I do this just so that I can push branches to either my clone or the master clone. I suppose a normal contributor is never going to be pushing to the master one, they're only going to be pulling from there, but either way you're going to want both remotes in your git repo so you can interact with both of them.
B
So the first thing you usually want to do is make sure you have all the build dependencies installed. There's a handy-dandy script called install-deps.sh that will do that; you just need to run it with sudo. All it actually does is decide what distro you're on, look through the packaging files at all the build dependencies that the package files declare for that particular distro, and then install everything.
B
So as long as we keep our packaging stuff up to date and it's working, this should always install all the things that are necessary for you to compile and build Ceph. In this case this is a Fedora box, so it's going to go download all these packages. In reality there's nothing to do, because I've already run it on here before, so obviously I have the dependencies. It also sets up some virtual environments, some Python stuff; I'm actually not sure what that's for, but it'll give you everything that you need in order to get going.
B
The next step, then (I'm just gonna cancel this since we don't need it to finish) is to get the repository ready to build. There's another handy-dandy script here called do_cmake.sh. CMake is the build system that we use for Ceph, and this one you run without sudo; you just run it. It does a bunch of stuff: mostly it makes sure that all the submodules are initialized recursively. It turns out we have a whole bunch of these, so this step is actually gonna
B
take like five minutes. Then it creates a build directory and runs cmake to initialize all the build stuff, and then it does one last bit of setup related to vstart that I'll talk about in a little bit. Anyway, if you run do_cmake it's gonna check out all these submodules. This takes a long time, so I did it ahead of time in another directory; the first time it'll typically take about five minutes, and that's fine. Thereafter, for the same checkout, it won't take that long.
B
It's just sort of getting things set up; the first time is sort of expensive. In this window, though, I've already run that do_cmake script. This is all the stuff that it did; it finished up and it says it's done, so we're all set. So you go into the build directory, and at this point you have a working build environment and you can type make to build the whole thing.
B
I'll do that and it'll start compiling all the files. That also takes a long time, because Ceph is a pretty big project at this point and the C++ compilers aren't super fast. On my box, which has something like 32 cores, it takes somewhere between maybe five and ten minutes; it takes a while. So I also did that ahead of time in another window so we wouldn't have to wait for it. Once the compile finishes, it ticks up to a hundred percent and you're done; you've compiled Ceph.
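The full sequence Sage has run up to this point, collected in one place. This is a sketch that assumes you are at the top of a Ceph checkout; the script names are as they appear in the Ceph source tree.

```shell
sudo ./install-deps.sh   # detect the distro, install build dependencies
./do_cmake.sh            # init submodules recursively, create build/, run cmake
cd build
make -j"$(nproc)"        # full build; 5-10 minutes on a ~32-core machine
```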
B
One thing that's kind of nice about CMake is that it builds out of tree. We call this directory build by convention, but it can be whatever you want. Everything happens in here, and you can just delete this directory and rerun do_cmake to get a completely fresh build without actually affecting the rest of the source code, which is in the src directory. Maybe I'll do a really quick tour of the source tree before we get too far. So build isn't actually part of the git checkout.
B
The doc directory is all of the RST files that are used to generate docs.ceph.com. They're all managed in git, and when we merge something to master, there's a Jenkins job that goes and rebuilds the docs and updates the web page automatically. That page can also be viewed for any branch within the repository, so you can go and view the hammer version of the docs or whatever. Documentation contributions are also extremely welcome, and they're also just managed through git and GitHub.
B
There are some scripts here that are used for packaging; make-dist generates the sort of official tarball of the source tree that's used to generate downstream packages for distros, RPMs and debs and so on. The qa directory has all of our test stuff. It's kind of a mix: a lot of these scripts are just scripts that you can run standalone.
B
The workunits directory is scripts that you run on an existing cluster, and then the suites directory is all the YAML files that comprise our big QA suite that we run automatically on everything. All that stuff is also managed here in git. But the most important part is really the src directory; this is where all the source code is, and so we'll go there.
B
This one's bigger. There's some architecture stuff and documentation, a bunch of random pieces here. There are include and common directories; those have shared code. And there are directories for the MDS, for the monitor, for the OSD; sort of the client side of the OSD and librados are here too. It's all sort of all over the place, but it's not too confusing. It's not the most tidy organization of code, because it's a fairly old code base.
B
But normally you run this with a couple of different options, and I'm just so used to typing these that I type them mostly without thinking. -d means that it enables all the debugging: it cranks all the debug levels in the daemons to very high levels, so when you're running this little test cluster and something goes wrong or you're debugging, you can just go look at the logs. -n means that when you run the script it's gonna create a new cluster.
B
There's a way to run this script where you rerun it on an existing config, but it's not very well tested or supported; actually, I don't think anybody uses that, so we'll probably remove it in the future. -l just means that it's gonna bind to localhost, so 127.0.0.1. By default it looks in your /etc/hosts and tries to figure out what your hostname is and then what your IP is, so that this toy cluster can be accessed from a different machine, in practice.
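Put together, the invocation being described looks like this. It assumes a completed build and is run from inside build/; vstart.sh lives in the source tree's src directory.

```shell
../src/vstart.sh -d -n -l
#  -d  crank daemon debug levels way up (logs land in ./out)
#  -n  create a brand-new cluster, replacing any previous vstart state
#  -l  bind to localhost (127.0.0.1) rather than the host's public IP
```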
B
It'll take just a minute to run. By default it starts up a tiny cluster with three monitors, three OSDs and three MDSes, and pretty soon it will also start up one of the ceph-mgr processes. But the important bit here is that it ran: you see it started and it's running. There's a directory called out, and that has all the logs and the socket files, so you can look at what the cluster is doing.
B
All the dev and OSD directories have all the data files, and you basically have this entire cluster running in this directory. Because I'm in the CMake build directory, the way that CMake builds it puts things in bin and lib and so on off of that subdirectory, so I can run the binaries from there. And the vstart script creates a ceph.conf in the current directory, so it's set up so that I can just run commands against that cluster as long as I give the partial path. So I can run the ceph CLI.
B
Yes, there's an annoying little warning that tells me that it detected that I'm running out of a source tree and it's adjusting my PATH and PYTHONPATH and all this other stuff. It's sort of annoying, but it's nice that it does all that for you. But yeah, the Ceph cluster is there and it works: I can see the OSDs are up and the monitors are up.
B
So that's good. There are a couple of other ways you can interact with this vstart cluster. There is an init script; this is actually the SysV init script, the one that's normally at /etc/init.d/ceph and is installed on old distros, but it's also usable from this vstart environment. So I can do init-ceph stop osd.0 and it'll stop that one daemon, and then ceph osd tree will show that it stopped, and I can start it again.
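The daemon-level start/stop commands shown here, for reference. This assumes you are inside build/ with a vstart cluster running; the exact location of the generated init-ceph script within build/ may differ by version.

```shell
./bin/init-ceph stop osd.0    # stop a single daemon
./bin/ceph osd tree           # confirm osd.0 is now down
./bin/init-ceph start osd.0   # bring it back up
```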
B
I can stop the whole cluster, start the whole cluster, all that stuff. So it's reasonably easy to interact with it and do all the things you could normally do with a real install, out of your source tree, which is kind of nice. The one other thing that I'll mention is that there's a sort of magic port file here that has a random number in it.
B
Okay,
so
sort
it
back
to
our
our
task
at
hand
where
we're
gonna
fix
this
sort
of
tribulus
blog
bug
in
blue
store.
So
the
first
thing
we
probably
want
to
do
is
make
sure
that
our
v-star
clubs
are
actually
set
up
with
these
store
with
blue
sore,
because
by
default
it
uses
file
store
still
so
I'm
running.
This
stop
script.
To
shut
everything
down,
stopping
all
the
demons
takes
a
second
right,
you'll
notice
that
these
are
these
are
file
stores
directories.
B
They have this current directory, and they layer all the objects on top of files. So we can run vstart again, and this time I'll pass the --bluestore flag; that's just a convenience in the vstart script that makes sure the OSDs are initialized using BlueStore. Actually, I'm gonna do something else here, and mention one other thing: there's also a stop script that pairs with vstart that just does a killall of any Ceph daemons that are running.
B
So if it's behaving weirdly, you can always run that. vstart also takes some environment variables that let you specify how big of a cluster you want to make. In this case I want to do some work on an OSD: I don't really need three monitors, I only need one of them, I only need one OSD, and I don't need any MDSes. I can specify this in front of vstart, and then when I run it, it'll create a smaller cluster.
B
It's faster to run and it's easier to set up, so depending on what you're working on, you'll want to do that. By default it's three mons, three OSDs, and three MDSes, just so you're generally exercising all the load balancing and distribution. But for this sort of specific case, I can start up a small one.
B
vstart is really a developer script, and if you look in the script you can kind of see all the stuff that it pulls in. It has a usage in here that's not fully complete, so yeah, it's not super well documented; it's just sort of one of those things. If you pass -h you can see all the stuff that it supports, although this doesn't document all of it, and some of it is out of date. This is a really old script.
B
The intention is that we do some cleanup, but also that we document the well-used options, because the other stuff might change. Okay, so we have an OSD running now, and it turns out that the core issue with this particular bug is that BlueStore is misreporting the amount of space available to store objects. If you do ceph osd df, it tells you what the OSD is reporting: it says it's ten gigs and it has used ten megs of it.
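The command being run here, for reference (from inside build/, against the vstart cluster's ceph.conf in the current directory):

```shell
./bin/ceph osd df   # per-OSD SIZE, USE and AVAIL as reported by each OSD
```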
B
Alright, can you all see that okay? Yes? Alright, so I'm gonna load up the BlueStore source code. It's in this os directory, which is for "object store"; Ceph abstracts that sort of backend storage from its implementations. There's FileStore, which stores things on files; BlueStore, which is the new one; there's MemStore, which stores everything in memory; and there's a KStore.
B
KStore is another one that just puts everything in a database. But in this case BlueStore is here, so let's load it up. It turns out that the bug is in the statfs call, which is the function that Ceph calls to ask the object store how much space is available. The bug, I believe, involves BlueFS, which you basically always use; it's sort of a subset of BlueStore that allows the embedded RocksDB to live on the same block device.
B
statfs adds in the free space that BlueFS has as available space. The first part of that is mostly fine, because if BlueStore starts to fill up, we can claw space back from BlueFS. But there's a second part: you can set up BlueStore so that there's a second device that's dedicated just for metadata.
B
So if you have lots of extra SSD, you can just put RocksDB there, as long as there's room. And right now BlueStore is reporting that as free space too, but that space can never be used for objects; it's only usable for metadata. So the fix for this particular bug is to not report that as free space. I'm gonna change it so that it doesn't count it as available, but it does still add it into the total, because that space still is part of the cluster.
B
There's not a lot more to it than that. So if you do git diff (you'll see it's going really slowly) you'll see that that line has been changed in git. The first thing we want to do is recompile to make sure that our change works. You can run make again; that'll rebuild and relink everything, and it should just detect the things that have been changed.
B
Sometimes you make a change that touches a header file and it'll force everything to recompile, or if you do a git commit it'll touch the file that embeds the Ceph version in every binary, and so it'll have to relink everything. And sometimes you don't really want to build everything. So there's a make target called vstart that will build everything that is necessary to run a vstart cluster. If you're doing development, that often shortens your build times by not building all the unit tests and everything else that you're usually not iterating on as much. So you'll notice here we compiled BlueStore, and in a moment here it's gonna get down to... I'm not sure why this box is going so slow right now.
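The shortcut target mentioned above, run from inside build/:

```shell
make -j"$(nproc)" vstart   # build only what a vstart cluster needs; skips unit tests
```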
B
I don't know, yeah, I'm not familiar with that, but if somebody knows what it is, that would be great. Okay, so vstart has been recompiled. My ceph-osds are still running, and remember, I can use ceph osd df and see that the old code, which is still running, is still reporting the old, larger "available" number. I can just restart the OSD daemon.
B
Hmm, that might have been something else there. There are warnings there: it tries to do a bunch of random stuff, it's ignoring these commands, I'm not sure it's starting... I think it started, okay. So now it shows ...36, and it was ...38 before. Alright, I don't know if I really believe this, but let's assume for a moment that I correctly fixed this bug.
B
Maybe I should have picked an easier one. So you can use vstart to test your change and make sure it actually behaves: you can poke at the cluster, you can write some data and make sure it still works. In this case I would actually try to fill up my little toy cluster, make sure it hits the full threshold, and check that I could still restart the OSD. But for now, let's assume that I did the right thing. So the next step is to test it.
B
Let's see. So I've done my ad hoc test; the other thing that you can do is run all the unit tests. For that to happen, you'd run a full make to compile everything, which we've actually already done, so you can always just run make check. The problem with that is that it runs the tests single-threaded, I believe, and so a better way to do it is to use the ctest utility directly.
B
ctest is part of the CMake suite and is used to run all your unit tests. There are a lot of them, so I don't want to run them one by one: if I just run ctest by itself, it's going to run them one after the other, and that's gonna take forever. So you can pass -j just like you can with make; that'll run, say, twelve or sixteen at a time. In practice, if your level of parallelism is too high, if you go above sixteen, it tends to not work as well.
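The ctest invocations being described, run from inside build/ after a full make. The -j flag is the one mentioned in the talk; -R is a standard ctest option for filtering tests by name, added here as a useful aside.

```shell
cd build
ctest -j 12          # run the unit tests twelve-way parallel
ctest -R bluestore   # or run only the tests whose names match a pattern
```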
B
You get sort of spurious failures. So I usually do twelve or sixteen; that seems to work pretty reliably, and this will go through and run all these unit tests. The test_ binaries tend to be standalone files that test a very narrow piece of code with a bunch of crafted unit tests, and there are a bunch of other tests that are bash scripts that do various things.
B
There's a whole category of them that actually set up little vstart clusters in a sandbox and run tests on a little miniature cluster; something like small integration, functional, smoke tests, I guess. So there are those two kinds. It looks like it's going to run all of those; I'll assume that it passes and everything works, and so you're like, great, I've done my fix, I'm ready to commit.
B
git gui makes it much easier to manage the things you're committing, stage them into particular commits, and write the commit messages and so forth. Actually, I skipped one thing: right now I'm still in the master branch, and since I'm fixing a particular bug I want to switch into a branch named something like wip-bluestore-statfs. So I always switch to a branch before I commit, and then I'll run git gui to actually do my commits. I believe I have to re-share this window.
B
Is that right? Can everybody see that? So the nice thing about git gui is that it lets you stage the changes for your commit. It'll refresh and show all the files that have modifications, and you can look at the diff. So this is the one I want to stage; I just click on that icon and now it's down here, which means it's actually going to be included in the commit, and I can write my commit message.
B
This is important, because we want the git history to be clean and have all the right information, so we know what the change is and why it was made, and it's easy to find, and so forth. So the first thing is that the first line should always be a short one-line description of what you did, and it should be prefixed with the subsystem.
B
That's the short version, and then the longer version is something like: if we report the DB space as available, Ceph thinks the OSD has more room, will store more data, and will not mark the cluster full as easily, when in reality we can't actually store object data in this space; avoid the problem by not reporting it as available. So you want to explain what the problem is, what the impact is, and what the fix is. And then the last thing you want to do is make sure you always add a Signed-off-by line.
B
In git gui you can just hit Ctrl-S and it will add it for you, which makes it easy. What this line means is that you understand what the open-source license is, and in submitting this code you're attesting to the fact that it's being done in accordance with that license. In Ceph's case almost everything is LGPL, and this is just saying: I know it's LGPL, and I'm submitting this under the LGPL license. You can read about this in more detail at the top of the git repository.
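The commit-message convention just described, demonstrated in a throwaway repository. The subsystem prefix, message text, and author identity are all illustrative, and git commit -s appends the Signed-off-by trailer the same way Ctrl-S does in git gui.

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.name "Jane Developer"
git config user.email "jane@example.com"
echo demo > file.txt && git add file.txt

# First line: subsystem prefix plus a short description.
# Body: the problem, the impact, the fix.  -s adds Signed-off-by.
git commit -q -s -m "os/bluestore: do not double-count DB space in statfs

If we report the metadata-only DB space as available, Ceph thinks the
OSD can store more object data than it really can and will not mark
the cluster full as early as it should.  Leave that space out of
'available' while still counting it in the total."

git log -1 --format=%B   # show the full message, including Signed-off-by
```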
B
One thing to note: if you look at some older commits, they'll have lines that just reference the tracker bug by number, like 18599. The problem is that GitHub also references things by number, but it has its own, different issue numbers. So if there's a pull request or GitHub issue with that number and you say "fixes" with the bare number and commit it, GitHub thinks that you're talking about a pull request, and it'll actually go close that pull request if it happens to exist.
B
So I've committed the change, and you can go look at the history. The other thing that you should be familiar with, if you're not already, is gitk. gitk is just a GUI way to view the git history. It shows here that there was the master branch, which is what I had checked out, and that I have a new branch with one commit on top; here it is, you can see it.
B
There's my commit. gitk makes it really easy to visualize the history: you can see merges and where they came from, you can browse, you can look at a line and ask when that line was added, and it'll search back through the history to find the commit that added it. You can do all kinds of stuff here that makes it really, really nice. So if you don't already use this, I suggest you do, because it prevents you from getting lost.
B
Alright, so we committed our change, and the next step is to push it back to GitHub. In this particular checkout I haven't added the remote for my own git clone; I'll add that now, and that's github.com, my GitHub username, and the repository name. So then I can do git push me wip-bluestore-statfs.
B
Whenever you're doing a push, I suggest you always, always, always specify which branch you're pushing, because there are certain versions of git and variations of the push command that will try to push everything, and particularly when you have access to repositories that aren't just yours, you want to be very careful about pushing random stuff.
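What an explicit push looks like, simulated locally so it runs offline: a bare repository stands in for your GitHub fork, and the remote name, branch name, and author identity are illustrative.

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q --bare fork.git          # stand-in for your fork on GitHub
git init -q work && cd work
git config user.name "Jane Developer"
git config user.email "jane@example.com"
echo demo > f && git add f
git commit -q -m "os/bluestore: demo commit"

git checkout -q -b wip-bluestore-statfs
git remote add me ../fork.git

# Always name BOTH the remote and the branch; never a bare "git push".
git push -q me wip-bluestore-statfs
```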
B
The next step is to actually open a pull request, so that on GitHub it shows up as a request to merge code upstream. There are a couple of ways to do this. Most people, I assume, use the web interface for GitHub: there's a link that you can click that says open pull request, and you specify which branch to merge from and where to merge it to.
B
I never do that, because you have to click through a bunch of web stuff. There's a command-line utility that lets you do the same thing, which I highly recommend everybody use if you're not already familiar with it. It's called hub; it's written by GitHub, not surprisingly, and it makes this whole thing much easier. So I'll show that very briefly.
B
You go to the releases under the github account; the program is called hub, and they have a page that has prebuilt packages, so it's pretty easy to install. You can go down here, and I'm on Linux 64-bit, so I can just download, I guess, the tarball, and you can install it.
B
In my case, I have it set up in my path on all the machines that I do development on. If you look in the bin directory under my home directory, I have this hub binary right here, so I can use it. The idea of the command is that you can substitute it for git: if it doesn't understand the command that you're running, it'll just run git with the same arguments. I don't actually use it that way, but you could.
B
If you wanted to, you could actually alias git to be this command; that's probably more confusing. In my case, if I want to create a pull request, I can do hub pull-request. I have to specify the base, which is the thing that I want to merge to (in this case the ceph user on GitHub and the master branch) and the head that I want to merge from, which is my user and my branch, wip-bluestore-statfs. I can run that command.
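The hub invocation being typed here looks roughly like this. It requires the hub tool and GitHub credentials, and YOUR_USERNAME stands in for your own GitHub account; the -b (base) and -h (head) flags are hub's own options for naming the merge target and source.

```shell
hub pull-request -b ceph:master -h YOUR_USERNAME:wip-bluestore-statfs
```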
B
It should populate this from the branch... yeah, I mistyped it, that's what happened. Normally, if it's a single commit, it's actually going to put the commit message in there for you, which is really nice. If it's more than one commit, then it'll show you a git log, basically, down here in the commented-out bits, so you can sort of see what's going to be included, and then you can write an appropriate description up here.
B
The important thing, though, is to follow the same convention you use for a git commit: always prefix it with a subsystem and have a short one-line description. The main reason for that is that this line is what appears in the Ceph release notes; there's a script that generates the release notes based on these pull request merge commits. So when you get a release and you want a nice summary of what changed in that release, you want it to be meaningful.
B
You have to make sure that this line actually means something. Most new contributors don't: they don't have the prefix, or whatever, they don't do that, and so I have to go fix it up manually before merging the commit, and it just takes time. So please, please follow the convention there.
B
So that's that. I'll save it, and it'll give me a URL for the thing that it created, and I can go look at that (and I have to switch which window is being shared every time, alright). So here's the pull request, and you'll see that it's merging to ceph master from my branch; you can see which commits are there, just that one, and that's that. So sort of the next step is for all these proposed changes to get triaged and tagged in GitHub.
B
We use these labels extensively to sort of identify what kind of change it is and what subsystem it affects. In this case this is a bluestore change and it's a bug fix, and I might also tag it as core, which sort of refers to all the RADOS stuff. So I would set those tags so that it's easy to cross-reference and find later. If there's a particular person that should review this, then I can call them out and say, you know, I want...
...you know, Igor to review this, say, and I can tag them explicitly, or you can ping them offline, whatever. As a new contributor, you can't actually do all this stuff: you have to be a member of the Ceph organization before you can add these tags. You should be able to request reviews on your own code, so that should work, but the tags somebody else will have to go through and add periodically.
B
We can sort of pretend that happened. Okay, so a couple of other things happen down here. It'll say "review required": before any code ever gets merged, it has to be reviewed. Only a small set of users can override that check, but they shouldn't. So hopefully Igor or someone will say yes, this looks right. We also run a bunch of checks automatically.
B
This one is trivial: it just makes sure that every commit in the pull request has that Signed-off-by line, and if it doesn't, that check will fail and you won't be able to merge. We'll have to go tell you to go fix your git commit messages. This one is a sanity check: because Ceph has all these submodules, it's really easy in git for a submodule to end up on a different branch or a different commit.
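The Signed-off-by trailer he mentions is normally added with git's `-s` flag. A minimal local sketch (the repo path, name, and email here are illustrative):

```shell
# Throwaway repo just to demonstrate the sign-off workflow.
rm -rf /tmp/signoff-demo && mkdir -p /tmp/signoff-demo && cd /tmp/signoff-demo
git init -q .
git config user.name "Jane Dev" && git config user.email "jane@example.com"

echo "fix" > fix.txt && git add fix.txt

# -s appends a Signed-off-by trailer using your configured identity.
git commit -q -s -m "osd: fix the thing"

# If you forgot it on the last commit, amend it in without changing the message:
git commit -q --amend -s --no-edit

git log -1 --format=%B   # message now ends with a Signed-off-by: line
```

The same `-s` flag works with `git commit --amend` on each commit of a series, or you can use `git rebase` to fix older commits.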
B
So this just goes through and makes sure that you're not touching the submodules, and if you do, you'll notice and have to explicitly override it. That's all that is; it just tries to avoid user error. But the main thing that happens is this default check, and that actually runs make check. It runs that ctest -j16, or whatever it was that I showed you earlier.
B
It does that off on some machine somewhere in a sandbox, and it'll take a while. Assuming that passes, then this little dot next to the pull request will change to a check mark, and then it'll be legal to merge, but right now it's still pending. So in these cases all the checks passed, and on this one it failed for some reason, so the maintainer, I guess that's me, or the developer should go fix it.
B
Yeah, so that's usually the last step for a normal contributor committing new code. I guess maybe the very last step would be: if there's a ticket, you can go update the ticket to say it needs review, and that there's actually a fix associated with it. Take the URL and paste it into the tracker, so that the bug tracker reflects that there's a fix pending; it just needs to be reviewed and tested, and so forth.
B
Right, so that's the first part. Now I'm going to go through the second half of the process, which happens behind the scenes. Not everyone participates in it, but I think it's helpful for everyone to understand how it works. What happens at this stage is that, hopefully, someone will review the code and they'll say, yes, this is a good change. In this case I can't review it because I wrote it. Maybe I can... oh, I can't approve it, though; I can review it, but I can't.
B
Actually
it
doesn't
count
for
anything
which
makes
sense,
but
I
could
if
there
was
some
other
change,
for
example,
that
one's
also
mine
or
something
trivial.
Oh
whatever
it's
some
other
change
here,
I
can
go
through.
I
can
review
the
code.
I
say
yes
that
looks
right
and
I
can
say
approve,
and
in
this
case
on
this
one
you
know
John
approved
this
reviewed
it,
and
so
this
comes
up
clean,
there's,
probably
a
spurious
error.
There
would
have
to
resolve
that.
But
assuming
that
also
we're
a
check
mark,
then
then
they
could
technically
go.
Github.
B
Will
allow
you
to
merge
it,
but
in
practice
we
don't
do
that
because
we
want
all
code
that
goes
into
the
sub
master
branch
to
go
through
the
QA
suite
before
that
happens,
and
so
it
typically
happens
as
you
after
you
view
it,
you
mark
the
you
mark.
The
issue
needs
QA,
so
looks
like
John
forgot
to
do
that,
so
he
goes
through
and
there's
a
label
that
says
needs
to
come
in.
B
I add a remote called ci, pointing at git@github.com:ceph/ceph-ci. We have a special clone of the Ceph repository, under a GitHub account that's used just for CI, and it's magic in that any branch you push to that repository gets automatically built and generates packages that show up in our build system. So in this case I want to build this integration branch, I want to go test it, and so first I would run make and the ctest tests.
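The ceph-ci remote setup he describes looks roughly like this; the remote name `ci` is just a convention, and the actual push needs GitHub credentials, so it's shown commented out. (Demonstrated here in a scratch repo; in practice you'd run the `git remote add` in your ceph.git checkout.)

```shell
# Scratch repo standing in for a real ceph checkout.
rm -rf /tmp/ceph-ci-demo && mkdir -p /tmp/ceph-ci-demo && cd /tmp/ceph-ci-demo
git init -q .

# Add the special CI clone as a second remote.
git remote add ci git@github.com:ceph/ceph-ci.git
git remote get-url ci

# Pushing any branch to ceph-ci triggers an automatic package build in Shaman:
# git push ci wip-my-feature
```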
B
I'm just going to pretend for a moment that those pass; I don't want to wait for them for the purposes of this demo, and I'll push that branch. That means I can go look at our build environment, our build system, which is called Shaman, and it is going to go build packages for that branch.
B
It'll,
take
a
little
bit
for
it
to
show
up
here.
Cuz
Jenkins
has
to
notice
that
the
branch
up
here
and
find
a
free
worker
and
everything,
but
you
can
see
here
there
are
whole
bunch
of
other
branches
that
people
have
pushed
and
that
we
build
packages
for
so
it
of
course,
builds
all
the
the
main
branches
that
are
in
the
stuff
that
git
repository
server
master
and
jewel
and
Kraken
on
all
those
and
then
also
will
build
all
the
stuff
in
this
FCI
like
all
these
with
branches.
B
So
an
example
of
that
would
be
the.
There
was
a
branch
that
I
pushed
earlier
today.
Scrub
waitlist
was
a
big
site
for
us
earlier
this
morning.
This
is
the
git
commit
and
you
can
click
to
see
all
the
repositories
that
have
been
built
for
this
looks
like
it's
built.
The
zinnia
Land
Trust
rusty
packages,
but
it
hasn't
built
sent
us
yet
they
failed.
B
Normally
he's
also
up
green
after
it
usually
takes
about
an
hour
and
a
half
for
it
to
go
through
the
system
and
get
a
good
packages.
Act
just
come
out
the
other
end,
so
in
this
case
installed,
but
unpackaged
file
found.
Oh
so
there
was
a
recent
merge
in
the
master
that
changed
the
packaging
and
it
broke
the
RPM
package
generation.
So
probably
all
the
pull
requests
based
on
a
recent
minister
are
failing
in
the
same
way,
so
hopefully
somebody's
notice
that
they're
gonna
fix
it.
B
But
so
we
won't
look
at
that
for
this.
For
the
purposes
of
this
test,
then
we
can
look
at
an
earlier
branch.
B
Yes,
er
it
just
no
just
whip
stage
testing,
it's
gonna
go
build
that
it's
probably
also
going
to
fail,
but
some
of
these
earlier
ones
here
are
fine.
Like
this,
this
branch
I
pushed
last
night,
it
looks
like,
and
it
built
all
the
repositories
for
all
three
distress.
So
when
our
test
lab,
we
have
sent
out
seven
machines,
we
have
went
to
1404
and
a
Bluetooth
1604
machines
and
we
do
two
variants
I've
every
build.
B
Let
me
if
you
look
carefully
at
the
bottom
of
the
URL,
for
these
a
slightly
different
one
of
them
is
default,
and
one
of
them
is
no
TC
Malek
here,
so
we
do
have
we
always
we
build
a
variant
of
the
packages
that
don't
link
against
TC
Malik
and
the
only
reason
to
do
that
is
because
a
lot
of
our
QA
tests
run
valgrind
to
do
like
big
checking
memory,
leak,
checking
and
that
doesn't
work
with
TC,
Malik
and
there's
I.
Think
there's
another
conflict
to
you.
B
It's on GitHub, and there's a whole group of people here that maintain it; Zack is sort of the lead developer. We have a big installation of this in the community test lab that we use to run all the QA before we merge things into master. That test lab is called sepia, and there's actually a git repo that has somewhat updated information about the test lab for the Ceph project. It generates this webpage, so this is probably like 10 racks of gear that's hosted in a data center.
B
There's some high-end performance gear that came from Intel that we use for some of the performance testing, all kinds of stuff. This page attempts to catalog it, but it's actually really out of date. So a slightly better place to look is one of our other tools, called pulpito, which is used to look at all the test results. There's a part of it that lets you see all the nodes in the cluster; the main ones that we use for testing are the smithi machines.
B
So the sepia test lab is available for use by any active contributor to Ceph. If you are doing active development, you've merged code, and you're involved in the developer community, then you can and should request access to it, so you can use the shared resource. The way to do that is on this sepia web page; there's a link about requesting lab access.
B
Until
that
point,
then
you
all
have
to
rely
on
other
developers
to
run
the
tests
and
do
that
integration
stuff
for
you,
which
is
usually
fine.
It's
more
efficient
that
way
anyway,
but
if
you're
doing
extensive
work
then
then
you
should.
You
should
get
access
to
the
lab,
because
it's
it's
pretty
valuable.
B
Setting up teuthology isn't too bad. You basically clone it like you normally would anything else, git clone the teuthology repo, whatever, and then, once you have the checkout, there's a script called bootstrap that will set up a virtual environment and pull in all the Python dependencies. You actually run it, and it creates a virtualenv directory that has the binaries, all the little bits and pieces, most of the commands, teuthology-something, all the things you want.
B
You
I'm
lazy
when
it
comes
to
typing
typing,
especially
a
weird
word
like
pathology,
so
I
have
this
goofy
little
script
that
I
put
in
my
directory
called
key
that
passes.
All
your
arguments
along
and
types
includes
the
tooth
ology
part,
so
I
don't
have
to
type
that
every
time,
but
whatever
it
also
my
path
and
the
virtual
end
isn't-isn't,
and
so
this
sort
of
solved
the
problem
of
getting
in
my
path
any
case.
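A wrapper like the one he describes can be a couple of lines of shell. This is a sketch, not anything shipped with teuthology: the name `t` and the virtualenv path are assumptions.

```shell
# Write a tiny wrapper that prepends "teuthology-" and uses the virtualenv's
# binaries, so "t suite ..." runs "teuthology-suite ...".
cat > /tmp/t <<'EOF'
#!/bin/sh
# Hypothetical helper: t <subcommand> [args...] -> teuthology-<subcommand> [args...]
cmd="$1"; shift
exec "$HOME/src/teuthology/virtualenv/bin/teuthology-$cmd" "$@"
EOF
chmod +x /tmp/t

# Sanity-check the script parses.
sh -n /tmp/t
```

With something like this on your PATH, `t suite --ceph wip-foo ...` would expand to `teuthology-suite --ceph wip-foo ...`.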
B
You'll
remember
that
in
in
the
get
check
out
for
staff,
there's
this
QA
subdirectory,
all
of
the
stuff
that
you
thought
uses
is
in
here.
So
tasks
has
a
bunch
of
Python
code.
That's
used
for
running
tests,
there's
code
in
here
that
sets
up
test
clusters
and
runs
workloads
on
it
and
so
on,
and
then
the
Suites
directory
is
a
collection
of
sort
of
piecing
together
those
bits
of
code
and
to
run
run
a
coherent
set
of
tests.
B
So
for
the
ratos
test
suite
we
have
a
bunch
of
sort
of
separate
components,
look
at
the
basic
one,
so
it's
sort
of
the
simplest.
The
idea
here
is
that
it
forms
you
can
form
a
test
matrix
in
each
of
these
directories
of
all
the
different
combinations
of
pieces
of
the
test
that
you
want
to
sort
of
combine
into
test
cases.
So
in
this
case
this
little
magic
percent
file.
B
That's
in
this
pop
directory
means
that
I
want
to
form
a
matrix
starting
at
this
directory,
and
then
all
the
sub
directories
have
sort
of
the
different
dimensions
of
that
matrix.
So
you
know,
I'll
run
the
test
on
either
butter,
FS
and
x,
FS
or
X.
The
pest
I'm
gonna
build
test
cases.
I
do
both
I
want
to
use
different
versions
of
the
messenger,
this
little
method,
tracing
messenger
or
random.
B
One
I
either
will
inject
a
few
messenger
failures
or
lots
of
them,
and
then
I'll
run
one
of
these
different
workloads
and
when
you
actually
in
this
case,
this
is
like
this
is
like
10
10,
odd
things
on
there
2
here
and
3
there
and
two
there
and
plus
things
a
combined
success.
This
one
case,
so
this
is
like
412.
This
is
like
a
hundred
tests
right
here.
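The way those fragment counts multiply can be sketched with simple arithmetic. The counts below are illustrative placeholders, not the real suite's numbers:

```shell
# Each subdirectory under a "%" directory is one dimension of the matrix;
# the number of generated jobs is the product of the fragment counts.
fs=2          # e.g. two filesystem fragments
msgr=3        # e.g. three messenger variants
failures=2    # few injected failures vs. many
workloads=10  # ten workload fragments

echo $((fs * msgr * failures * workloads))   # 120 generated test cases
```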
B
If
you
multiply
all
these
dimensions
and
so
there's
a
whole
combination
of
ways
that
these
pieces
can
get
assembled
and
they're
assembled
in
sort
of
alphanumeric
order,
so
it'll
take
it'll,
go
by
direct
sort,
the
territory
self
in
America
and
I'll
pieces
together
and
all
the
pieces
are
just
bits
of
the
animal.
So
these
specify
some
fragment
of
the
overall
test
case
and
tasks.
You
know
some
some
fragment
of
the
test
case
and
I
met
over
at
some
config
variables
and
some
actual
tests.
B
That's
gonna
run
so
for
each
sort
of
combination
of
these
pieces,
you'll
have
a
demo
that
describes
what
the
test
is
going
to
be
and
then
tooth
ology
will
go
scheduled
some
nodes
to
go
to
go,
run
it
and
the
tests
can
be
put
together
in
a
bunch
of
different
ways,
but
they
usually
look.
They
usually
have
a
tasks
list,
one
step
that
says:
install
SEF
or
install
the
packages
which,
by
default
just
install
SEF
SEF
install
sets
up
a
cluster
on
the
nodes,
and
then
this
runs
the
scrub
test
test,
which
is
maps
to.
B
You
know
I
on
script
here.
That
actually
is
a
test
case
that
gets
run
so
anyway.
That's
that's
sort
of
a
very
quick
crash
course
on
and
what
pathology
is
so
if
I
want
to
schedule
it,
then
I
would
do
teeth.
Ology
salty-sweet
I'm,
using
my
little
alias
here,
I'm,
going
to
run
it
on
a
particular
staff
branch
which
is
whipped,
I'm
going
to
do
one
of
the
ones
that
shaman
actually
well.
Let's
pretend
that
I'm
doing
with
sage
testing,
even
though
it
didn't
build
I,
don't
actually
run
this
command.
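The scheduling command he alludes to looks roughly like the following. This is a sketch: the branch, suite, and email values are placeholders, the exact flags may vary by teuthology version, and actually running it requires sepia lab access, so the command is only assembled and echoed here.

```shell
# Assemble a teuthology-suite invocation; echo it rather than running it,
# since scheduling requires access to the test lab.
branch=wip-sage-testing   # a branch that ceph-ci/Shaman has built packages for
suite=rados/basic         # which suite directory to expand into a job matrix

cmd="teuthology-suite --suite $suite --ceph $branch --machine-type smithi --email you@example.com"
echo "$cmd"
```

Once scheduled, the jobs land in the queue and the results show up in pulpito.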
B
I just want to give you a surface flavor of what happens behind the scenes there. This will fail because that package didn't build, so I'd have to specify a real branch, but assuming it did, this would schedule a whole pile of tests that get dumped in the queue and then will eventually get run. And then you can go look at the pulpito tool and you can see what actually got run.
B
Anyway, so pulpito shows us all the QA runs: who scheduled the run, when it was scheduled, what test suite they're running, which branch they're running it on, which type of machines it's running on, and then it tells you how many jobs are queued, how many passed, failed, dead, waiting, running. So right now it's working on the smithi nodes, on the fs test suite, run on master; that was scheduled by a cron job, it looks like.
B
The teuthology runs are the ones that run automatically. Earlier today I scheduled this one, it looks like, but if we look back a little ways we can find an example. Let's see, I can look at my previous passed run that I did yesterday. This is an example: here I was testing a change in BlueStore, and most of the tests passed.
B
If
you
look
at
any
of
these
any
single
tests,
you
can
see
that's
kind
of
ugly
here,
but
if
you,
if
you
look
at
the
the
log
file
that
technology
generates,
you
can
see
the
yellow
file
that
describes
what
the
test
was
right
here.
This
is
exactly
what
technology
was
told
to
do,
and
then
you
have
this
huge
log
file
that
actually
shows
the
testing
run
package.
It's
going
to
install
it
something
around
and
so
on.
So
you
can
see
it,
so
we
actually
did
what
it
was
supposed
to
do.
B
But
I
had
a
couple
failures,
so
I'm
going
to
click,
just
look
at
the
failures
expand
tooth
ology
does
its
best
to
sort
of
pick
out
the
salient
error
message
out
of
the
run
so
I
already
looked
already.
Did
it
analysis
of
these?
It
turns
out
that
two
of
the
pull
requests
that
I
had
in
my
testing
branch
had
bugs
in
it.
The
first
pull
request
had
one
bug
that
made
this
fail.
B
The
other
pull
request
had
another
bug
that
made
valgrind
complain
about
a
memory
leak,
and
this
one
was
another
error
that
was
related
to
infrastructure.
I.
Think
but
any
case
well
generally
happen
is
someone
will
batch
up
a
bunch
of
pull
requests?
They'll
run
these
tests,
we'll
go
look
at
the
results,
make
sure
there
are
no
failures
or
if
there
are
failures
that
their
tickets
open
for
them.
B
When
we
understand
what
the
problem
is
and
if
we're
satisfied
that
everything's,
ok,
the
new
code
didn't
break
anything
then
we'll
go
and
actually
finally
merge
the
pull
requests.
God
would
be
the
next
step,
the
last
step
and,
as
an
example,
here's
a
here's,
a
pull
request
that
I
submitted
earlier
it
was
I,
got
a
really
trivial
change
that
moved
changed
this
to
this
and
ego
reviewed
it.
B
This
was
a
simple
enough
change
that
it
doesn't
need
to
go
through
the
full
UI
suite,
although
usually
usually
it
would,
but
in
that
case
I
would
go
through
and
very
little
thing
and
the
one
last
thing
I'll
mention
is
that
when
we
do
these
these
funnel
merges,
we
put
the
reviewed
by
lines
in
here.
Let's
say
you
know
who
reviewed
it
so,
there's
a
record
indicate
history
that
indicates
for
every
merge
who
reviewed
the
code.
That
was
part
of
that
merge.
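Those Reviewed-by trailers end up in the merge commit message. A local sketch of the pattern (the branch, names, and email are illustrative, not from a real Ceph merge):

```shell
# Scratch repo demonstrating a merge commit that carries a Reviewed-by trailer.
rm -rf /tmp/rb-demo && mkdir -p /tmp/rb-demo && cd /tmp/rb-demo
git init -q .
git config user.name "Merger" && git config user.email "m@example.com"
git commit -q --allow-empty -m "initial"

git checkout -q -b wip-fix
git commit -q --allow-empty -m "mon: fix a thing"
git checkout -q -            # back to the main branch

# --no-ff forces a real merge commit; the trailer goes in its message.
git merge --no-ff -q wip-fix -m "Merge branch 'wip-fix'

Reviewed-by: Igor Fedotov <ifedotov@example.com>"

git log -1 --format=%B       # merge message includes the Reviewed-by line
```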
B
So you would check out your branch; if you're doing this in a tidy way, each change would be in its own branch, so it's easy to make these changes. I would go and make some change to the code to fix it, whatever I want to do differently, and you can do git commit with a message like "fixed thing", right? You've changed it, but now your git history looks ugly, right?
B
Yeah, and rebase is like the best thing ever. It is a good tool for managing patches. In the old style of Linux kernel development, you'd have a patch series that you're working on and editing, reordering, adding to, revising, and so forth. You're doing the same thing; you just happen to be keeping it in git. The difference is that nobody else was really depending on your branch, so it's fine to rewrite it.
B
Do
these
three
bases
three
bases
so
I've
I've
made
the
change
and
then
I
can
just
read:
push
it
back
to
github
under
the
same
branch
name.
So
the
way
to
do
that
is
they
push?
That's
my
remote
remote
name,
and
you
add
this
force
flag,
because
if
I
try
to
push
it
without
the
force
flag,
it's
gonna
say
it's
rejected,
because
it's
not
adding
commits
on
to
it.
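The fixup-then-rebase flow can be demonstrated locally. This is a sketch: the commits are synthetic, and the force push at the end is shown commented out since it targets your own GitHub fork.

```shell
# Scratch repo: one real commit, then a fixup, then autosquash them together.
rm -rf /tmp/rebase-demo && mkdir -p /tmp/rebase-demo && cd /tmp/rebase-demo
git init -q .
git config user.name "Dev" && git config user.email "d@example.com"
git commit -q --allow-empty -m "initial"

echo a > f.txt && git add f.txt && git commit -q -m "osd: add feature"

# A later correction, recorded as "fixup! osd: add feature".
echo b > f.txt && git add f.txt && git commit -q --fixup HEAD

# Non-interactive autosquash: accept the generated todo list as-is.
GIT_SEQUENCE_EDITOR=: git rebase -i --autosquash HEAD~2

git log --oneline            # the fixup is folded into "osd: add feature"

# Then re-push the rewritten branch under the same name:
# git push --force <your-remote> wip-my-fix
```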
B
It kind of depends; we do it both ways. So half the time, someone will do a code review and they'll say, you know, change this and change that, and you'll edit, then do the rebasing and squashing in your own branch and re-push it, if you think it's basically ready. That's one way to do it. The other way to do it is to push commits on top, with "squash" in the title.
B
That's
a
that
fix
each
individual
issue
and
then
once
everything's
done
and
they've
rearview
dit,
and
they
can
see
the
changes
that
you
made
and
it's
obvious
it's
clear
that
you've
sort
of
addressed
the
problems
at
the
very
end.
The
last
step
would
be
to
squash
it
down
into
a
nice,
commit
history
either
way
before
we
merge
it
into
the
tree.
We
want
it
to
be
sort
of
a
clean
series
of
changes
without
sort
of
fixes
halfway
through
or
at
the
end,
but
I
think,
depending
on
the
complexity
of
the
change.
C
In my mind, the goal to aim for is that by the time you're done and ready for the final push, the commits make sense. Have them tell a good story; have it be split up in the right way. Instead of, you know, a record of all your development history, you just want to have it compartmentalized the way it ought to be for understandability, right?
A
Excellent, okay, incredibly comprehensive. Thank you, Sage. Sorry we ran over, folks, but I think this was very useful, and if you know people that were unable to attend, this will be up on the YouTube channel within the next day or two, so keep an eye out there. Other than that, we'll see you back here next month for a look at enabling fast big data analytics: the Alluxio folks are going to talk about some of their efforts around Ceph and big data analytics.