From YouTube: Walkthrough: Ceph Release Process
Description
The Ceph release process. Presented by David Galloway
A
Thanks, David, for giving us a going-away present with a walkthrough of the release process for Ceph. Without further ado, I'll let you take it away.
B
Yeah, so there are just a couple of things to be aware of. The signing machine used to be a virtual machine that lived in the octo lab behind Red Hat's firewall, but there was a plea to move it to the Sepia lab so that non-Red Hatters could also participate in the release process if necessary. So it's a virtual machine, it's highly available, and it lives on the RHEV cluster that has been in place for, you know, four or five plus years, something like that.
B
So the GPG key was moved to a hardware security token a couple of years ago. In order to use the GPG key, you just need to know the PINs. Those PINs live in a text file on magna001 in the octo lab.
B
The PINs should only be needed if the VM is restarted, though. There's a gpg-agent there, and I think the timeout is set to, like, a year, or maybe four years, whatever the maximum is. So the key itself should only need to be unlocked if the signing VM is rebooted for whatever reason.
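For reference, a gpg-agent cache timeout of that length would come from settings along these lines. This is a sketch with illustrative values; the actual configuration on the signing VM isn't shown in the walkthrough:

```
# ~/.gnupg/gpg-agent.conf (illustrative values, in seconds)
default-cache-ttl 126144000   # roughly four years
max-cache-ttl 126144000
```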
B
The
purpose
of
the
signing,
VM,
obviously
or
maybe
not
so
obviously,
is
during
the
release
process,
all
of
the
packages
that
get
built
get
pushed
to
a
chakra
node
and
then
the
packages
are
pulled
to
the
signing
machine
signed
by
the
release
key
and
then
pushed
to
download
sf.com
yeah.
B
So
that's
that
there's
a
another
note
here
about
new
major
releases,
so
this
would
be
like
an
alphabetical
release,
so
this
will
need
to
be
done
for
Reef,
but
there's
like
a
separate
package
that
doesn't
get
built
as
part
of
a
regular
stuff,
build
and
I
think
what
this
package
does
is
installs
a
repo
file,
I
think
I.
Think
I
think
that's
what
the
purpose
of
this
stuff
release
package
is.
B
But
basically,
once
there
is
a
reef
branch,
chakra
will
need
to
be
made
aware
that
Reef
exists,
sort
of
like
this
and
then
shocker
will
just
need
to
be
redeployed,
and
then
this
job
will
need
to
be
run.
You
can
see
yeah,
it
gets
run
like
roughly
once
a
year,
which
makes
sense
so.
B
Yeah,
so
those
are
the
two
prerequisites
before
a
ceph
build
can
happen.
I,
don't
I,
don't
fully
understand
what
what
like
Yuri's
process
is,
but
the
the
current
release
process
now
is.
You
know,
once
once,
Yuri
has
tested
everything
and
found
a
stopping
point
that
we're
you
know
we're
we're
ready
to
cut
a
new
release.
He
he
pushes
that
stopping
point
to
the
corresponding
Dash
release,
Branch
and
then
notifies
me
that
the
branch
is
ready
to
be
built.
B
There are a couple of deviations from that. For example, if we're doing a hotfix release, what we would want to do instead is check out the most recent tag. So, since 17.2.3 is the last Quincy release, we would check that out, cherry-pick any hotfix commits on top of it, and then push that branch to quincy-release. It's sort of the same thing for a security release; the only difference is that it would get pushed to the ceph-private repo instead of ceph.git.
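A minimal sketch of that hotfix flow, run here against a throwaway repository; the tag, branch names, and commits are stand-ins for the real v17.2.3 tag and quincy-release branch:

```shell
# Sketch of the hotfix flow: check out the last release tag, cherry-pick
# the fix on top, and put the result on a -release branch.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "17.2.3 release"
git tag v17.2.3
# a hotfix lands somewhere else first...
git checkout -q -b hotfix-work
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "fix: critical bug"
fix=$(git rev-parse HEAD)
# ...then we check out the last tag and cherry-pick the fix on top of it
git checkout -q v17.2.3
git checkout -q -b quincy-release
git -c user.email=ci@example.com -c user.name=ci cherry-pick --allow-empty "$fix"
git log --oneline
# real process: git push <remote> quincy-release (ceph-private for security fixes)
```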
B
I don't think... yeah, I'll leave that as is. Yeah, so once Yuri notifies the build lead that it's time, that the branch is ready...
B
One
would
just
go
to
this
Jenkins
job,
we'll
use
Quincy
as
an
example,
so
you
would
put
Quincy
there
tag
should
be
checked
and
then
the
next
version
is
going
to
be
17
to
4..
So
that
would
go
there
and
then
the
only
other
thing
that
needs
to
be
changed
is
the
list
of
distros.
Here
you,
you
could
just
hit
build
at
this
point
and
the
ones
that
we
don't
normally
build
packages
for
would
fail,
but
it
it's
cleaner
in,
in
my
opinion,
to
specify
it
so
yeah.
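As a sketch, the parameters being filled in amount to something like this (the field names here approximate, rather than quote, the Jenkins job's actual parameter names):

```
BRANCH:  quincy
TAG:     checked            # create and push the release tag
VERSION: 17.2.4
DISTROS: focal centos8 centos9 bullseye
```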
B
You
can
see
here
for
Quincy
we
build
for
Ubuntu
focal,
which
is
20,
yep,
20.04,
Centos,
8,
Centos,
9
and
then
Debian
Bullseye,
so
just
copy
that
in
there
and
then
hit
build
there
isn't
currently
currently
like
a
notification
or
anything
when
the
build
finishes,
I
just
I,
just
kind
of
keep
an
eye
on
it,
but
once
it's
done
it
will
it
will
look
like
this.
B
Nothing will be blinking anymore, it should be green, all that good stuff. If it's not done yet, it will look something like this, where you've got the blue progress bar and the spinny circle.
B
Yeah, so those builds will take anywhere from two to three hours, depending on how busy the other Jenkins builders are. So what I normally do while those packages are building is the release notes.
B
These end up as a blog post. So if you go to the ceph.io news blog, for example, this pull request here resulted in this blog post here. And then the third thing is just writing the email announcement. The instructions to write the release notes have historically always been on this wiki page on the tracker. It could move, I guess, but I just sort of clarified it here. But yeah, all the commands are step by step,
B
Exactly
what's
a
run,
I
guess
I
can
show
what
the
output
would
look
like
just
because
it's
not
there.
B
I'm just going to modify it a bit. So the most recent version of Quincy was 17.2.3, and then we want to know the difference between that and the tip of quincy. For a real release it would be quincy-release, but this is just so you can see what the output looks like. So what this does is go through, using your GitHub API token, and check all of the pull requests that were merged between 17.2.3 and the tip of quincy, and then it will spit out a nice markdown version of the changelog, and once that's done...
B
The output will get put here, like this; eventually that command will return and look like this. Typically, the component leads will add their own commits to this release notes pull request with any notable changes that are necessary. And then finally, there's the email; yeah, I normally just start a draft.
B
A
couple
things
need
to
be
replaced
in
here.
Obviously,
the
version.
B
Name
here
and
then
I've
got
this
little
note
here.
If
there's
like
a
major
change
that
should
be
mentioned
there,
it
would
it
would
go
there
and
then
you
can
just
see
there's
you
know
the
other
couple
places
that
need
to
be
fixed
like
here.
B
I normally just leave this as a draft and then wait for the release to finish. So that's that. Once the Ceph build itself is done, you will end up with...
B
The
first
step
is
to
pull
it
and
that's
going
to
pull
it
from
chakra.sef.com
and
the
output
will
look
like
regular
rsync
output.
This
is
going
to
be
a
little
different
since
it's
already
pulled,
but.
B
Yeah
so
that'll
that'll
take
if
I
had
to
guess,
I
think
it's
about
a
half
hour
and
then
once
that
is
finished,.
B
Time
to
sign
it,
I
think
Alfredo
wrote
this
tool
called
Murphy
I.
Think
yeah.
B
Sorry, my dog is not happy. Yeah, so the next step is going to be signing the actual Debian packages. This will use the GPG key that's on that Nitrokey. This is what the output looks like; not super interesting. And then the next step is signing the RPMs.
B
So
that's
that's
why
I
haven't,
but
once
that
is
all
done,
that
that
also
takes
quite
a
bit
of
time
signing,
but
once
it's
done,
you
just
run
the
sync
push
and
I'm,
not
I'm,
not
going
to
run
sync
push
right
now,
just
because
this
octopus
repos
in
a
a
weird
State
since
I,
haven't
finished
it,
but
so
that
will
end
up
pushing
the
packages
to
so.
Here's
an
example
of
the
latest
Quincy
the
Debian
packages
get
pushed
here.
B
The
RPM
packages
get
pushed
here
and
then
at
the
end
of
that
sync
push
command
or
the
the
script.
Sorry,
this
Sim
link
here
gets
updated,
so
people
that
have
a
repo
file
that
still
use
packages
don't
need
to
like
change
the
path
or
anything.
So
this
this
is
just
a
Sim
link
that
points
back
to
the
most
recent
version.
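The symlink update amounts to something like the following; the directory names are illustrative, patterned on the download.ceph.com layout rather than copied from it:

```shell
# Repoint a codename symlink at the newest versioned repo directory,
# e.g. keep debian-quincy aimed at the latest debian-17.2.x tree.
set -e
root=$(mktemp -d)
mkdir -p "$root/debian-17.2.3" "$root/debian-17.2.4"
# -s symbolic, -f replace existing, -n do not follow an existing link
ln -sfn debian-17.2.4 "$root/debian-quincy"
readlink "$root/debian-quincy"
```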
B
Once that's done, it's time to build the containers. We have x86 containers and arm64 containers. This job doesn't take any parameters; you just hit "Build Now".
B
What
the
job
does
is
checks
download.sef.com,
if
there's
a
new
version,
and
if
there
is
it
will
build
a
container
based
on
it
and
then
those
containers
end
up.
B
quay.io. So you can see here, 15.2.17 was the most recent upstream release, so that is the last container that got pushed.
B
And then, yeah, the release notes PR can be merged sort of whenever, but hopefully around the same time. Usually some review goes on even after the build is done, in my experience. But yeah, that's the process.
B
I
I
don't
frequently
see
things
go
wrong
if
I
go
if
I
go.
Look
at
these
I
believe
these
failed.
B
Yeah,
so
this
is
like
Centos
9,
and
this
failed,
because
I
I
tried
to
build
Centos
9
packages,
but
it
was
at
a
point.
The
Quincy
Branch
was
at
a
point
where
I
tried
to
build
1723
for
Quincy,
but
there
were
things
merged
into
Quincy
after
1723
that
are
are
necessary
to
build
Centos
9
packages.
So
the
next
version
of
Quincy
should
have
a
central
S9
release,
but
yeah
and
typically
I,
don't
I,
don't
see
things
go
wrong.
It's
just
it's.
B
A
super
lengthy,
like
many
manual
steps
processed
like
you
can
see.
Most
of
these
are:
are
green
I'm
just
trying
to
jog
my
memory
and
see
oh
right
yeah.
So
when
I,
when
I
change
the
process
to
use
a
different
use
like
the
dash
release,
Branch,
all
of
these
failed
builds
were
from
from
then
so.
B
That's
why
there's
a
bunch
of
red
and
black
down
there,
but
yeah
I'm,
trying
to
think
of
like
what
could
go
wrong,
but
there
there
really
isn't
a
lot.
B
That sort of thing, I guess. I don't know if that's considered tribal knowledge or just experience or what, but whatever happened in the build would determine who I would reach out to. Like, if the build failed super early, somewhere around here for example, I would know that's probably something infrastructure-related, and something that the build lead or I would need to fix.
B
If
it
was
a
compilation,
error
I
would
try
to
sort
of
figure
out
which
component
it
was
breaking
during
so
like.
If
it
was
rgw
I
would
reach
out
to
the
rgw
lead,
for
example,
but
yeah
you
sort
of
just
can
tell
after
a
while,
where
in
the
build
like
do
I.
How
do
I
say
this?
B
You
can
sort
of
tell
eventually
like
what's
a
compilation,
error
and
what's
not
and
typically,
if
it's
not
a
compilation,
error,
it's
it's
my
role's
responsibility
to
fix
it.
D
B
Yeah, good question: yes. Yeah, so you would go back to the Ceph job.
B
If
you
keep
tag
checked,
that's
going
to
change
the
shell
one
of
the
version
commit
so
as
as
long
as
nothing
has
been
pushed
to
download.saf.com.
B
It
doesn't
really
matter
the
the
version
commit,
gets
removed
and
re-added,
but
yeah.
If
you
just
wanted
to
do
one,
you
would
just
keep
tag
unchecked
and
then
replace
the
distros
with
whichever
one
broke
and
then,
if
you
only
need
one
architecture,
you
can
get
rid
of
arm2.
For
example,.
B
To
be
honest,
I'm
not
sure
these
three
do
anything
these
three
or
this
one
do
anything
they're
sort
of
relics
from
when
Alfredo
wrote
the
job
they're
not
harming
anything
and
and
being
in
there,
though,
and
their
defaults
are
off
anyways.
So.
A
What's involved with adding another architecture, David? I doubt it's as simple as just adding another, like, MIPS or something, right? I would think... I mean, we would have to change the CMake file, I assume, but...
B
Adding another architecture... yeah, I don't know, to be honest. So, you know, when we cut a new alphabetical Ceph release, what I typically do is just go into this ceph-build repo, which is where all of the Jenkins job definitions are, and just grep for quincy, and then check each and every one of those files, and then Reef will get added above or below or next to wherever it needs to go. So that's probably what I would do if we were adding a new architecture: grep for arm or aarch64 in the ceph-build and ceph.git repos.
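That grep-through-the-job-definitions step can be sketched like this; the files and directories here are stand-ins for a real ceph-build checkout:

```shell
# Find every job definition that mentions a release codename, the way
# you'd grep a ceph-build checkout before adding a new release (or arch).
set -e
repo=$(mktemp -d)
mkdir -p "$repo/ceph-dev-build" "$repo/ceph-container"
printf 'branches:\n  - quincy\n' > "$repo/ceph-dev-build/config.yml"
printf 'branches:\n  - pacific\n' > "$repo/ceph-container/config.yml"
grep -rl quincy "$repo"
```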
A
Yeah, that's all right. I wouldn't be surprised if eventually we'll have to do that, but at least right now, I'm not aware of any plans to add architectures.
D
One
thing
that's
come
up
lately
is
the
idea
of
maybe
doing
a
like
the
relevant
release.
Again
like
we
used
to
a
few
years
back.
I
was
tagging
things
as
like
17
or
like
18
or
19
that
one
that's
something.
Would
that
be
the
same
process,
just
changing
the
tag
ID.
B
Yeah
and
then
I
I
would
just
leave
tag
unchecked
and
then
yeah.
The
version
can
be
whatever
you
want.
B
I
I
think
as
long
as
tag
is
left
unchecked
I'm
just
trying
to
look
at
the
what
the
Jenkins
job
does
again.
We
used
to
have
to
like
manually,
create
the
tags
and
and
push
them,
but
that's
all
that's
all
automated
now,
with
the
releasing
from
a
dash
release
branch.
B
I,
don't
think
that
even
gets
called
tag
is
right,
yeah,
so
so
the
the
on
on
the
back
end
of
this,
what
happens
when
this
job
gets
started?
Is
there
is
a
CEF
Dash
release
if
that
releases
repo
and
it's
basically
a
mirror
of
the
ceph
repo?
You
can
see
here
it's
private,
but
what
happens
is
the
version
commit
gets
created
and
when
I
say
version
commit.
B
But
you
can
see
here
as
long
as
tag
is
not
checked,
then
that
that
part
doesn't
happen,
so
the
version
commit
would
get
pushed
to
the
self-releases
branch.
So
if
you
wanted
to
do
like
a
an
RC
release,
for
example,
the
version
commit
would
get
pushed
here,
but
the
next
time
you
ran
it,
it
would
overwrite
the
tag.
If
you
wanted
to
reuse
the
same
number,
if
you
didn't,
then
it
would
just
it
would
just
be
another.
E
Yes,
I
have
a
quick
question,
please
about
new
dependencies
that
we
need
to
include,
but
they're
not
available
in
any
distribution.
Yet
so
usually
we'll
check
if
they're
available
in
apple
and
if
not
we'll,
have
them
build
and
available
in
a
copper
repo
and
then
it
will
be
available
in
the
containerize
the
deployments.
B
I
I
feel
like
this
is
like
an
age-old
conversation
that
has
has
continuously
gone
in
circles.
I
I,
remember
specifically
Alfredo
saying
something
along
the
lines
of
we
don't
want
to
get
into
the
business
of
building
and
signing
packages
that
are
dependencies
that,
like
we
didn't
write.
B
I
think
so
what
we've
done
historically
is
like
when
Sentosa
or
relate
whichever
one
was
first
came
out.
You
know
we
obviously
hit
a
bunch
of
like
Python
3
dependencies
that
that
didn't
exist
and
Ken
dryer
and
Justin
I'm
gonna,
I'm,
gonna,
murder,
your
last
name,
I'm,
sorry
karatsis,
build
the
packages
and
then
pushed
them
to
the
Fedora
build
system,
and
then
we
would
just
sort
of
go
around
and
thumbs
up
those
packages
which
automatically
I
think
puts
them
into
Apple.
B
So
that
is
what
we
would
do
for
an
RPM
build
I,
don't
recall
a
scenario
where
we
there
was
a
new
dependency
that
we
needed
for
an
Ubuntu
based
build
and
it
wasn't
available.
I.
Think
it's
more
common
with
with
RPMs.
B
And
I:
don't
I,
don't
have
the
documentation
handy
on
how
to
do
that,
but
I
know
we've
done
it.
E
And
if
we
cannot
get
it
to
Apple,
we
basically
need
to
have
it
in
a
copyright
repository
until
it
is
available,
either
in
apple
or
in
main
distribution.
B
Yeah,
so
with
with
build
time
it's
easier,
so
we
have
historically
always
kept
I'm
going
to
share
my
screen
again.
We
have
historically
always
kept
what's
like
a
lab
extras.
B
Repo-
and
this
is
something
that
I
have
maintained.
B
Embarrass
me
like
this
here
we
go
I
think
I
have
something
weird
in
DNS
mask
that
doesn't
that
doesn't
like
this
host
name
but
yeah.
So
historically,
we've
we've
had
like
a
lab
extras
repo.
B
The process to do that is documented here.
B
We
go
yeah
so
once
once
the
packages
get
built,
it's
basically
just
a
matter
of
rsyncing
them
to
the
appropriate
directory
and
then.
B
So these were manually built and then pushed here. There may be newer versions of these in EPEL, and if that is the case, then those get installed instead and these aren't used. But I'm sure you guys saw that installing ceph-test on the test nodes has been failing the past couple of days, and that was because they removed the xmlstarlet package from the base OS repo, super unhelpfully. It is going to be in RHEL 8.7, and I
B
Don't
remember
what
sent
to
West
equivalent
but
they're
in
the
process
of
getting
it
re-added
back
to
a
different
repo,
but
anyway,
in
the
meantime,
I
I,
just
they
they
did
build
XML,
Starlet,
I,
think
today.
So
what
I
did
is
downloaded
it
and
pushed
it
to
lab
extras
so
that
we
can
continue
testing
just
as
another
example
of
a
type
of
package
that
would
go
here
but
yeah
as
far
as
like
run
time
dependencies,
if
they're
not
available
in
another
repo
like
we,
we
can't.