From YouTube: Kubernetes 1.16.0-alpha.1 Live Release (Part One)
A: Hello, everyone — this is a special edition of release management. What we're going to be doing here is a cut of the 1.16.0-alpha.1 release for Kubernetes. Be aware that this is an upstream Kubernetes meeting that is recorded and will be on the internet later, so please be mindful of what you say, and please be sure to adhere to the Kubernetes code of conduct. All of that said, let's get started. I'm going to share my screen, close Slack, and maybe close all of these tabs.
A: All right, so first things first: go to Chrome — we're going to SIG Release, I'll start there. Part of the reason for this recording is that we want to make sure we have an opportunity to do extended mentoring for people, and make sure they're aware of the kinds of processes that we walk through to be able to do certain things, like cutting a Kubernetes release.
A: So there are two role handbooks you'll see here: the patch release manager and the branch manager. For the purposes of this call, we'll be going through the branch management handbook. Let's just read through it together, right? So the role is responsible for essentially cutting releases of Kubernetes, as well as maintaining the branches and the associated test-infra jobs and configurations that are involved in being able to manage multiple branches of Kubernetes.
A: So there are a few tools involved in that. We mentioned branch fast-forward here; it's not something that we're going to be required to use for this call — that's a task that happens throughout the release cycle once we actually cut the 1.16 branch. Basically, what happens at the end of a cycle, as we're getting ready to cut a release, is that it bumps the version in CI.
A: "Do you have some familiarity with Google Cloud — Cloud Build and Cloud Storage specifically?" So, I don't have a lot of experience with GCP, so it's always fun to run through some of these commands and try to get an understanding of what we're doing, and what the process actually involves. Then we've got some shadow expectations — the shadows for branch management are now the release manager associates.
A: So the idea there is that we wanted to build a team, right? A team composed of, first, the patch release team, which are the people responsible for cutting patch releases of Kubernetes — so 1.15.1, which will be coming out on Thursday, will be handled by the patch release team. From there we have the branch managers, and you can actually read all of this stuff under SIG Release, "release managers" — this gives you a breakdown of the patch release team.
A: The branch managers are responsible for the minor releases, and then the associates — the associates are the old branch manager shadows. The idea here is that we want to essentially build a career, or contributor, ladder: to give people the tools they need to be able to elevate from an associate, to a branch manager, to a patch release team member, as each of those roles has different requirements in terms of the level of access and the things that you can and cannot do — namely, one of them being this:
A: The patch release management team is subject to the security embargo. They're part of the group of people — including the Product Security Committee, which actually handles CVE mitigation, and the private distributors list — who get notified about CVEs ahead of time. The patch release team is subject to that embargo because they are actually the ones who would be cutting the releases to mitigate those CVEs.
A: We have the build admins — build admins are Google employees, right now, who have the access to actually push the button to generate debs and RPMs for a Kubernetes release; that team right now is Alexandre, Linus, and [unclear]. And then the SIG Release chairs. The SIG Release chairs are inherently on the lists — I believe the SIG Release chairs should have ownership of all of the assets that are under SIG Release, and from a release engineering standpoint, that would also mean having access to maintain the lists that notifications go out on. Because they would be maintaining those lists, that means they are also implicitly subject to the security embargo, since they would be on the same lists that get information about that stuff. So you can check that page out — that's "release managers". Back to what we were talking about: have you run through the release team onboarding guide?
A: I don't know — I may have. So: I'm part of Slack, part of the sig-release channel; I'm a Kubernetes member; I'm on the contact sheet for the release team; I'm part of the release team mailing list and the SIG Release mailing list; I'm also a member of milestone-maintainers; and I have the appropriate OWNERS file entries, as well as the [unclear]. So I think we're covered in terms of the onboarding guide.
A: One here for the scratch pad, and I'll pop the release managers link in there too. Feel free to take notes on this as well — I'll be using it, and I'll collate the notes afterwards and use that as the basis for improving the documentation. Now, a few things that we need to do: we need to make sure that we are on the release-managers Google Group.
A: That is slowly changing — we're trying to enable the scripts to work for macOS, so we can get more people involved in that. And there are some references here that I won't need, because I think that's assuming that we want to do some of this stuff in a Docker container, but I don't need to, since I have a Linux OS. Do I have SSH keys? I do. Have I installed the Google Cloud SDK? I have. So:
A: Let's get here — Google Cloud, gcloud init, right. So I will open a new tab, and let's do this: go over to the release repo. Okay, detached HEAD — I don't want that. Making sure I have the new stuff from the repo, and then I'm going to reset, and just take a look at that really quick: "add some unit tests", "fix indentation". All right, so everything should be good here; clearing that. So we said gcloud init, right. Okay, so it looks like I am currently —
A: I have this account, and I'm attached to this project, k8s-release-test-prod. That's not the project I need to be in, but we'll fix that soon. I want to reinitialize that config with the default settings; it's going to do some stuff; I'm going to use my email address, and now I can choose the project that I want to target. I have access to a few different projects; the project in question that we're going to use today is the kubernetes-release-test project.
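The gcloud setup steps walked through above can be sketched roughly like this. The commands are echoed rather than executed so the flow can be reviewed safely; the project ID is the one mentioned on the call and may differ in your environment.

```shell
# Sketch of the gcloud setup described above. 'run' only prints each
# command; swap the echo for real execution when you're ready.
run() { echo "+ $*"; }

run gcloud init                                        # re-initialize the config
run gcloud config set project kubernetes-release-test  # target the release test project
run gcloud config list                                 # confirm account and project
```

This mirrors the on-call flow: re-initialize, pick the account, then pick the target project.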
B: Stephen, do you hear me?
A: Yep.
B: Hey — so one of the things I think we should add to the doc here is to update the gcloud SDK if you already have it installed, because I think that's the recommendation. So add a note to the scratch pad.
A: Okay, sure — thanks.
A: Right, okay. A few more things it's mentioning: are you able to run sendmail? I have not configured sendmail on my computer, and I probably never will. That's for essentially being able to send the email notification to kubernetes-announce. What we also do is dump an announcement notification within the buckets — there's an announcement HTML that you can use instead; you can just copy that out. So I'm probably never going to bother to configure sendmail, and I don't think anyone should need to do that.
A: So we should add a note to not explicitly require this. Again, checking to see if we're part of the right mailing lists: sig-release, the release team list, and dev — that's all true. All right, so gcbmgr: gcbmgr is a tool that allows us to send a configuration from anago. Anago is our release tool. So if you want to see anago:
A: That is anago, and anago is an 1,800-or-so-line bash script that does lots and lots of things to get Kubernetes out of the door. We've got — yep, 1,811, 1,812 lines. So anago is doing a lot of stuff, but gcbmgr is a wrapper that is less long — only about 500 lines — but we can see:
A: If we take a look at this, it manages Kubernetes container release builds. It gives you a heads-up about some of the different commands you can run: how to stage a release, how to stage an RC (release candidate), how to make sure the candidate is built from the head of the kubernetes/kubernetes master branch, and then how to officially do a release. A lot of these commands will run as mock — by default,
A: they will run in a mock mode, and that kind of ensures that when we run them, if we're trying to test out a workflow, we can run them safely without actually pushing artifacts anywhere — especially if these commands fail. The idea is that you run through the command a few times, or once or twice, to make sure that everything is sane within your environment, and then from there you can actually do the staging of the release, or the releasing of the release.
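The mock-by-default convention described above can be sketched with a tiny flag-gated function. This is an illustration of the pattern, not the real gcbmgr/anago code — the function name and messages are made up.

```shell
# Minimal sketch of mock-by-default: nothing real happens unless the
# operator explicitly sets NOMOCK (mirroring gcbmgr's --nomock flag).
NOMOCK="${NOMOCK:-}"

stage() {
  if [ -n "$NOMOCK" ]; then
    echo "nomock: staging real artifacts for $*"
  else
    echo "mock: dry run only, nothing pushed for $*"
  fi
}

stage master   # safe by default; rerun with NOMOCK=1 once the dry run is clean
```

The point is the workflow: exercise the command in mock mode until it succeeds end to end, and only then add the no-mock flag.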
A: So, a few things happening here: we have a set of libraries — a common lib, a git lib, and a release lib — that have some functions in them that we're going to use within gcbmgr. One of those things is like clean exit, which makes sure that we pick up and check return codes and exit codes, and that we safely exit out of any specific function.
A: We have things like staging the build; getting information about which builds are staged; being able to see the job log; being able to see which are the existing jobs — or the most recent existing jobs — within Google Cloud Build; submission of the job (it does some substitutions here and tells you that jobs are submitted); and then the actual releasing — staging and releasing.
A: It says you need to be part of the OSS release manager group or something, which is not a group that everyone has access to. So, you know, part of running through this process is that we want to try to get an idea of the permissions required for someone to be able to see this — that's part of the reason that I am doing this tutorial, or I —
A: Absolutely none — okay. So there is no association between GCP and anything that we have stored in GitHub. Now, eventually, what we'd like to do is have something similar to what the k8s.io group is doing, which is essentially groups and IAM: they have a set of Google Groups defined within the k8s.io domain, which define who gets what, and then they use those groups to assign roles.
A: So today we are not opening up access to that, but now that I have owner rights on this, I can start dumping IAM roles and crafting roles to allow us to do that. At that point we'll fold those roles into the release-managers and release-managers-private groups; that way we can just assign IAM that way, and then eventually we can move to something that's YAML-based, pretty much like this. But that's all future state. All right, thanks. Right, so back to gcbmgr.
A: So that is probably something funky in our bash, but you can see, running gcbmgr: it has checked that I have the required packages on my system; it made sure — you can see it failed here, because I was not set to the head of master, which was fixed — and then it made sure that I actually had the cloud tools in place.
A: So, in changing some of the things in GCB and anago, we ran into some issues that caused some release — some master-blocking — failures, so I did a write-up here. It says, yes, one of the longer-term goals is a full refactor of the existing tools from shell to Go. The first step of the longer-term goals is replacing the shell libraries — the common lib, git lib, and release lib that I was talking about before — rewriting those in Go first, then wrapping those.
A: So we get some immediate benefit out of doing that; then we can build testing around that. Those are probably the most important parts of all of the things composed in the release tooling, so fixing those first will give us immediate benefit; then we can write tests around that, and then we can start to think about how we want to refactor the rest of it.
A: But there are some more immediate things to do, like making sure that we have a blockade in place for each of the files that touch actually cutting a release, and actually examining why the failures happened. If you scroll down a little bit, you'll see that I wrote something quite extensive about the job failures, so feel free to check that issue out. And then we want to move to a place where we can safely make changes to the kubernetes/release repo without breaking every job.
A: So the idea here would be: tag the repo after executing a known-good release. We did that last week, and I need to retag — before I can retag, I need to reset the expiry on my GPG subkeys, but that's another thing entirely; that'll happen soon. We started tagging, and basically what we want to do is point each of the jobs that use kubernetes/release to a known-good tag of the kubernetes/release repo. From there,
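The tag-a-known-good-commit flow described above can be demonstrated with a throwaway repo. The repo and tag names here are illustrative (the real tags on kubernetes/release would be signed and pushed, which this sketch skips).

```shell
# Throwaway demo of pinning CI to a known-good tag: tag the tooling
# repo at a verified commit, then point jobs at the tag, not master.
repo="$(mktemp -d)"
git -C "$repo" init -q
git -C "$repo" -c user.email=rel@example.com -c user.name=rel \
    commit -q --allow-empty -m "known good release tooling"
git -C "$repo" tag v0.0.1-known-good   # jobs would reference this tag
git -C "$repo" tag -l                  # list tags to confirm
```

Once a change is verified, you cut a new tag and bump the jobs to it — the same pattern used for bumping pinned images elsewhere in Kubernetes CI.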
A: Then we can start to edit master more safely, because we know that the things that depend on it are only using known-good tags. Once we verify that changes are good, we can cut another tag and then update the test-infra jobs to use that tag instead — that's kind of the way they do it for bumping images for things like kube-dns, etcd, and stuff like that. And then: a periodic/presubmit that emulates one of the existing jobs that broke recently — that is in process already.
A: All right, so the release process: we do a few things — there's a stage, there's a release, and a release notify. You can see that when we're actually doing a release, we include that --nomock, which we won't be doing here today — or won't need to do here today — and we won't be doing the release notify stage. So, some notes about bookkeeping, and how to cut a release:
A: There is a release issue that you can open, and it will give you a set of things to do: screenshot the testgrid boards into comments, notify SIG Release that you're actually doing some of this stuff, send notification emails — all that good stuff. That all tends to be for a more official release. So, let's go through the —
A: Right, and that should return fairly quickly. Now it says that it has submitted a job — we have a build here, we've submitted it. Essentially, what we've done here is wrapped anago and submitted a job to Google Cloud Build to run that anago command. Now I can either do gcbmgr tail with that build ID, right —
A: All right, for the purposes of this call I'm not going to spend time on that. What I will do instead is click through to the actual build run. And if you're curious about what this is doing, let's take a look at the tools — just go to gcb/stage. There is a stage.yaml under the gcb folder in our release tools that defines — and anyone who's concerned about me leaking secrets: I'm not leaking a secret.
A: So we can swap these out if we needed to — if we need to test changes, or you want it to use a fork for whatever reason, you can swap that out by specifying this environment variable, and that'll get pushed into the GCB run. The next step: it's pulling in the Kubernetes — so it's using this release directory, it's setting some environment variables for the Go path and Go binaries path, and then doing a compile of the release tools. Compile release tools kind of does what it implies.
A: There is a set of release tools. Right now it's checking that the environment is set up, it's installing dep, it's checking the dependencies, and then finally it's only doing that compile step for the blocking testgrid tests. This is helpful because the blocking testgrid tests essentially give you a heads-up about which testgrid tests are failing, and if there are blocking tests failing, it can fail out on certain steps.
A: So it does a compile-release-tools step; then, after that, it uses a GitHub token and submits the anago command for the release branch to do a stage. Build at head is set by default; if you decide to say --nomock, or NOMOCK is set, then it will do a nomock run. Then the build version, whether or not it's an RC, 'yes' to forgo command-line input, and 'gcb', which is basically saying: this is a build that I'm going to submit to GCB.
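The step just described — gcbmgr wrapping anago into a Cloud Build submission — might look roughly like the sketch below. The config path and substitution keys are assumptions for illustration, not the real stage.yaml contract; commands are echoed, not executed.

```shell
# Hypothetical shape of what gcbmgr ultimately does: submit a Cloud
# Build job carrying the anago arguments as substitutions.
# File names and substitution names here are illustrative only.
run() { echo "+ $*"; }

run gcloud builds submit --config gcb/stage/cloudbuild.yaml \
    --substitutions _RELEASE_BRANCH=master,_NOMOCK=false,_BUILD_AT_HEAD=true
```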
A: So you can see — anago is long for a reason, because it does provide some pretty output when you're running certain things. We're checking the prereqs: do I have the right packages on my system, the right Docker versions? Do I have the appropriate ACLs to do what I'm about to do?
A: Do I have write access to release test, access to k8s.gcr.io, and so on and so forth? Then setting up a release candidate, and checking through some of the states of the different tests. We can see we had a test that failed, some tests that passed, another one that failed — and this could simply be because the tests may have moved dashboards, or the configurations changed — but we can see here we've got a bunch of passes and one fail. Anyhow, it's checking tests.
A: Seeing some of these failures isn't necessarily concerning in the early stages of the cycle, but it could be concerning later on in the game. I know that the CI Signal team is already working to drive these down and get in contact with some of the SIGs related to those failures, so we won't worry about it here.
A: Right, so here what we could have done is make it a little clearer that there are two things you could do. Build from head essentially says: let's forget about analyzing the test data — we know where we want to build from, so let's forget about the analysis of the tests that it's going through right now. So I'm going to submit the build again and say build at head, so that we can see — you've got history and —
A: Right, so it said: hey, I was unable to find a green set of test results — failed in get build candidate. So again, this kind of shows you that we have some fail-safes in the system to make sure that, if you're running these commands in a certain way, we will only allow you to release, or finish the build, if certain requirements are met — like having green tests. So this is one of the reasons that it's so important.
A: You can see that there is some sketchiness in terms of failing tests here — a few flaky tests. As far as I know, I believe these are all being investigated right now, so we're not going to spend time on this call trying to debug these tests; I think there are already issues open for them.
A: Yep — so early in the release cycle, it is likely that the build will fail: by default, the command will automatically look for a place where the release-master-blocking tests have green results, which traditionally has not happened on an ongoing basis. We need to get there — wakka wakka wakka, blah blah blah — but in the meantime, this is what you should run. So we should have said this up front, and then how to go about this stuff, right.
A: No — initially, you know, my grand vision is that I would love people to feel comfortable consuming any Kubernetes release that we release. That is not true today. I had done a survey about this — like a quick poll on Twitter — asking: hey, did you know that we do this? Like, how many of you today are consuming Kubernetes patch releases, or the .0 releases, or release candidates, or betas, or even alphas? And overwhelmingly —
A: So definitely, one of the goals will be raising visibility for this stuff, but we can only do that if we have confidence in the things that we're releasing, and I think right now there needs to be some work around building out something that we feel more confident in releasing as an alpha. And that is not to say anything about the quality of the code going into Kubernetes — because, again, that stuff is presubmit-checked and end-to-end tested — but more so the quality of the underlying tools, and the things that go into building the base for storing, for hosting, for managing artifacts, and so on and so forth, for Kubernetes.
B: In the LTS survey, we asked people which version of Kubernetes they are using, and very few people were actually on the latest version. So the percentage of the community that would be on the latest, and then would also be eager to consume an alpha or beta, would be even less. What kind of audience do we expect to test the alpha and beta, then?
A: Maybe we don't know yet — okay, yeah. But is it a place that we need to get to? Absolutely. I think that there is definitely enough release work to do ahead of time to be able to get there. So let's take a look at 1.15 — right now I'm in the apt repo. For 1.15 we've published kubeadm, kubectl, and kubelet for 1.15.0, and nothing else.
A: That's both on the unstable track and the main track. The other thing to look at is the way we organize these: jessie, jessie-unstable, lucid, lucid-unstable, precise, and so on and so forth. The primary repo that we use on the apt side is kubernetes-xenial, and this is kind of a misnomer: we publish all of our packages there. It's not specific packages that are meant to support xenial, and it's not that we don't have support for bionic.
A: We do have support for bionic — those packages are just in this xenial repo. So one of the things to do, before we can unleash people on this: I would like to see something like a kubernetes-stable, where kubernetes-stable publishes all the packages instead of xenial. There are also several issues we've gotten where people have asked: hey, are you doing support for bionic? Or: hey, buster's coming out soon — or buster's out now —
A: — have you, or do you plan to, support it? Or: is there a repo that I can reference to use for that? And the answer is yes — it's kubernetes-xenial — but I don't think enough people know that. So one of the tasks that I have is to make sure that we update the instructions for installing kubeadm to mention that xenial is the go-to place for packages for kubeadm, kubelet, kubectl, kubernetes-cni, and cri-tools.
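Concretely, the kubernetes-xenial repo discussed above is the one the install instructions point everyone at, regardless of distro release — the apt source line looks like this (shown for illustration; check the current install docs for the authoritative line):

```
deb https://apt.kubernetes.io/ kubernetes-xenial main
```

The "xenial" in the name is the misnomer being described: bionic, buster, and other distro users consume this same repo.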
A: You can see that we have some dependencies here. So we ran into an interesting little thing in the last release cycle — around 1.15, or rather 1.14 — where I believe the configuration of these dependencies was changed. This is also a reference to how frequently the release tooling, or the specs within the release tooling, get updated, and it's also part of the reason that we are starting to spin up a team behind this, because it needs to be fixed.
A: You know, and the only way we get that done is by having people in chairs to do this. Yeah.
A: So what we can do here — and you'll notice for kubeadm 1.7.8, obviously not supported anymore, but as an example — you can see that the kubelet is still set to "greater than or equal to 1.6", and the same with kubectl. Now, if we move into the 1.14 builds and just jump through here, we can see here's one of our issues: 1.13.8 is "greater than or equal to" — and 1.13.8 was just released — but if you look at 1.14.0, it's just "equal".
A: So this is where we ran into the problems. One: we have to be aware that for anything we publish to these repos — especially because we're only using one repo — if we make a dependency change for new packages, we have to be aware of the old packages that exist in that repo and how to manage dependency changes for them. So I'm thinking that maybe the right move is: new repo, start publishing.
A: I wanted to see if I could get some of this in for 1.15.1, but that's Thursday, so I think it's a little too tight of a timeline to try. But maybe we say that for the following release — 1.15.2 or something — we use this new repo, and we're a little stricter about the way we set these dependencies. So if we say kubernetes-cni greater than or equal to 0.7.5, that's fine, but we also —
A: Maybe we should set a ceiling on those as well — you know, if previous packages had something more appropriate set. Also notice here: the kubelet is still checking for a super old version of Kubernetes. That has since been fixed, but here's a tricky one: we say greater than or equal to 1.12, and that means our tooling checks for this before it builds a package, or uses that as a requirement to build the package.
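The floor-versus-exact-pin distinction being discussed can be seen in a Debian control-style dependency line. This fragment is illustrative — the versions are the ones mentioned on the call, not the current published constraints:

```
# Floors (>=) keep matching newer builds; exact pins (=) silently stop
# matching once a newer version ships, which is the failure described above.
Depends: kubelet (>= 1.6.0), kubectl (>= 1.6.0),
         kubernetes-cni (>= 0.7.5)
```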
A: So, because we have it set to 1.12, it means it was not building for 1.13, 1.14, 1.15 — so we're exercising the tooling, but we're not publishing packages for CRI or CNI, because of the way the tooling is built. That is already being worked on — myself and Tim have PRs in flight to fix some of this stuff — but it's kind of like deciding what to fix first and making incremental changes. Like, I have a really large PR —
A: Changes — yes and no. You can see part of it if you want to read through what's in my head: this "k8s package ctl" — I don't like this name, I'll probably change it — but the idea being that you should be able to — at least for now, the RPM stuff is just a bash script referencing a spec file, and the immediate fix for that is essentially —
A: It will try to build each of these packages — or it will build each of these packages — but it's all contained within this one spec: a package block, a package block, a package block. So it basically means that we're tied to the existing logic here, and that every time we make changes to this one, we should log the changes — but we should try to break some of this stuff out.
A: Right — have a set of manifests that explain the state of the world for a specific version of Kubernetes, as opposed to continuing to roll this spec forward. So if someone cuts a release at, say, 1.13.8 or 1.14.4, maybe there are some edits they do before they run that — and by "maybe" I mean I know that there are edits that they do before they run that — that don't make it back into this spec.
A: So the idea here is being able to bump the specs on individual packages, and control their dependencies more tightly, without having to be bound to the entire ecosystem that is our package publishing. So that's the source specs restructuring, and then there's the repo structuring. That is what we're working on right now.
A: Recently — last week — I worked with Tim Hockin and Dims to get staging resources, as well as prod resources, for SIG Release, to start prototyping what our repo structure needs to look like. If you want to read about some of the things that we have in our heads, this brainstorm is essentially a brain dump of conversations that Tim Pepper and I have had over the last three weeks. Before he went on vacation, we spent maybe six hours or so on video chat talking through some of this stuff.
A: So that is kind of the synthesis of all of that discussion. What will happen next — now that we have a lot of the details that we need — well, we were like: what do we need? Do we even know what we need yet? And one of the things is, we need a place to host this stuff: we need buckets, and we need to understand the artifacts that we produce. Part of that is having someone go —
A: — do the investigation on the artifacts that we produce, which is done and merged as of today: Docker images, storage, binaries — run through all of the architectures — the SHAs that get produced, the extra files, the tars, the packages for RPMs and debs, and then exactly what the repo structure looks like at any one point in time when we're doing a release. This is documented in excruciating detail, and you can see that at release-engineering/artifacts.md.
A: I probably should have scheduled more time and not tried to shove this into the slot between the PM meeting and the release meeting, but here we are. So — tell you what, I think that's a good start. We can do a part two if people want to, or I can just do the part two myself and record it. Let me know — does anyone have feelings on that? We can —
A: We can pick up — I would want a break between all of these meetings — so we can pick up later in the day, or I can just record the second part and publish that to the release playlist. So let me know. Okay, yeah — I'll put out a Zoom, and if people join, they join, and I'll record that one too. Cool, all right. Thank you, everyone, for dropping in; I will see you right now at the release meeting, if you're showing up. Take it easy.