From YouTube: Kubernetes Community Meeting 20190718
Description
We have a PUBLIC and RECORDED weekly meeting every Thursday at 10am PT.
See: https://github.com/kubernetes/community/blob/master/events/community-meeting.md for more details.
A: All right, hello everyone, live from Jorge Castro's basement. It is the Kubernetes community meeting. I'm your host, Jeffrey Sica. If someone could be a note-taker, that would be awesome. Someone has dropped the link to the agenda in the Zoom chat, so go ahead and pick that up. If you don't mind, I'd also like to remind everyone that we abide by the CNCF code of conduct, and this meeting is being streamed and broadcast on YouTube, so please treat each other awesomely.
B: Making a cluster in AWS looks very different than making a cluster in GCP or Azure or vSphere, and so we've made Cluster API extensible enough to be able to plug in different back-end infrastructure providers. I was here last year at some point giving a demo on the AWS provider, but this year, this time, I'm doing a demo on the Docker provider, where instead of using a cloud provider we're using a local machine, with Docker as the infrastructure provider. So the idea, the general architecture —
B: Let me share my screen first, and then I will show you a picture that might help — probably not, but we'll see. So the way that it works is you need a Kubernetes cluster: this bootstrapping machine is a Kubernetes cluster. You need this to use Cluster API, because that Kubernetes cluster is going to have the CRD definitions, and it's going to have the Cluster API controllers running on it, so you can send Cluster objects and Machine objects — the two main objects that Cluster API exposes — to this bootstrap cluster. This management cluster, as we call it, will read the Cluster objects and the Machine objects, figure out what kind of a cluster you're trying to make, call up to whatever provider you've described, and go build you a cluster, and it will create and give you a kubeconfig so you can interact with that cluster. But, as you can imagine, the iteration time on developing Cluster API when you have to go talk to the cloud to build the cluster can get pretty slow.
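The two objects described above can be sketched as plain manifests. This is a hedged sketch based on the 2019 v1alpha1-era Cluster API; the apiVersion, field names, and values are illustrative assumptions, not exactly what the demo's tooling emits.

```shell
# Print minimal Cluster and Machine manifests — roughly the two objects the
# management cluster's controllers reconcile. Field names follow the
# v1alpha1 API (cluster.k8s.io/v1alpha1) and are illustrative only.
manifests=$(cat <<'EOF'
apiVersion: cluster.k8s.io/v1alpha1
kind: Cluster
metadata:
  name: my-cluster
  namespace: default
spec:
  clusterNetwork:
    services:
      cidrBlocks: ["10.96.0.0/12"]
    pods:
      cidrBlocks: ["192.168.0.0/16"]
    serviceDomain: cluster.local
---
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: my-machine
  namespace: default
spec:
  versions:
    kubelet: v1.15.0
EOF
)
echo "$manifests"
```

In a real session these manifests would be piped to `kubectl apply` against the management cluster, which then calls out to the configured provider.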
B: So we take the awesome work that the folks over at kind have been doing and build a management cluster in the style of kind: it'll create a brand-new cluster, and it will apply all of the CRDs and the controller to manage those CRDs, and this part takes just a second. capdctl is a binary that ships with the Docker provider. It is very analogous to the clusterctl command that ships with just about every other provider, except that, because we're not actually interacting with the cloud,
B: we don't need a lot of the complexities in clusterctl, so I replaced that with capdctl to be a little bit more straightforward. OK, so now I have a watch going on Docker, so we can see I've got two containers up and running. There's one here called the management control plane, and the management external load balancer. The load balancer is the thing that sits in front of your control planes; this allows for multi-control-plane clusters to be created.
B: The management control plane doesn't technically need one, but right now every Kubernetes cluster in this world needs an external load balancer, because we don't know if you're going to be adding more control planes or not — so we're just going to have an external load balancer and one control plane. And now this is just waiting for all the internal components to start up. What it's creating inside this container is an entire Kubernetes cluster: it's got the API server, etcd, everything you need for Kubernetes. Once that finished, it goes and gets the latest CRDs.
B: It applies them to the control plane, and now we've got these custom resource definitions — clusters and machines, and some other things that aren't super important; we won't be going over those. Then it creates the Docker provider controller manager, which is the thing that understands what to do with a Machine object. capdctl also does some nice things, like print out YAML for us. So: capdctl cluster defaults — and that's terrible output, so we will pipe that to jq.
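The pipeline just described looks roughly like the following. This is a sketch reconstructed from the demo: the exact capdctl subcommand spelling is an assumption, and both capdctl (from the cluster-api-provider-docker repo) and jq are assumed to be installed.

```shell
# Pretty-print the default Cluster manifest that capdctl emits.
# capdctl prints compact JSON; jq's identity filter (.) re-indents it so
# the object is readable before we decide to apply it.
capdctl cluster defaults | jq .
```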
B: So you can see what it's doing, and you can see here we've got a Cluster object, and we stick it in the default namespace. Other providers provide a lot of information in the provider spec — that's going to be unique to each provider. Docker, however, doesn't need anything, because it's a very simple provider. So we can take that same — let me just get my environment set up on that one — so we can take that same cluster that we just saw, and we can pipe it to kubectl apply, and we can create a cluster.
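Creating the cluster this way can be sketched as a one-liner. Again, the capdctl subcommand is taken from the demo as-is, and kubectl is assumed to already point at the management cluster capdctl created.

```shell
# Pipe the defaulted Cluster object straight into the management cluster.
# Assumes KUBECONFIG points at the capdctl-created management cluster.
capdctl cluster defaults | kubectl apply -f -

# Verify the Cluster object landed in the default namespace.
kubectl get clusters -n default
```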
B: Cluster, external load balancer — so the Cluster object is really just kind of an infrastructure placeholder, and the only infrastructure that's necessary for this provider is an external load balancer that will map to all of the control planes. So now that we have that, we can create something interesting: we can create a control plane, keep it in the same namespace, piped through jq.
B: Here's what it looks like. You can see we've got a Machine, and it has the control-plane set, so that tells Cluster API that we want a control plane machine. So I can run that same command through kubectl, I get a Machine, and if we watch Docker it will — yep — show up here. You can see we're using the kindest/node images. This is the work that — it is pretty much impossible to do this without kind, so we reuse the kindest/node image to create our Kubernetes nodes.
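A control-plane Machine in the Docker provider looked roughly like the sketch below. The `set: controlplane` label and the version fields are reconstructed from the demo and the v1alpha1-era API, so treat the exact field names as assumptions; the node itself runs as a container using a kindest/node image.

```shell
# Print a Machine manifest marked as a control plane. Per the demo, the
# `set: controlplane` label is what tells Cluster API / the Docker provider
# that this machine should host a control plane.
machine=$(cat <<'EOF'
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: my-control-plane-1
  namespace: default
  labels:
    set: controlplane
spec:
  versions:
    kubelet: v1.15.0
    controlPlane: v1.15.0
EOF
)
echo "$machine"
```

Piping this through `kubectl apply -f -` against the management cluster is what makes the new node container appear in `docker ps`.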
B: So we have this up and running on my machine. If you check out the port mappings, you'll see that the load balancer maps properly — or, yeah, well, internal configuration, anyway. Now we want to create some workers, because we have a control plane. We don't have to do workers — we could do more control planes, it doesn't really matter.
B: Maybe my-control-plane-2 and my-control-plane-3, and it would just spin up those control planes, so you'll just see, as time passes, that these two spin up and the cluster is sitting around running. You can make multiple clusters, and you can just delete the whole thing — like cleaning up your Docker — by just deleting all the Docker containers running on your host.
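The teardown mentioned here is just Docker-level cleanup. A blunt sketch — note this removes every container on the host, not only the ones the provider created, so it only makes sense on a throwaway Docker host:

```shell
# Tear down everything the Docker provider created by force-removing ALL
# containers on the host, matching the "just delete all the docker
# containers" approach from the demo. Destructive by design.
docker rm -f $(docker ps -aq)
```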
B: So this has helped a lot for testing Cluster API. And we have weekly meetings — Cluster API has weekly meetings on Wednesdays at 1:00 p.m. Eastern.
A: All right then, moving on. Thank you, Chuck, that was an awesome demo. We now have the release updates for 1.16. I will change hats and put on my release lead shadow hat, because Lachie unfortunately couldn't make it. So, on July 16th we released the alpha one release — 1.16.0-alpha.1 managed to get cut. It is not looking great as far as CI is concerned, but we're also very early in the release, and we know that, so we're moving towards fixing a lot of our CI signal. Big reminder: July 30th is enhancements freeze.
A: And that is more or less where we're at as far as 1.16 and release team info in general. So, unfortunately, due to OSCON, we have quite a bit of a drought as far as contributor tips of the week and KEPs of the week go, so I wanted to get us right into the SIG updates. First up we have SIG Azure.
D: That is me sharing my screen — I'm probably quitting Slack first. Yep, all right. So, hello everyone, I'm Stephen Augustus; I am one of the SIG Azure chairs. I wanted to give a quick update, and also note that on this slide deck you'll see that it is the final SIG Azure update — I'll go into that a little bit. So, a quick overview on what we did last cycle: we're continuing to work on our Azure out-of-tree provider.
D: Things are moving well there; we're bolstering the testing as well as the documentation around some of that. There are some remaining things that we have to concern ourselves with: API throttling for the in-tree credential provider, and some of the storage-related things for Azure File and Azure Disk that we need to look at to finally move out of tree. So I'm thinking — I'm projecting — 1.18, if everything goes well. We have also been contributing to the discussions on the SIG Cloud Provider consolidation efforts.
D: So the idea here, if you have not had a chance to see the proposal that was presented: we basically got together on a Google Doc, as well as a few cloud provider meetings and the mailing list, to discuss what it would look like to take all of the existing cloud provider SIGs and fold them under SIG Cloud Provider. That proposal has since been approved, both by all of the SIG chairs and by Steering, so we're moving forward on that stuff. We've actually had the PR out this week.
D: So thank you to Andrew Sy Kim for working to consolidate all of that stuff — the initial administrivia of changing the sigs.yaml file and changing Slack channels over — so you'll see more details of that coming soon. We're just, I think, internally making sure we have all of our ducks in a row. What will be happening there is: SIG Azure itself will be folding under SIG Cloud Provider, and the current Azure sub-projects will become sub-projects of SIG Cloud Provider as well.
D: So, we're continuing work on Cluster API Azure. If you were here for Chuck's demo just now, this is one of the multiple provider implementations of Cluster API, so you can check that out. And finally, we have a new SIG chair — well, in light of the recent news about us dissolving, this will no longer be true for long, but welcome to Craig Peters; he is the shortest-lived chair ever. He is an upstream-focused program manager at Microsoft; he works with Lachie, has been on the release team, and is active in SIG PM as well. We're really glad to have him on the team, working to move forward some of the Azure sub-projects. So, the plans for the upcoming cycle — 1.16, specifically: as always, we're going to be continuing work on getting our in-tree provider out of tree, so continuing with the docs and testing; that will remain in beta phase for 1.16.
D: There are some dependencies that we have in-tree for the Azure credential provider, so we'll be trying to decouple some of that and hopefully pull our provider-specific implementation of the credential provider out of tree. Then the Azure availability zones: there is a KEP for this, which we've moved under SIG Cloud Provider at this point — it's under sig-cloud-provider, azure, in kubernetes/enhancements, so you'll be able to find these KEPs there. So, Azure availability zones: we're moving to GA in 1.16. I think the feedback has been generally good on that.
D: On the Cluster API side, we're working with the teams at Microsoft — we did some planning, I think immediately after KubeCon, at Microsoft, to talk about some of the efforts for the second half of the year. So one of the big things will be VMSS integration — virtual machine scale sets within Cluster API Azure.
D: There is an active proposal for that, which we're discussing with the Cluster API group overall. What we think will end up happening is an initial mock of what this can look like — the idea of bringing auto-scaling groups into Cluster API — within the Cluster API Azure provider first, and what that will probably end up being is generalized, so that we can fit it into whether it be ASGs or VMSS or MIGs — figuring out a solution that will work for multiple cloud providers. So that's some of the work that will be happening.
D: So, as I mentioned with the cloud provider consolidation, there are some — if you want to check out the PR for what's happening, it's over here: #3895 in the community repo. There's some administrivia to be done, whether it be migrating the mailing lists, Zoom, Google Groups, and also spinning up the Azure user group.
D: So I think this may be one of the first times we've done a consolidation like this for the community, so this is going to be a combination of figuring out what assets to move, as well as walking through the steps for turning down a SIG. There's going to be some overlap between what we do and don't do, and I think we're all discovering that as a group. So, the things that we need from you: people who are interested in trying out Cluster API for Azure —
D: Please do. I'm one of the primary maintainers for that, and I'm always curious to see more people get active there, now that, over the last few cycles, we've moved it into an actually usable state. So please feel free to ask any questions, either here, on the cluster-api-azure channel we have on Slack, or feel free to shout questions into the SIG Azure mailing list. In addition to that, we're always looking for more contributors to the out-of-tree provider efforts.
D: So if you're interested in that, please let me know, and we can point you in the right direction. So, where to find us — this will be less true soon, once the consolidation is complete — but the chairs again: myself — you know, I'm a cloud native architect at VMware; Craig Peters, open-source program manager at Microsoft; Khaled is one of the principal software engineers at Microsoft; and Pengfei is one of the senior software engineers at Microsoft. Our homepage is the sig-azure page in the kubernetes/community repo, and we're on Slack.
E: There we are. So, this is our SIG Release update for July 2019. We last did an update, I believe, in January, but since then — over the 1.15 cycle — the release team did quite a bit of work, kind of ongoing sort of stuff. This is one of the main areas that we talk about in SIG Release, but it's only a portion of what we do. So, they're continuing to work on bringing more community members through the process.
E: We've got a pretty well-established shadow process there to bring in new people, and what that does is constantly give us a new set of eyes to look at gaps in documentation and automation, and folks have been doing cool stuff there. Two particular things to note from the prior cycle: the release team's test infrastructure role, which was mostly about some manual tasks for branch creation and test management — that has been completely automated away.
E: So a big thank you to Katherine Berry on that. And then, as hopefully people have seen at this point, a new release notes website is out there. Thank you to Jeff, on the call today, for helping build that out, along with the team of others within the release notes portion of the release team. That's a really cool change, because we've had a lot of feedback from folks that the changelog is this sixty-page document that's impossible to read, no matter which way we slice and dice it and try and bin it.
E: There was a single individual who, for nine months, ran that branch — managed the content, managed getting the builds out on something of a schedule — and we worried that that wasn't sustainable. So we've been growing that into a team and getting a better-documented process around how that team collaborates on the content for the branches, produces the builds, and gets them out, again on, hopefully, a predictable schedule for our community. And then we kicked off our release engineering sub-project. Next up: the upcoming cycle —
E: We're going to be doing a bunch around that release engineering sub-project. This is in conjunction with the Working Group K8s Infrastructure, and it's about really reworking our release tooling to be more sustainable, maintainable, automated, and community-based. A lot of the core of these things is historical infrastructure and automation that had been set up by Google, and we're bridging that out into the CNCF and working on improving aspects of it. One of the first things there is doing an audit of what we actually publish today, and there's a link in the document there.
E: If you're curious, a key deliverable is going to be getting cleaned-up package repositories. Right now, the way we do our RPMs and debs breaks our users, and we need to do better and split it out into something much more correct. What that will give us is alpha, beta, and RC releases that are actually consumable from an RPM and deb standpoint — for kubeadm, for example.
E: Also in the upcoming cycle, the release team is going to continue to work on improving the release process. We've got things around better establishing what the release-blocking criteria are — documenting and advertising that, and making sure it's reflected in our Testgrid dashboard layouts. Our branch manager role has shifted into a release managers team — I'll talk about that a bit more later. And, kind of an outcome of the scalability issue that we saw: we've had some meetings with Scalability, and are now in conversations about how we can more tightly interlock and avoid these issues.
D: Yeah, I think we've already made some nice progress with that. We met with Scalability last week and kind of walked through some immediate action items: making sure that they're in milestone maintainers, making sure that they're aware of the release cycle schedule, things like that, as well as, as Tim was mentioning, firming up the release-blocking criteria and what it means to be release-informing. So I sent an email summarizing some of that stuff, as well as the meeting notes, to kubernetes-dev.
D: So you can go ahead and check that out. Also, one of the things that we noticed is that some of the failing tests are not actively owned. This is kind of a tangential thing that we found as a result of having this discussion with Scalability, and we will be moving things that are currently in informing or blocking that are not owned out into a new sig-release orphaned jobs board. Great.
E: So, how this affects you: we've got a brainstorming document where we're trying to collect what it is we're looking to do there, and that's sort of ahead of a newer, better KEP describing what we want for a release process and getting it towards an implementable state. But, as mentioned: better repository layouts for the RPMs and debs; adding artifacts, or getting packages, for alpha, beta, and RC, so those are usable; and probably removing some artifacts — there are a number of things that are a little odd in there, where we're like, why is it this way?
E: One of the things that we discovered is that a lot of this stuff is just there for reasons, but then also, as we try to clean it up, we run into other reasons it's there, and we have to do this relatively deliberately and safely, so we don't outright break things. And a really nice final piece of this will be the artifact signing and the final step of publication — today a Google-owned process — becoming a community-owned process.

D: So yeah, within each of these steps we kind of break out into, you know, the ideas of hosting of artifacts, and management of the build and release and test — a kind of lifecycle of release engineering for the project overall. So what we want to do is tie all these threads together in an omnibus KEP, right? This omnibus KEP is going to reference —
D: — you know, our expectation of the state of releasing Kubernetes — I guess, are we calling it the Kubernetes distribution yet, Tim? Maybe not — but tying these threads together and then linking out to the individual KEPs. So we have KEP links a little further on in our presentation that will show you our talks about artifact generation, as well as management, image promotion, different things like that.
E: So, how this affects you: as we start making those changes to the publication process, we'd love to get folks involved in trialing those and letting us know how it works, so we can iterate on that — and do so deliberately, so that we're not destroying the existing process upon which people depend, but standing up something next to it that is hopefully better over time. As always, we really need folks to give attention to CI signal.
E: Keep working on de-flaking tests; as Stephen mentioned, making sure that tests have ownership, that somebody's there to get notified of a failure, and that the failures are acted on promptly — we on the release team really need tests to stay green. And then, also, be mindful of the schedule: we're starting the 1.16 cycle, so there's a link to the schedule dates; they were mentioned earlier in this call, and I'm going to mention them again in two more slides. It's important that we have people aware of the schedule and not getting caught short.
D: Just to pop back over to the keeping-tests-green thing — I want to hone in on the importance of this. Our tooling — actually, the first thing that we do when we stage a release: there is a set of functions in our tooling that will look for a green build, right? It will look for the dashboard — it will look for a dashboard that has the first green build, and it will try to choose that. So when we have no green builds, the initial step of us staging a release will fail.
E: So, the big team sub-project that we are always talking about: the release team. This is basically stuff that I've already spoken to, or that we've spoken to earlier in the meeting, more generally, in just a normal weekly update on the dates, but just once more, so it's there in front of your eyes: the end of this month, enhancements freeze; code freeze coming at the end of August; and our target release is mid-September.
D: That is linked there. So, something that I realized we didn't put on the slide, but that I really wanted to shout out in terms of building the release team: over the last few cycles, we have introduced a role called the emeritus advisor to the release team, right? The emeritus advisor is a former release team member who is responsible for doing some of the work that provides this idea of continuity for the release team as a sub-project, right?
D: You know, we're looking at a team that is about 46% either non-white-male or underrepresented groups. This is something that we specifically focused on in shadow selection, and it's something that we continue to do. We want to see Kubernetes be this global community; we want this opportunity for contribution within Kubernetes.
D: So, what we saw, as Tim mentioned: the patch release team — or the patch release manager, initially; I think that was maybe three or four cycles back that we still had just a single patch release manager — well, one of the first steps that we took here was to create a team around patch release, right? So now — we have a Slack channel, release-management — it's very interesting to see that we kind of have this, like — it's almost a follow-the-sun situation happening.
D: If we start off a release, we may see Pengfei pick it up from Shanghai, and then it may end up with Hannes, who is, I think, based in London, and I may touch something on the US Eastern side, and then it may shift over to US Pacific. So it's really interesting to see how we've gone from a single person who has to handle this for the span of a cycle over to a team that can kind of carry the ball forward —
D: — if we run into issues, you know, or if someone needs a break, right? So we wanted to do something similar with the patch release — with the branch managers — by pulling them out of the release team and more closely aligning them with the idea of release engineering for Kubernetes, right? So, you know, part of the reason for this is getting to a place where we can —
D: — we can actually have a team that is aligned on a specific effort, that effort being paying down the technical debt that we've accrued over the last few years in Kubernetes, in terms of, like, how do we build, test, release — all the things that Tim was talking about a little earlier, right? So the former branch manager shadows of the release team have transitioned over to a release managers associates role. Normally we take in about three to four shadows for each release team role.
D: We decided with this group that it would be nice to build a very large team, to be able to drive down some of this tech debt, right? So we pulled — I believe it's somewhere between 22 and 28 of the people who applied for branch manager shadow this time — and took thirteen of them for the release managers associates role. So the idea is that, you know, moving forward —
D: — we want a team that is kind of a sister org to the Product Security Committee, right? And the reason for that is, one, we want to essentially stand on the shoulders of giants — make sure that we're adopting some of the processes that they've already developed to move people through their contributor ladder, right? So, find a way to — you know, often someone will participate on the release team and move from —
D: — you know, shadow to a role lead, and then maybe question mark afterwards, right? But we want to see release manager associates become branch managers, become patch release team members, right? We want to build that contributor ladder and encourage people to stay around, because we're kind of giving them the keys to the kingdom, right? So, you know, of these roles: the patch release team is responsible for cutting patch releases of Kubernetes, right?
D: The branch managers are responsible for all of the things that go into, one, maintaining the release branches and the associated test-infra and CI signal bits for that, as well as cutting a Kubernetes minor release, right? The release manager associates are people who will essentially be shadowing the branch managers, as well as being mentored by both the branch managers and the patch release team. Build admins are, you know —
D: As Tim mentioned, there are certain buttons that we cannot push — there are certain buttons that can only be pushed by Googlers right now, so the build admins are a set of those folks. We noticed in the documentation there were lots of places where it said either "ping Caleb" or "ping Sumi" on Slack, and we said: we want this to be a process that happens in the open —
D: — an engineering process, as well as making sure that we're abiding by the security embargo policies that are put forth by the Product Security Committee. So that's essentially the idea that we're going to move forward on with the release managers group, starting in 1.16 — that work has already started.
D: We also recorded that, so there's part one and part two in the 1.16.0-alpha.1 announcement, and you can check out exactly how a release of Kubernetes is cut, if you want. So, as we described, there's lots of work to happen in this area, and I think the summary of that is the bash-on-fire emoji, right? There are a lot of things that we've written in bash — you know, anago, one of our primary release —
D: — tools, is eighteen hundred lines — 1,819 lines — of bash, right? And that loops into a bunch of other bash libraries that call GitHub and other release functions. So we want to get to a point where we're starting to consolidate some of the functions that happen in there, be able to rewrite them, and eventually destroy all of the bash that is around, right?
D: So if you want to check out a fun issue of all the things that can go wrong when you make changes to bash, feel free to check out kubernetes/release#816. I do some analysis of the test jobs — of things that fail in master-blocking — and the reasons for them. That's still active analysis going on, but it's definitely a fun issue; I had a lot of fun trying to debug some of that stuff, or understanding things that I didn't before. Tim, do you wanna —?
E: And a couple of related things. So, the standard template has us mentioning working groups: SIG Release is one of the sponsors of the long-term support working group, which is looking at what we do for our support stance on the project and whether there are ways to improve on that. I would kind of defer to them, but there's a link to their SIG information. A couple of things going on over the last quarter: a few proposal documents have bubbled up on what it might look like.
E: Yeah, we got a really good spread of responses back, and we've been trying to collate and statistically kind of analyze and bin and figure out what statements we can make from the community in terms of what was expressed for support desires, and then that also is going to feed back into those proposal documents. And then, maybe less LTS-working-group-specific, but there has been a project-wide emphasis —
E: — that's ramping, I feel like, around promotion of key APIs beyond unstable, and also improving conformance. I just want to mention that there, because that's critical work and ties into how we release a quality product, or project. And then, not related to SIG Release directly, but the working group around K8s infrastructure is something that especially our release engineering sub-project is working with closely, and that stuff is really starting to get some momentum, I feel like, over the last month or so — that's a really positive thing. How you can contribute:
E: We've mentioned release engineering repeatedly — join our SIG Release meetings to catch the ongoing discussions of that stuff, and also the Working Group K8s Infra Slack channel, where discussions go on. For licensing, as mentioned, Nikhita is the person to reach out to. And our release teams are always looking for new and experienced collaborators; we have the shadow process that is really designed to bring in, lift up, and help people learn about the project more broadly and find areas to increase their contributions.
E: We've obviously just started the 1.16 cycle, so its membership is set, but we're only two months away from the next one starting. So if you're interested in doing that, feel free to lurk around the 1.16 team, or at least pay attention to what's going on and how it's happening, and if you're interested — if you see opportunity — then when the release team opens up for 1.17, throw your name in the hat. We would love to have more folks.
D: A lot of things are happening right now, and I need to step back and focus primarily on the release engineering efforts, because I think that's probably one of the bigger-impact things in the project right now, so I asked her to step up as the primary sub-project owner for licensing. So thank you, Nikhita, for doing that.
E: Places to find us: yep, we have chairs — we're named there; we've got a page in community, like everybody else, with a README; we've got our charter out there; we've got a Slack channel; we've got a mailing list; and we have bi-weekly meetings — the minutes and agendas are linked off of our home page there, and everything gets uploaded to YouTube. So, like everybody, we've got all our artifacts out there, so you can see what we're up to and hopefully get involved. Thank you, everyone.
A: Thank you for the awesome updates. I realize that was a bit of a longer one, but I also knew you had a lot to talk about, and we had a little bit of time. Thank you. Yeah — with that, we have reached the end of our agenda. If anyone has anything they want to talk about for the next few minutes, we have open mic... going once... going twice. I give everyone 13 minutes of their life back. Thank you for coming; I hope it was enlightening and awesome. Everyone have a happy Thursday, and keep on being awesome, people.