From YouTube: Kubernetes Community Meeting 20190418
Description
We have a PUBLIC and RECORDED weekly meeting every Thursday at 10am PT.
See: https://github.com/kubernetes/community/blob/master/events/community-meeting.md for more details.
A
Hello everyone, welcome to the Kubernetes community meeting. Today is April 18th, 2019. My name is Bob Killen, and I am filling in for Paris Pittman today, who unfortunately is a bit under the weather at the moment. I am a research cloud administrator for the University of Michigan in the wonderful mitten state, and this is my first time actually hosting the weekly community meeting, so I apologize for any flubs.
A
We have a relatively light schedule today. First up is a demo of Tekton, a Kubernetes-native pipeline resource, from Dan Lorenc, and then the weekly release update from our wonderful 1.15 release lead, Claire Laurence. We also have some awesome news in the ops area for Slack management from Katharine Berry, and we have SIG updates from Azure and Release. Before we start diving into the demo, let's make sure...
A
Dan doesn't seem to be on yet, but before we start diving into this: please remember to mute yourself if you're not speaking, and as the stream will be posted to YouTube, I just want to make sure that we all adhere to the code of conduct and, in general, just be awesome to each other. Next: Dan, are you here?
B
Well, I've got a few quick slides just to explain before I jump into the demo, so that hopefully you have a little bit more context, and then I'll jump in. So today I'm going to be talking about Tekton. Tekton is a set of Kubernetes-style CRDs and APIs for declaring CI/CD pipelines. The idea here is that we're going to be able to define CI/CD pipelines that run on Kubernetes and are declared with Kubernetes-style APIs, but that can build and deploy any kind of artifacts to any kind of environment.
B
The vision here is that we want Tekton pipelines to be used to build CI/CD systems, such as Jenkins themselves; they're kind of the building blocks for that. The main goals that we started out with are composable, declarative, reproducible, and cloud native CI/CD pipelines. Composable means that we can build up and share steps that can be used in CI/CD pipelines, use them separately, and combine them in different ways.
B
Declarative means taking advantage of the Kubernetes-style APIs: instead of a whole bunch of untyped bash scripts strung together that copy things around implicitly, we have typed components here that declare exactly what they're going to do, with what, and what they're going to produce. Reproducible is the same idea: these things all run inside of containers and inside of declaratively provisioned Kubernetes resources.
B
So these pipelines should be reproducible, or at least as reproducible as possible, and cloud native is kind of a way to summarize all of those. As for who's been working on this: Tekton has been going on for about six months now, with contributors from a whole bunch of familiar names. It started out as a collaboration inside of Knative, actually, with Google obviously, Red Hat, IBM, and many more. I'll also be showing off a new Tekton dashboard.
B
Thanks so much to the folks who put that together for today. We do try to be as new-contributor friendly as possible, so if you're interested in getting started, you can check out our GitHub repo. Last two slides, and then I promise I'll switch to YAML and bash, which is what everybody really came here to see. The first kind of building block in Tekton pipelines is the Task CRD. It's a CRD inside of Tekton, and it encapsulates a sequence of steps.
B
Each one of these steps is a container image, and they all run in sequential order inside of a Kubernetes pod. So these steps can all communicate via a shared file system inside of that pod, and it can all run on one node inside of the cluster. The big benefit of Tasks is that we have declared inputs and outputs: in addition to declaring these steps and what happens inside of them, the Task declares what parameters it takes as inputs and what parameters it produces as outputs, and these are typed, too.
B
So if a Task requires a container image, that's declared as an input, and that's one of the types that we support. You can use this to build up slightly more type-safe CI/CD pipelines instead of just passing everything around as strings, and the same goes for outputs. If a Task takes a Dockerfile and source code and produces a container image that can then be deployed somewhere, it declares all of that, so we can put these things together safely.
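As a concrete illustration of those typed inputs and outputs, here is a minimal sketch of a Task in the v1alpha1 API Tekton used at the time; the names (build-and-push, source, builtImage) are hypothetical, and field details varied between early releases:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: build-and-push            # hypothetical name
spec:
  inputs:
    resources:
      - name: source              # typed input: a git repository
        type: git
    params:
      - name: pathToDockerfile    # plain string parameter
        default: /workspace/source/Dockerfile
  outputs:
    resources:
      - name: builtImage          # typed output: a container image
        type: image
  steps:                          # each step is a container; steps run in order in one pod
    - name: build
      image: gcr.io/kaniko-project/executor
      args:
        - --dockerfile=${inputs.params.pathToDockerfile}
        - --destination=${outputs.resources.builtImage.url}
```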
B
That's
for
the
next
piece
comes
in
the
pipeline,
C
or
D,
so
the
task
C
or
D
is
one
set
of
steps
with
declared
inputs
and
outputs.
A
pipeline
builds
on
that
to
express
a
graph
of
these
tasks,
and
since
these
things
are
typed
in
has
inputs
and
outputs,
we
can
kind
of
build
up
this
dag
automatically
and
execute
things
in
parallel
across
an
entire
cluster.
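A sketch of a Pipeline wiring two such Tasks together, again with hypothetical names; depending on the early release, ordering edges were expressed with runAfter or with from clauses on resources:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: build-test-deploy         # hypothetical name
spec:
  resources:
    - name: source                # declared once, bound at run time
      type: git
  tasks:
    - name: unit-tests
      taskRef:
        name: unit-tests          # a Task object created separately
      resources:
        inputs:
          - name: source
            resource: source
    - name: build-and-push
      taskRef:
        name: build-and-push      # the Task sketched above
      runAfter:
        - unit-tests              # an edge in the DAG
      resources:
        inputs:
          - name: source
            resource: source
```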
B
All
right.
So,
let's
jump
to
the
demo
here.
So
this
is
dashboard
showing
some
of
the
pipeline
runs
that
we've
created
and
let's
pull
up
some
code
here.
This
does
this
one
takes
a
little
while
to
run
so
I'll.
Kick
this
off
so
in
the
dashboard
and
then
explain
all
the
different
pieces
inside
of
it.
But
this
is
a
pretty
simple
pipeline.
It
builds
a
couple
different
microservices
from
one
github
repo
run.
Some
tests
creates
a
container
in
the
news
and
then
deploys
these
to
or
through
the
Nettie's
cluster.
No
first,
let's
create
this.
B
This creates the pipeline run itself. Great, so we can see it inside of the dashboard: it should be kicked off, and we can see that it's started and not everything is finished. While that's running, I'll walk through all the different pieces here. The Tasks I talked about before are the composable and reusable pieces with inputs and outputs, and this file contains all of them. The first one here runs unit tests; the name of the task is unit-tests.
B
The app we're building is a sample app that we use for Skaffold, and it's written in Go, so this runs a basic set of Go tests. After that finishes, we build and push a couple of images. This is a task to build and push images using Kaniko, and the inputs are typed here: we need a git repository for this task to work, a path to a Dockerfile, and a path to the build context. Anyone who has done Docker builds should recognize these. There's nothing in here specific about what we're building; this task is meant to be reusable.
B
It's the same set of parameters: both of these build-and-push tasks take the same interface, the same set of parameters coming in, so they can be swapped out easily depending on which implementation you want. We have another one here that deploys using kubectl apply, a slightly more complicated set of steps that templates in some of the images that we just built and does the final deploy. So these tasks are just kind of templates. You create them, and nothing actually happens when they get created inside of your cluster.
B
They just become available for use inside of a pipeline, or available to be executed. Here's the pipeline that stitches all of these together. We can see the tasks run in this order: first the unit tests, then we build our two microservices that are inside of the same repo, and then we deploy to our cluster. Again, just like Tasks, this Pipeline gets created and it doesn't do anything; it just sits in the cluster and waits to be executed.
B
To actually execute something, we have to bind it to the set of resources we're going to run it against and then create a run. Notice again here that the pipeline needs a set of resources, but these are meant to be reusable, so it is not coupled to any specific git repository or any Kubernetes cluster. The same pipeline can be shared and reused across teams, or even across different companies; these are meant to be stored in some kind of catalog and reused that way. Any git repository and any place to store containers can be used with this pipeline.
B
The way those bindings are expressed is with a set of PipelineResources, so these also get created. This specifies which GitHub repository we actually want to use and where we want to store our container images. The combination of these resources and the pipeline definition itself can be bound and run, and that's how the PipelineRun works here; this is what I created inside of our cluster. It specifies a couple of different things: which service account to run as, which resources we want to use for this invocation, and so on.
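Sketched in the same v1alpha1 style, the binding he describes might look like the following: a PipelineResource naming a concrete repository, and a PipelineRun binding it, plus a service account, to the Pipeline. The URL and names here are hypothetical:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: my-app-source
spec:
  type: git
  params:
    - name: url
      value: https://github.com/example/my-app    # hypothetical repo
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  name: build-test-deploy-run-1
spec:
  pipelineRef:
    name: build-test-deploy       # the Pipeline sketched earlier
  serviceAccount: demo-sa         # which service account to run as (hypothetical)
  resources:
    - name: source                # bind the Pipeline's "source" slot...
      resourceRef:
        name: my-app-source       # ...to this concrete repository
```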
B
We should see that everything has finished up here now when I refresh. Awesome. So it built the app; one piece looks like it's still running. It looks like it built the web app, deployed the web app, and ran our unit tests. We should be able to see some of the logs from this one here; it completed, and here are the logs from that run. Looks like this one is still going.
B
We can't see the logs for that one since it's about to finish, but just to show off the composability piece that I mentioned before, now that it's finished I can go and switch that one out for the Docker one.
B
We can see the run that was kicked off: one of these is going to build with Kaniko and the other one is going to build with Docker, and since they both produce container images, they can both be deployed the same way later on in the pipeline. So that's it for the demo I had today. If you have any questions, you can find all the documentation about how to set up and use Tekton.
A
B
Cool, yeah, great question. One of the challenges in this space is that there are a whole bunch of CI/CD tools, and they use a whole bunch of different names to express the same things. In this world, a Jenkins pipeline, which we can express inside of a Jenkinsfile, is more closely related to, say, a Tekton Task, and those normally execute all in the same environment, though they do have some support for spreading things across environments.
B
The actual goal of Tekton is to be a set of primitives and lower-level building blocks that things like Jenkins pipelines, and Jenkins itself, can use to express themselves. So we are actually working with a whole bunch of folks at CloudBees to translate Jenkins pipelines to Tekton CRDs, so you can automatically schedule them across a Kubernetes cluster. Awesome, yeah, there's another question; it looks like it's about whether they can be scheduled, or whether webhooks are supported.
B
So the goal of Tekton pipelines is to be decoupled from how they are invoked. As you probably saw, the pipeline does not know how it was run or why it was run; it just needs the parameters passed into it to execute. We don't have any automatic support yet for scheduling or for invoking it through webhooks.
A
C
So as a reminder, here is what needs to happen by enhancements freeze: if you have an issue that you're planning to introduce or graduate in 1.15, by enhancements freeze it needs to have a KEP in an implementable state, and it needs to have an issue open in the kubernetes/enhancements repo. Some quick stats on how enhancements are looking for 1.15: right now we're tracking 34, though the majority of those, I believe, are alpha; we have 18 alpha enhancements targeted right now.
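For anyone unfamiliar with the mechanics: a KEP is a markdown document with YAML front matter, and the status field is what the enhancements freeze checks. A minimal, purely illustrative sketch, with all values hypothetical:

```yaml
---
title: My Example Enhancement
kep-number: 0000                  # hypothetical
owning-sig: sig-example           # hypothetical
authors:
  - "@example-contributor"        # hypothetical
status: implementable             # must be implementable by enhancements freeze
---
```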
C
Additionally, for those who are going to be at KubeCon in Barcelona, we're going to have a face-to-face release team meeting on the Monday of contributor day, and I believe our timeslot starts at 9:00 a.m., so if anyone is interested in joining us, make sure to add that to your schedule at KubeCon. Oh, sorry, it's 10:00 a.m.; thank you for correcting me. And that is my update for the day. Any questions on how 1.15 is going?
A
D
Mm, okay. So, continuing our crusade to put everything in the world in git: we have put the Slack configuration in git using something called Tempelis.
D
This basically enables us to define all of our Slack channels, all of our Slack user groups, and so on, as YAML files in a git repo. The practical upshot of this is that it is possible for anyone to make a PR against this repo and thereby have channels or user groups created. So, for example, if you make a PR adding some channels to the channels file, you will find out as soon as it merges that we actually create that channel for you automatically.
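As a rough illustration of the kind of PR she describes, assuming the general shape of the Tempelis channel config (the authoritative schema lives with the slack-infra tooling in the kubernetes/community repo):

```yaml
# channels.yaml (illustrative; consult the real slack-infra config for exact fields)
channels:
  - name: sig-example             # an existing channel
  - name: my-new-channel          # added in a PR; created automatically once it merges
```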
D
So this is now the third way for people to request channels and also to request user groups. On top of that, it's also possible for us to delegate responsibility for channels or user groups with certain descriptions or names to SIGs or other groups. So you can request that we do that if you have a SIG or similar group that requires a bunch of channels or user groups that it can manage itself. And that is about it.
A
D
E
So, for what it's worth, I feel like this is different from the mode of operation we have for our GitHub org and GitHub configuration stuff, where the bots will revert changes that people manually make. If you manually create a team, or you manually add humans to a team, or manually remove humans from a team, the bots will undo that, because what is in the repo is what needs to be. So yeah, Joe's suggestion of maybe renaming things would be a non-destructive way of signaling.
F
So I put up a KEP of the week. It is a KEP from SIG Cluster Lifecycle. Sorry, can everyone hear me okay? Okay, so it's a KEP from SIG Cluster Lifecycle: kubeadm is getting ready, they're starting to assess what the v1beta2 spec for kubeadm configs will be. That KEP is currently work-in-progress and provisional, but it looks like there's some nice meat on it already. If people have questions, comments, or concerns, I would direct them to the link that's in the agenda. That's all I got.
F
Right, so I'm going for brief updates all the way down today. All right, here we go; you can see the screen? Okay, let me make the little guys disappear. All right. So, what we did last cycle: we've been working together with SIG Cloud Provider to start, well, continue, doing testing around the out-of-tree cloud providers, so moving Azure out of tree: driving down each of the little things that are counted as dependencies for the in-tree cloud provider and starting to move that out.
F
There
are
continued
discussions
around.
You
know
something
that
kind
of
started.
I
want
to
say:
middle
was
last
year,
but
has
been
talked
about
for
quite
some
time
at
this
point,
which
is
the
consolidation
of
the
sig
of
each
of
the
cloud
provider
SIG's
under
as
sub-projects
force
a
cloud
provider.
So
that's
starting
to
happen,
the
you
know
lots
of
questions
around.
F
F
So, Cluster API: the idea that you can define clusters and machines and things like that, machine sets and machine deployments, within Kubernetes and have Kubernetes deploy itself; a very, very meta thing. Finally, there's an implementation of that for Azure, and I'm the primary maintainer of it. So if people are interested in contributing, please let me know; please feel free to kick the tires on it and file bugs, and hop into the cluster-api-azure channel on Slack.
F
Don't quote me on that, but I think we have some deprecation notices up for the in-tree cloud providers in general; the target is around 1.18. So: continuing to work on producing KEPs around what documentation should look like and how we structure provider configs, specifically for Azure, as well as continued testing, primarily end-to-end testing, so that we can ensure that everything looks the same as the in-tree provider. So, continuing the prep work around the SIG Cloud Provider consolidation.
F
So everything there: whether or not there are chair changes, updates to the projects, figuring out, again, tracking the dependencies, what happens when we change repo names, what happens when we fold them under different orgs. From the Microsoft side, there's heavy work around OPA (Open Policy Agent) and Gatekeeper, the Gatekeeper pattern, so expect to see some more of that in the 1.15 and 1.16 cycles, as well as integration between those tools and both the cloud provider and Cluster API implementations.
F
Kal mentioned to me yesterday that there's going to be more testing around large-scale clusters in Azure, so five hundred, a thousand, five thousand nodes, that scale. We would expect to see more details on that within the next few cycles, as well as a production-ready Cluster API implementation. We've gotten past the point where, when I picked up the project, the question was kind of "can we build from master?" and the answer was no.
F
You know, outside of the migration into kubernetes-sigs, and hats off to the people at Platform9 and Microsoft who started the initial implementation, we've since done a massive refactor. It builds from master now. There are lots of little things that we're talking about, like bastion hosts, how do we do HA, how do we make sure things are secured by default; so everything that people would start to consider for a production-ready implementation will be happening within the next few cycles.
F
Again, if you have any questions, feel free to reach out to me, or feel free to hop into the cluster-api-azure channel. Things we need from you: more contributors in general, I think, both on the cloud provider side and the Cluster API Azure side. On the cloud provider side, the people who are working on it are primarily from Microsoft.
F
I would love to see more contributors and people testing out Cluster API Azure; that one is near and dear to my heart. We have onboarded more contributors on the Microsoft side, and we're starting to chat with Red Hat about some of this stuff as well. So, overall, anyone who is interested, please, again, feel free to reach out. How can you reach out? Our chairs for the lovely SIG Azure are myself and Dave Strebel from Microsoft, and the technical leads are Kal and Pengfei from Microsoft.
F
One of the interesting things that we did, and hat tip to Aaron for running into every SIG, kind of, as he describes it, like a bull in a china shop, was making KEPs a requirement for entry into kubernetes/enhancements. I think that has driven a lot more activity around the kubernetes/enhancements repo in general. That's part of what we did.
F
We also shored up some of the KEP submission process: release team members are working on having complete enhancement issues, and we added a release team checklist to the KEP template. So there are improvements to the KEP template, including a release team checklist, which basically says: hey, do you have an enhancement issue? Do you know where your docs are? Do you have a test plan? Do you understand what your graduation criteria are? Right?
F
Things like that make it dreadfully simple for the release team to understand at a glance what's happening with these enhancements, and they also improve the visibility for people who are, say, passersby to the process and want to understand what these enhancements look like. So that happened last cycle. We also introduced a questionnaire for the release team shadow process. If you're familiar with the release team: what we do is we have a release team lead.
F
They also have a set of release team lead shadows, and then each of the individual release team roles also has shadows attached to it. What we wanted to make sure of: classically, what we've done is, I have been, like, the release team HR person for the last few cycles. Essentially, a GitHub issue opens...
F
We track all of the people who are interested in leading or shadowing, and it becomes like a first-come, first-served process, right? And that's not necessarily the best way to put together a team. So, moving forward, what we wanted to do is essentially try to create a set of criteria that defines what a good release team member looks like, or better define the process around how you would become a great candidate for the release team, all right?
F
So we created an initial questionnaire for the 1.14 cycle, which we have drastically improved on since; hat tip to you, Josh Berkus, hats off to Jim Angel, and to everyone else who was involved in improving the questionnaire for this cycle. I think that there will be additional improvements.
F
If you've noticed, in the release team table within each of the releases, the patch release role is no longer on it, and the reason for that is that we now have a patch release team. The idea is that no longer will we have a single person responsible for cutting patches for the entirety of the support cycle for a release; we have moved that responsibility into a team, right?
F
How do we track licenses, the licenses for our dependencies, to make sure that we're compliant with the CNCF and with Linux Foundation requirements? So we're starting to look at how we can introduce automation into that scheme. Right now that is myself, Steve Winslow from the LF, Nikhita, and Dims working on that project. If licensing excites you and legalities excite you, please let us know; we'd love to onboard more people for that.
F
So, what we're planning for the next cycle: we need to continue evolving the enhancements tracking process, and we need to work closely with SIG PM to do that. As someone who straddles both SIGs, this is something that is near and dear to me. I think that, you know, over the last few cycles, and by few I mean the last million cycles...
F
We have used a spreadsheet to track enhancements, and it's been a lot of manual work to get that done; there's no guarantee that anything will be in sync at any one time. It's really on the release team to ensure that that happens, and I think that we can apply more process and automation around that. So that is something that we want to try to execute on within the next two cycles. We're also continuing to staff the release engineering and licensing sub-projects.
F
So if the idea of releasing Kubernetes, helping to build the process, and also cranking the levers that are required to build Kubernetes is exciting to you: we're working on policies, essentially similar to what the Product Security Committee does, for a shadowing and review process through which we can onboard more people, because these are roles that require a heavy amount of access to the Kubernetes org structure.
F
So we want to make sure that the people who hold these keys are the right people, that they're properly trained, and that they can execute when leads are not available. We also need to continue improving the feedback loop for KEPs, along with SIG PM. Part of that, and I'm looking at my schedule right now for when we can do this, is that what I would like to do is essentially a KEP retrospective as an addendum to the 1.14 retrospective.
F
What we would do there is essentially target some of the SIGs that have high traffic, KEP-wise and enhancements-wise, and go to each of those SIG meetings to talk about what we can improve in the process. Over the last cycle we improved the template overall, including the release team checklist.
F
What I'd like to do is speak to the people who are submitting these day-to-day; so, kind of, again straddling the SIG Release and SIG PM stuff, I think Aaron had suggested the idea of KEP hours, where you can either have someone review the form and function of your KEPs, or continue to discuss how to push a KEP along; if something isn't in the right form, how do I fix it, right? So that is where that stands right now.
F
I think we've done a pretty great job at defining what it is to be on the release team and some of the requirements, through constant revision of the release team role handbooks, as well as revision of the process overall. Every cycle we have the opportunity to look back at the process and see what works and what doesn't.
F
So, one of the things that we introduced this cycle was the idea of an emeritus advisor for the release team; that is currently Josh Berkus. What he's helping to do in the background is ensuring that shadows have what they need, and ensuring that the shadow selection process moves forward in an appropriate manner.
F
What we need to do is establish concrete membership criteria for the patch release and branch release management groups; really defining, again, as I was mentioning, the reviewer-and-apprentice process for moving someone from being just interested in release management to being someone who is trusted to have the keys to do that release management. A large topic that has come up is: how do we track, or do we track, or do we not track, out-of-tree enhancements, right?
F
What happens is that when a release is published, you'll see a major themes section of the changelog, which includes lots of updates from different SIGs. Personally, I think those updates are something that has classically happened but is not appropriate for the changelog for kubernetes/kubernetes; I think that should only include changes to kubernetes/kubernetes, right? So how do we still provide the visibility that a SIG would get out of being part of that changelog?
F
I think the answer is some different presentation mechanism, right? So figuring out what that is, and how we can track it, or how SIGs can feel enabled to present those changes across the release cycle, is something that we want to drive down within the next cycle. Okay, and here's a big one. These two are high-value targets; we've been talking about them over the last few days, especially with regards to some of the binary confusion that happened during the most recent patches and the 1.14 release cycle.
F
We want to establish support policies for release artifacts. You know, what are we willing to support, especially as everyone who is on board for supporting or producing release artifacts is part of a volunteer army, right? We have to establish guidelines for what support looks like for Kubernetes overall, and in that, we're working with the working group, the k8s-infra team, on creating a visible, community-centric artifacts and release management process.
F
What that means, concretely, is that there are some keys in the release process that belong to specific companies. You know, part of the foundation of the k8s-infra effort was to bring all of the infrastructure that we maintain into the community. So this is essentially a corollary to that: the artifacts that we produce and manage for each of the release cycles should be something that is driven by the community, right?
F
So how do we start to do that? There are some issues up that I can start to link; actually, it's in the slides, and I can attach them to the meeting agenda later, but those conversations are happening in the background. I remember Aaron had put up an issue about follow-ups on our SIG Release charter, and I think that we're right around the time...
F
It's been a few quarters now, right around the time to reassess what the SIG charter looks like, and now that some of these teams, licensing and release engineering, have started to spin up, we can better detail what the in-scope and out-of-scope items for SIG Release are. And then, finally, we want to build more process around what it looks like to do org-wide license management and automation. Currently, there are kind of two streams that happen for license management.
F
One stream is that Steve Winslow uses FOSSology, then creates a bunch of spreadsheets and tries to dedupe and understand the dependencies that are in scope or in violation of our policies. What we'd like to move to is FOSSA. FOSSA is on in the background, kind of doing quiet scanning of each of the repos within the Kubernetes orgs. What we would like to eventually get to is being able to do pre-submit checks for licensing across repos.
A
F
For sure, for sure. So yeah, that is one of the eventual goals. There are a bunch of umbrella issues around license management that I am working on collating, but you will also see a license management automation sub-project meeting spin up soon, so that'll be exciting. Again, anyone who's interested in that stuff...
F
Please feel free to reach out to me. Some related KEPs about what I was talking about: artifact management, package generation, and package publishing. And then, finally, there has been an effort from yuwenma to rebase each of the underlying Kubernetes images on distroless, that is, distroless static or distroless base. If you want to read more about that effort, there have been a few emails sent out to kubernetes-dev, SIG Cluster Lifecycle, SIG Release, and SIG Cloud Provider, as well as this KEP that you can check out.
F
A
We did actually have another SIG in between, SIG Big Data, but for those that might not be paying attention to some news in the space: SIG Big Data has now been converted to a user group. Whether they chime in on these community meetings and some other stuff is still sort of TBD at the moment, and they didn't really have anything to report today as it is. I think that takes us next to the announcements I had.
E
A quick question on Stephen's update earlier, on the topic of tracking things that are out of tree. The rubric that I tell people to use is: SIG Release only cares about what is in the Kubernetes release, and the only thing that is in a Kubernetes release is code that lives in a repo called kubernetes/kubernetes. Code that, for whatever reasons, lives outside of that does not land in the Kubernetes release.
E
So that is what we should be tracking. But we all know that eventually we do want to live in a world where, for example, cloud providers run out of tree instead of in tree, and I feel like we are eventually going to have to answer the question of what a Kubernetes release is and how we create it from in-tree and out-of-tree components. I feel like that's a long-running discussion, and something that hasn't even been started.
E
G
This is an architecture decision, and I think we've already started approaching this with discussions around, like, what happens when kubectl actually gets extracted at some point in the future. So we've already sort of looked at that, but in my mind, I think this lands on SIG Architecture, okay.
F
So, for the purposes of SIG Release at least, I think that there is enough burden on the release team in terms of tracking in-tree Kubernetes. If we're talking about what a release cycle is, it is kubernetes/kubernetes, strictly kubernetes/kubernetes, and I think only the things that are in kubernetes/kubernetes should be tracked and managed by the release team.
F
I think, overall, we need a process for presenting the work that has been done by other SIGs throughout the release cycle, because the release cycle is tied to our communication train, right: the blog posts, the webinars, all the things that happen there. So we need a different presentation mechanism for people to say those words, right? It's classically been the changelog and the release theme that happens for each release cycle, but that needs to change.
H
E
In particular, let's not blow by that. On that front, I feel like the packaging proposals, the KEPs that Stephen linked, need to take that into account. I have heard that there is much frustration over the current Debian and RPM packages that are provided by this project, and I think we all agree we're not doing a great job of it. However, different distros have different ways of declaring and scoping and maintaining their dependencies, and there's a real uncertainty about, you know...
E
Where does the line of liability stop for this project with respect to its dependencies, including all of the code in vendor/ as well as all the other dependencies that make up the Kubernetes release? And again, I feel like the packages and artifacts that were described were all in the context of SIG Release; that needs to be a SIG Architecture discussion, because we're talking about dependencies. So be it; this was a good forum to elevate that, but it is still an ongoing conversation, yet to be answered or even started.
H
E
Because, from a k8s infrastructure perspective, I would be very welcoming and supportive of anything that allows us to further empower the Kubernetes community to produce its own releases effectively. So, as far as I know, there is nothing blocking people from working on the KEPs that Stephen put out there, and I look forward to continued progress on this.
A
Cool, okay. The only real announcement we have is the reminder of the contributor summit at KubeCon Barcelona. Just as a reminder, you must register through the separate contributor summit registration site. Also, as another reminder, there is no real structured content for current contributors outside of explicit SIG face-to-face meetings. Right now we have a kubebuilder sub-project meeting, a release team meeting, and then face-to-face sessions for SIG CLI, SIG Cloud Provider, SIG Cluster Lifecycle, SIG IBM Cloud, SIG Network, SIG PM, SIG Scheduling, SIG UI, SIG VMware, and SIG Windows.
A
There is no other content besides that, well, that and the social on the day before. If you are interested or have questions about that, feel free to hop into the contributor-summit Slack channel or send an email to community@kubernetes.io for info, and that's it regarding the contributor summit. Then I have shout-outs: Lucky would like to give hodgepodge and lava-lamp a shout-out for providing awesome SIG updates during last week's community meeting.