From YouTube: Kubernetes Community Meeting 20190613
Description
We have a PUBLIC and RECORDED weekly meeting every Thursday at 10am PT.
See: https://github.com/kubernetes/community/blob/master/events/community-meeting.md for more details.
A: Hello and welcome to the weekly Kubernetes community meeting. Today is July... actually, it's not July, it is June; it is June 13th, 2019, and it is a Thursday. I'm coming to you live from my traveling handbag. So thank you all for joining us. Today we have a full agenda, and before we get into it, let's go through a couple of housekeeping items. Just a friendly reminder that this meeting is recorded and is currently being streamed to YouTube, so all conversations fall under the Kubernetes code of conduct, which says: be excellent to each other. Keep that in mind. A couple of other housekeeping items we want to bring up: if you're not speaking, please mute, to make sure that the people who are speaking can be heard. And one final call is for a note-taker; does anybody want to volunteer to take notes? I will also go grab the link and put it in the chat (fantastic) so that you can follow along with today's agenda as well. That brings us to the first part of the agenda, which is the demo.
B: Thank you, Lachie; excited to be here. Let me share my screen, and I'm going to go into the slides. Like you mentioned, what we've been working on, and what we're excited to present today, is a Kubernetes-native policy management tool. I'll break this down and explain what we mean by that, but just quickly to introduce myself: I'm one of the co-founders at Nirmata, and some of the other folks from our team who have been working on this are Shuting, Shiv, and Dennis.
B: So if you go to our GitHub repo and file a comment or ask questions, or post in our forum, one of us will be the one answering and interacting. So, breaking down what we mean by Kubernetes-native policy management: there are other policy management tools out there, of course, and we've interacted with and looked at some of these. But what we wanted, and what our customers were asking for, was something where these policies became native in Kubernetes.
B: One of the features we're working on is events to log policy enforcement, so resource owners can see what has been happening and easily tell when a policy has been applied. And of course, we want Kyverno to work well with all the other Kubernetes configuration management tools that already exist and that folks are familiar with. Just briefly, on policies: of course, it's a fairly generic term, and everyone pretty much knows what it means, but in this context what we're talking about is configuration that's required.
B: Typically, this is configuration set by a cluster operator or cluster admin, and they want it to apply; so it's something that needs to be enforced, by definition. It's not the same as using Kustomize or other configuration management tools, where we're creating variations. Here, this is configuration which is required by cluster operators for governance, for security reasons, etc.
B: So what can Kyverno do? The basic features, and one of the things we also saw lacking in some of the other tools out there: not only did we want to validate configurations, we also wanted to mutate configurations, that is, change or set specific things in configurations as they're being accepted into the cluster. And then, for namespaces, one important thing is to be able to generate a set of defaults. So the three features we're supporting are validate, mutate, and generate. Validate is perhaps one of the most interesting, so we'll take a look at that in the demo more deeply. But just very briefly, what is a policy? It's a set of ordered rules.
B: Each rule has a name and a message, and rules can match one or more resources. Kyverno is very flexible in matching resources, based on the kind, the name, or label selectors, and we support wildcarding. Then each rule will have either a generate block, a mutate block, or a validate block. Going a little bit deeper, here's an example of a full policy. In this policy (this is straight from our docs, so it's annotated with some comments) you see we have a single rule. It matches Deployments, StatefulSets, and DaemonSets; the selector is matched by labels, and of course you can also use expressions; and then the rule has the logic to do the validation, mutation, or generation of new configurations.
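(For readers following along, here is a minimal sketch of the kind of policy being described. The field names follow Kyverno's early policy CRD as presented in the talk; names like "resource" and the apiVersion are assumptions that may differ between releases.)

    # A hypothetical Kyverno policy with a single rule (schema approximate).
    apiVersion: kyverno.io/v1alpha1     # assumed group/version from the era of this demo
    kind: Policy
    metadata:
      name: check-app-label
    spec:
      rules:
        - name: require-app-label       # each rule has a name
          resource:                     # what the rule matches: kinds, names, selectors
            kinds:
              - Deployment
              - StatefulSet
              - DaemonSet
          validate:                     # each rule carries a validate, mutate, or generate block
            message: "The label 'app' is required"
            pattern:
              metadata:
                labels:
                  app: "?*"             # wildcard pattern: at least one character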
B
So,
let's,
let's
briefly,
look
at
some
of
the
features
for
each
of
these
and
then
now
I'll
show
a
quick
demo
of
how
this
works
in
action.
So
here's
a
validate
block
and
it's
basically
very
simple
you're
all
we're
saying
is:
we
are
expecting
a
requiring
a
label
called
app
to
be
configured
and,
as
you
see
here,
in
addition
to
an
overlay
style
yamo,
what
we're
also
you
know
supporting
in
Kippur
now
is
the
ability
use.
You
know,
patterns
like
wildcard
patterns,
so
zero
or
more
alphanumeric
characters.
B: One or more is the question mark, and then of course there are also operators for arithmetic or other logical expressions that you want to build. Mutate blocks are a little bit more complex, and we support two different styles here in Kyverno. One is a JSON patch, which lets you make very precise updates to configurations as required.
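(A sketch of the JSON-patch style of mutation just described; the rule shown is illustrative, with the patch entries following the usual RFC 6902 op/path/value form.)

    # Hypothetical mutate rule: add a label to matching Deployments at admission time.
    apiVersion: kyverno.io/v1alpha1     # assumed, as above
    kind: Policy
    metadata:
      name: add-managed-by-label
    spec:
      rules:
        - name: add-managed-by-label
          resource:
            kinds:
              - Deployment
          mutate:
            patches:                    # JSON patch style: precise, targeted updates
              - path: "/spec/template/metadata/labels/managed-by"
                op: add
                value: "kyverno"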
B: So here's where, if I have a new namespace, I want to have a deny-all NetworkPolicy by default. It's pretty simple to do: I can just inline that, or we also support a copy-from in the rules, and the generate does the rest.
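(A sketch of the generate case being described: a default deny-all NetworkPolicy created in every new namespace. Field names are assumptions based on the talk.)

    # Hypothetical generate rule: when a Namespace is created, generate a NetworkPolicy inside it.
    apiVersion: kyverno.io/v1alpha1     # assumed, as above
    kind: Policy
    metadata:
      name: default-deny
    spec:
      rules:
        - name: deny-all-traffic
          resource:
            kinds:
              - Namespace
          generate:
            kind: NetworkPolicy
            name: deny-all
            data:                       # inlined resource to generate (copy-from is the alternative)
              spec:
                podSelector: {}         # empty selector: applies to all pods in the namespace
                policyTypes:
                  - Ingress
                  - Egress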
B: So with that (here are some of the features on our roadmap), let me switch to the live demo so we can see this. I'm going to go to my command line, but also work just from the docs itself.
B: What I'll do is pull the install manifest and install Kyverno to start out with, and from there we'll very quickly look at some examples of policies. I'm going to go to 'getting started', and here I'm going to use this one to install the CRD. So now I see it pulled down the CRD, installed it, and created the namespace. If you get pods in the kyverno namespace, we should see the controller running. Okay.
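(A rough reconstruction of the commands used in the demo; the manifest path is a placeholder rather than the actual URL shown on screen.)

    kubectl create -f <install.yaml from the Kyverno repo>   # installs the CRD, namespace, and controller
    kubectl -n kyverno get pods                              # verify the policy controller is running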
B: I have some resources over here already, and I'm going to show what these policies look like. Here I have some policies already defined (checking labels, checking the namespace, checking for root) and then there's also one resource we'll use as a test example: a deployment with nginx. So let's go ahead and import the policies; I'm going to say create -f,
B: and we'll go ahead and import all the policies. Now what I'm going to try and do is take the test resource and just apply it. Immediately, what I'll see, because of the policies that we specified, is some successes and one failure. And if you look at my YAML, what I already have is this 'owner' label.
B: So actually, let's remove that, just as an example. Because I have a policy which checks that a label 'owner' is required, if I remove it and rerun the import, it's going to tell me that now that policy failed as well. If we put back that label, then the other thing the policy is asking for is the namespace; so really that's all we need to do to make this pass.
B: And now it passes all the policies. Certainly that's a very, very brief demo; there are a lot of other examples and documents available. If you go to our GitHub repo we have some links, and of course they're in the community doc as well. Just to quickly mention some of the things we're working on before we go to the 1.0 release: we're working on events and policy violations.
B: With events, you'll have events on each resource where policies have been applied, so you can just look at them with kubectl describe. Violations will also show up, and the idea there is to check existing resources: if you do go ahead and change a policy, it will scan existing resources and tell you what the violations are. And then we're also working on a CLI to be able to do out-of-band policy testing before you commit your changes.
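(Hypothetically, once this events feature lands, inspecting a resource the usual way would surface them; policy events and violations would appear in the standard Events section.)

    kubectl describe deployment nginx    # look under Events for applied policies and violations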
A: Okay, fantastic. Thank you very much for that demo, Jim. It was fantastic, very informative and insightful, and we look forward to taking a look. The links are posted, and the deck is in the agenda, along with the link to the repo that he has on screen now. Thank you, Jim. That moves us to the next section of our agenda, which is the release update section. Do we have Claire on the line to provide a release update?
C: Thanks. Yes, we are in, hopefully, the last week of the 1.15 release cycle. This week we had our first RC cut, so that should be available for anyone to test and play with. We also hit our docs final-PR milestone, so all of our documentation is getting finished up and ready. We're wrapping up all of the release notes as well, and getting interviews with the media lined up for next week.
D: Hello, hello. So there are two parts to the contributor tip of the week. The first one is the emeritus field. I'm not sure if anyone has seen this going around, but we opened it up so that in OWNERS files you can actually set emeritus approvers. What that does is allow someone to gracefully step down: they can still be a point of reference, but they cannot actually approve code, and they will not be auto-assigned, even though they may still have in-depth knowledge of the code base.
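(A minimal OWNERS file sketch showing the field in question; the usernames are placeholders.)

    # OWNERS
    approvers:
      - active-maintainer
    emeritus_approvers:
      - former-maintainer    # still a point of reference, but cannot approve and won't be auto-assigned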
D: The second thing, as part of the release team and SIG Release: we are actually going to start putting release notes into a new dedicated website. We will link it in the notes and in the stream, but it is relnotes.k8s.io. For 1.15 we're going to keep the entire changelog in the markdown file and also have the relnotes site live, and then eventually start moving more and more things into the relnotes site.
E: I just shared; can you see that? (Yes, we see it. Thank you.) Hi, I'm Steve Wong, co-chair of SIG VMware, which is charged with the cloud provider for running Kubernetes on top of the vSphere hypervisor, as well as supporting other VMware infrastructure that might come into play when running Kubernetes.
E: We updated the out-of-tree cloud provider; we're now up to version 0.2.0. It's beta, but as far as we know it is now stable, so if you were to contemplate using it for something you care about, it's not insane. The last update had some CSI-integration-related changes. I think this is true of all out-of-tree cloud providers, that they're moving to use the out-of-tree CSI storage as well. And we've got test improvements compared to the prior version.
E: Finally, on the Cluster API front: we've had a working Cluster API provider for vSphere, but a decision was made to align it with the Cluster API provider for AWS. The last release was March 29, but there's activity going on. So for upcoming cycles, we're planning on bringing that out-of-tree cloud provider to a stable release, and we've got upcoming updates to the CSI driver for vSphere storage. Some of these are probably distant, meaning second half of the year, but we're intending to pursue the snapshot features that are underway in SIG Storage.
E: We think that we'll move this up to a beta release in the next couple of weeks, and in the queue we've got support planned for resizing of already-provisioned volumes, snapshot support, implementing a plugin for the Velero open-source backup project, supporting ReadWriteMany, and volume cloning. As I already mentioned on the Cluster API front, we're planning on aligning more closely with the AWS Cluster API provider. Oops, something happened there.
E: Okay, related KEPs: I'm not going to read these to you, but go back in this deck and you can find the KEPs for the background on this activity that's going on. Now, related working group status: we've got two working groups under the SIG right now, one for the cloud provider, and at those cloud provider meetings we're also covering the CSI storage driver. It meets the first Wednesday of every month; the recorded meetings are up on YouTube, and the notes are there at that link.
E: Similarly, for the Cluster API provider, those meetings are every two weeks; the next one is June 7th. We've got meeting notes online as well as YouTube recordings of the meetings. We've got one issue that's tagged as Help Wanted, and it's here if anyone is interested. As for where to find the SIG overall: our meeting is Thursdays at 11:00 a.m., right after today's meeting, and we've got links there.
E: The time frame is second half of the year. Another thing I want to mention is that those working groups, folded under cloud provider, will cover developer activity; but in the VMware SIG we are fielding user requests for support and guidance, and we're intending to try to get some users to put forth a proposal to start what might be an inaugural user group, to pick up, in the form of a user group, some of the activity that has been going on underneath the VMware SIG. Anyway, that's it. Any questions?
F: Thank you. So, here's the SIG Apps update. The first update is that we have a new chair, and it's me; I'm going to be the new SIG Apps chair. Then, what we did last cycle: we added some more support for PDBs (PodDisruptionBudgets). PDBs now support custom resources, and PDBs are now mutable; this will be merged into 1.15.
F: Before this, we knew that a mutable PDB might have a race condition with eviction being called, but then we figured that users can already easily delete and recreate PDBs. So, to make it easier for users, we decided to relax the immutability.
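(For context, a minimal PodDisruptionBudget under the policy/v1beta1 API current at the time; with immutability relaxed, the spec can be edited in place instead of deleting and recreating the object.)

    apiVersion: policy/v1beta1
    kind: PodDisruptionBudget
    metadata:
      name: my-app-pdb
    spec:
      minAvailable: 2          # now editable in place as of 1.15
      selector:
        matchLabels:
          app: my-app          # can also target custom resources that implement the scale subresource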
F: So now you can modify PDBs. And then we recently found an issue, introduced in 1.12: an incorrect defaulting of a field in the pod spec, which is part of the pod template, causes unexpected rollouts after you upgrade to 1.12. So take a look at the fix; it has been cherry-picked into that release and all the following releases. Before you upgrade, just make sure you have the new fix. Then, on our ongoing work:
F: Most of the workloads APIs are GA; the remaining one is CronJob, and we're moving it to GA. The PDB, the PodDisruptionBudget API that I just mentioned, is currently beta, and we're moving it to GA as well. And then we have several interesting KEPs; for example, sidecar containers. If you are familiar with it, it's a sidecar container concept where you can run a sidecar container alongside the main container, in the same pod, for the same job.
F: But then an issue that we found is that when you are running a Job with a sidecar container in it, when the pod's main container terminates, the sidecar won't be terminated with it. So there's a fix proposed for that,
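(A sketch of the situation being described: a Job pod whose sidecar keeps running after the main container exits, so the pod, and hence the Job, never completes. Names and images are placeholders.)

    apiVersion: v1
    kind: Pod
    metadata:
      name: job-pod-with-sidecar
    spec:
      restartPolicy: Never
      containers:
        - name: main                 # the Job's actual work; exits when done
          image: busybox
          command: ["sh", "-c", "echo working; sleep 10"]
        - name: log-shipper          # sidecar: today it keeps running after 'main' exits
          image: busybox
          command: ["sh", "-c", "tail -f /dev/null"]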
F: and also a KEP for stateful volume expansion, so with this you can resize the volumes of your StatefulSet; then maxUnavailable for StatefulSets; and the last one is stateful application data management.
F: Data management is a proposal for introducing a bunch of new CRDs for you to do data management for your stateful applications. For example, you can snapshot your apps and then restore them. You can go to those KEPs to find out more, and I'll update these slides to include the links to those KEPs. Then, for CronJob to GA:
F: The API is pretty much stable, and we're just trying to update the controller to make it more scalable, because right now it's not using the informer framework like other controllers; it has to do a list call to the API server to list all the jobs, which is not very efficient if you have a lot of CronJobs or Jobs in your cluster. So we're trying to improve that and then take CronJob to GA. And then the application controller is another subproject that we have in SIG Apps.
F: You can find a link here. It's about a CRD that groups a bunch of resources together, as a unit, for you to act on; for example, for garbage collection, or to show them in a UI or in a CLI. You can also use the controller to get the status of the whole app, and it supports adoption using the label selector feature.
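(A sketch based on the kubernetes-sigs/application project's v1beta1 API; the exact field names, such as componentKinds, are assumptions that may have changed since.)

    apiVersion: app.k8s.io/v1beta1
    kind: Application
    metadata:
      name: my-app
    spec:
      selector:                      # adoption: groups resources sharing these labels
        matchLabels:
          app.kubernetes.io/name: my-app
      componentKinds:                # the resource kinds grouped under this application
        - group: apps
          kind: Deployment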
F: And this is the last slide: how you can contribute. We are looking for people to help us with our GA plans. For the CronJob-to-GA plan we have a KEP out, and we're still looking for people to help us implement it, as well as maxUnavailable for StatefulSets and the StatefulSet volume expansion.
A: Moving on to the announcement section of the agenda. I'm going to go through all the announcements that we have and then open it up at the end, if there's anything not written down that people would like to add. First announcement: congrats to Bob Killen. Is Bob on the line? Congrats, Bob, on joining the GitHub admin team; that is from Aaron Crickenberger. Thank you, Aaron, for calling that out. And thanks; yeah, go ahead.

G: (inaudible)
A: Thank you; thank you, Aaron, and thank you, Jeff. Office hours are next week, and there is a link to the live stream. Click the bell for a reminder so that it comes up on your YouTube when the live stream is live, and help by retweeting; there's a link in there that you can retweet, so help raise awareness that the live stream for office hours is going on. And we're looking for a West Coast streamer so that we can do a western session; ping Jorge Castro.
A: There is lots of great feedback where people are congratulating each other for what they've been doing in the community, and I'm going to call out some highlights. From @vincepri: huge shout-out to dhellmann and dwat for taking an hour-plus today to give great feedback about the Cluster API bootstrap proposal and helping move the project forward. Thank you. From @aojea: big shout-out to BenTheElder for having a working IPv6 CI in kind. Thank you.
A: From @jberkus: huge shout-out to Katharine for automating a release team role out of existence, plus all the other test-infra folks who helped. So thank you, Katharine, and thanks, Josh, for calling that out. And from @detiber: shout-out to justinsb for cutting the cluster-api v0.1.2 bugfix release. Thank you for your hard work there. That brings us to the end of our stated agenda. I will now open up the floor: do we have any other announcements that need to be made?