From YouTube: Kubernetes Community Meeting 20200220
Description
The Kubernetes community meeting is intended to provide a holistic overview of community activities, critical release information, and governance updates. It also provides a forum for discussion of project-level concerns that might need a wider audience than a single special interest group (SIG).
See this page for more information! https://github.com/kubernetes/community/blob/master/events/community-meeting.md
Like what you see here? Continue the conversation on https://discuss.kubernetes.io
A: So please be mindful that what you say is being recorded. We have a code of conduct, so be excellent to each other. About myself: this is Vamshi, and I work for American Airlines. For the last three or four months I've been working in the Kubernetes open source community, and this is my first time hosting this meeting. I work with SIG Release on CI signal — I'm just getting into it. That's about myself.
C: Yeah, sweet, awesome. Hi everyone, my name is Jorge Alarcon. I am the release lead for Kubernetes 1.18, and today I come with a couple of updates. We are currently at week seven out of 12. Right now we have about 50 enhancements being tracked: 18 of them are alpha, 16 beta, and 16 stable, so we have a nice distribution, and hopefully a bunch of really cool features are going to be released.

Along with that, if you go to kubernetes/kubernetes, you should now be able to see the 1.18 release branch, all set up and ready to go. On the CI side, we finally removed the 1.14 CI jobs, and we are going to start looking into 1.18 and the master branch separately as we move forward and prepare for code freeze, which is scheduled for Thursday, March 5. I'm going to stop right there, because I think that is the most important thing I'm going to say today: code freeze is coming. We have a couple more weeks, so please make sure to get all your PRs reviewed and merged. Other than that, a couple of other deadlines are equally important: Monday, March 16 — that's week 11 — docs must be completed and reviewed, and the Kubernetes 1.18.0 release is planned for Tuesday, March 24, in week 12.

Those are the main updates I have from the release team. Also important: last week, I believe, there were some patch releases for Kubernetes, so we now have 1.15.10, 1.16.7, and 1.17.3 available for everyone to consume. Those are my updates.
A: Yeah, thanks Jorge for the update. Let's now roll on to the next segment. As we discussed, we have SIG Windows, SIG Multicluster, and SIG Auth presenting today. Let's start with the first one, SIG Windows: we have Michael, Patrick, and Deep from SIG Windows. Over to you, guys.
D: Thank you. Hi everybody. We'll be providing you with an update on SIG Windows and what we've been doing in the four months since our last community update. On the call we have several members from SIG Windows who will be presenting today: myself, one of the co-chairs; Patrick Lang, also a co-chair of SIG Windows; and Deep Debroy, who is one of our technical leads. Also on the call we have a few more members from our team, like Mark Rossetti and others.
D: The first thing we want to talk about is some of the themes and investments that we'll be making for 1.18. I want to highlight that some of these efforts haven't really just started in 1.18 — we had to start some of these investments one or two releases back, because they are big-chunk items that take a long time to stabilize and bring to fruition. So Deep, do you want to talk a little about Windows identity?
E: Certainly. Thanks, Michael. In the area of Windows identity, we're going to graduate a couple of features from beta to stable. The first one is around Active Directory and a mechanism called Group Managed Service Accounts (GMSA). This allows containers to get their identity from an Active Directory server and basically pass it over to the apps.
E: GMSA has been in beta since about 1.16, and we have had some pretty successful deployments, so we plan to take it to stable in 1.18 and remove the feature flags entirely. runAsUserName is another field in the pod spec that we are planning to take from beta to stable; Mark has been working on that, and it's looking pretty good.
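Both fields mentioned above live under `securityContext.windowsOptions` in the pod spec. A minimal sketch of what a user would write — the credential spec name and image here are illustrative, not from the talk, and assume a GMSACredentialSpec resource already exists:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gmsa-example
spec:
  securityContext:
    windowsOptions:
      # References a GMSACredentialSpec custom resource (illustrative name)
      gmsaCredentialSpecName: example-gmsa
      # Windows user the container processes run as
      runAsUserName: "ContainerUser"
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: app
    image: mcr.microsoft.com/windows/servercore:ltsc2019
```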
D: Thank you. So, we're working with some of the security folks in the Kubernetes community to identify the right safeguards to put in place — think pod security policies, possibly using some admission controllers — and we're also looking to fork wins to reduce some of the functionality that comes out of the box. All of that is going to happen, and it's going to enable us to run kubeadm join workflows without wrapper scripts, and it's going to give us a consistent operating model that's very aligned with how Linux works. You go to the node, you install some services — in this case you're going to install Flannel as your CNI, you can install kube-proxy, you can install the kubelet — and then all you have to do is run kubeadm join. In our case the CNI, Flannel for example, is going to run as a DaemonSet, and so will kube-proxy. The kube-proxy part is a little bit different from Linux, because kube-proxy is installed by kubeadm on Linux; on Windows you have to deploy it as a DaemonSet. We're also working on a wide variety of documentation to help users in this endeavor, and we're hoping that, if this goes well, we'll be able to graduate kubeadm support in the next release or two. We're also looking into upgrade support, so we will enable you to upgrade your nodes from one release of Kubernetes to the next. As a byproduct of this kubeadm work, we're also trying to snap into the Cluster API work, and we're likely going to provide experimental support.
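The DaemonSet pattern described here looks roughly like the following for kube-proxy on Windows — the image reference is a placeholder; the actual manifests are maintained by the SIG:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-proxy-windows
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kube-proxy-windows
  template:
    metadata:
      labels:
        k8s-app: kube-proxy-windows
    spec:
      # Schedule only onto Windows nodes
      nodeSelector:
        kubernetes.io/os: windows
      containers:
      - name: kube-proxy
        image: example.registry/kube-proxy-windows:v1.18  # placeholder image
```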
B: Thank you, yeah. The other thing we've been doing over the last couple of releases is working closely with some of the maintainers of containerd, so that we can provide a path to build nodes that run with CRI containerd instead of Docker. This aligns pretty well with some of the investments the Windows team has been contributing as well, because they've got people working both on the container frameworks and on containerd itself.
B: We've got tests in place running in multiple environments — those are all linked in the slides, and they're up on Testgrid — and the key work needed to enable running containers through CRI is linked in a tracking issue there. It's expected to ship with containerd 1.4, so for 1.18 it's going to be alpha.
E: Thanks, Patrick. One of the major initiatives we are driving around the persistent storage side of things for Windows is developing an entity called the CSI proxy. The idea is that it's a host process or service you spin up within the Windows nodes, so that CSI plugins — which are containerized and deployed for Windows — can make use of the proxy to make privileged calls into the host operating system.
E: The set of APIs we are targeting right now mainly covers operations around disks, volumes, and the remote file system capabilities for Windows. We're also developing support in the Azure Disk and GCE PD CSI drivers — as a side note, those are going to be the first set of CSI plugins we deliver that use the CSI proxy. The CSI proxy work is all being done out of tree, and the idea is to enable the CSI migration initiative to move a lot of the in-tree plugins out of kubernetes/kubernetes and into the individual drivers.
B: So now the limits are honored accurately, even when the system is not under full load. Those are a couple of user-feedback issues we were able to group together and get done for 1.18, and if it makes sense, we may backport those to 1.16 or 1.17 once we get more feedback on them.
B: The other thing I wanted to call out is that, if anyone's been watching the tests, there's been a trend where, when test images are updated, the Windows tests tend to fail for a couple of days. That's because we've got a manual process that we're working on getting automated in partnership with SIG Testing. Going forward, all the test images are something that can be built automatically, but we need help from Testing, Network, Node, Storage, and multiple other SIGs to get the remaining PRs reviewed and merged. We've got a long list here, but we're asking all the SIGs for help looking at these PRs, so that we can get rid of this manual tech debt and make sure tests don't flake because the Windows images are lagging, once all the automation is in place.
B: One thing I wanted to mention briefly: we've also been working on another set of issues, making it easier for people to take existing Windows apps and get their logs out into Kubernetes. There's a Log Monitor tool that was open-sourced by Microsoft, and I've actually got a demo and some more details on how to use it. It helps bridge the gap for applications that were logging to Windows-specific locations rather than standard out, by actually getting those logs to standard out.
D: Absolutely. By the way, on the Log Monitor: it's a super important tool. It basically enables a key aspect of observability for Windows — thank you to Microsoft for open-sourcing it — and a lot of folks in the community are starting to use it. Now, a couple of notables from 1.17, since we haven't had an update in a while: we introduced RuntimeClass support for Windows.
D: That's going to make it a lot simpler for Windows developers to target Windows workloads to Windows nodes. In the past, we relied on taints and tolerations to prevent accidental deployment of Linux workloads on Windows nodes, and that made things a little bit clunky. Now, with RuntimeClass, we made it a lot easier for you to define the aspects of the workload you're running — for example its architecture and OS — put it all in the RuntimeClass, and then in your pod spec all you have to define is the runtimeClassName, and the pod will be scheduled on the right node appropriately.
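Concretely, the cluster operator defines the RuntimeClass once and workloads reference it by name. A sketch — handler and class names here are illustrative:

```yaml
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: windows-2019
handler: docker            # runtime handler on the node (illustrative)
scheduling:
  nodeSelector:
    kubernetes.io/os: windows
    kubernetes.io/arch: amd64
---
apiVersion: v1
kind: Pod
metadata:
  name: windows-app
spec:
  runtimeClassName: windows-2019   # the pod only declares the class name
  containers:
  - name: app
    image: mcr.microsoft.com/windows/servercore:ltsc2019
```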
D: Along the same lines, we also added new labels for Windows nodes. You now have a windows-build label that reflects the major, minor, and build revision of your Windows version, to aid compatibility between Windows workloads and the Windows host OS.
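Combined with the OS label, the build label lets a pod target a compatible host build via a plain node selector — the build value shown is just an example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: build-pinned-app
spec:
  nodeSelector:
    kubernetes.io/os: windows
    # major.minor.build of the Windows host (example value)
    node.kubernetes.io/windows-build: "10.0.17763"
  containers:
  - name: app
    image: mcr.microsoft.com/windows/servercore:ltsc2019
```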
D: In terms of plans for upcoming cycles, we're going to continue some of the major investments we've talked about today: kubeadm as well as Cluster API.
D: That's going to be important for lifecycle management of Windows clusters. We want to continue the CSI work that Deep mentioned, because it's important for enabling Windows workloads to have a multitude of storage options, and we're going to continue investing in CRI containerd and RuntimeClass, to make sure we give our developers an architecture at the runtime level equivalent to what they get with Linux.
D: So thank you to all of the other SIGs that are collaborating with us on all these features; we'll keep at it in the forthcoming releases. On the next slides: you can join our weekly meetings anytime, we have recordings of all those meetings, and we have a prioritized project board if you want to see what we're working on next. If you want to know where to find us, we have links to all of that. Thank you.
F: In our last cycle, we have basically been discussing the future of the SIG and determining areas where we can collaborate. The one I'll direct folks' attention to is the multi-cluster services API proposal; the link is in the slide. And finally, we are seeking folks interested in maintaining KubeFed. The folks that have been working on it have moved on to other things, so there's sort of an open need there.
F: That is, if we want to carry that project on. Could we advance the slide? So, things we need from you: honestly, participation is key to determining what the right problems for the community to solve in this area are. We want to hear what people are looking for the community to solve, and we want to hear what folks may be working on outside the community.
F: One of the things we uncovered with a survey in the last part of 2019 is that some folks had been waiting for X or Y thing from the SIG, and it didn't occur on a time frame that worked for them, so they did their own thing — and we want to hear what those things are.
F: So if you're working in this area outside the community, we'd love to see your demos. We've had a few really interesting demos from projects like that — not necessarily in the community, but open-source projects in that functional area. Could you just move the slide on? So, KubeFed status: as I said, we're seeking maintainers. If you are interested in KubeFed — especially if you are currently using it and are interested in helping to maintain it — please feel welcome to reach out to me.
F: That's pmorie on Slack, and we can talk about what you might be interested in doing and how you can contribute. I should have put more complete information on here; I will update these slides afterwards. We have bi-weekly meetings — you can find the coordinates for those in the community repo. We need your help to decide what problems to solve, show us what you're building, and if you're interested in maintaining KubeFed, please give me a ping on Slack. Thank you.
G: Yep, good, awesome, all right. Well, I'm Mike, I am one of the three chairs of SIG Auth, and I'm here to give you the community update. I have a couple of cool things to highlight from our last cycle. The first one is that we adopted a new subproject into SIG Auth: the Secrets Store CSI driver. It was donated to our organization by Deis Labs, and basically what it is, is a framework for building CSI drivers that integrate with external secret stores. It supports Vault from HashiCorp, and it integrates using the CSI mechanism.
G: So it's an alternative to secret volumes. Kubernetes secrets have some issues that have been pointed out over time, and there are a lot of awesome tools, such as Vault, that support really good management of secrets; we heard that a lot of users wanted a better, deeper integration. So we're hoping this project can serve as a building block for people who want to integrate Kubernetes secret-style volumes with their secret manager of choice.
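A pod consumes the driver as an inline CSI volume; a minimal sketch, assuming a provider configuration object named `example-provider` (an illustrative name) has already been created for the external store:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-external-secrets
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: secrets
      mountPath: /mnt/secrets
      readOnly: true
  volumes:
  - name: secrets
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        # Points at a provider config describing the external store (illustrative)
        secretProviderClass: example-provider
```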
G: We've been doing a lot of work around certificates. The certificates API has been in beta for a really long time — I think it's been in beta for at least three years — and hasn't made much progress, although a lot of people are using it. We want to change that status: we want to migrate it to GA. As part of that work, we retroactively wrote a KEP — I don't even know whether a real design doc ever existed for this API, because it's so old — and we're using that retroactive KEP to organize our journey to GA.
G: Since that retroactive KEP was merged, outlining the current state of the API, we have made some updates to it to include support for multiple signers. This was something certain external projects needed: Jetstack has a project called cert-manager, which a lot of people are using, and they basically had to re-implement the current certificates API using CRDs, because the API kind of assumed a single certificate authority backing it. So we've designed support for multiple signers, and we're actually working on implementing those changes this release. We are also slowly but surely migrating all clients that read certificates from disk to support dynamic rotation of certs. This happens transparently — it's a change to the client library — and what it allows is, for example, if an API server is provisioned with a cert on disk, you can just change that certificate and the API server will periodically reload it from disk.
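Multiple-signer support surfaces as a `signerName` field on CertificateSigningRequest objects, so each request declares which CA should handle it. A sketch — the encoded request payload is elided:

```yaml
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: example-csr
spec:
  # Declares which signer should issue this certificate; built-in
  # signers use the kubernetes.io/ prefix
  signerName: kubernetes.io/kube-apiserver-client
  request: <base64-encoded PKCS#10 CSR>  # elided
  usages:
  - client auth
```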
G: Another big focus over the past six to nine months has been performance. Auth layers sit in front of every single API request, and we've made a bunch of improvements here. Some of the interesting ones: we've had really major speed-ups in our token caching — there's a lot of cool and interesting work that went into that — and we made the node authorizer, which is probably one of the busiest authorizers, much faster, to be able to keep up with high-churn clusters.
G: The node authorizer is responsible for doing fine-grained authorization for nodes, which are probably the most numerous clients in Kubernetes clusters, so it's pretty critical to keep it fast. We've also added a bunch of monitoring on both authentication and authorization — monitoring around latency, around cache performance, and around which authenticators are used — and we've added scalability tests targeting the scalability limits of authentication. I think all these PRs are super interesting to go through to see the work that was done here.
G: We are also discussing ways to better surface constraints, like security profiles for different container runtimes; there's a proposal linked here about what we would define those profiles as and how we would version them. Pod security policy is still kind of in limbo — we haven't fully figured out what we're going to do with it. So, the fourth item:
G: So hopefully these plans only affect you in very good ways — better performance and better security. If they do end up breaking you, please let us know; we want to know. We do not intend to break anyone, and we are very conscious and careful about improving in ways that have minimal negative customer impact.
G: So how can you contribute? Join our meetings: we meet every other week on Wednesdays at 11 a.m. Pacific Standard Time. File bugs if you hit anything — if you have any issues, we always want to hear about them, and when we have time, we set aside some time at the end of every meeting to review that backlog and actually talk them over. Improved monitoring: if there's anything you want to monitor in any of our subprojects, let us know. And send PRs.
G: We are very supportive of any measures to improve reliability, and there's a list of good first issues here. One in particular: we would like all Kubernetes clients to reload tokens so that we can rotate them. There's a list of clients on the Kubernetes clients GitHub board that have yet to switch to reloading tokens, so if you use any of those clients, like Java or Python, please take a look and contribute — that would be great.
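For a client library, the change amounts to re-reading the mounted token file instead of caching it for the life of the process. A minimal sketch of the idea in Python — not the actual client-library code; the path is the conventional in-pod mount, and the refresh interval is arbitrary:

```python
import time

# Conventional in-pod location of the service account token
TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"


class TokenReloader:
    """Returns the current token, re-reading the file once it is stale,
    so rotated tokens are picked up without restarting the client."""

    def __init__(self, path=TOKEN_PATH, max_age_seconds=60):
        self.path = path
        self.max_age = max_age_seconds
        self._token = None
        self._read_at = float("-inf")

    def token(self):
        now = time.monotonic()
        # Reload when we have no token yet or the cached one is too old
        if self._token is None or now - self._read_at > self.max_age:
            with open(self.path) as f:
                self._token = f.read().strip()
            self._read_at = now
        return self._token
```

A client would call `token()` when building each request's Authorization header, rather than reading the file once at startup.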
A: Awesome, thank you, and thanks for that update. Let's move on to the announcements section. For announcements: the schedule has been announced for the contributor summit in Amsterdam, so if anybody is planning to attend and has not yet registered, please go ahead and register. And for next time's SIG updates we have SIG Instrumentation, SIG Storage, Service Catalog, the steering committee, and hopefully the code of conduct committee.
A
That's
it
on
the
shoutouts
and
yeah,
then
this
community
meeting
notes
will
be
posted
on
the
dev
mailing
list.
As
soon
as
the
meeting
is
done,
we'll
post
it
out
and
the
recording
will
be
possible
during
this
I
think
we
have
yeah.
We
have
pretty
much
30
minutes
of
your
time,
just
giving
it
away
back
all
of
you
and
that's
it.
That's
it
from
our
site
for
this
community.
Thank
you
all
for
who
presented
it,
and
thanks
George
and
Laura
for
taking
the
notes
and
thanks
to
all
the
six
that's
presented
today.