From YouTube: Kubernetes Community Meeting 20200116
Description
The Kubernetes community meeting is intended to provide a holistic overview of community activities, critical release information, and governance updates. It also provides a forum for discussion of project-level concerns that might need a wider audience than a single special interest group (SIG).
See this page for more information! https://github.com/kubernetes/community/blob/master/events/community-meeting.md
Like what you see here? Continue the conversation on https://discuss.kubernetes.io
A: All right, so welcome to the Kubernetes monthly community meeting for January 2020. I have a couple of reminders for you before we get started. First things first, we are now being recorded; I believe, I don't see the little recording button, but I am assuming it's working. So just as a reminder, we are being recorded. If you do not want to be on the recording, please do mute your video and your sound, and hopefully I don't sound too much like the corporate policy that I'm quoting. Also, a code of conduct is in effect for this meeting; in short, be awesome to each other. So with that, I do want to say that my name is Laura Santamaria. I am your host for today's community meeting. I'm from LogDNA, where I'm a developer advocate, and I also work with the ContribEx SIG, where we do contributor experience. If someone would be so kind as to be my note-taker, if Google Docs wants to work with us today; we are having a little bit of an issue with Google Docs.
B: I was just dropping a link in the chat. Hey, I'm Bob Killen, mrbobbytables. I am a 1.18 release lead shadow. The big thing to keep in mind with regards to the release is enhancements freeze. It's not too far away, just under two weeks, on January 28th. So now is the time to make sure your KEPs meet all the criteria: they have to be in an implementable state, have a test plan, and, you know, be nice and happy beyond that. I think most of our other deadlines are a bit far out for now.
A: All right, and I believe there were some patch release update notes, so just some notes that were left for us: the 1.17.1 release was out January 14th; the 1.16.5 release is coming today, January 16th; the 1.15.8 release is also coming out today, January 16th; and the 1.14.11 release is coming today, January 16th, to fix an upgrade scenario for 1.15.
A: A series of bugs has been identified in how the next beta tag is applied on these branches; for example, when v1.17.1 is tagged and released, we also mark the branch root with the tag v1.17.2-beta.0. The bugs' root cause goes back many years, in the design and implementation of the tool used to build a release, but they are partially corrected now; a complete fix will likely come later. And the next patch target release is February 11th.
A: So that's about it for the updates from the release team. Thank you so much, Bob, and now we're going to start with our SIG updates. The first person up is SIG Cloud Provider with Walter. Walter, are you on the call?
C: All right, perfect, I gotcha. Awesome. All right, so we promoted the node zone and region topology labels to GA. Personally, I think if I were to do this again, I might do it slightly differently. As a result of this, we had to deprecate the old labels, so you'll notice that failure-domain.beta.kubernetes.io/zone and failure-domain.beta.kubernetes.io/region have been deprecated. The new versions do not contain the term beta, so just an FYI.
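As a concrete illustration of that label migration (this example is mine, not the SIG's; the GA names topology.kubernetes.io/zone and topology.kubernetes.io/region are the documented replacements), a minimal sketch of reading the new label with a fallback to the deprecated one might look like:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// zoneOf prefers the GA topology label and falls back to the deprecated
// beta label while a cluster is still migrating. Illustrative helper only.
func zoneOf(node *v1.Node) string {
	if z, ok := node.Labels["topology.kubernetes.io/zone"]; ok {
		return z // GA label, no "beta" in the name
	}
	// Deprecated as of 1.17; kept here only as a migration fallback.
	return node.Labels["failure-domain.beta.kubernetes.io/zone"]
}

func main() {
	node := &v1.Node{}
	node.Labels = map[string]string{
		"failure-domain.beta.kubernetes.io/zone": "us-east-1a",
	}
	fmt.Println(zoneOf(node)) // prints the beta value until the GA label is set
}
```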
C: We also got the API server network proxy, in conjunction with the SIG API Machinery team, to alpha. The plan is to use this to replace the SSH tunnel code, which was deprecated over a year ago. And we've been working with the SIG Storage team: we have a plan, in conjunction with SIG Storage, to attempt to get the cloud provider extraction and the CSI timelines to align, with both being done in the 1.21 release.
C: So, plans for the upcoming cycles; since these meetings rotate now, you won't be hearing from us again for about six months. We would like to bring the network proxy to GA and remove the SSH tunnels. We would like to extract the cloud provider dependency portion of the credential provider from the kubelet, so we still need to get alignment with SIG Node and SIG Auth for that.
C: We're trying to build a generic controller migration lock mechanism, primarily for HA environments, to allow controllers to migrate from one controller manager to another. Our use case is to go from the KCM to the CCM, that's the kube-controller-manager to the cloud-controller-manager, but I know a lot of cloud providers run additional controller managers, and so we're trying to do it in a generic way that's good for everyone.
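To make the lock idea concrete, here is a rough sketch of the general pattern using the stock client-go leader election machinery; this is an illustration only, not the SIG's actual design, and the Lease name, namespace, and identity below are invented:

```go
package main

import (
	"context"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Both managers would contend for the same Lease; whichever holds it
	// runs the shared controllers, so a controller "migrates" by letting
	// the other manager win the lock. All names here are hypothetical.
	lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
		"kube-system", "controller-migration-lock",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: "cloud-controller-manager"})
	if err != nil {
		panic(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { /* start the migrated controllers */ },
			OnStoppedLeading: func() { /* stop them so the other manager can take over */ },
		},
	})
}
```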
C: With some help from the release team, we built support for cloud-provider-less builds. This is primarily the ability to generate the kube-apiserver, the kube-controller-manager, and the kubelet without any cloud provider code in them at all. So as an example, if you don't have the AWS storage mechanisms compiled into something that you've built on top of, you would no longer get complaints that they weren't there in the kube-apiserver.
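Mechanically, this kind of build rests on Go build constraints; Kubernetes gates its in-tree cloud provider code behind a providerless build tag. A toy sketch of the pattern (the package and file here are hypothetical, not the real Kubernetes layout):

```go
// +build !providerless

// File aws.go in a hypothetical cloudproviders package. The build
// constraint above means this file, and any heavyweight cloud SDK
// imports it would pull in, are dropped entirely from binaries built
// with `go build -tags providerless`, which is the essence of a
// cloud-provider-less kube-apiserver or kubelet build.
package cloudproviders

import "fmt"

func init() {
	// This registration side effect only exists in provider-full builds.
	fmt.Println("registering the AWS cloud provider")
}
```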
C: So how do these plans affect you? Well, the big thing is that in the 1.21 release we'd like to be able to remove all of the cloud provider code. One of the effects of that is that most of the e2e tests are built around cloud providers, and when the cloud providers are gone, that means that presubmit signal is going to disappear.
C: So we need to come up with a good plan for how we work in that new world. If there are no cloud provider e2e tests running, or at least they're all running post-submit, then we need to be able to answer questions like: how do we aggregate the test results? How do I identify breaking changes?
C: All right, I've got a list of all the KEPs. Predominantly these KEPs are about doing the cloud provider extraction, so if you have questions on any of the implementation details, I strongly recommend taking a look at them. Most of them are under cloud provider, although network proxy is under API machinery. There are some KEPs coming for the credential provider extraction effort, which I think is on its third run-through with Node and Auth; the CCM migration is a little more solid.
C: On related working group status: cloud provider extraction is the big working group we're running right now; we're trying to hit the 1.21 effort. The CCM migration is green. The network proxy right now is green. The credential provider I marked as yellow, just because we still haven't finalized an alpha version of the KEP and we're running out of time.
C: How can you contribute? The best thing to do is probably to go to the cloud provider issues. If you have a particular interest in the API server network proxy, that has its own repo, and I definitely would recommend taking a look at that; it's kind of specialized networking, but if you're interested, please help us out. Other than that, we are happy to help teach you about the cloud providers: cloud provider issues, cloud provider interfaces, and how we try to maintain uniformity of cloud providers for Kubernetes. Andrew Sy Kim and myself are monitoring Slack.
C
Please
just
come
in
ask
we
have
a
very
welcoming
group.
We
have
a
lot
of
other
people,
Chris
ho
you're,
seeing
Steve
Wong,
who
also
drop
in
and
we're
happy
to
help.
Some
please
reach
out
to
us
and
as
far
as
finding
us
Andrew
is
Andrew
psyche
him
on
just
about
everything.
He
said:
VMware,
I'm,
chef,
taco
and
most
things
I
work
for
Google.
We
have
a
slick
cloud
provider
page.
We
have
three
slack
channels.
We
have
the
general
cloud
providers
like
general.
C
If
your
cloud
provider
questions,
if
you're
interested
in
API
server
network
proxy
go
ahead
and
on
that
slack,
it's
pretty
small
but
very
welcoming
team
and
if
you're
interested
in
anything
to
do
with
helping
get
the
entry
cloud
providers
out
of
tree.
We
have
a
specific
slack
channel
cloud
provider
extraction
for
that.
So
please
reach
out
and
chat
with
us
and
that's
what
I
have
awesome.
A
D: I don't have slides, but I will try to be quick. So, just a quick reminder of what SIG Autoscaling covers: we are responsible for the Horizontal Pod Autoscaler, which adds and removes your pods according to the traffic for a particular app; the Vertical Pod Autoscaler, which makes your pods bigger and smaller depending on their CPU and memory usage; and the Cluster Autoscaler, which adds and removes nodes in your cluster. So, for the last half a year: in Cluster Autoscaler we have been switching to the scheduling framework, and that switch will go live in 1.18.
D: It will improve a lot of corner cases where the Cluster Autoscaler has not been doing a great job, like anti-affinity and some specific storage setups. We also added support for various node conditions, and we improved performance and scalability. We also imported yet another cloud provider, which is Packet. In VPA, we have been mostly working on bug fixes, and we are graduating the API to GA. In HPA, the biggest change is the expansion of the API.
D: We are adding a way to control how fast and how much you scale up and down, and this change should get in by 1.18. Moreover, recently we also added support for scale to zero, which is feature-gated, as it is a breaking change; so if you want to scale your deployment to zero, you need to enable it explicitly. Apart from that, we did quite a number of bug fixes recently, and that will be all from this update.
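For a concrete picture of those two HPA changes, here is a sketch using the autoscaling/v2beta2 Go types as they eventually shipped in 1.18; the numbers are arbitrary, and the scale-to-zero part assumes the cluster has the HPAScaleToZero feature gate enabled (my reading of the KEPs, not a quote from the talk):

```go
package main

import (
	autoscalingv2beta2 "k8s.io/api/autoscaling/v2beta2"
)

func int32p(i int32) *int32 { return &i }

func main() {
	hpa := autoscalingv2beta2.HorizontalPodAutoscaler{}

	// Scale to zero: only honored when the cluster enables the
	// HPAScaleToZero feature gate, since workloads previously could
	// assume at least one replica, making this a breaking change.
	hpa.Spec.MinReplicas = int32p(0)
	hpa.Spec.MaxReplicas = 10

	// The "behavior" expansion: control how fast and how much the HPA
	// scales in each direction.
	hpa.Spec.Behavior = &autoscalingv2beta2.HorizontalPodAutoscalerBehavior{
		ScaleDown: &autoscalingv2beta2.HPAScalingRules{
			StabilizationWindowSeconds: int32p(300), // wait 5 minutes before scaling down
			Policies: []autoscalingv2beta2.HPAScalingPolicy{{
				Type:          autoscalingv2beta2.PodsScalingPolicy,
				Value:         1,  // remove at most one pod...
				PeriodSeconds: 60, // ...per minute
			}},
		},
	}
	_ = hpa
}
```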
E: Yeah, my name is Abdullah, co-chair of SIG Scheduling, and here's our update. So, what we did in our last cycle is the following. The first thing, as some of you might know: we've been working on refactoring the core scheduler around what we call the scheduling framework, which is basically an execution engine with a predefined number of extension points. Those extension points allow you to implement new behavior in the scheduler out of tree; at these extension points you can register plugins.
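For readers who haven't seen the framework, here is a toy out-of-tree Filter plugin sketch; note that the framework's package path moved between releases (it was a v1alpha1 package around the time of this meeting), so treat the import path and exact signature as approximate, and the maintenance label is invented:

```go
package maintenance

import (
	"context"

	v1 "k8s.io/api/core/v1"
	"k8s.io/kubernetes/pkg/scheduler/framework"
)

// NoMaintenance is a toy Filter plugin: it rejects nodes carrying a
// made-up maintenance label. Filter is one extension point; others cover
// scoring, queue sorting, binding, and so on.
type NoMaintenance struct{}

// Name identifies the plugin in the scheduler configuration.
func (NoMaintenance) Name() string { return "NoMaintenance" }

// Filter runs once per candidate node per scheduling attempt.
func (NoMaintenance) Filter(ctx context.Context, state *framework.CycleState,
	pod *v1.Pod, nodeInfo *framework.NodeInfo) *framework.Status {
	if nodeInfo.Node() != nil && nodeInfo.Node().Labels["example.com/maintenance"] == "true" {
		return framework.NewStatus(framework.Unschedulable, "node under maintenance")
	}
	return nil // a nil status means this node passes the filter
}

// Compile-time assertion that the extension point is satisfied.
var _ framework.FilterPlugin = NoMaintenance{}
```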
E: We will have a bunch of plugins implemented at multiple extension points, the filtering as well as the scoring and so on. So, the scheduling framework was proposed about a year ago; in the previous cycle we finished the implementation of the framework itself, but we didn't have any plugins implemented in it. So in this last cycle, what we did is wrap the existing predicates and priorities as plugins. So in the scheduler we are deprecating the concepts of predicates and priorities.
E: They are being transformed into plugins, and so all the predicates and priorities have been wrapped as plugins that are now being executed in the framework. And what we did was maintain the old execution path of predicates and priorities; it's just there as a back-up plan in case things didn't go well. And one more thing we did: we already have what we call the policy API, and it's a v1 API, so we can't remove it. So we built a translation layer.
E: We've also done a bunch of performance improvements. The most important one was improving pod scheduling latency by approximately 2x, excluding the binding, on large-scale clusters. The other improvement was focused on affinity. We know that affinity is not as performant as we would hope for, partially because of the flexible API that we have, where we always need to look at global state to make a decision about where the pod would land; but we've discovered a number of low-hanging fruits.
E: Plus, we changed the data structure that's used to implement this feature a little bit, and managed to get around a 4x improvement in preferred pod affinity. We also spent a lot of effort on improving observability in the scheduler. We added new metrics around scheduling latency, and also traffic, for example how many pods are being queued and dequeued from the scheduling queue, and saturation metrics, for example how many binding goroutines are being spawned at a time, etc.
E: We also have a couple of features that graduated to GA in 1.17. The first is DaemonSet pod scheduling: in the past, the DaemonSet controller was actually doing the scheduling of DaemonSet pods; now the DaemonSet controller is just creating the pod, and the scheduler is the one actually scheduling them. So that makes it a little bit more consistent in how we assign pods to nodes. The other feature is taint nodes by condition; again, this feature graduated to GA.
E: Basically, the node lifecycle controller monitors the nodes and taints them, and then the scheduler decides whether or not to schedule pods on those nodes based on those taints, instead of actually looking at the specific status of the node, like memory pressure, etc.
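In practice, pods interact with these condition taints through ordinary tolerations; a small sketch (the memory-pressure taint key is the real one, the pod itself is made up):

```go
package main

import (
	v1 "k8s.io/api/core/v1"
)

func main() {
	// With taint nodes by condition, the node lifecycle controller turns
	// node conditions into taints, and the scheduler only looks at taints.
	// A pod that can safely land on such a node opts in with a toleration
	// instead of the scheduler special-casing node status fields.
	pod := v1.Pod{}
	pod.Spec.Tolerations = []v1.Toleration{{
		Key:      "node.kubernetes.io/memory-pressure",
		Operator: v1.TolerationOpExists,
		Effect:   v1.TaintEffectNoSchedule,
	}}
	_ = pod
}
```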
E: Also, we gave a couple of KubeCon talks: an introduction to the SIG, and a deep dive on the scheduler. So, plans for upcoming cycles. Again, as I mentioned, we wrapped the existing predicates and priorities as plugins.
E: Basically, now they just need to create what we call a framework instance, and they can specify the list of plugins that they want to execute in the autoscaler to make decisions on what to scale up or down; it's basically going to be the list of filter plugins. We've also partially cleaned up the dependency on the node lifecycle controller, because we graduated taint nodes by condition to GA; we still have one dependency in kubeadm that we would like to clean up as well.
E: We are hoping that, probably in 1.19, we will declare the policy API deprecated, once we have component config graduated to GA; component config will have the plugins API, where we can specify and define the new behavior of the scheduler. As I mentioned, the policy API is centered around predicates and priorities, which now basically do not exist in the core scheduler.
E: So, a new feature that will be coming in the next cycles is what we call multi-config schedulers. Right now, we can only provide a single configuration of how the scheduler behaves. What we would like to do is allow creating a scheduler with more than one plugin configuration, so you can specify the scheduler name in the pod spec, the scheduler name will be mapped to a specific configuration, and that would allow you, for example, to create a scheduler that caters to a mix of workloads.
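The selection knob already exists in the pod spec today; the new part is that one scheduler binary would serve several such names, each mapped to its own plugin configuration. A sketch (the profile name and image are invented):

```go
package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	batchPod := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "training-job-0"},
		Spec: v1.PodSpec{
			// One scheduler binary, many configurations: this name would map
			// to a batch-tuned plugin profile, while pods that say nothing
			// get the default profile.
			SchedulerName: "batch-scheduler",
			Containers:    []v1.Container{{Name: "main", Image: "example.com/trainer"}},
		},
	}
	_ = batchPod
}
```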
E: For example, if within the same cluster you have workloads that are batch oriented and workloads that are service oriented, you can have two different configurations that are tuned for these two types of workloads. As I mentioned, we are also hoping to finish the integration with the autoscaler with a new interface, and then we also would like to further improve the scheduler's performance; we continue to do profiling.
E: We have new benchmarks that will be posted on the perf dashboard to monitor the scheduler's performance. And last but not least, we would like to graduate pod topology spread to beta.
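For reference, pod topology spread is the pod-spec API for spreading replicas across failure domains; a minimal sketch using the core/v1 types (values arbitrary):

```go
package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := v1.Pod{}
	pod.Spec.TopologySpreadConstraints = []v1.TopologySpreadConstraint{{
		MaxSkew:           1,                             // zones may differ by at most one matching pod
		TopologyKey:       "topology.kubernetes.io/zone", // spread across zones
		WhenUnsatisfiable: v1.DoNotSchedule,              // hard constraint
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"app": "web"},
		},
	}}
	_ = pod
}
```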
E: One more thing: leadership position changes. I guess the last time we gave an update it was Bobby; Bobby has stepped down as a co-chair of the SIG, and I took over. As for where to find us, the co-chairs are myself and Klaus Ma.
F: Basically, we are ensuring that Kubernetes scales. We extended our load tests to exercise more diverse resources, like DaemonSets, StatefulSets, and PersistentVolumeClaims. As of today, these are continuously tested, both in presubmits and in continuous tests, on clusters of up to 5,000 nodes. When it comes to ClusterLoader, which is our flagship tool for running load tests, we improved it a bit and implemented a few new features, like test suites; we now detect crash-looping components; and we are working on adding HA support, so supporting clusters with multiple masters, which is partially implemented.
F: We also added more tests on release branches, because previously, we realized, release branches didn't run the full set of our tests; our tests mostly ran on the master branch. Now we run exactly the same sets of tests on the release branches. And, last but not least, we've been experimenting with some pod throughput tests comparing containerd versus the Docker runtime.
F: That surfaced some performance regressions, but together with the containerd folks we managed to figure everything out, and it's accelerating; so as of Kubernetes 1.17, containerd 1.3 is used. Also, there is a link to a document with all the known regressions; I think November is when we started maintaining it, because of the regressions and bugs. So if you're interested, you can take a look; I think we have over ten regressions there. And now for the improvements we've made to Kubernetes.
F: One big one was around watch serialization: now only one serialization happens for all the watchers of a given object, and in the PR you can find more details about the exact numbers, but it was a biggie. The other thing is controller improvements. We took a look at them and improved a lot; examples are the node lifecycle controller, the garbage collector controller, and the taint manager. Basically, all of these changes translate to much more stable clusters, especially during upgrades and also when scaling up or down.
F: Some of those changes, I think, were really good, and we backported them into 1.14 and 1.15. The other thing: watch bookmarks are in GA; I think during the last update we said that they were beta.
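Clients opt into bookmarks per watch request; a minimal client-go sketch of what that looks like (illustrative, with error handling elided):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, _ := rest.InClusterConfig()
	client := kubernetes.NewForConfigOrDie(cfg)

	// AllowWatchBookmarks asks the server to periodically send marker
	// events carrying only a fresh resourceVersion, so a restarted watch
	// can resume from there instead of doing an expensive relist.
	w, _ := client.CoreV1().Pods("default").Watch(context.TODO(),
		metav1.ListOptions{AllowWatchBookmarks: true})
	for ev := range w.ResultChan() {
		if ev.Type == watch.Bookmark {
			fmt.Println("bookmark received; record its resourceVersion")
		}
	}
}
```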
And we opened a KEP for immutable secrets; there is work in progress on that. And last but not least, even cheaper node heartbeats: as of 1.17, I believe, we reduced the frequency of full node object updates from every ten seconds to every five minutes.
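The cheaper heartbeats live as one coordination.k8s.io Lease per node in the kube-node-lease namespace, renewed frequently in place of full Node status updates; a small client-go sketch of inspecting them (illustrative, error handling elided):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, _ := rest.InClusterConfig()
	client := kubernetes.NewForConfigOrDie(cfg)

	// Each node frequently renews a tiny Lease object as its liveness
	// signal, while the much larger Node object is only updated on change
	// or at the slower fallback interval.
	leases, _ := client.CoordinationV1().Leases("kube-node-lease").List(
		context.TODO(), metav1.ListOptions{})
	for _, l := range leases.Items {
		fmt.Println(l.Name, "last renewed:", l.Spec.RenewTime)
	}
}
```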
F: Now, plans for upcoming cycles. We hope to work more on improving the scalability definition, so finalizing existing or in-progress SLIs and SLOs, things like network programming latency and in-cluster network latency. We already measure those in tests, but we need to come up with the SLO definitions, so guarantee some thresholds, and analyze the data.
F
We
have
been
busy
to
come
up
with
some
threshold
that
will
guarantee
a
big
things:
committee,
envelop
so
having
more
dimensions,
they're
updating
the
thresholds,
some
of
them
are
out
of
date
a
bit,
for
example,
like
number
of
services
number
of
posts
per
service
that
you
can
have
that
change
dramatically.
We
can
post
Isis
and
stuff
like
that
and
internal
work
on
hardening
and
extending
this
definition
when
it
comes
to
scalability
and
performance
tests,
and
we
would
like
to
cover
to
cover
even
more
humanities
resources
different
to
what
I
listed
before.
F: We want to make the tests even cheaper and better, closer to what real clusters look like, and we'd like to invest a bit more in other types of tests and test scenarios that we are currently not exercising; so, for example, HA testing, some availability and chaos testing, and some pathological scenarios that don't exist today, for example, one master goes down and what happens in that case, and stuff like that. And when it comes to bottleneck detection and performance improvements, it's basically finishing the existing stuff.
F: There is also a KEP from the etcd folks to use the progress-notify feature and implement consistent reads from cache. We are working on scale-testing that, and if we manage to get it into 1.18, that will be a biggie for scalability and performance. Right, and how do these things affect you? When it comes to the scalability approval process, this is still in an experimental phase.
F
We
will
be
looking
for
kept
orders
to
help
with
publication
and
we
will
be
reaching
out
also
to
more
people
in
the
community
and
then
user
securities
to
understand
what
they
really
want
in
terms
of
extending
our
scabby
case
arising
us
allows
what
is
important
to
them
and
one
works
out,
and
we
had
a
big
regression
in
117
and
117.
Zero
is
vulnerable
to
that.
That's
the
regression
that
if
you
have
a
large
enough
cluster
and
something
happens
to
master
that
results
in
restarting
API
server,
for
example,
you
do
upgrade
of
the
master.
F
The
cluster
will
break
so
it's
fixed
in
171,
and
if
you
want
to
use
third
cluster,
we
recommend
not
using
170
zero
starting
from
171,
and
if
you
want
to
help
us
your
model
more
than
welcome.
We
have
some
help
on
that
lists
where
we
talk
with
first
issues,
both
on
earth
tests
and
on
kubernetes
repos.
F
If
you
need
anything
just
ping
us
on
6
club,
it
is
luck,
Channel,
yeah,
that's
the
list
of
current
choice
and
tears
and
links
our
home
page
slack,
channel
mailing
list,
and
we
also
had
public
meetings.
I
encourage
you
to
join.
If
you
want
to
learn
about
scar,
eating
they're
happening
quite
weekly,
every
Thursday
we
had
one
today,
so
next
one
will
be
in
two
weeks
and
that's
everything.
Thank
you.
A: All right, okay, thank you so much for posting that; the link is in the Zoom group chat. Also, there is a contributor summit coming up; there's planning going on for it, so go ahead and take a look at the dev channel for all of the information about that. All right, and with that, just as a note: because it is an entire month's worth of shoutouts, we're not going to read the shoutouts out loud on this call anymore, so I'm actually going to give you all about 25 minutes of your time back.