From YouTube: Kubernetes Community Meeting 20180719
Description
This is our weekly community meeting, for more information check this page: https://github.com/kubernetes/community/blob/master/events/community-meeting.md
A
As with everything in this community, we are open, sharing, and collaborative. I would ask everyone to keep that in mind and be constructive, thoughtful, and courteous, extending the kind of courtesy you would personally expect, to everybody who is presenting and speaking today. Also note that this will be recorded. We're actually streaming live to YouTube right now, so between that and the recording that will be posted to the Internet later, your actions and comments are public and reflect on you and our community as a whole.
A
So please keep that in mind and keep it positive and constructive. I am Tim Pepper; I work in SIG Contributor Experience and SIG Release, and I also happen to work for VMware. I'm your moderator for the meeting today. We've got a volunteer note-taker, Solly Ross; if you'd like to contribute to note-taking as well, the Google Doc is linked off of the meeting invite. And if you're interested in volunteering in the future to moderate this meeting, please reach out to SIG ContribEx on Slack.
B
Hi everyone; thanks, Tim, thanks for having me here. I just want to briefly walk through a project that spun up a month or two ago called MicroK8s. The... oh man, I hope the Zoom interface lets me show off my screen share. It probably does, because this is Linux. That's it, all right. So, MicroK8s: that's microk8s.io, and there's also a Slack channel, #microk8s, on Slack. The genesis of this whole project was to build a really lightweight, desktop-installable Kubernetes.
B
That
was
what
you
would
expect
from
a
fully
conforming
kubernetes
system,
but
in
the
lightest
most
isolated
way
possible.
So
this
project
is
driven
by
technology
called
snaps,
which
is
a
kind
of
like
a
package
style,
containerization
format,
so
think
of
a
lot
of
the
primitives
that
are
leveraged
by
containers
today,
strip
away
a
lot
of
things
that
you
don't
need,
like
you
know,
isolation
and
name
spacing
of
the
tcp/ip
stack
and
other
pieces,
we
kind
of
run
one
copy
of
one
software
per
machine.
B
That's really what snap packages are. So I'm just going to quickly install this snap on my computer to show how quick it is to get an isolated Kubernetes cluster running, and how easy it is to throw away or reset that cluster. I've got a terminal here; I'm on an Ubuntu machine. One of the benefits of snaps is that it's really a cross-distro package format.
B
So what the team does is publish various releases to various channels. Snaps kind of come with this idea of a channel: there's an edge channel, which is currently at 1.11.1; there's a beta channel, which is, you know, kind of beta releases; there's a release candidate channel; and then there's a stable channel. This is really beta-quality at the moment. There are still some things to be worked through, and it's still not, like, totally 100% ready to be used, so we're kind of limiting it to the edge and the beta channels.
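For readers following along, a minimal sketch of the install step being demoed, assuming the channel names above (exact flags can vary with the snap tooling):

    # Install MicroK8s from the beta channel; edge tracks the newest build
    sudo snap install microk8s --classic --beta

    # Check the (initially empty) cluster with the namespaced kubectl
    microk8s.kubectl get all --all-namespaces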
B
It goes and sets up all the confinement and security profiles, and now I've got 1.11.1. What this does is add a microk8s command with a bunch of prefixes, or suffixes rather. So, for example, I can run microk8s.kubectl, and we'll see that it's a totally empty Kubernetes cluster that's still booting up. There we go: no resources found. So now that all the daemons are running, we can poke around and see that they are just systemd daemons running in a highly confined and restricted fashion.
B
Booting
up,
I,
just
think
a
few
seconds
of
course,
and
I
will
have
kind
of
those
additional
services.
You'd
expect
running
an
in
kubernetes
cluster
from
here.
There's
really
no
limit
the
micro
Cade
stuff,
coop
cuddle
command
allows
me
to
interact
with
it,
as
if
I
was
using
coop
cuddle
on
my
host
system.
It
also
doesn't
interfere
with
my
host
coop
cuddle.
So
I
have
my
own
coop
cuddle
if
I
run
get
foes
right.
B
Now, that's a totally different deployment off somewhere else in the world, but if I use the namespaced MicroK8s one, I get access to just this local cluster. And if you feel really ambitious and you don't have a separate kubectl installed, you can do things like snap alias the MicroK8s kubectl to your host system, so your host kubectl will be that one. Most of you probably already have a kubectl installed, so it's not really necessary. I'm going to wait for these things to boot.
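A sketch of that aliasing step, using snap's alias mechanism (only needed if you don't already have a kubectl on the host):

    # Expose the namespaced command as plain kubectl on the host
    sudo snap alias microk8s.kubectl kubectl

    # Undo it later if it shadows another kubectl
    sudo snap unalias kubectl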
B
Almost done. I'm going to get my services as well, and then we're just going to browse over to the dashboard once it's running. It's a little slow, but I'll be able to pull up the dashboard from here: because it's on my host system, the cluster IP is totally accessible to me in my browser. If you're doing this in a VM, you can just forward this cluster IP address out to your host system as well. Let's see, DNS is still creating.
B
All right, it'll respond in a few moments. While it's waiting: that's effectively the project. It really aims to be a super functional, lightweight Kubernetes. When I'm done with this, I can run the microk8s.reset command, which will reset everything back to a blank state. So if I went and tweaked command-line arguments, or went and modified the pods running, I can kind of go back to a reset state there. And then, if I'm done with it completely, I can just snap remove it and the entire system goes away.
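A sketch of that teardown flow as demoed:

    # Reset the cluster back to a blank state, keeping the snap installed
    microk8s.reset

    # Remove the snap, and the entire cluster with it
    sudo snap remove microk8s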
B
It's nice because it won't touch or modify any services running on disk, and it won't modify any of the actual files on my host system. It's super isolated, but again, without needing kind of an extra VM layer on top, which might be heavy, especially for my laptop, a poor little T430; running VMs, even just one, tends to overtax it a bit too much. So let's try this one more time. Let's see if it's responding at the API address at least, or I can just silently weep to myself for using edge stuff in a demo without really testing it.
B
Oh, you know what? Ports. Ports are super important. Oh my gosh, hey, look at that, there it is; it's almost as if I fooled myself. So yeah, ports are super important as well. There's a myriad of additional add-ons that the team is working on: things like cert generation, storage enablement, ingress controllers if you're into that. All those things are kind of coming, but at its core, this is what you get: just a super lightweight Kubernetes. That's MicroK8s.
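For context, optional pieces like DNS and the dashboard are toggled through the snap's add-on helper; a sketch, assuming the add-on names shipped at the time (names are illustrative):

    # Enable the DNS and dashboard add-ons
    microk8s.enable dns dashboard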
A
All right, Marco, yep, it looks like we've already got a link in the minutes. Awesome, thank you, and if anybody has questions and wants to follow up, Marco is also on Slack. So, all right, thank you very much, and we will carry on. Next on our standing agenda, we go through a little bit of an update on the current release. I also happen to be the release lead for this current cycle, 1.12, and the main thing I would note right now is that feature collection is underway.
A
I've put a couple of links in the meeting minutes there, to the existing issues that have been labeled for the 1.12 milestone and to the features repo as well. We're asking SIGs right now to be actively getting that information up to date, and our features lead, Stephen Augustus, is interacting with them.
A
Also, I didn't mention this at the top of the meeting, but I think this is a good opportunity for a reminder: if you are not speaking, please mute your microphone. The other thing to note for 1.12 is the feature freeze on July 31st. So that's, what is that, 12 days away? That's our next major milestone in the release, so just a reminder that it's coming up. And if we have Anirudh on the line, I'd like to get an update on 1.11; 1.11.1 just came out, basically yesterday.
C
Yeah, so on 1.11.1, it was a bit of a struggle getting it out. It was planned for the 16th, which was Monday, but then we ran into some bugs, well, a cherry-pick that had to be reverted because it was causing a bunch of failures in the blocking tests. Tim St. Clair from Heptio helped us out a lot there, and then once we got that out, we had a bunch of other issues with the release itself, especially around pushing images, etc.
C
There was a new staging bucket for the GCR images that we ran into, which ate up about a day, but now, I think since yesterday, all the images are usable; people are creating clusters and it's all working fine. The major features that went out that I noted were removing the defaulting of the CSI file system type to ext4, which should be a no-op.
C
In the kube-apiserver, the priority admission plugin is now enabled by default, and the system-node-critical and system-cluster-critical priority classes are limited to the kube-system namespace, which is way safer. And then, among the bug fixes: image garbage collection had been disabled by mistake in the kubelet, and that's been fixed.
A
If you're a KEP owner, a reminder that the community meeting has this slot for a bit of discussion on KEPs; it's a great opportunity to get a little bit broader an audience on KEPs that are up and coming. So, next up is our SIG updates. We typically have three SIGs who provide an update on their latest happenings, and they do that quarterly. So if you're attending this meeting on a quarterly basis, you get a little insight into what's going on in each of the SIGs.
C
Because we actually did an update pretty recently, this is very much incremental; I don't have slides for this, but I'll talk through it. So, SIG Big Data: we deal with big data workloads on Kubernetes, and there are three, or I guess four, different projects that we've been focused on. First is Spark and porting it over to Kubernetes, making it more container-native, which involves changes both on the Spark side and on the Kubernetes side. Then there's Apache Airflow, which is a DAG scheduler that is pretty popular.
C
So there's work on that. There's the Spark Operator, which is a Kubernetes-style operator that we've built on top of Spark, and there's HDFS support. On the Spark front, there's the Spark 2.4 release; the code freeze for 2.4, on the Apache repo, is on the first of August, so we're all heads-down working towards that. The major feature to expect this time around is support for Python. This has been a huge ask from the community; pretty much everyone we met at Spark Summit was asking about it.
C
There's client mode support, which enables notebooks like Jupyter and Zeppelin to run on top of Kubernetes and talk to Spark directly. Lots of testing has gone in this time; we've merged in a bunch of integration tests, and I think at this point we can confidently say that it's good, stable functionality. We removed a bunch of things: for example, init containers were used to download dependencies before, and we're now using more Spark-native constructs. There's also been a whole lot of stability fixes, especially around the controller logic.
C
We've tried to emulate the way Kubernetes does controllers: removing all the places where we were edge-triggered and making it more level-triggered, pretty much similar to how StatefulSet or Deployment operate. And then, of course, beyond 2.4 there's a whole lot of work lined up as well. People want to customize their pod templates as they're submitting jobs to Spark on Kubernetes, and there's dynamic allocation and elasticity.
C
There's the highly available driver, where people want their Spark jobs to fail over and then restart; that's something we haven't addressed yet, and it might actually need features on the Kubernetes side. And finally, there's support for R and for Kerberized HDFS. So that's future work beyond 2.4. On the Airflow front, we have a blog post that went out on the 28th of June describing how to run Airflow on Kubernetes, and we're still waiting for the official upstream release of Airflow, which has all the Kubernetes bits baked into it.
E
Yes, on the operator side, we actually recently added a mutating admission webhook for pod customization. Basically, it handles the things that Spark itself doesn't support yet, like mounting volumes, or setting affinity, or setting the pod security context, those kinds of things. So it's now actually replacing the initializer that the operator was using before, the reason being that the mutating admission webhook is beta.
E
The initializer is alpha. So that's one of the main things we have been adding to the operator in the past couple of months. We are also adding support for Python into the operator, which is, you know, coming in Spark 2.4. Those are the major new things that we've been adding to the operator.
F
Can everybody see? Yep, awesome. Okay, this is going to be a lightning tour of SIG Multicluster and where we are today, at pretty high speed, so please excuse that. Just a reminder: we focus on solving common challenges related to the management of multiple Kubernetes clusters, and of applications that run across multiple Kubernetes clusters.
F
The two main drivers behind this are cluster outage resiliency, so that if you have your applications running across multiple clusters you can be resilient to cluster outages, and the ability to span multiple cloud providers, so-called hybrid cloud. We have three main sub-projects at the moment, the biggest of which is Cluster Federation, currently on version 2. We also have a Cluster Registry and a Multi-Cluster Ingress project. A quick reminder for those of you who may be new to the show or not know what Cluster Federation is; that's one of our bigger projects.
F
It basically gives you the ability to interact via a single API with multiple clusters, either for the purpose of managing those clusters or for managing applications that run across them, and the clusters themselves can either all be on the same cloud provider or span multiple cloud providers. On the status of Federation: we did a v1 a while back, and it was mainly a proof of concept.
F
Its API was strictly consistent with Kubernetes, with annotations to add the additional bits that are necessary for multi-cluster management, and it's been forked and used fairly extensively by CERN, eBay, and others. But we don't plan to develop it any further; it was mainly there as a proof of concept to, you know, just prove things out.
F
For v2, we released an alpha version just a few weeks ago, which has full feature parity with v1, barring a few minor disparities; it essentially does everything that v1 did, but with our new architecture. We plan to have people use that over the next couple of months, finalize the APIs, declare it beta, and hopefully move to GA either at the end of this year or the beginning of next. The main code contributors at the moment are Red Hat and Huawei, and soon they will include IBM. Some highlights of the v2 alpha:
F
We have a control plane based on custom resource definitions, so it is no longer necessary to build a completely separate Federation control plane. The custom resource definitions are installed into an existing Kubernetes cluster, so you don't need to run a separate API server, etcd, and so on.
F
There's also a single generic implementation for all Kubernetes types to provide basic, simple federation. So if you need to propagate any of the Kubernetes types into more than one cluster, you can do that: a standard implementation that covers all types, with simple per-cluster customization. For example, if you wanted to deploy a ReplicaSet across multiple clusters, and all you wanted to do was change the number of replicas in each cluster, that's pretty straightforward to do.
F
It also has a bunch of higher-level controllers that sit on top of this basic infrastructure to provide more sophisticated, active management across multiple clusters. You can do active migration of ReplicaSet and Deployment replicas between clusters, for example to handle individual cluster failures. You can do active management of federated DNS records: if you have the same service deployed across multiple clusters, there's a controller that will automatically configure DNS for you, not only to expose all of those services but also to manage outages.
F
So if a particular cluster goes down, or a service becomes unavailable, let's say all the pods crash, the DNS records will automatically be updated to direct traffic away from those clusters. There's also active management of jobs across clusters, for example making sure that jobs are preferentially directed to clusters that have available capacity, to minimize completion time; and things like HPA, horizontal pod autoscaling, being able to manage global limits of HPA across multiple clusters. All of this also uses one of our other sub-projects, the Cluster Registry.
F
Just to give you a flavor of the kind of stuff we're working on next: we don't yet have federated status, so it's not possible at the moment to get a consolidated view of the status of all of your objects across multiple clusters; we're working on that. It's also not yet possible to get a single consolidated read, that is, the ability to read all of the API objects across multiple clusters and get back a consolidated list, for example. So we're working on that too.
F
It's not currently easy to co-locate a bunch of differently typed objects in the same set of clusters; you can think of that as affinity. If you wanted your secrets and your ReplicaSets to land in the same clusters, it's a little tricky to do that today. There's also been a requirement for partitioning cluster selectors by namespace, for example dictating that a particular namespace always lands in the same set of clusters. And there's no RBAC enforcement in the Federation layer yet.
F
It relies on RBAC enforcement in the underlying clusters, but we don't yet have, and are busy working on, RBAC enforcement in the Federation control plane itself. How can you help? We would obviously like to have as many people as possible use the alpha release that we've just put out this month and give us some feedback.
F
Let us know if we're heading in the right direction; if you think there are any useful changes we could make to the APIs, now is the time, before we declare it beta. There's a link to the release there, where you can find all the bits and pieces you need. As I mentioned earlier, one of our core goals has been not only to create a consolidated Federation control plane that works as a whole, but also to create reusable components. So we have a cluster propagator.
F
We have per-cluster resource customization, we have a DNS auto-configuration tool, and a variety of others, so you can actually take those individually and use them in other use cases as well, and we would welcome people doing that. And of course, as for everybody, we would love people to contribute further designs and code. If you want to pick some things off the roadmap, or add new items to the roadmap that you would like to work on, we would definitely welcome that.
F
As I mentioned earlier, we have a couple of other sub-projects. Cluster Registry is reasonably stable and complete at the moment; it is used in Federation v2, and the main code contributors there are Google and Red Hat. Multi-Cluster Ingress is another project, primarily being worked on by Google. I was struggling to get a more up-to-date status report there, but basically it exists and you can go and look in the repository. And that's about it for me; I think my time is probably almost up. Any questions?
G
Yes, so I have a few updates about SIG Scheduling, and I assume this is supposed to be the update for 1.12, right? But we also have 1.11, so a quick update about 1.11 first. One of our biggest features in 1.11 is priority and preemption, which moved to beta in 1.11 and is enabled by default.
G
We have improved the feature, making it a little bit more restrictive in terms of potential abuse of clusters, basically preventing users from creating very high priority pods anywhere they like. We changed the policy to let users create high priority, system-level priority pods only in the kube-system namespace, starting in 1.11.1, so it's going to be available then. With that, we are hoping that loophole is now closed: untrusted users cannot create very high priority pods anywhere they desire.
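As a hedged illustration of the feature being described (not shown in the meeting): in 1.11 PriorityClass is a beta API, and a pod opts in by referencing a class by name. A minimal sketch:

    # Create a user-level PriorityClass (scheduling.k8s.io/v1beta1 in 1.11)
    cat <<EOF | kubectl apply -f -
    apiVersion: scheduling.k8s.io/v1beta1
    kind: PriorityClass
    metadata:
      name: high-priority        # hypothetical name
    value: 1000000               # larger values schedule (and preempt) first
    globalDefault: false
    description: "Example class for important user workloads."
    EOF

The reserved system-node-critical and system-cluster-critical classes carry far higher values, and as of 1.11.1 pods using them are restricted to the kube-system namespace, as described above.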
G
So with that, I would like to go on and give you an update about the items we are working on for 1.12. Part of our focus in 1.12 is improving performance, and we have a couple of interesting features in flight. We have improved our equivalence cache, which is a way to improve the performance of the scheduler. The idea is that once we...
G
On that note: as we had evaluated it, this feature already existed in the scheduler, but the implementation was not ideal, so we changed the implementation. In the past, before changing the implementation, we were not seeing much improvement, but now that we have changed it, we see a significant performance improvement: so far, about 3x in clusters where a lot of similar pods are created.
G
And this is actually something that happens a lot in many clusters: when you create, for example, a large ReplicaSet or similar collections, there are lots of pods with similar specs created in the cluster, and this can improve the performance of the scheduler. Another area we are working on is gang scheduling. We are still working on the design of this feature; one of our contributors recently created a proposal, and we're still trying to refine it. It's out there; if you're interested, you can take a look and comment on it.
G
One other area that I have been working on is a scheduling framework. The proposal is out, but I am thinking about changing direction a bit. In that proposal I had proposed to build another scheduler from scratch and make it like a framework, but now I am thinking more about, you know, rolling it out: building the scheduler, building tests and everything for it.
G
I'm thinking that we may be able to bring some of those ideas from the scheduling framework over to the current scheduler, if I can just get the design right, instead of building something from scratch. So that's another area I'm contemplating. There are also a couple of features that we are moving to beta. One of them I already talked about: the equivalence cache, which improves the performance of the scheduler. We're also trying to move taint-nodes-by-condition and taint-based eviction to beta.
G
These are pretty much most of the items. There is also one incubator project, the descheduler, which is helpful in rebalancing resources in a cluster. The plan, basically, is that we would like to move it to being a standard component, graduating it from the incubator in 1.12 as well. We haven't seen a whole lot of blockers, so we'll probably see it graduate to a standard component with 1.12.
A
All right, then we will carry on. The next section in our standing agenda is announcements, where we start with shout-outs, sort of thank-yous from community members to community members. If you haven't seen it, there is an actual channel for this on the Kubernetes Slack: it's #shoutouts, pretty straightforward. Each week we collate the shout-outs that have been given and just kind of give them a bit of amplification here in the community meeting.
A
The first is to Matt Hikes, who has been very active in test infrastructure recently and has been making a number of different contributions, from fixing bugs to adding new features and automation. He's been eager to help and has stuck with some of the more complex changes that require many comments and interactions, a reference to some bike-shedding that was happening there as well, I guess. So thank you there. And a second, from Christoph Blecker to Nikhita: I could easily stop right there, as her many contributions to the project really speak for themselves.
A
I want to call out the little chopping-wood-and-carrying-water work that she does that may not be as obvious, like ensuring that stale issues are reviewed and either closed or marked as still relevant, or welcoming new contributors with an emoji or two. These kinds of things exemplify what the Kubernetes community is all about. Thank you very much there. Ben Elder has a shout-out to Kwang Wen for continuing to send test fixes and pushes, and for fleshing out the PR status page long after his internship. Hopefully we start using the PR status page more widely.
A
The next one concerns the 1.11.1 images: fixing a symptom there, but also driving beyond just the symptom, getting the appropriate folks within Google involved to ensure there's now a team owning a better solution, actually fixing the problem. This is a continuation of the progress towards decoupling google.com as a requirement for release, as mentioned earlier in the 1.12 and 1.11 release update section. So thank you very much there; this is quite a big deal. And then we have one other announcement.
A
It's been tossed into the meeting notes in just the last hour: at OSCON, which is happening right now in Portland, Oregon, they apparently do something sort of like an Oscars awards, and there was an award for most impact, which was given to Kubernetes this year. So a big shout-out there to the community, and thanks to everybody who's making an impact in the cloud-native space by contributing to the Kubernetes project. And with that, we are at the end of our agenda, with about 20 minutes left.
A
A huge shout-out back to Solly there: if you haven't been following along in the Google Doc during the meeting, he's transcribed four pages of notes on what was discussed here. Awesome, awesome recording there; thank you very much. That's hugely valuable to the community, letting folks who weren't able to attend come back and glance through those minutes, or watch the video to get a little more context.