From YouTube: Kubernetes Community Meeting 20181101
Description
We have a PUBLIC and RECORDED weekly meeting every Thursday at 5pm UTC.
See this page for more information: https://github.com/kubernetes/community/blob/master/events/community-meeting.md
A
Welcome to the November 1st, 2018 Kubernetes community meeting. I am Tim Pepper from VMware; I work in SIG Release and contribute there primarily at the moment. I will be hosting and moderating the discussion. Today we could use a new note-taker; we don't have a volunteer just yet. The Google Doc is accessible to all for editing, so anybody who's willing, please drop your name in. There is a primary note-taker, but anybody else can help as well; it's a collaborative doc. The volunteer help would be greatly appreciated.
A
This is a recorded meeting, and I believe we are also streaming live today as well. So please act in accordance with our code of conduct, and know that this meeting will be out there for posterity on the Internet. First off today we have a demo on the Automation Broker from Michael Hrivnak at Red Hat. OK, just following the email address that's posted there.
B
So, you're provisioning something. Forgive me for going through this pretty quickly, but I think you'll all follow along. When we're provisioning something, there are roughly three levels of complexity we can operate at. The first is just creating your full stack of Kubernetes cluster resources. This is the YAML where you're going to create the actual Kubernetes resources; you just have to figure out what deployments, services, persistent volumes, and so on you need to create. Those are the kinds of resources you create at that level.
B
This
all
more
interesting
is
integrated
with
external
services.
Those
kind
of
things
might
be
legacy,
applications
or
a
traditional
database
cluster,
maybe
or
perhaps
the
pines
of
some
kind
and
then
to
get
even
more
exciting.
You
might
need
to
do
some
kind
of
post
installation
bootstrapping
of
your
application,
so
maybe
you're
gonna
initialize
a
database,
maybe
you're
gonna,
restore
from
a
backup.
We
should
all
probably
do
that
more
often
and
maybe
you're
gonna
create.
B
...whatever else your application needs; I don't know what specific stuff you need. The point is that this is a list of all the things you need to have in one place when it is time to actually do your provisioning. So what kind of technology do we know about, what kind of packaging format might we have at our disposal, that you could use to bundle up all of this stuff, or at least most of it, into one immutable thing so that you can have it at provision time?
B
It can do all the Service Catalog-style kinds of things we're going to look at. And here we are: the Service Catalog. Probably a lot of you are familiar with it. Here's a graphical representation from the OpenShift user interface. At its core, the Kubernetes Service Catalog is really about enabling you to self-service provision...
B
...whatever services you want to make available inside your clusters. You could, for example, give dev teams the ability to provision their own MySQL database anytime they want; maybe it even has the special configuration for your corporate environment baked into it. That's what the Service Catalog is all about, in a nutshell. And this is the pattern; we'll see how this all fits together in just a moment.
B
But
you
have
this
client
on
the
left
of
the
smiley
face
and
they
talk
to
the
service
catalog
through
some
kind
of
user
interface
and
the
service
catalog
is
really
just
a
middleman
or
no
one
that
has
some
number
of
brokers
that
tell
it
hey.
I,
have
this
service
or
these
services
to
offer?
Can
you
make
that
advertising
inside
the
cluster
in
what
we
did?
Is
we
made
one
broker
that
is
actually
pluggable
on
and
we
called
it
Alka
testing
the
observer's
broker
API?
B
This
is
the
protocol
that
we
use
for
the
Service
Catalog
and
the
brokers
to
communicate
with
each
other.
These
are
a
bunch
of
the
companies
that
have
been
heavily
involved
in
shepherding
that
particular
spec.
It's
just
a
nice
like
open-source
story
to
see
them
getting
together
every
week
or
two
and
hammering
that
stuff
out.
So
it's
very
well
defined
protocol.
B
Now
the
automation
broker
is
one
of
those
brokers,
but
rather
than
being
hard-coded
to
be
like
a
Postgres
broker
or
a
my
sequel
broker.
But
all
it
knows
to
do
is
is
make
one
service
available
and
manage
that
this
can
take
any
one
of
those
service,
bundles
or
many
of
those
service,
bundles
and
advertise
them
as
a
provisional
service
in
the
Service
Catalog,
and
so
it's
source
of
data
is
just
any
old
container
registry.
You
can
build
these
service
bundles.
B
We
have
tooling
we'll
look
at
and
help
you
do
that
and
customize
them
to
do
whatever
kind
of
special
things
you
need
them
to
do,
deploy
your
services,
or
even
common
services,
in
particular,
ways
to
meet
your
needs
and
then
advertise
them
in
the
service
account
in
the
service
bundle.
We
basically
talked
about
this,
but
we
do
have
a
tool
called
APB.
The
name
will
become
clear
in
a
moment
that
can
help
you
get
one
off
the
ground
and
and
then
help
you
with
the
development
process
of
it.
B
This is a similar kind of metadata file. You give it a description, a name, and a parameters list. The bottom is where some of the real magic happens: that's where you can express, in a rich way, what parameters your service wants to receive at provision time. A Service Catalog user interface can then render a rich form, wizard style, and have the end user who is trying to provision something fill out that form with whatever information you need.
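The metadata file described here is the bundle's apb.yml. A minimal sketch along these lines (the name, plan, and parameter values are illustrative, not from the talk):

```yaml
# Illustrative apb.yml sketch; names, plans, and parameter values are hypothetical.
version: 1.0
name: example-postgres-apb
description: Self-service PostgreSQL for dev teams
bindable: true
async: optional
plans:
  - name: dev
    description: Small development instance
    parameters:
      - name: postgresql_database
        title: Database name
        type: string
        default: devdb
      - name: postgresql_password
        title: Password
        type: string
        display_type: password
```

A catalog UI can use hints like `title`, `type`, and `display_type` to render the wizard-style form the speaker mentions.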
B
It really turns into a very powerful experience. So, the Ansible Playbook Bundle is what we really specialized in, and that's where the APB acronym comes from. We found that Ansible works exceptionally well in this role.
B
For one, Ansible is a very easy way to manage workloads on Kubernetes to begin with. But in this role we basically use Ansible inside this meta-container as the thing that runs and does stuff, and each of the actions that you can perform through the Service Catalog maps one-to-one to a playbook. So you can make an Ansible playbook called provision, and whatever it does is what will happen, and you have at your disposal the full power of Ansible at that point.
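As a sketch of that one-to-one mapping, a provision playbook might look something like this (the resource names and image are hypothetical; `k8s` is Ansible's standard module for managing Kubernetes objects):

```yaml
# Illustrative playbooks/provision.yml; the Deployment shown is hypothetical.
- name: Provision the service
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Create the application Deployment
      k8s:
        state: present
        definition:
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: example-app
            namespace: "{{ namespace }}"
          spec:
            replicas: 1
            selector:
              matchLabels:
                app: example-app
            template:
              metadata:
                labels:
                  app: example-app
              spec:
                containers:
                  - name: example-app
                    image: registry.example.com/example-app:latest
```

Deprovision, bind, and unbind would each be their own playbook in the same bundle.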
B
So it's not just being able to create and manage resources in your cluster; you could integrate with off-cluster resources, for example, or do more advanced things. You could, for example, connect to your database after you've created it and create user accounts inside the database, or create an actual database in Postgres, or do whatever other application-specific operations you want to do at provision time. That said, Ansible is not the only thing you can run in here; it's a container.
B
OK, on to the user experience areas; don't squint too hard at this part. There is a command-line interface I want to make you aware of, and this is kind of what it looks like. It works; you can get real stuff done with it. It's not the most beautiful, but for what it does it can be very handy.
B
This is Kubeapps, which some of you may be more familiar with from the Helm world as a Helm chart catalog, but they've also added some functionality to support the Kubernetes Service Catalog. Last chance I had to look at it, it was not fully baked, but they're working on it. And then the OpenShift user experience is the full, very rich user experience in terms of offering all the rich form support and rich parameterization of things.
B
OK, so that's the current state of affairs of what the Automation Broker is, kind of a whirlwind, so I'm going to spend my last minute talking about the future. The next thing we're working on, which is available now but not in quite as mature a state as the Automation Broker, is taking Ansible or similar things and putting them into the operator pattern. Very quickly: the operator pattern is another way to self-service provision, but instead of using the Open Service Broker API, you use CRDs.
B
You create a CRD for your application and then instances of that custom resource. For example, you might make a MySQL resource type, and I could create one; in that record I create, I might describe what I want my deployment to look like. Then the operator sees that come to life. The operator is really just a controller, a special kind of controller, and it will then go...
B
...do whatever is necessary in its reconciliation loop to deploy your application, but then also actively manage it and do full day-2 management. So we created an operator that has Ansible inside it; you can do anything you want inside your Ansible roles or playbooks, and that becomes the logic for your operator.
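The CRD-plus-instance flow described here might be sketched like this (the group, kind, and spec fields are all hypothetical; the spec is whatever your operator's Ansible logic interprets):

```yaml
# Illustrative CRD for a hypothetical MySQL type...
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: mysqls.example.com
spec:
  group: example.com
  version: v1alpha1
  scope: Namespaced
  names:
    kind: MySQL
    plural: mysqls
---
# ...and one instance of it; the operator's reconciliation loop acts on this.
apiVersion: example.com/v1alpha1
kind: MySQL
metadata:
  name: my-db
spec:
  size: 1
  rootPasswordSecretName: my-db-root
```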
B
Here's how easy it is to use Ansible to manage workloads in Kubernetes. On the left is just a ConfigMap as you'd feed it to kubectl, for example; on the right is the same thing as you would see it written out in Ansible. I templatized one thing just for fun, but I want to highlight that it is really, really easy and feels really natural to manage that stuff in Ansible. And that's my 10-minute whirlwind, so what questions do you all have?
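The side-by-side the speaker describes might look roughly like this (a minimal sketch; the ConfigMap contents are invented for illustration):

```yaml
# Plain manifest, as you'd apply it with kubectl.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  log_level: debug
---
# The same resource as an Ansible task, with one value templated.
- name: Ensure the app ConfigMap exists
  k8s:
    state: present
    definition:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: app-config
      data:
        log_level: "{{ log_level | default('debug') }}"
```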
B
Great, OK, I see one thing. OK, great. I have to get off the call shortly, so I won't be here for the full meeting to answer questions. But if anybody would like to start asking questions in the chat, I'm very happy to answer all the questions I see there before I leave. Otherwise, in the chat there's our website and our IRC channel on freenode.
D
So, quick updates on 1.13: we got our third alpha yesterday. We had quite a few alphas this time, mainly to give our branch manager shadows a run at the tools and builds. So we had a third alpha cut yesterday, and you can find a link to the bits there. As for upcoming dates, we are quickly rolling into the second half of the release. This is when this whole short release gets busier. So next week, next Tuesday, we are cutting our beta.0 for 1.13, which means our release...
D
...branch will be ready, and the branch manager will start daily fast-forwards to the release branch. This is highly dependent on clean CI signal, which I'll get to in a bit. The other upcoming date is code slush, which is coming up next Friday, the 9th. So a call-out to all the enhancement owners: please evaluate all the pending work for your enhancements, and this means code, tests, and docs.
D
If
you
feel
it's
comfortable
enough
to
land
in
113
that
screed,
because
code
freeze
is
just
two
weeks
away,
if
you
feel
you
need
the
enhancement
adjusted,
please
work
with
the
release
team
to
push
it
out
to
a
future
milestones.
If
there's
a
need
on
the
same
lines,
please
ensure
that
the
PRS
are
up
to
date
in
terms
of
labels,
specifically
sake
kind
priority
and
milestone
starting
next
Friday.
Once
code,
slash
kicks
in
tide
will
start
enforcing
these
merge
labels
as
a
requirement
to
get
your
PR
much.
D
So
it
will
be
great
if
you
can
update
your
PRS
too,
with
the
labels
and
also
the
corresponding
issues,
so
that
they
are
they
match
up.
Finally,
on
CI
signal
I
have
a
link
to
this
week's
report
that
Josh
sent
out.
On
the
whole,
we
are
kind
of
looking
good.
We
have
new
issues
cropping
up,
but
we
also
had
a
couple
of
long-standing
issues
being
closed
thanks
to
auto
scaling
and
cluster
lifecycle
for
closing
out
a
bunch
of
the
couple
of
those
issues.
So
we
do
have.
D
We
do
have
one
beta
blocker
at
this
point,
which
is
scheduled
a
priority
test.
That's
been
failing
for
a
couple
of
weeks
now,
but
a
fix
just
went
in
and
we
are
hopeful
that
it's
going
to
turn
green,
but
if
it
doesn't,
then
we
might
have
to
block
beta
or
take
other
actions.
Accordingly,
we
also
had
GK
upgrade
failures
across
the
pool,
but
now
it's
determined
that
it
is
GK
specific
issues.
So
it's
not
a
blocker.
The
final
piece
of
information
that
I
really
would
like
to
get
out
there
is.
D
We
are
looking
into
shuffling
the
jobs
in
the
release
blocking
dashboard.
Historically,
the
release
team
has
been
looking
at
and
relying
on
CI
signals
from
a
lot
of
flaky,
long-running
and
seemingly
unmaintained
tests
and
making
Conoco
decisions
based
on
that
towards
that
in
113,
erin
helped
us
tighten
these
criteria
for
getting
jobs
in
to
release
logging
dashboard,
there's
a
link
to
the
PR
there
and
that
went
in
last
week.
So
as
next
steps,
we
will
be
moving
out
some
of
those
jobs
that
we
think
fall
into
non-blocking
category
into
newer
votes.
D
So
we
also
will
be
looking
at
the
other
dashboards,
just
more
so
of
to
gather
data
as
to
are
we
getting
any
useful
information
from
those
jobs
and
see
how
we
want
to
proceed
with
them
in
the
future
Ashley
circles,
so
yeah
more
job
be
moving
around
in
the
next
few
weeks,
and
we
also
had
a
couple
of
patch
releases,
one
11.4
and
one
12.2
went
out
last
week.
That's
it.
C
So there's a link there to a dev mailing list discussion if you want more information about how that impacts you. The TL;DR is that we are going to try to get as much content merged as possible: trying to have small merges that document consensus, rather than trying to have a fully complete idea before you can merge the pull request. There will be more updates to follow there, but that's the TL;DR for folks today: try to get your content merged as soon as possible.
F
Yeah, so I wanted to talk through the broad description of SIG AWS's goals as we listed them in the charter, which is expected to close this week. SIG AWS is expected to be responsible for all the integrations: any type of interfaces, like the CSI and CRI work, that we would be glad to do. Any libraries or tools that allow Kubernetes to integrate with AWS services will also be taken on.
F
The
work
of
prow
desk
read
at
birth
dashboard
integrations
so
that
we
could
show
proper
CI
signal
for
edit
reasons
that
happened
from
our
side
for
our
tools.
We
also
provide
user
group
support
for
multiple
kubernetes
issues
and
any
feature
requests
that
specifically
involve
AWS
at
kubernetes
per
se.
It
also
will
take
'god
the
work
of
fulfilling
any
additional
documentation
requests
for
kubernetes
relevant
to
AWS.
That's
missing
today
from
kubernetes
at
I/o.
So
currently
we
have
five
sub
projects.
F
It's
a
kws
of
these
were
truly
saying
three
alpha
releases
at
kubernetes
1.13,
and
these
include
AWS
lb
interests,
controller
AWS,
eb,
b,
SCSI
driver
and
the
out
of
treat
cloud
provider
CCM
binary,
so
we're
hoping
we
could
do
the
alpha
releases
for
all
three
sub
projects
and
1.43,
as
ash
pointed
out
we're
working
towards
the
CI
signal
so
that
all
the
tests
are
visible
or
desperate
right
now,
as
of
august
this
quarter,
there
was
a
leadership
change.
I
was
voted
as
one
of
the
sig
leads.
F
What this particular project does: for any ingress creation, an AWS ALB is automatically created, and internal or external HTTP traffic can be routed into the cluster. Other than that, we have also refactored the complete code base to use the controller-runtime library, a Go library introduced by SIG API Machinery, and we added a feature that allows AWS SDK-level caching.
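For a sense of the workflow, a minimal Ingress for this controller might look like the sketch below (the annotation prefix follows the controller's documented convention; the service name is invented):

```yaml
# Illustrative Ingress handled by the ALB ingress controller.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-app
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: example-app
              servicePort: 80
```

Creating a resource like this is what triggers the automatic ALB provisioning described above.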
F
We essentially made sure that this caching lets the ingress controller avoid making repeated calls to AWS services, and instead store the information and be more optimal when processing ingresses in the cluster. What we plan to do in the future, after the alpha release happens: we've got to add more docs, of course; we will add sub-project docs and also CI signal for this current alpha release of the ingress controller. Going forward we'll also automate more docs and make sure that all our users have enough information about the ingress controller.
F
Additionally,
there
has
been
a
lot
of
requests
from
suppose
to
actually
share
1
lb
across
multiple
increases
that
might
be
running
in
different
namespaces.
Today
there
is
a
one-to-one
mapping
between
one
ingress
and
1
lb,
so
we
hope
to
add
that
feature
support
it
you
what
it
she,
what
treaded
I
team-
and
we
are
also
thinking
about
proposing
the
interest
controller
as
as
a
binary
that
can
sit
inside
the
out
of
tree
CCM
for
cloud
provider
AWS,
but
that's
still
in
a
discussion
mode
and
that's
something
we
wanted
resolve
going
forward.
F
The
second
project.
What
a
highlight
is
the
AWS
EBS
CSI
driver
we're
working
with
Saad
a
David
from
Google
in
order
to
align
with
the
spec
and
make
it,
and
we
are
also
working
with
yarn
and
Fabia
from
that
hard
to
make
sure
that
the
design
is
well
done.
Mostly,
the
work
is
big
dad
by
Cheng
from
EWS
we've
added
features
like
storage
class
parameters
into
the
CSI
driver,
different
FS
type,
or
you
type
it
cryptid
volume
they've.
F
They've also added support for volume scheduling. We've done the initial integration so that CI signal is visible to the SIG Release team, and the next step is to complete basic integration testing so that the alpha release can be completed. We are also working with David and with folks at Docker in order to enable the CSI migration.
F
Basically,
what
this
means
is
today
there
is
a
volume
controller
which
is
in
which
is
part
of
the
entry
cloud
cloud
controller
manager,
and
when
we
do
introduce
the
CSI
driver
for
EBS,
we
want
to
make
sure
that
users
have
a
seamless
way
to
migrate
from
that
volume,
controller
implementation
to
the
CSI
driver.
So
we
will
be
adding
a
library
in
order
to
make
that
migration
seamless-
and
this
is
the
work
that
we
are
jointly
doing-
the
Google
and
docker.
F
The
third
sub
project
is
the
external
cloud
provider
CCM
binary
today.
The
entry
CCM
binary
does
not
have
any
end-to-end
test
for
AWS
we're
working
towards
changing
that,
so
that
we
can
be
seamless
and
we
can
have
parity
with
all
the
other
cloud
providers
and
our
plan
is
to
maintain
this
entry
CCM
until
the
out
of
tree
CCM
is
GA
and
q3
2019.
So
we
expected
application
period
of
two
releases,
we're
also
working
on
the
out
of
tree
CCM.
F
We
expect
to
extract
that
and
essentially
do
an
alpha
release
and
1.13
we're
also
helping
basically
move
all
the
cloud
provider
dependencies
from
kubernetes,
/
kubernetes
to
kubernetes,
utils
and
staging.
This
is
the
joint
work
that
we're
doing
with
all
our
cloud
provider.
Friends
and
then
there
are.
There
is
a
lot
of
work
to
be
done
to
scope,
end-to-end
testing
for
out
of
tree
CCM
and
also
integrate
that
CI
signal
to
test
grid.
That's
also
work
in
progress
and
we
plan
to
make
good
progress.
This
quarter
before
we
move
forward
in
q1
2019.
F
Apart
from
that,
we've
added
we've
realized
that
for
cig
AWS,
all
the
sub
projects,
as
well
as
the
e2e
test
signal,
is
completely
it's
fully
missing
and
we've
had
multiple
requests
from
people
to
do
something
about
this.
So
for
now,
what
we've
done
is
we've
added
an
AWS
tester
plug-in
into
test
infra.
It's
basically
a
cube
test,
deploy
our
interface
and
what
it
does
is.
It
creates
an
ephemeral,
ETS
cluster
to
run
kubernetes
e
to
e
tests,
all
as
periodic
jobs.
E
All right, I'm going to go over the items that we delivered in 1.12, what we're going to deliver in 1.13, and what we're working on these days. One of the major things that we did in 1.12 was to improve scheduler performance; basically, we increased the scheduler throughput. Two major features were delivered to that end. One is the percentage of nodes to score: the scheduler used to score all the nodes of the cluster every time, for every pod that it schedules.
E
The throughput of the scheduler has increased as a result. In order to do this, we implemented a fairly sophisticated iterator that goes over nodes in various zones, regions, and failure domains to ensure that we give all the nodes in the various failure domains a fair chance of getting considered for each pod. Another important performance improvement...
E
Implemented
in
112
is
affinity,
Aneta
affinity.
We
have
seen
over
a
hundred
x
currents
in
movement
after
these
algorithmic
improvements.
This
was
one
of
the
major
performance
issues
that
the
scheduler
had
and
with
these
new
additions.
One
of
these
players
hurdle
in
improving
performance
of
the
scheduler
or
using.
E
We're
specifically
talking
about,
inter
part
and
event,
is
now
removed.
We
graduated
painting
note
by
condition
to
bed
beta,
so
this
is
a
feature
that
allows
nose
to
get
painted
when
certain
conditions
happen.
For
example,
when
did
not
ready
or
there's
a
network
issue,
and
so
on
so
we
add
to
denote
so
this
helps
us
make
the
logic
similar.
E
Pods can then have a toleration, or not, for a given taint, and this helps manage pods more efficiently.
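As a concrete sketch of that interaction, a pod can tolerate the not-ready condition taint for a bounded time before being evicted (the key below is the standard condition taint key; the 300-second value is just an example):

```yaml
# Illustrative pod-spec fragment tolerating a node condition taint.
tolerations:
  - key: node.kubernetes.io/not-ready
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 300
```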
Another feature that we delivered in 1.12 is the image-locality priority function. What it does is basically prefer nodes which already have the images for a pod. Say a pod has arrived: the scheduler considers all the nodes in the cluster, and some of the nodes are feasible; among those nodes, a node that already has some of the images, or a certain percentage of the images, that the pod needs is preferred.
E
This
is
actually
a
kind
of
like
a
controversial
feature,
because
some
people
believe
that,
if
they're
or
let's
say
nodes
that
in
other
that
are
in
the
cluster-
and
let's
say
you-
you
schedule
the
first
part
of
a
replica
set
on
that
node
they're,
not
notice,
gonna
get
the
images
of
part
and
then
all
the
other
parts
make
it
automatically
attracted
to
that
node.
We
ensure
that
you
properly
set
the
weight
of
this
priority
function
versus
other
spreading
priority
homes,
ensure
that
all
the
nodes
of
reasonable
chance.
E
We
also
point
lies
the
design
of
the
scheduling
framework.
I,
don't
know
people
are
familiar
with
this
effort
or
not,
but
the
idea
of
the
scheduling
framework
is
that
they're
making
changes
to
the
scheduler
architecture
so
that
most
of
it
doesn't
become
like
plugin.
For
now
we
have
focused
our
process
plug-in.
Basically,
these
are
the
plugins
that
are
placed
in
seven
directories
and
the
scheduler
code
and
are
compiled
the
scheduler.
E
But
the
idea
here
is
that
for
those
people
to
want
to
customize
the
scheduler,
they
don't
need
to
worry
about
the
rest
or
the
interpret
in
these
five
years
and
the
rest
of
the
scheduler.
Hopefully
they
don't
need
to
do
much
really
when
they
are,
they
want
American
changes
in
the
schedule,
their
cadres.
E
All right, for 1.13 we're trying to finalize the design of gang scheduling and have an early prototype. Gang scheduling, or coscheduling as we call it, is a feature that allows us to schedule batch jobs more efficiently. A lot of batch jobs need to get scheduled together. For example, there are some machine-learning workloads that cannot progress if they are not all scheduled together: you either have all the pods running at the same time, or, if you schedule only nine of ten, they're not going to progress and they just consume resources for no reason.
E
So
that's
the
idea
behind
gang
scheduling,
we're
trying
to
basically
implement
this
so
that
either
all
need
part
of
the
require
schedule
together
or
none
of
them
they're
also
finalizing,
participating,
poly
teams.
The
scheduling
policy
is
our
set
of
policies
that
allows
administrators
to
that,
allow
administrators
to
define
some
politics
or
the
placement
of
parts
unknown.
E
For
example,
an
admin
might
want
to
say
that
part
in
this
particular
namespace
should
never
get
scheduled
on
a
particular
zone
and
things
of
that
sort
or
they
may
they
may
want
to
prevent
you
this
from
setting
7
toleration
on
their
parts
and
so
forth.
So
these
are.
These
are
some
of
the
stuff
that
are
we
working
on.
This
is
actually
we
were
working
with
the
sig
policy
workgroup
to
finalize
this
design.
We've
been
able
actually
I
like
reading
over
most
page
designs
and
we
haven't
finalized
ideas,
but
yours
are
out.
E
There
is
a
get
out.
That's
the
word.
We
are
duplicating
critical
thought
annotation
in
113
critical
thought.
Annotation
was
an
astral
feature
that
was
enabled
by
default
in
opinion,
probably
come
on
face.
It
was
used
for
marking
some
pods
as
pitiable,
so
most
mostly
our
system
part.
Basically,
we
are
getting
marked
as
critical
in
a
parts
with
this
particular
annotation,
but
now
that
we
have
priority
and
preemption,
we
no
longer
need
that
future
as
a
result
with
deprecating
it,
and
we
are
hoping
that
we
can
completely
remove
it
in
140.
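The replacement flow uses PriorityClass objects that pods reference via `priorityClassName`. A minimal sketch (the class name and value here are illustrative):

```yaml
# Illustrative PriorityClass; pods reference it via spec.priorityClassName.
apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: example-critical
value: 1000000
globalDefault: false
description: Priority for components that should preempt lower-priority pods.
```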
E
We
are
enabling
another
priority
function
for
this
edge
way.
That
allows
allows
the
scheduler
to
prefer
notes
that
can
hardly
limit
those
scheduled.
He
cares
about
how
to
request
so
odd
request.
It's
on
a
node
to
know
it.
It
consider
the
feasible
overall,
assuming
that
the
pod
doesn't
have
any
other
scheduling
requirements,
but
now
we
are
also
adding
this
priority
function
that
prefers
nodes
and
also
it
deploys
limits.
E
We
are
also
implementing
a
couple
of
extension
point
as
a
part
of
implementing
the
scheduling
framework
in
113.
These
are
just
incrementing.
The
extension
points
who
do
you
know
and
I
have
any
plugins
for
these
extension
on,
given
that
113
is
supposed
to
be
more
costly,
like
released,
we
are
working
on
also
attending
the
equivalent,
as
we
had
its
equivalents
cash
equivalent
cash
in
this
scheduler,
which
was
an
app
or
features,
but
our
indications
show
that
it
was
not
in
too
the
scheduler
group
with
much
with
adding
major
complexities.
G
Let me know if you see it. Yep, we're good. All right, I'm actually not going to go into presentation mode; I'm going to toggle with my tabs at the top. All right, hi everyone. As Tim said, my name is Paris and I work at Google. I am here today to do the Contributor Experience community update. First things first, I'm going to start with what we did last cycle and what we've been up to in general.
G
One
of
our
major
active
themes
is
making
your
life
easier,
whether
that's
a
chair
tech
lead,
a
casual
contributor,
first-time
contributor,
etc,
and
we
do
that
by
automating
things
where
we
can
document
he
documenting
them,
where
we
can
automate
them
mentoring
in
many
different
areas
and
events
and
all
kinds
of
other
fun
stuff
that
you're
gonna
hear
about
in
a
second.
The
one
major
thing
that
we
did
at
last
cycle
was
create,
distribute
and
ultimately
complete
the
survey.
The
survey
was
very
extensive.
G
We
got
a
lot
of
feedback
that
it
was
very
long
hopefully
in
the
future
will
not
be
as
long,
but
we
wanted
to
get
as
much
comprehensive
information
that
we
possibly
could.
And
yes,
pretty
graphs
are
coming
they're
on
the
way
and
I
do
have
a
little
snapshot
for
you,
and
this
is
one
graph.
One
of
the
questions
was
what
are
blockers
for
you
in
the
campaign,
the
contribution
process
and
we
broke
them
down
by
level
because
we
asked
people
what
their
level
was
as
well
for
the
survey.
G
So
you
can
see
from
this
that
approvers,
for
instance,
think
that
debugging
test
failures
is
their
biggest,
the
biggest
blocker
that
they
have
in
the
contribution
process,
and
then
also
you
can
see
that,
let's
see,
let's
pick
on
orga
members
also
probably
feel
the
same
as
well.
I'm
sure
that
everybody
loves
a
good
test
test
of
bugging
session
every
now
and
then
but
yeah.
These
are
the
charts
that
will
eventually
come
out.
They
have
a
lot
to
do.
G
They
have
a
lot
to
go
with
design,
but
they
are
coming
just
some
quick
bits
of
information
that
we
learned
from
from
the
survey
a
meetup
serratus
code.
For
us,
we
did
get
a
lot
of
people
who
said
that
meetups,
you
need
a
need,
some
attention,
guess
what
that's
the
ncf,
but
we
will
take
that
information
to
CNCs.
G
Another
thing
that
we
saw
a
lot
of
was
that
people
said
that
they
do
not
use
the
good
first
issue
or
Help
Wanted
labels,
because
they
didn't
set
the
issue
and
they
weren't
the
author
of
the
issue.
Yes,
you
can
apply
both
of
those
and
they
are
extremely
helpful
to
us.
If
you
do
do
that,
we
did
have
a
session
last
I
think
was
either
last
community
meeting
or
the
meeting
before
that,
where
Aaron
demoed
both
of
those
issues,
please
use
those.
We
also
found
out
that
slack
is
extremely
welcomed.
By
our
survey
takers.
G
New contributors really like the release section of the community meeting, and current contributors really like the announcements. So congratulations to the release team for always doing an awesome snippet for us in this meeting. And then we also had folks who said that we should fix the technical problems of Kubernetes. We are not that SIG, so I just wanted to let you know. I tweeted that one recently; it was such a funny one, and there are plenty of other funnies.
G
The
data
is
there
in
that
link,
feel
free
to
do
what
you
like
with
the
data
it
has
been
scrubbed
have
at
it
and
yes,
when
the
pretty
graphs
are
official,
we
will
do
a
blog
post,
who
also
do
an
announcement
on
qdubs.
So
please
look
out
for
that
all
right.
We
did
a
steering
committee
election,
we're
not
going
to
go
into
that
just
because
of
time.
We
make
github
management,
an
official
sub
project
click
there
for
the
complete
list
of
services.
We
created
a
team
of
admins.
G
That's
awesome
thanks
to
Christoph
Rose,
hard
work
here:
Erin
Bob,
Killian
stuff
that
Steve
and
so
many
folks
that
have
been
helping
us
with
that
we've
created
a
project
board.
We've
cut
our
meetings
down
by
20
to
30
minutes
less
because
of
the
way
we've
been
doing
a
little
bit
more
project
management,
stuff
and
we've
also
created
a
new
meeting
time.
That's
friendly
Tory
Asia,
not
locations
and
contributors,
and
that
worked
out
really
well.
We
had
about
13
people
join
us
at
8
p.m.
Pacific
on
the
fourth
Wednesday,
which
is
really
awesome.
G
We've
kicked
off
planning
for
Shanghai
new
contributor
workshop
and
the
Seattle
contribute
or
contributors
on
it,
which
I'll
get
into
more
detail
in
a
second.
We've
also
made
major
major
changes:
how
we
moderate
and
set
up
our
communication
platforms.
Yes,
we've
had
actors,
yes,
we
have
a
lot
of
spam
and
yes,
we
do
know
that.
However,
all
the
stuff
that
we're
do,
that
we're
progressing
forward
with
this
makes
our
community
as
safe
as
possible,
because
we
are
a
public
open
source
project,
so
we
should
be
public
as
possible.
G
However,
we
do
need
to
take
in
take
into
consideration
spam
and
things
like
that.
So
we've
applied
a
lot
of
our
kubernetes
dev
mailing
list,
moderation,
processes
and
procedures
to
all
of
the
mailing
list,
processes
and
procedures.
If
you're
a
chair,
please
get
up
to
speed
with
this
stuff.
If
you
do
not
know
how
to
kick
a
bad
actor
off
your
Zune
call
or
if
you
do
not
know
how
to
moderate
your
mailing
lists,
please
reach
out
to
us.
G
Other
things
that
we've
done
launched
regional
boards
for
discussed
our
grantees
that
I
oh
this
means
we
now
have
a
forum
that
can
be
located
anywhere
in
the
world
without
firewalls
or
other
garden
walls.
If
you
will
so
please
check
that
out.
That's
really
awesome
and
get
the
word
out
about
that
feel
free
to
post
there
as
well.
And
yes,
when
you
post
those
posts,
get
picked
up
by
Google
search
results.
We
continue
to
improve
our
regular
programs.
G
Those
are
things
like
office
hours
meet
our
contributors,
the
community
meeting
for
meet
our
contributors,
we've
added
a
steering
committee
session.
That's
an
AMA,
that's
really
wonderful!
We
did
a
code
based
tour
with
stuff
and
with
communities.
Kubernetes
time,
that's
already
have
like
five
five
hundred
plus
hits
on
YouTube.
It's
doing
awesome
community
meeting
this
one
here,
we've
done
an
announcement
session
and
kept
session
and
contributor
tips
section
so
constantly
improving
this
meeting.
G
We've
kicked
off
the
outreach
e
mentorship
initiative,
that's
Nikita
and
I
and
Brendan
burns
we'll
have
more
updates
in
a
second
about
that
Google
Summer
of
Code
started
and
ended.
Since
the
last
time
we
presented
Marco
presented
his
project
at
a
previous
community
meeting
Chuck
how
awesome
that
is
out.
G
This
is
the
kind
of
work
that
we
can
get
out
of:
Google
Summer
of
Code
students,
dead,
stats,
we've
cleaned
up
dashboards,
added,
more
definition,
so
the
bottom
of
the
charts
and
created
a
start
of
a
readme
and
also
regular
audits
of
our
communication
platforms
and
I
mentioned
this
earlier
with
like
the
bad
actors,
or
things
like
that.
This
takes
a
lot
of
time,
a
lot
just
to
audit
the
Xoom
accounts
that
we
have,
which
we
have
about
45.
It
takes
about
40
hours
at
least
of
work.
G
So if you are a chair or an owner, or technically anyone that's taking advantage of some of the infrastructure that we have, please get up to speed there, and we're starting to audit. One of the reasons why we do not have a public calendar right now is because of the bad actors, because bad actors comb Twitter for open Zoom links. This is a known Zoom issue. We've actually been working with their distinguished engineers on this issue to fast-track some security-related protocols.
G
If you see people tweeting our Zoom links, please tell them not to. There are other ways that people can get into our meetings if they really want to get into them, but tweeting them is not the best bet. All right, and I know I'm talking so fast, but our team does so much work, y'all; I'm just so excited about it. Upcoming cycles by sub-project: on general SIG stuff, we hope to have our charter merged this week.
G
We are continuing to build out project management aspects of our roles, and we're going to be using a lot of that survey data to do so. And then we're also, as Caleb mentioned, taking KEPs and design proposals out of the community repo, and we're going to be doing an intro and deep dive session at KubeCon Seattle. As far as contributor documentation, which is one of our sub-projects, we're going to be revamping the developer guide, for which we are recruiting and doing outreach.
G
So, for that: continuous improvements to the contributor guide, as always. We are launching a contributor site, and the CNCF is hiring us a contractor to apply the theme; the back end is largely complete. And then more codebase tour videos, for which I've included a link in there, and I would love to get everybody's feedback on what you want us to do codebase tours of, because they are so popular. On the events sub-project, we worked hard on contributor summits. The contributor summit in Seattle is already wait-listed.
G
That means we have over 400 people that are interested, and I'm going to tell you all in a minute what we need from you there, but this is going to be a really awesome one. We have many tracks: a hands-on track, birds of a feather, a new contributor workshop, etc. We've also added on a night beforehand, because everybody said the hallway track is the best track and we like to hang out with y'all, so that's an addition this time. And then we've also added a new contributor workshop to KubeCon Shanghai.
G
That's now sold out as well, plus there's a current contributor get-together afterwards. Please check the kubernetes-dev mailing list for anything related to the new contributor workshop there. And then, communication stuff that we've got going on: we are creating a guide to communicating with Kubernetes. This is going to be an update to the communications.md doc. We've been doing a ton of work on discovery of all of our communications.
G
We have over 200 Google Groups; that's just to give you the tip of the iceberg. We have all intentions of improving chair and tech lead processes, which is managing your micro-communities. This is updating your docs. This is making sure that you do not have as much administrivia as you do currently. We're currently finishing and testing a new Zoom-to-YouTube automation process, so you don't have to manually upload those things. We're also testing a new calendar and shared docs process, and so much more.
G
But of course, we plan to continue to migrate towards automation for new members of the organization, and again, big thanks to the folks who volunteered to sit on that team. On mentoring, we're adding Nikita to the OWNERS file there, because she's awesome and she does so much work for us, and we're gonna be continuing to think about ways to grow people into OWNERS files. There's a ton of issues; we would love to get a lot of your support and feedback here. And then we're planning on launching the MVP of the one-on-one hour, our improved mentoring.
G
All of those issues are there to get everybody on board with that. This is the stuff on how we grow people, y'all, so it's very important that we have all hands on deck. If you are not mentoring right now, please see me so that we can get you involved in a program. A lot of the programs that we have established take your time into consideration. The survey came back with a lot of you
G
That said that you just don't have the time to mentor. Guess what: a lot of this stuff means only one hour of your life every quarter. It's very doable if all hands are on deck. So if everyone that was eligible to vote, for instance, all 700 people, were a mentor, then we would have triple that in people who can help review and approve PRs. So please, let's try to do better with mentoring. So how do these plans affect you? Again, I kind of just hit on the first bullet.
G
All of these mentoring programs will affect how contributors are moving up the ladder. Give us feedback. Help us recruit mentors. Sign up to mentor here. The requirement for being a mentor: you only need one merged pull request. Why only one? Because any and all should be mentors, so if you've only merged one, that means you can mentor a new contributor.
G
Please sign up. That was another thing that came back on the survey that I'll provide the data on: there were so many people that said they didn't know enough to mentor. If you've had one merged PR here, you know enough to mentor. We're also going to communicate even more. We learned from the survey that there were plenty of you that didn't know about certain things that we feel like we communicate a lot about. Yes, we know that we have a lot of communication channels.
G
So if you feel like we're being spammy, please let us know, but these are sort of our main communication arteries. So if you feel like you're not hearing enough about contributor experience, check to see if you're on these channels, and if you're not, then get on them. If you're a chair, a tech lead or a sub-project owner, this is super critical for you. If you plan on going to the contributor summit in Seattle, please reach out to us at community@kubernetes.io as soon as possible.
G
We only have a limited number of spots left, and those are saved for those folks. And then also, when we do more exercises like new contributor workshops, Outreachy applications, etc. that are bringing in an influx of new folks, make sure that you are using your good first issue labels and your help wanted labels. They're poking around and looking for things to work on, and this is how you do it.
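As a rough sketch of what that poking around looks like in practice (the two label names are the standard ones used across Kubernetes repos; the example repo and the search-URL approach below are just one illustrative way to find labeled issues, not something described in the meeting):

```shell
#!/usr/bin/env bash
# Build a GitHub issue-search URL for one of the standard Kubernetes triage labels.
repo="kubernetes/community"   # example target repo
label="good first issue"      # the label name used across kubernetes repos

# Quote the multi-word label and encode its spaces as '+' for the query string.
query="repo:${repo}+is:issue+is:open+label:%22${label// /+}%22"
echo "https://github.com/search?q=${query}"
```

Opening the printed URL lists every open issue carrying that label, which is how new folks typically find starter work; swapping in `help wanted` for the label works the same way.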
G
So how can you help us? We have good first issue and help wanted labels in our repo, which is the community repo. File issues when something isn't right for you or you have a suggestion, and also use the role board for any roles that your SIG might have that are outside of the help wanted and good first issue labels, things like someone to make graphics for your decks, or help for meetings for SIGs.
G
It's really just endless; feel free to use that. There's plenty of people, like 17,000 to be exact, who are looking for work within Kubernetes. Chairs: please upload your meetings to YouTube, fix your Zoom settings, and don't forget about your community meeting updates. Where to find us: we have an open mic every Wednesday at 10 a.m. (I didn't fill in the UTC number there with the xxx), and then the fourth Wednesday of the month is now 8 p.m. Pacific. We have a Slack channel.
G
A
Yup, Paris, I know that's a lot of information there. We really appreciate what you and the SIG do to help grow the community. So next on the agenda, we only have a couple minutes: we've hit our one announcement, that KEPs are moving; we talked about that earlier with Caleb. That brings us to shout-outs.
A
Nikita would like to thank Dims for making a point of being friendly to Asia and the EU and setting the time for the meeting for that time zone. From Mohammed, a shout-out to mr. tables for helping with Kubernetes 101 in Bangalore; it looked like that was a very successful event, and I saw pictures with a ton of people there. Josh Berkus and Justin Santa Barbara, for continuing to be the difficult-test-failure resolvers, and also Lubomir (neolit123) for fast turnaround on kubeadm test failures.
A
I know the release team appreciates everything that folks do to triage issues. Further shout-outs: to Ben Elder for finally creating a big honkin' emoji; I haven't seen that yet, I've got to go look for it. Awesome. Liz, to bentheelder for going above and beyond to help get some KIND e2es and Docker tests working. Paris thanks Nikita, Roy Chi, Brendan Burns, Dims and many others for answering questions for first-time
A
contributors in the Outreachy Slack. A shout-out to Audrey Lim for tackling end-to-end test error messages as a first PR; it's awesome to see new contributors, especially on something potentially super complex like that. And one last one from me: Sally Ross, thank you very much for your note-taking today. With that, we are at the end of our agenda and basically at the end of our time slot. So thank you.