From YouTube: Kubernetes Community Meeting 20190221
Description
We have a PUBLIC and RECORDED weekly meeting every Thursday at 6pm UTC. The Kubernetes community meeting is intended to provide a holistic overview of community activities, critical release information, and governance updates. It also provides a forum for discussion of project-level concerns that might need a wider audience than a single special interest group (SIG).
Check this out for more information: https://github.com/kubernetes/community/blob/master/events/community-meeting.md
A
All right, hi everyone, and welcome to this week's Kubernetes community meeting. I hope you're all having a lovely Thursday. My name is Jonas Rosland and I'll be your host for today. Before we start: we do have a code of conduct for these meetings, and the meeting itself is recorded and streamed live, so make sure you don't say or do anything you don't want permanently recorded.

One quick note about the notes doc that we have: if you're in the Google Doc with the community meeting notes and you're not taking notes right now, please close it, because we're having some issues with Google Docs being slow for us with a ton of people in there. Today we have three SIGs presenting: we've got ContribEx, we've got AWS, and we've got SIG Scheduling. But first off we have a demo by Lucas from Mirantis, and he will show us how to run Kubernetes in Kubernetes. Lucas, are you ready to go?

B
I am.
Hello. I'm going to show you how to use Cluster API to run Kubernetes in Kubernetes: the new clusters' machines run as pods in the parent cluster, and you spawn new clusters there. So the first question is: why would you do that? Because you can then reuse what you already know and don't need to learn anything new. If you know how to manage a Kubernetes cluster, you can use its tools to spawn new clusters.
It also allows you to do multi-tenancy with Kubernetes. Each tenant gets their own cluster and can do whatever they want with it, and when they're done you can delete it. So it's quite easy, quite nice for testing, and you don't need to worry about leftovers in the cluster, because when you delete the cluster everything it contained is gone.
B
So
here
is
the
architecture,
so
basically
you
have
this
base
kubernetes
cluster,
that
you
have
installed
docker
and
will
clot,
which
cut
is
the
same
as
docket,
but
for
Hewlett
one
machine.
So
it
allows
you
to
run
with
machines
as
pots.
So
it
gives
you
another
more
security
and
to
do
to
do
persistent
volumes,
you
can.
We
are
using
ok
off
to
do
load
balancer
we
use
metal
bit
for
to
expose
them
IPR
server.
We
use
nginx,
ingress
and
external
DNS,
so
you
can
access
it
place.
Here are the links, so that's all from the slides, and let's see the demo. Spawning a new cluster takes around 5 minutes, but I don't want to risk it taking longer and running over time, so I have already created two clusters. To see the clusters, I just run kubectl get cluster. Cluster API works in this way: for one namespace you can have one cluster, so the other cluster is in the next namespace. So I have two clusters, each in its own namespace.
So, as you can see, the pod is just standard notation, just standard components. I'm using annotations, and because it's a machine, I'm using cloud-init user data to pass instructions to the new machine on how to spawn the new cluster. So the instance runs Docker, a kubelet and kubectl, and uses a script with kubeadm to init the cluster and then join the cluster. Nothing special here, just normal, standard components and pods to access the new cluster.
Well, it's still a work in progress: the IP is from inside the cluster, so it will not work, but the keys are here. So in the cluster I have a Helm chart which basically creates an example application with MySQL and WordPress. I'm using this one because it creates persistent volume claims and uses a load balancer.
So I'm listing the deployments, services, claims and persistent volumes. The deployments are in progress, the service is waiting for a load balancer IP, and the persistent volumes are already bound using [inaudible]. So let's request one more time, and as you can see, an external IP was assigned (it's the load balancer IP), and if I list...

D
Hi, I'm Aaron of SIG Release. Today's cat t-shirt demonstrates (I don't know if you can see this) just how amazingly excited I am for the release. Why am I so excited? Well, burndown is coming next week. Burndown, as some of you may be familiar with, is the period during which we have more meetings, because that's how we improve productivity. The release team will start meeting Monday, Wednesday and Friday, in addition to the European-time-zone-friendly release team meeting that we have on Tuesday.
D
So
that
means
that
we
start
to
kind
of
dial
things
down.
Tighten
things
up,
use
whatever
metaphor
or
analogy
works
best
for
you,
but
you're
going
to
start
to
notice
more
activity
on
your
issues
and
pull
requests.
You're
going
to
start
to
see
more
questions
like.
Are
you
really
really
sure
you're
going
to
land
this
in
the
1:14
time
frame?
Are
you
actually
fixing
this
bug?
Do
you
really
want
this
enhancement
things
of
that
nature,
and
this
is
just
generally
where
the
rubber
meets
the
road
in
that
regard.
One of the other prerequisites to going into burndown is that we now actually have a release branch that corresponds to 1.14. I guess I can share my screen real briefly, let's see here. Basically, thanks to the hard work of [inaudible] and San Liu and [inaudible] (I can't really pronounce your name correctly),
I'm very sorry. Let's go to TestGrid, and let's go to sig-release-1.14-all. You can see that we have a TestGrid dashboard with literally all of the jobs that use the 1.14 release branch, and you can see that not all of them are really doing so great right now. That's okay! We have a smaller subset of jobs that we use; it's the 1.14-blocking dashboard, and again you can see that some of them aren't really happy, and some of them are a little flaky.
Going to the notes I'm not supposed to be looking at: I just love that everybody's talking about that. Okay, so part of the tightening down is to really take a good, honest, genuine look at your KEPs. I ask that everybody kindly put in things like a test plan, things like upgrade and downgrade considerations, and things like graduation criteria, in the form of a checklist that is really consumable by the release team. If we start looking and find that those are not there, we're gonna have that conversation.
Let's see, something else to be aware of: because burndown is coming, that also means you should brace yourself, because code freeze is coming not that long after that. Thursday, March 7th is code freeze. On the enhancements front, I just wanted to talk through a couple of enhancements that don't seem to have KEPs listed in implementable state, which was one of the requirements we kindly asked for prior to enhancement freeze, and then again prior to the extension of the enhancement freeze deadline just for KEPs.
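The check being described (is each enhancement's KEP marked implementable?) can be sketched as a small script. The front-matter format mirrors real KEP metadata, but the KEP names and file contents below are illustrative, not the release team's actual tooling:

```python
# Toy sketch of the release-team check described above: scan KEP texts
# and flag any whose front matter does not say "status: implementable".

def kep_status(kep_text):
    """Extract the status field from simple YAML-ish front matter."""
    for line in kep_text.splitlines():
        if line.startswith("status:"):
            return line.split(":", 1)[1].strip()
    return None

def needs_follow_up(keps):
    """Return names of KEPs that are not marked implementable."""
    return [name for name, text in keps.items()
            if kep_status(text) != "implementable"]

keps = {
    "node-user-namespace-remapping": "title: remapping\nstatus: draft\n",
    "kubectl-plugins": "title: plugins\nstatus: implementable\n",
}
print(needs_follow_up(keps))  # ['node-user-namespace-remapping']
```

A KEP whose status is missing entirely is flagged the same way as a draft one, which matches the conversation above: no implementable KEP, no enhancement.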
So at this point, if we asked you: could you please change it from a word like "draft" or "accepted" or whatever to the actual word "implementable", and you haven't, that's going to give us pause: are you really being responsive here? Are you really trying to push your issue forward? So I thought I would draw attention to a couple of these here at the community meeting. Maybe the appropriate people are here and will take wind of this.
D
If
not,
maybe
you
know
who
they
are
and
can
go
pass
word
on
that
way
as
well
as
us,
you
know
contacting
people
via
slack
and
other
channels,
so
node
level,
user
name.
Space
remapping
is
something
that
just
straight
up
doesn't
seem
to
have
an
actual
cap.
So
we've
asked
about
maybe
getting
a
kept
in
going
through
the
exception
process,
if
at
all
possible
the
updated
plug-in
mechanism
for
cube,
CTL,
I
I,
know,
there's
a
design
proposal
linked
here.
Plugins, drivers, what have you, out of tree: we should find some appropriate forum to talk about this stuff, especially for end users who want to know what else has been going on in the world of Kubernetes between 1.13 and 1.14. So we'll find a way to keep track of this. My suggestion would be to consider having this be some content we post on the Kubernetes blog, because that is certainly a forum for all things Kubernetes. Kubernetes, kubernetes, kubernetes, kubernetes, kubernetes. How many more times can I say the word?
Finally, the kubelet resource metrics endpoint tracking issue and the persistent volume taint and toleration issues both went through the exception process, but I have yet to actually see their KEPs land as implementable. So if you are somebody involved with these features, or you know anybody involved with these features, please reach out, or expect more noise from us, or expect them to not land in the Kubernetes release. I'd like to introduce you to my favorite mascot, the Nope Cat.
C
Ready? I am going to push this button and hopefully things will work. Awesome, great. Okay, so hi, I'm Katherine, part of SIG Testing. This is Gubernator, which you may be familiar with if you have ever clicked on a "details" link from GitHub: it is our tool that shows you all of your logs and why your tests didn't work.
C
We
are
intending
to
replace
this
with
a
new
tool,
as
you
can
see
from
our
bright
yellow
banner
at
the
top,
which
will
look
something
like
this
one
featuring
an
exciting
dark
theme.
There.
C
It's
also
I
think
generally
easier
to
look
around.
It
has
all
the
same
details
like
we
have
this
giant
table.
We
had
condensed
into
one
sentence
which
tells
you
the
parts
you
actually
care
about,
but
you
can't
see
the
whole
table
if
you
would
like
to-
and
you
know
what
it
means,
we
still
show
you
your
test
failures,
but
we
do
it
with
a
list
of
tests
and
you
can
expand
them
if
you
are
interested
in
seeing
what
they
have
to
say
and
similarly
we
can
still
see
all
your
logs.
So this is basically the same thing, but it's easier for us to maintain, it's more extensible if other people want to add things for their specific jobs, and it is somewhat fitting, I think. I have a lot more to say here, but if you have further opinions on this, please send them to SIG Testing on Slack. Otherwise, we hope to make this the default by Monday. I should mention that Gubernator itself is not going anywhere and will still be behind the PR dashboard; it's just this page that we're replacing.
E
We do have a bunch of awesome volunteers, and every time I give an update I always want to start off by thanking the people that do tremendous work for us, a lot of the glue work for this project; they definitely deserve shoutouts on a daily basis. So shoutout to this entire SIG, everybody's awesome, thank you. We did a lot last cycle. You're gonna see that I probably have content to power another 45-minute meeting, but I'm gonna talk in ten minutes, so really quick: some boring stuff that should be highlighted.
We've merged our charter, whoo. This charter is important, and I wanted to point it out because there is a communication structure that we follow there. A lot of folks always ask: how do we find out more about contributor experience programs? How do we find out about automation changes? Catherine just demoed, and you heard how we go through automation changes and dashboard changes and things like that with SIG Testing; it's a very similar process. We email k-dev, etc., etc.,
and Reddit. That's how we keep you updated, and if you want to be updated, you should get on those communication platforms. Nikita is a new tech lead for us. Nikita works her butt off, and it's definitely well deserved. Garrett has stepped down into an emeritus position. Garrett also did tremendous work for us to kick this SIG off, so definitely shoutouts to Garrett for all of his hard work, and to Nikita, both future and past.
We did continue our APAC-friendly meeting time once a month. We actually have more folks from the United States in that time zone than APAC people, apparently, but we are looking to spread the word even further in those time zones. So if you're listening and you have companies or friends or other contributors that want to get involved in those time zones, this is a great way to get them into the process and get them on board.
As always, we do our regular programs. Regular programs: we're talking on this one, the Thursday community meeting; office hours; as well as Meet Our Contributors, which is mentors on demand for all of your upstream questions and desires. Everyone loves to know what the favorite colors of our founders are, and why their test is flaking, all types of awesome fun stuff. Join us; we need all kinds of help and support for those things.
So if you like how this is run, or would like to improve this meeting, come and help us. Same with Meet Our Contributors, same with office hours. And we do take feedback very seriously; we make a ton of different process improvements to these programs based on the feedback from the community. We also, last cycle, booted a lot of spammers, a lot of folks with bad behavior, and some bad actors.
We moderate over five platforms that have well over a hundred thousand people in membership. Of course, that's not unique numbers, but these are people that are engaging on all these platforms. Some of those platforms include the Zoom we're on right now, Slack (which we'll get into in a second), Google Groups, and all kinds of fun stuff.
Other things that we worked on: contributor documentation work, which is a big part of our SIG. Last year, when we were building the contributor site and we were using Netlify to surface some docs, we realized our docs were not very web friendly. So we picked up a content strategy mission, which we'll talk about in a second when we go to that subproject. But in that vein we've made a ton of different improvements already, things like creating a style guide, thanks to Mr. Bobby Tables, Bob Killen, for helping us with that.
E
That's
mirrored
very
similar
to
the
docs
website
style
guide.
Again,
this
is
going
to
help
us
surface
documentation
that
makes
it
web
friendly
and
makes
it
more
appealable
to
its
consumers.
We're
G
duping
a
ton
of
government
governance,
Doc's
right
now,
taking
technical
stuff
out
of
policy
and
just
generally
making
more
links
to
canonical
sources
instead
of
narrative
type
documents,
we've
made
some
updates
to
the
membership
document
that
are
great
for
everybody,
on
the
call
who's
not
currently
not
a
current
member
or
is
the
current
number
wants
to
go
to
the
next
ladder.
E
We've
included
some
examples
and
they're
Nicki
to
put
that
in
there
thanks.
Nikita
we've
also
made
a
lot
of
moderation
updates,
which
we'll
get
into
the
future
State
in
a
second,
but
these
are
how
we
get
moderators
things
like
that.
George
and
a
ton
of
our
other
admins
have
been
helping
us
out
with
moderation,
platforms.
E
Communication
guideline
updates
again
like
what
I
said
with
spammers
and
stuff
like
that
feed-in
a
ton
of
our
guidelines
and
how
we
interact
with
these
platforms
and
why
we
interact
with
them
so
check
things
like
that
out.
We've
also
created
a
need-to-know
chairs.
Tech
needs
email
out
of
the
just
sheer
amount
of
information
that
comes
out
of
this
project,
and
these
75
plus
people
that
run
their
micro
communities
need
information.
So taking down the logistical items and tactical items in titles: those things need to come down and get archived, and things along those lines. Jorge has been helping us a ton there, thanks Jorge. Even more from last cycle, I can't believe I'm still on this slide: oh yeah, we had our first Shanghai contributor summit. There's a blog, and clearly I didn't even update the attendee numbers; I think we had like 75 new contributors there. Josh can probably yell
E
If
he's
on
the
line
right
now,
thanks
to
Josh
and
team
for
kicking
that
first
shanghai
contributor
summit
off,
and
then
we
had
our
big
show
in
Seattle
in
December,
we
had
our
first
big
chair,
training
event
that
George
and
I
did
and
we're
going
to
do
another
one
and
put
it
to
YouTube
fifteen
event:
volunteers,
310
people.
It
was
a
show
all
right
now
upcoming
plans
and
how
this
affects
you
by
our
sub
project,
general
SIG's,
stuff,
we're
doing
more
project
management.
We
hope
you
will
too,
if
you're
listening
cigs.
E
This
is
going
to
help
us
tremendously
in
a
couple
of
ways,
one
more
opportunities
for
people
that
come
to
us
or
for
contribution
opportunities.
We're
going
to
be
able
to
see
that
because
we'll
have
different
tasks,
we're
also
creating
more
roles.
This
is
something
that
I'm
imploring
the
community
do
to
think
of
your
SIG's
as
a
small
department.
E
So
that's
up
on
the
on
deck,
for
us
more
events
are
saying:
does
a
ton
of
them
at
this
point,
we've
got
barcelona:
shanghai,
san,
diego,
coming
up
barcelona,
May
Shanghai
in
June,
San
Diego
in
November.
We've
got
tons
of
roles
that
are
open
for
these
events
come
and
see
us
and
I'll.
Have
my
contact
will
have
our
contact
information
at
the
end
for
the
what
come
and
see
us
actually
means
interpreter
documentation.
We
are
constantly
improving
this.
E
This,
of
course,
is
somewhat
of
a
tragedy
of
the
Commons,
where
contributors
will
always
update
this
stuff,
which
does
sometimes
allude
to
duplicate
information
and
end
links
that
might
die,
and
things
like
that.
So
we're
constantly
making
changes
to
these
Doc's
and
always
looking
for
your
suggestions
on
how
to
make
them
better.
E
Of
course,
actually
let
me
go
back
to
one
thing,
which
is
the
contributors
like
one
of
the
things
that
we
are
trying
to
do
with
the
contributor
site
is
make
make
it
so
that
documents
that
need
to
be
discovered
are
discovered
a
lot
of
the
times
we
have
contributors
come
to
us
that
say:
hey
I
need
doggone,
XY
and
Z,
and
we
actually
have
the
dock
when
XY
and
Z
it's
just
buried.
So
if
you
feel
like
there's
dock
sets
out
there
from
a
contributor
from
a
contributor
perspective
that
need
to
be
surfaced.
tell us. Communication: this is again where all of our subproject documentation and guidelines live for this area. The Slack update is: yes, we had a bad actor incident on February 3rd. That incident included several folks who, I guess, bombarded the Slack inviter, spamming kubernetes-users with several pornographic images. We have reached out to Slack, because we believe that there's a vulnerability; we have reached out to HackerOne as well and filed a report, and done all the due diligence there.
The inviter is still down, because we have not received an appropriate word from Slack as to what's going on, and we intend to keep it down until we figure it out. This has also opened the door for us to figure out who actually owns this tool, because now we realize 60,000 people are on this tool, and even if all 26,000 contributors that we've ever had were on it, who actually owns it, and who actually should be owning it? So we're going to the end user community on Tuesday at 8 a.m.
to ask those questions. And I know I'm, like, burning through time here, y'all. We do so much automation stuff. kubernetes/org has a lot of issue templates now; check that out, all your needs, one-stop shop. Mentoring: the impact here is great. Your growth, the project's growth, your health, being a good open source citizen. We are kicking off outreach.
We've got Google Summer of Code going on. I need help from folks to build out the one-on-one hour and group mentoring, which are ways that we can scale rapidly.
So please reach out to me if these things are near and dear to your heart; they definitely need some program management help. Before I go on to another topic: if you're a SIG out there listening and you want a mentoring or succession plan, I will, in our SIG, tailor one for you; please reach out to us. So, how can you contribute? Help us build out our SIG with the roles that we've got listed in the issue that's linked there: marketing folks,
all types of folks are needed, not just code. Be a moderator for us: this is a thankless position, and these folks are the reason many spammers don't get through, how about that? Be a mentor. And yes, you, the one with three months of experience: you can definitely mentor someone who has zero experience.
F
We're also working on improving the CI signal for all the subprojects and the releases that we do. We obviously provide user group support for any issues or feature requests that they might have, and then we also try to gather documentation along with it. A lot of improvement needs to be done on documentation, but that's a work in progress. In terms of the subprojects that we host: currently we have five subprojects, and we have requested the addition of two new subprojects that are related to the CSI drivers for EFS and FSx.
I'll talk more about that at a later point. In 1.13 we did alpha releases for the ALB ingress controller and the EBS CSI driver. I think Aaron mentioned that many of the SIG AWS projects, just because of the nature of what we do, are out-of-tree features. For now, we really don't have a good definition of what the release cadence should be.
So what we did is we just adopted the alpha, beta, GA release cadence used for in-tree features, and we've been using that as we move these subprojects from one release to another. So in 1.13 we did the alpha releases for these two projects, and also for cloud-provider-aws, which is the out-of-tree cloud provider, and now we're moving these two specific projects into a beta release. Feedback from KubeCon Seattle: we did get very good feedback on the demos that we did for the KubeCon update.
We also got feedback on providing a platform where people can collaborate and contribute, and where they can actually be involved in the issues more actively. So we have created a way of tagging issues and assigning those issues to new contributors. That's still work in progress, but we're trying to make some progress there and improve it. So, going into the details of the subprojects: the first one is the AWS ALB ingress controller. As I mentioned, we're gonna do a beta release for this in 1.14.
Obviously, with SIG Release, it seems like we wouldn't add the release notes in the in-tree Kubernetes release section, but we have updated the CI signal for the ingress controller. By that, what I mean is: initially we had only lint and unit testing, and right now we have end-to-end testing for the ALB ingress controller.
In fact, it's there for many of the subprojects that we have in SIG AWS, and the reason we are trying to follow that behavior, or discipline, is because users want to know which version we have tested the subproject with, and they also want to know that the code is stable and that they can use the same code in their production environments. Additionally, we have added some features and fixed some bugs in this release cycle. There's support for multiple certificates: auto-discovery of certificates from Amazon Certificate Manager is now possible.
There were some bugs that got fixed, for security group detach and a memory leak, and we've improved the documentation for the ingress controller. Also, we are updating the KEPs for SIG AWS; again, this is something that the in-tree release cadence follows, and we're just adopting it to have better discipline in terms of what's planned. Later on: a lot of our customers, or user groups, are asking for one ALB that can be used across ingresses that get created in different namespaces.
We can provide support for that probably in the next release, when we plan to go for GA. The code is already there, but we need to add some more testing for it. Also, we are considering proposing the controller as an out-of-tree feature, but that's still kind of in discussion and we don't know what that would mean, though most of the work is being done from [inaudible] here.
The second subproject is the EBS CSI driver. We have improved the documentation, and the KEP has also been updated. Also, in 1.13 the CSI spec was updated to 1.0 by the Google team, and we were on the older 0.3 spec; we just updated the driver to 1.0, and that will be live with the beta release that we have targeted for 1.14. In terms of what is planned, we're working with the SIG Storage team on the CSI migration of the in-tree EBS implementation, and most likely this work will get completed in Q4 2019.
F
The
idea
is,
we
have
to
figure
out
how
the
entry
volume
calls
will
seamlessly
translate
into
the
CSI
driver
cause
without
distracting
the
use
of
law,
so
that
work
is
ongoing
with
folks
from
a
despair
and
Google
and
the
courts.
Our
project
is
the
cloud
provider
AWS
project.
We
did
an
alpha
or
the
out
of
tree
ECC
in
1.13,
but
unfortunately
we
won't
be
able
to
hit
the
beta
for
this
particular
project
in
1.14.
But we have made some progress in terms of moving the cloud provider dependency from the core code base to utils and staging, and this is ongoing work. There is a bunch of documentation work that needs to happen, and I know Tim beats me up for this, but we haven't found the cycles to actually do it, and then we also have to add end-to-end testing coverage for this.
The plan for the in-tree cloud provider code is to maintain it fully until Q3 or Q4 this year, and then maintain a deprecation period of two releases until the out-of-tree cloud provider becomes ready. So that's the third subproject. In terms of the other subprojects in the SIG: we have an encryption provider which works with Kubernetes 1.11, but it doesn't have test coverage today, and we haven't applied the discipline that we've had with the other subprojects in terms of testing, documentation and a KEP.
So if anybody wants to help with this project and move it to a beta or GA status, please reach out; Justin and [inaudible] would be great folks to mentor them. The other subproject is the AWS IAM Authenticator. It's being used in production a lot, but it definitely needs better test coverage, and the back end is currently implemented using a ConfigMap; we want to change that implementation to CRDs, and that work has not been completed yet. So if somebody wants to help out, again, you can reach out to us.
Finally, CI signal status. Last quarter we added aws-k8s-tester, which is essentially a kubetest deployer interface, and it creates an ephemeral EKS cluster to run the Kubernetes e2e tests. All of these run as periodic jobs; they are not blocking. Currently we are using this interface to enable CI signal for all the SIG AWS subprojects, and all the e2e tests that we integrated for our beta releases are running as periodic jobs as well.
All this work was done by Gyuho and Shyam; I'm just the messenger in this particular initiative, but this was actually a lot of effort, and it's turning out to be quite fruitful in terms of the value to the subprojects as well as our CI. The chairs are me, Justin and Kristen. We're over time at this point; our home page and Slack channel are listed in case you have any comments, thoughts or any feedback. We'd love to hear anything that you might have to provide to us, and that's it.
G
We've made a lot of algorithmic optimizations that have resulted in about a 3x performance improvement, so the scheduler is now capable of scheduling about a hundred pods per second in 5000-node clusters, which is a very large improvement compared to what we had a few months ago, which was like 35 pods per second. So this is hopefully good news for those who run larger clusters, or who see huge spikes of pod creation in their clusters and want to schedule all those pods as quickly as possible.
G
We
are
moving
part
by
Orion
preemption
to
stable
version
in
114.
This
was
a
feature
introduced
in
1/8
as
an
alpha
feature
and
then
later
moved
to
beta
in
1:11
hot
priori,
and
preemption
allows
a
user
to
specify
priority
for
their
parts
and
once
high
priority
parts
need
to
be
scheduled,
and
there
is
not
enough
resources
in
the
cluster.
Then
the
preemption
logic
kicks
in
and
removed
some
of
the
lower
priority
parts
from
the
cluster
to
make
room
for
the
higher
priority
parts.
So,
basically,
it
provides
guaranteed
scheduling
for
higher
priority
parts.
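The preemption logic just described can be illustrated with a minimal sketch. This is not the kube-scheduler's actual implementation (which also handles PodDisruptionBudgets, graceful termination, nominated nodes, and much more); the pod names and CPU numbers here are invented:

```python
# Minimal illustration of priority-based preemption: if a pending
# high-priority pod doesn't fit, evict the lowest-priority running
# pods until enough capacity is freed; never evict equal/higher priority.

def preempt(capacity, running, pending):
    """running: list of (name, priority, cpu); pending: (name, priority, cpu).
    Returns (victims, scheduled): victims are evicted lower-priority pods."""
    free = capacity - sum(cpu for _, _, cpu in running)
    _, prio, need = pending
    if free >= need:
        return [], True            # fits without preempting anyone
    victims = []
    for victim in sorted(running, key=lambda p: p[1]):  # lowest priority first
        if victim[1] >= prio:
            break                  # only lower-priority pods may be evicted
        victims.append(victim[0])
        free += victim[2]
        if free >= need:
            return victims, True
    return [], False               # preemption would not help; evict nothing

running = [("batch-a", 0, 2), ("batch-b", 0, 2), ("web", 100, 2)]
print(preempt(6, running, ("critical", 1000, 3)))  # (['batch-a', 'batch-b'], True)
```

Note the last branch: if evicting every lower-priority pod still would not free enough room, nothing is evicted at all, mirroring the idea that preemption should only happen when it actually lets the high-priority pod schedule.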
G
We
have
done
a
few
improvements
to
improve
fairness
of
the
scheduler
and
also
its
stability.
So
in
the
past
we
had
seen
situations
where
a
higher
priority
pod
could
block
ahead
of
scheduling
in
in
clusters,
which
have
a
lot
of
churn.
Essentially,
what
the
scheduler
does
is
that
it
retries
positives
or
unschedulable,
when
there
is
an
event
in
the
cluster
that
could
potentially
make
these
parts
of
schedule.
For
example,
when
another
pod
terminates
in
a
cluster,
it
could
potentially
make
some
of
our
unschedulable
pods
schedule
a
ball.
G
So
in
these
cases,
imagine
that
there
is,
for
example,
a
lot
of
positing
terminated
in
a
large
cluster,
and
there
are
some
unschedulable
pods
which
have
higher
priority.
In
these
cases,
the
scheduler
will
keep
retrying
and
scheduling
some
of
these
higher
priority
pods
and
those
could
block
head
of
scheduling
essentially
and
depriving
some
of
our
lower
priority
paths
from
getting
scheduled.
We
now
have
a
back
of
mechanism
to
address
this
problem.
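Roughly, the backoff mechanism works like this sketch. The real scheduler maintains separate active, backoff and unschedulable queues; the delay values here are illustrative only:

```python
# Rough sketch of per-pod exponential backoff for unschedulable pods:
# each failed scheduling attempt doubles the pod's retry delay (capped),
# so a repeatedly failing high-priority pod stops hogging the queue head.

INITIAL_BACKOFF = 1.0   # seconds (illustrative values, not the real defaults)
MAX_BACKOFF = 10.0

class BackoffQueue:
    def __init__(self):
        self.delays = {}  # pod name -> current backoff delay

    def record_failure(self, pod):
        """Double the pod's delay on each failed attempt, up to the cap."""
        prev = self.delays.get(pod, 0.0)
        self.delays[pod] = min(MAX_BACKOFF, prev * 2 or INITIAL_BACKOFF)
        return self.delays[pod]

    def record_success(self, pod):
        """A successful (or abandoned) pod loses its backoff state."""
        self.delays.pop(pod, None)

q = BackoffQueue()
print([q.record_failure("hipri-pod") for _ in range(5)])
# delays grow: [1.0, 2.0, 4.0, 8.0, 10.0]
```

Because the retry delay grows with every failure, a high-priority pod that keeps failing is tried less and less often, leaving room at the queue head for the lower-priority pods it was starving.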
We also have more efficient processing of node status updates in the scheduler, which improves the efficiency of scheduling. We have added timestamps to make sure that pods which were recently retried are tried again only after we have already visited the other pending pods with the same priority; this too improves fairness of the scheduler. We have also focused on improving stability and reliability of the scheduler: we fixed a bunch of race conditions, and we have improved interactions with the autoscaler.
So these are what we've prepared for 1.14. With respect to other plans that we have for the future: we are refining and implementing the scheduling framework. If you're not familiar with this effort, the scheduling framework is a new design for the scheduler which essentially turns the scheduler into a framework and makes it much more pluggable, extendable and customizable. So pretty much all the features of the scheduler become plugins.
For this framework: the framework provides basic functionality, such as having a cache, running filter functions for filtering out nodes that are not appropriate for scheduling a pod, and providing some scoring functions, but it doesn't do anything beyond that. All the other features of the scheduler become plugins for this framework. So that work is in progress; we have built some part of it, but we are improving on the design and we are revisiting some of the implementations.
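The pluggable design can be sketched as a filter-then-score pipeline. The plugin names and the scoring below are invented for illustration; the actual framework defines many more extension points (PreFilter, Reserve, Permit, and so on):

```python
# Toy scheduling framework: the core runs registered filter plugins to
# drop infeasible nodes, then score plugins to rank the survivors.
# Every other scheduler feature would likewise be a plugin.

def fits_resources(node, pod):        # example filter plugin
    return node["free_cpu"] >= pod["cpu"]

def least_allocated(node, pod):       # example score plugin: prefer emptier nodes
    return node["free_cpu"] - pod["cpu"]

def schedule(pod, nodes, filters, scorers):
    """Return the name of the best feasible node, or None if the pod is pending."""
    feasible = [n for n in nodes if all(f(n, pod) for f in filters)]
    if not feasible:
        return None
    return max(feasible, key=lambda n: sum(s(n, pod) for s in scorers))["name"]

nodes = [{"name": "node-1", "free_cpu": 1},
         {"name": "node-2", "free_cpu": 4},
         {"name": "node-3", "free_cpu": 2}]
print(schedule({"cpu": 2}, nodes, [fits_resources], [least_allocated]))  # node-2
```

The core stays tiny (filter, then score, then pick the max); features like affinity or spreading slot in as additional entries in the `filters` and `scorers` lists, which is the pluggability being described.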
Another effort is gang scheduling. The idea here is that you have a number of pods that must be scheduled all together; if they are not scheduled together, they will not make any progress. So if you have, for example, 10 pods and we cannot schedule more than 5 of them, it's better to schedule none of them, so it's sort of "none or all". Gang scheduling provides that feature. It's already implemented with a new API in one of our incubator projects called kube-batch; feel free to check it out.
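The "none or all" semantics can be sketched in a few lines; this is a simplification for illustration, not kube-batch's actual API:

```python
# Gang scheduling sketch: admit a group of pods only if ALL of them fit;
# otherwise schedule none, so a partial gang doesn't waste capacity
# while making no progress.

def schedule_gang(free_slots, gang):
    """gang: list of per-pod resource requests.
    Returns the slots consumed (0 means the whole gang was rejected)."""
    need = sum(gang)
    return need if need <= free_slots else 0

print(schedule_gang(8, [1] * 10))  # 0 -> can't fit all 10, so schedule none
print(schedule_gang(8, [1] * 5))   # 5 -> the whole gang fits
```

Compare this with the default scheduler, which would happily place 8 of the 10 pods and leave them holding resources while the job as a whole cannot start.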
If you're interested, check it out. We're working on further optimization of the scheduler to improve its throughput; that's going to be ongoing for a while. We're also working on pod scheduling policies. These policies are essentially a set of new policies for clusters that allow admins to have finer control over what can be specified for pods, for example.
G
Today,
if
a
malicious
actor
in
a
cluster
wants
to
prevent
other
pods
from
getting
scheduled
in
one
zone,
they
could
potentially
put
a
and
now
they
could,
they
could
put
a
Anti
affinity,
for
example,
to
pretty
much
everyone
else
in
in
one
zone.
If
they
get
lucky
and
their
pods
is
scheduled
in
that
zone,
then
they
will
prevent
everyone
else
from
getting
scheduling
policies.
Pod
scheduling
policies
are,
are
there
to
address
some
of
these
and
restricts
people
from
putting
some
of
these
scheduling
requirements
on
their
pods?
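A policy of that kind could look like the following admission-style check. This is purely illustrative: the field names only loosely mirror the pod spec, and no such upstream API existed at the time of the talk:

```python
# Illustrative admission check for the abuse described above: reject pods
# that declare anti-affinity against a catch-all selector at zone scope,
# which would fence everyone else out of the zone.

ZONE_KEY = "topology.kubernetes.io/zone"

def violates_policy(pod):
    for term in pod.get("anti_affinity", []):
        broad = term["selector"] == {}          # empty selector matches everyone
        zone_wide = term["topology_key"] == ZONE_KEY
        if broad and zone_wide:
            return True
    return False

evil = {"anti_affinity": [{"selector": {}, "topology_key": ZONE_KEY}]}
ok = {"anti_affinity": [{"selector": {"app": "db"}, "topology_key": ZONE_KEY}]}
print(violates_policy(evil), violates_policy(ok))  # True False
```

A narrowly-scoped anti-affinity (the `ok` pod, which only repels `app: db`) passes, while the catch-all zone-wide one (`evil`) is rejected, which is the kind of finer-grained admin control being proposed.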
A
All right, thank you again. Let's move along over to announcements. We've got two announcements, and then we've got a few shoutouts to wrap this up. The first announcement is from SIG Docs, regarding RequestBin.com; I'm sure many of you are already familiar with that site. RequestBin.com has recently launched an updated hosted version of the tool with new features, including private bins with Google and GitHub authentication, the ability to pause the event stream, and an improved UI. Thousands of developers are using the new version today, and it's ready for public distribution.
So if you're familiar with RequestBin, or have never used it and want to try it out, they have a new and more feature-rich UI and experience for you, so yeah, check it out. We also have another announcement here: Aaron brought it up and added it in. The GitHub groups kubernetes-maintainers and kubernetes-release-managers are losing direct write access to the enhancements repo, and if you want more info about that one, Aaron also added a link to the issue there, so check it out. And now for shoutouts this week.
So this week's shoutouts started kind of as a thread last week. There was a shoutout from Duffie Cooley on the TGI Kubernetes webinar, or webcast, last Friday, and he wanted to do a shoutout to kind and the maintainers of kind, Benjamin Elder and James Munnelly. And then that came with a bunch of other shoutouts to everyone helping out with this, all the helpful hands with kind, which is Lubomir, [inaudible], Fabrizio Pandini, Jorge Alarcon, Jintao Zhang, [inaudible], and San Liu. And then another:
thank you to Benjamin Elder and San Liu for help with the creation of a new kind-based deployer for kubetest, and another shoutout here to Benjamin Elder for sharing good insights on how to publish Kubernetes projects. And yeah, that rounds out today's community meeting. Thank you, everyone, for attending.