From YouTube: Kubernetes Community Meeting 20180913
Description
The Kubernetes community meeting is intended to provide a holistic overview of community activities, critical release information, and governance updates. It also provides a forum for discussion of project-level concerns that might need a wider audience than a single special interest group (SIG).
https://contributor.kubernetes.io/events/community-meeting/
A: All right, good morning, good afternoon, good evening, wherever you are. This is the Kubernetes community meeting, and it will be posted publicly on YouTube, so please be mindful of that: anything you say or do here will be recorded and will be posted. My name is Arun Gupta. I work for Amazon, on the open source team here at Amazon, and I show up in SIG AWS often, you know, doing some fun discussions over there.
B: Yep, one sec, let me share my screen. So I'm sorry to everybody that's been on the contributors channel, that already knows all this stuff, but here you go again. So my name is [inaudible], and today I'm gonna talk to you about the Foqal Slack bot. It's a Slack bot that's installed on a few of the channels on the Kubernetes Slack instance. The goals of this thing are to improve user experience and focus contributor time. What does that actually mean? A lot of times, when a user joins the Kubernetes Slack channels, they come in, ask a question, and, whether it's because they're asking it at 2:00 a.m. on a Saturday or because the question is too big or whatever, a lot of times the questions just don't get answered. Sometimes they'll post again and still don't get an answer. And at the same time, contributors a lot of times spend time answering kind of repeat questions. In a study, I found that 30% of the time is what people feel like they're spending on just repeat questions, and 20% of the time they're...
B: And this is within my kind of test workspace, but imagine I'm a user and I come in and I ask, like, "Hey, how do I run a conformance test?" So what happens when I hit enter, without me having to do anything else, is that Foqal will use AI to determine that I'm asking a question, find the most useful answer, and send it back to me directly.
B
So
what
you'll
notice
here
is
one
I
didn't
in
any
way
invoke
focal
I,
don't
even
have
to
know
that
fact,
whenever
it's
simple
messages
directly
to
me
within
the
channel,
so
I'm
not
overpopulating
the
channel,
I'm
not
spamming
the
channel
with
many
messages
so
we'll
see.
Is
that
there's
a
couple
of
different
answers
that
are,
you
know
potentially
helpful
to
me
and
what
I
could
do
is
I
can
rate
these
channeling.
These
answers,
whether
they're,
actually
helpful
or
not.
B
So
if
I
click
not
helpful,
message
simply
disappears
if
I
click,
helpful
and
I
still
have
a
question,
then
we'll
record
that
feedback.
And
finally,
if
you
click
helpful
and
answer
question,
what
we'll
do
is
we'll
post
the
message
to
the
rest
of
the
channel
notifying
the
channel
like
hey?
This
is
already
answered.
You
don't
have
to
worry
about
answering
together.
B
So
basically,
this
is
a
very
short
kind
of
description
of
like
how
a
user
quickly
interacts
with
local
quickly
gets
their
answers.
Their
questions
answered.
So
let's
take
a
look
at
where
these
questions
are
even
coming
from.
So
a
bunch
of
questions.
Ninety
five
hundred
ninety
plus
ninety
six
hundred
questions
were
actually
downloaded
from
the
kubernetes
tagged
questions
within
Stack
Overflow.
The
next
section
17
closed
1,800
questions
were
downloaded
from
kubernetes
Docs.
B
Now
what
that
means
is
we
we,
when
we
are
showing
somebody
at
one
of
the
sections
of
Doc's,
we
don't
want
to
send
them
to
the
entire
talk
right.
We
want
to
only
show
them
the
most
relevant
section.
So
we
did
was
one
scraping,
the
kubernetes
Doc's.
We
actually
divided
it
into
smaller
sections,
so
that
one
since
one
we're
showing
an
answer
from
a
doc.
We
only
show
a
very
small
relevant
section,
and
this
the
last
part
of
where
these
things
come
from
is
previous
lot
conversations.
B: So at that point, Foqal will not only detect that Michael is asking a question, but it also detects that Kay is helping Michael with a relevant answer. So at that time, Foqal will send a message, again only directly visible to Kay, and ask her, "Hey, this is very useful. Can we store this for the future and show it to others?" At that point she can ignore the question, or she can edit the question and answer to provide more context and make it more useful.
B: Eleven questions were found useful, and those were useful nine times in total. So what's kind of interesting is: what is the most helpful answer? There's actually a tie between the most helpful answers, but I thought this one was a little bit funnier: somebody's basically having a problem with deleting something within a pod, and obviously the answer is just to, you know, insert a dash-dash ("--") into the command.
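The dash-dash fix described above is the standard end-of-options convention shared by most Unix CLIs, kubectl included. As a generic sketch (not the exact answer from the Slack thread, which isn't quoted in full), here is the convention applied to a name that would otherwise be parsed as a flag:

```shell
# Work in a scratch directory so nothing real is touched.
cd "$(mktemp -d)"

# Stand-in for an awkwardly named object: a file whose name starts with "-".
touch ./-mypod

# Without "--", the name is parsed as a bundle of flags and the command fails.
rm -mypod 2>/dev/null || echo "rm refused: -mypod was parsed as flags"

# "--" marks the end of options; everything after it is treated as an operand.
rm -- -mypod
```

The same idea is why `kubectl exec mypod -- ls /tmp` needs the `--`: it separates kubectl's own flags from the command to run inside the container.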
B: So what we have actually been doing recently is talking to the documentation community, the docs people, to see how we can work together and use the responses from Foqal and the feedback loop to improve the docs: kind of improve the searchability of the docs, maybe provide a bit more examples, and just see what we can do in the future. So the other thing that's interesting is that people not only, like, love it, but they actually even expect it to be there.
B
So
one
person
was
asking
a
question
and
this
users
skarbek
responded
the
focal
bomb
probably
already
spotted
to
you,
but
here's
some
additional
documentation
or
here's.
Some
additional
context-
or
in
this
case
edward
was
asking
you
know-
was
asking
question
and
finally,
at
the
end
of
this
question
kind
of
says:
well,
hopefully,
focal
can
help
with
this.
The
the
sad
part
is
that
the
channel
that
edward
asks,
this
question
on
was
not
was
was
not
already
signed
up
for
focal,
so
we
were
not
able
to
help
him
because
it
wasn't
installed
on
his
channel.
B
So
with
that
in
mind,
the
short
call
to
action
here
is
invite
focal
and
your
kubernetes
channel.
If
you're
in
the
kubernetes
slack
workspace
just
do
slash,
invite
focal
you
could
do
slash
focal
and
type
in
something
and
I'll
show
you
there's
a
bunch
of
different
ways
to
add
a
bot
to
a
channel
talk
to
me
if
you
don't
figure
that
one
out
I'll
help
you
and
we're
asking
you
to
invite
it
to
multiple
places,
so
first
invited
to
your
user
facing
channels.
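For reference, the two invocations mentioned read roughly like this in Slack (the exact bot handle is an assumption here; check your workspace's app list if it differs):

```
/invite @foqal           # add the Foqal bot to the current channel
/foqal how do I ...      # ask Foqal a question directly
```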
B: When you add it to your user-facing channels, we can do this whole, like, feedback loop, providing useful answers to people. And at the same time, we're saying you can add it to your, like, SIG channels, and you can store information such as, you know, whatever the most common questions there are, such as "When is the meeting time?" and "Where is the YouTube channel?" and so on. So to add the questions, you'll see, obviously, this dialog will pop up, and all we're asking is: maybe improve the context a little bit and click "store". So make sure to add your useful context and useful answers to Foqal. You can also manually add questions if you like: if you hover over any message in Slack, you'll see the ellipsis button. Click on that and you'll see "Add to Foqal", and you'll get a dialog about adding the question and answer to the bot.
B: Let me talk about partitioning. So one thing that we realized is that, like, you know, what time your meeting is for SIG ContribEx or something is probably not relevant to the kubernetes-users channel. So what we can do is create logical separations of information, and kind of finely control how data flows between these, so that a question might only appear within SIG ContribEx, or an answer might only appear within SIG ContribEx, and a different answer might only appear within kubernetes-users.
B: So you can control how this data flows. And finally, install Foqal everywhere else. If you're, you know, already in the Kubernetes Slack, it's already installed, so just add it to your channel. If you want to install it in a different kind of open community, let me know; the best way to contact me is either through email, or you can find me on the Kubernetes Slack just under my name.
B: So if you want to install it on any open channel, you can go to foqal.io/oss, or obviously you can install it within any company. What we do there is, instead of pulling from things like Stack Overflow and, you know, your docs, what we can do is pull from your wiki and JIRA and all these other things; so this is actually designed to mainly work within a company as well. So with that, I know I kind of ran through this quick. What questions do you have?
D: I'm Tim Pepper, your 1.12 release lead, with a couple of updates. Right now we are still in code freeze. We had our beta 2 release this week, and the release candidate will be coming out next week. We're working towards a release target of September 25th, but at this point that's starting to trend at-risk. We've had quite a bit of CI signal breakage, and we're making progress there, but it's been slow.
D
We've
done
a
lot
of
good
debugging
in
the
last,
even
just
14
hours,
probably,
but
because
some
of
our
tests
take
a
bit
of
time
to
run
and
we
basically
got
seven
workdays
to
the
release
target.
At
this
point
and
once
you
subtract
a
day
or
two
or
three
even
for
CI
signal
to
prove
that
something
was
actually
resolved,
we're
running
short
of
workdays
to
actually
accomplish
improvements,
so
we'll
make
a
call
on
Monday,
depending
on
how
things
pan
out,
based
on
yesterday's
debug
today's
and
tomorrow's
merges
and
the
weekends
test
results.
D
But
by
Monday
we
may
make
a
choice
to
slip.
The
release
a
couple
of
days,
I'm
still
relatively
hopeful
compared
to
two
maybe
last
week.
I
would
say
where
we're
trending
in
the
greener
direction
versus
the
redder
direction,
but
still
squarely
in
the
the
middle
yellow
state
of
risk,
so
stay
tuned.
We'll
have
some
updates
next
week
on
that
front.
Also
I
mentioned
that
we
moved
KK
to
tide
on
and
you
may
or
may
not
have
noticed
that
we
mentioned
it
last
week
in
the
community.
Meeting.
E: I'm right here.

A: Thank you guys. All right, awesome, take it away.

E: So essentially, on SIG Windows, we've been humming along. We got to the point where we have finished a lot of our functionality, in terms of the feature set that we wanted to provide, to be able to graduate our Windows support to stable for Kubernetes. Unfortunately, we've hit some hiccups in stress and performance and scale testing, and we are proposing that we are going to graduate to stable, but with 1.13 and not with 1.12.
E
So
we
are
right
now
finalizing
a
draft
of
our
feature
freeze
for
us,
so
we
can
actually
stop
working
on
features
and
start
concentrating
on
stability
and
conformance
and
and
start
also
finalizing.
Our
API
Docs
general
documentation,
how-to
guides
and
everything
else
as
necessary
to
graduate
to
stable
so
we're
putting
the
stop
on
new
features
and
concentrating
almost
100%
on
step
and
getting
us
ready
for
for
a
step
for
GA
with
1.13.
C: All right, this is the update from SIG Node; it is the last piece of the agenda, and I'm going to update on what happened in Q3. Sorry, can you go to the next one, please? Thanks. So we finally have finished the initial SIG Node charter, and the PRs are merged, and we now go through the new governance model as expected and defined by the steering committee. We still hold our meeting every Tuesday, 10:00 a.m. Pacific time, and the meeting slides...
C
Have
that
our
notes,
and
also
the
link
to
the
YouTube
videos
in
additional
and
actually
we
also
hold
the
sponsor
and
a
hold
and
our
bi-weekly
resource
management
work
and
every
Wednesday,
11
a.m.
and
they're
also
implement.
We
also
have
I
think
put
in
a
size.
Actually,
we
also
have
the
asian-pacific
signal
meeting,
because
there's
also
a
lot
how
work
in
well
with
the
different
winters.
So
especially
on
the
one,
the
earlier
stage
on
the
company
rent
time
implementation
and
the
CI
implementation.
Can
we
go
to
the
next
slide,
Thanks
so
doing
in
the
q3?
C
We
reviewed
our
original
signal,
the
scope
and
update.
What
too,
is
under
this
tape?
So
we
categorize
a
file
area
like
the
work
to
week
we
talked
before
it
is
to
include
of
the
know
the
lifecycle
management,
application
management
on
the
node
and
they'll
include
of
the
content
of
runtime.
All
those
kind
of
things
and
also
resource
management
include
of
the
device,
management
and
also
computer
resource
management
and
access
EQ
memory,
and
also
includes
storage
management
collaborate
with
collaborate
with
the
stick
storage.
C
We
also
enters
the
also
the
secrete
here
and
ISO
nation
on
the
node,
and
then
there
it
is
the
instrumentation
login
on
an
older.
Can
we
go
to
next
one
so
now
I'm
going
to
quickly
update
what
we
delivered
with
the
any
program
in
q3.
C: So the first thing I want to mention is that we made good progress on sandbox pods: there's the RuntimeClass proposal, which has been accepted by SIG Architecture and reviewed by SIG Node, and in Q3 we delivered on it.
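For context, the RuntimeClass idea being described pairs a named runtime configuration with pods that opt into it. A sketch of the object shapes, using the field names from the API as it eventually stabilized in the node.k8s.io group (the alpha under discussion at this meeting differed in detail, and the handler name below is an assumption):

```yaml
# RuntimeClass names a CRI handler that must be configured on the node.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: sandboxed
handler: kata            # assumed handler name; set to whatever the node's CRI exposes
---
# A pod opts into that runtime by name.
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-workload
spec:
  runtimeClassName: sandboxed
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```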
C
Can
we
go
to
next
slide?
Thank
you
so
like
earlier
Sega,
Windows
Update
and
we
also
work
were
close
in
this
quarter.
We
also
we're
close
with
the
Sega
windows.
We
help
those
working
together
base
than
those
sick
of
induce
the
community
and
define
windows
so
for
the
ga
o--
and
the
document
in
here,
and
we
also
loop
or
instead
of
only
because
the
support
of
the
windows
container,
it
is
a
2-pack.
Unity
is
actually
it
is
a
support
or
new
kinda
form
windows
plan.
C
For
so
we
working
together
and
expand
their
the
working
community
here
include
off
the
six
storage
and
the
sick
at
work
and
in
the
q3.
What
deliver
on
the
node
most
here
it
is
on
a
big
note.
We
added
the
Cuba
stickers
for
windows,
the
system
covenants
as
like
as
the
Cuban
aid
and
the
container
and
time
so.
Basically,
on
the
note
we
already
have
all
the
node
and
a
system
really
need
a
specs
for
windows
plan
for
and
also
there's
them.
C
We
need
I'm
least
here,
because
I
want
to
call
out
of
the
stick
Network
attention.
Please
have
asked
you
to
review
those
PRS
and
a
standard
speed-up
of
the
velocity
on
the
windows
is
support
and
there's
also
cover
off
the
pr
proposed
to
the
artist
invoke
ripple
for
automated
windows
testing
framework,
and
I
also
called
for
the
sega
past-
have
I'm
here
to
review
those
PS
and
we
go
to
links
then
in
the
trustee,
or
we
also
spend
a
lot
ham
to
reverse
our
testing
in
passing
work.
C
There's
the
announcement
for
the
changing
the
node
III
test
and,
and
it's
been
reviewed
by
the
architecture,
and
then
we
introduce
the
new
set
of
the
type
to
have
out
the
node
ringing,
the
features
and
the
node
the
related
of
the
component
tests.
We
also
build
in
our
community
dashboard.
We
have
the
specific
testing
dashboard
so
so,
either
container
random
can
easy
to
plug
into
that
dashboard
and
also
user
or
vendor
the
kubernetes.
C
When
you
can
either
group
at
Lowe's,
say
I
related
dashboard
and
a
figure
out
working
out
the
container
and
family
plan
to
use
think
the
posts,
continuity
and
the
crowd
it
is
in
product
writing
states,
but
each
one
of
them
have
the
distinguished
of
the
feature
support.
So
that's
why
we
we
see
what
this
can
help
our
community
to
figure
out
how
to
choose
their
container
runtime
in
their
production
and
we
go
to
next
one.
C: We talked through the [inaudible], and the owners had many discussions about providing a better abstraction, a resource class, to support more devices. The resource class isn't there yet, considering the complicated API, so it's still under discussion, and we're trying to look for more use cases, more than GPU support, to make this a broader process. Also, a lot of progress was made: the device plugin is in a hardening state, to ensure better scalability, and this one is retargeted to the next release.
C
Some
of
the
work
is
already
merged
and
some
students
and
the
disgusting
and
the
development
we
also
english
history.
We
also
promote
it
across
heightening
states
sharing
to
the
data
and
also
we
fellow
me
out
ship
agreement
under
the
our
container
idea
and
implementation.
So
the
proposal
merge
and
the
implementation
is
under
development,
so
that
all
updates
for
the
q3
found
a
signal.
G: Hi everyone. Yes, we can hear you. All right, great. This is your weekly Kubernetes steering committee announcement. Paris, myself, and Jorge are all election officials, so you'll be hearing from us throughout this entire election cycle. Listen up: tomorrow is the deadline for all nominations. This includes the entire process, which is all the way up to bios uploaded to GitHub, and the deadline is 11:59 p.m. UTC; we will be strict with that. Also, that same deadline, tomorrow at 11:59 p.m. UTC, is also the deadline for the voter eligibility forms.
G
What
this
means,
if,
if
you
are
not
listed
on
the
voter
SMD
file,
which
I
will
include
I,
see
that's
not
a
link
in
the
agenda
right
now,
but
I
will
include
a
link
to
that.
If
you're
not
on
that
file,
that
means
that
we
do
not
have
you
listed
as
eligible
to
vote.
The
steering
committee
has
decided
that
50
contributions
are
hired
in
the
last
12
months
is
the
threshold.
G
However,
if
you
feel
like
you
have
made
more
than
50
contributions,
that
would
include
things
like
non
code
or
non
github
events
that
relate
to
upstream
community.
Please
please
go
ahead
and
fill
out
that
form
we
will
be
uploading.
Excuse
me
updating
the
voters
MD
file
with
folks
that
are
approved
and
then
send
out
other
communications
to
folks
who
are
not
next.
This
is
polling.
Ballots
will
go
out
this
coming
Wednesday
September
19th
to
the
emails
that
we
have
on
file
for
those
eligible
voters
if
you
do
not
receive
an
email
by
Thursday.
G
Also,
please
definitely
check
your
spam
and/or
bulk,
because
this
is
coming
from
SIVs,
which
does
do
bulk
emails.
So
if
you
have
any
kind
of
testy
filters,
please
definitely
check
that,
and
but
if
you
do
not
get
an
email
from
sibs,
please
contact
community
at
kubernetes
io.
We
can
definitely
send
you
a
new
ballot.
So
don't
fret
to
think
that
you
are
not
included
in
the
election.
We
can
definitely
get
that
to
you
for
get
that
to
you.
A
Election
officials,
for
keeping
it
straight
and
honest
thanks
so
much
all
right.
Well,
I
guess
we
are
heading
towards
one
of
the
shortest
current
IDs
community
meeting
today.
The
last
part
of
the
agenda
really
here
shoutouts.
If
you
know
anybody
you
have
done.
If
you
know
of
somebody
who
has
done
something
great
and
I
want
to
say,
thanks
use
the
hash
out
outs
channel
in
the
slack.
So
what
I
didn't
gonna
do
is
now
a
readout
from
the
shout
outs
channel,
essentially
from
last
community
meeting
up
until
now.
A
The
first
one
is
from
M
ze
thousand
shout-out
to
ash
thunder
and
whenever
a
for
incredible
help
with
CI
signal,
the
next
arrow
is
from
ash.
Soon
their
huge
shout
out
to
go
anywhere
a
once
again
for
lighting
up
the
right
fires
when
and
where
made
it
for
one
crowd
way
to
go.
The
next
one
is
from
just
August's
shout
out
to
Doug
ahem
James
Ben
tell
their
stats
and
anyone
I
might
have
missed
for
working
the
weekend
to
test
our
release.
A
Engineering,
tooling,
ahead
of
the
next
era,
cut
shout
out
from
misty
Luke
Perkins
for
adding
per
heading
for
heading
anchor
links
to
the
docks,
so
people
can
share
an
end
page
section
at
any
level
without
having
to
go
back
in
the
talk
to
find
the
link
next,
one
from
neo
light
one
two:
three
thanks
to
Timothy
and
Claire
and
Fabrizio
Pandey,
who
helped
her
helped
with
debugging
or
released
blocking
e
to
e
testing
for
SiC
cluster
lifecycle.
Next
one
from
M
kuma
tog.
A
Now
we
have
why
we
112
o
beta
to
today's
images,
are
all
fat
manifest.
This
made
all
other
architectures.
First,
our
citizens,
thanks
James
Doug
m
XD,
Lucas
Caleb
miles,
P,
pepper,
mint,
hello,
Paris
and
FK
mix
Tyler
from
Paris,
shout
out
to
a
make
me
qualms
for
helping
contrib
X
with
our
communication
platform
discovery
and
doing
the
hard
work.
A
Perfect
example
of
chopping
wood
and
carrying
water
last,
but
not
the
least
shoutout
from
Tim
pepper,
huge
shout
out
to
bent
Hildur
for
working
late
last
night
and
right
back
to
it
this
morning
on
diagnosing
resolving
bill
pipeline
issues
in
support
of
112
release.
So
that's
it
from
my
side.
Is
there
any
other
last-minute
question
people
ask
discuss
here
and
go
release
team
I!
Think
that's
about
it!
Now
we're
gonna
wrap
up
we're
gonna,
give
you
28
minutes
back
to
your
life.
Enjoy
everyone
thinks.