From YouTube: Kubernetes Community Meeting 20171109
Description
We have PUBLIC and RECORDED weekly video meetings every Thursday at 10am US Pacific Time.
https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY
All right, hello everyone, welcome to the Kubernetes community meeting on Thursday, November 9th. I'm Erica, I work at CoreOS. It's nice to meet, or see, people. A reminder that we are recording right now. This is a community meeting and will be posted publicly on YouTube, so please be mindful that what you say is being recorded. Also, please keep yourself on mute unless you're speaking; just keep kind of conscious of that. Thank you.

So we don't have any demos today. Sorry about that.
Right, for 1.9 updates: we're doing things mostly the same as 1.8 and 1.7, so hopefully this is pretty familiar to everyone by now. One thing that we are doing a little differently is that we want to introduce code slush two days before code freeze. What this is about is: all that great stuff you're doing for features, you can still keep working on it.
So, I'm David Eads from API machinery. It's been a while since we talked, so I will go ahead and tell you what it is we're doing. For 1.9, one of our main pushes is for custom resources: moving CRD validation from alpha to beta. It came in in 1.8, the response has been largely positive, and so that's moving forward to beta. The other thing we're doing for custom resources is starting to work on scale and status. Those are not definitely in for alpha in 1.9.
If they make it we will be very happy, but if they aren't ready, we will hold them out until 1.10; we don't want to put something in that works badly for people.
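For context, CRD validation lets the API server check custom objects against an OpenAPI v3 schema attached to the CRD. A minimal sketch of that, using the apiextensions v1beta1 Go types of this era, might look like the following; the group, kind, and field names (stable.example.com, CronTab, cronSpec, replicas) are invented for illustration and are not from the meeting:

```go
package main

import (
	"encoding/json"
	"fmt"

	apiextensionsv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	minReplicas := float64(1)

	// A CRD whose spec carries an OpenAPI v3 schema; with validation enabled,
	// the API server rejects custom objects that do not match the schema.
	crd := apiextensionsv1beta1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "crontabs.stable.example.com"},
		Spec: apiextensionsv1beta1.CustomResourceDefinitionSpec{
			Group:   "stable.example.com",
			Version: "v1",
			Scope:   apiextensionsv1beta1.NamespaceScoped,
			Names: apiextensionsv1beta1.CustomResourceDefinitionNames{
				Plural: "crontabs",
				Kind:   "CronTab",
			},
			Validation: &apiextensionsv1beta1.CustomResourceValidation{
				OpenAPIV3Schema: &apiextensionsv1beta1.JSONSchemaProps{
					Properties: map[string]apiextensionsv1beta1.JSONSchemaProps{
						"spec": {
							Required: []string{"cronSpec"},
							Properties: map[string]apiextensionsv1beta1.JSONSchemaProps{
								"cronSpec": {Type: "string"},
								"replicas": {Type: "integer", Minimum: &minReplicas},
							},
						},
					},
				},
			},
		},
	}

	// Print the object as JSON; in practice you would create it with the
	// apiextensions client or apply an equivalent YAML manifest.
	out, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(out))
}
```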
The other thing that we're pushing on in 1.9 is dynamic admission. Initializers are going to remain alpha; those came in a couple of releases ago, but we aren't pushing on them. We are pushing on admission webhooks; those came in at the same time as initializers, and we are going to try to push webhook admission to beta.
We're planning what's in and what's out for beta, and we are working hard to make it there; I think that we probably will. There is a lot of activity in the area, and if you want to look in the community repo to see what we're thinking and make comments, that would be great. Those are our big items for 1.9.
One that people might be familiar with, for security reasons, is pod security policy, right? There is this thing that looks at a pod spec and says: okay, user Bob is trying to create this pod spec with service account foo, and he wants to have a privileged container, and pod security policy gets a chance during admission to look at the content of that request and interpret it. Is the user allowed to make this kind of a request with this kind of a body?
So admission is the only spot in the API server chain where someone can actually look at the body of the request, and until recently it has been an area that you could not externally hook into; it was compiled-in only. We introduced webhook admission to make that externally callable, so you can make a remote call, but it was alpha, and now we're trying to move that towards beta.
You can envision this being useful for things that want to, say, look at a pod spec and maybe add a sidecar container, or look at some sort of deployment config and add an annotation automatically, or run their own kind of quota controller, right? Those sorts of controls could be done with webhook admission, and they couldn't be done with any other mechanism we have.
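To make that concrete, a validating webhook is essentially an HTTPS endpoint that the API server calls with the body of the request and that answers allow or deny. A minimal sketch might look like the following; the review/request/response structs here are simplified stand-ins for the real AdmissionReview types, and the privileged-container check simply mirrors the pod security policy example above:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// Simplified stand-ins for the AdmissionReview request/response payloads;
// the real types carry more fields (UID, kind, operation, and so on).
type review struct {
	Request  *request  `json:"request,omitempty"`
	Response *response `json:"response,omitempty"`
}

type request struct {
	UserInfo struct {
		Username string `json:"username"`
	} `json:"userInfo"`
	Object json.RawMessage `json:"object"` // the full body of the incoming object
}

type response struct {
	Allowed bool   `json:"allowed"`
	Reason  string `json:"reason,omitempty"`
}

func admit(w http.ResponseWriter, r *http.Request) {
	var in review
	if err := json.NewDecoder(r.Body).Decode(&in); err != nil || in.Request == nil {
		http.Error(w, "malformed admission review", http.StatusBadRequest)
		return
	}

	// The webhook gets to look at the body of the request during admission.
	// Decode just enough of a pod spec to check for privileged containers.
	var pod struct {
		Spec struct {
			Containers []struct {
				SecurityContext struct {
					Privileged bool `json:"privileged"`
				} `json:"securityContext"`
			} `json:"containers"`
		} `json:"spec"`
	}
	_ = json.Unmarshal(in.Request.Object, &pod)

	resp := response{Allowed: true}
	for _, c := range pod.Spec.Containers {
		if c.SecurityContext.Privileged {
			resp = response{
				Allowed: false,
				Reason:  "user " + in.Request.UserInfo.Username + " may not create privileged containers",
			}
		}
	}

	// Echo the decision back; the API server enforces it.
	json.NewEncoder(w).Encode(review{Response: &resp})
}

func main() {
	http.HandleFunc("/admit", admit)
	// The API server only calls webhooks over TLS; the cert paths are placeholders.
	log.Fatal(http.ListenAndServeTLS(":8443", "tls.crt", "tls.key", nil))
}
```

Registering such an endpoint with the API server happens through a separate webhook configuration object; its exact shape was still settling as webhook admission moved from alpha toward beta, so it is not sketched here.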
We meet every week, an hour and a half before this meeting; in Eastern Time I know that's 11:30, and in other time zones it's different times. So yeah, I want to give a quick update on what we're up to. One of the big things is really how we define the things that go into Kubernetes. Caleb Miles did a very exhaustive survey of the ways in which other large open source projects do this, and we sort of cribbed some things from the Rust RFC process and came up with what is now known as the Kubernetes Enhancement Proposal (KEP) process. Essentially, it's a way of defining the things that get included into the Kubernetes ecosystem, and I say ecosystem because what is and is not Kubernetes is part of the discussion that SIG Architecture is actively having.
What we're really focused on doing is trying to define the value, and not necessarily worrying about implementation details so much. It's really best to just understand what it is that a particular SIG is trying to accomplish, to define it with some well-known metadata in terms of where the process and implementation are at, and then also the technical details around how you actually get the thing done. There is a link in the community meeting notes to the actual KEP process.
So it would be good to review that. One of the really great things about this is that it will give us an end-to-end process for tracking value that gets put into Kubernetes. I'm starting to use the word value, although I get accused of using marketecture to define this, but really, value is important as opposed to a feature, because value could be anything: it can be user facing, it can be internal, it can be any kind of change that adds to the overall richness of the ecosystem.
So what happens with the KEP is that it gives us a way to tie any particular PR all the way up to the KEP itself, to understand what the actual milieu or context is that this PR is generated from. Essentially, a KEP boils down to issues in the codebase, and those issues will define work that will be done per milestone. So let's say you have a KEP that's going to be a large implementation spanning releases 1.10, 1.11, 1.12, or 2.0, or whatever.
Those are the release numbers, but essentially it spans multiple releases. For each release milestone, you would carve out sections of deliverable value from the KEP, define those in issues, and then reference those issues in PRs. So you get to see how things roll up to the high-level diagrams and discussions around the architecture. That's really important; it makes these sorts of large-scale changes that we want to implement easier to track and understand.
Sometimes, I know, when we talk about things that are really abstract, it's hard to get your head around them, and sometimes the implementation details around abstract concepts are even more difficult to parse. So this gives even casual observers of the Kubernetes process and lifecycle ways to understand what's actually being done. I think that's important for our user community and for the people who want to adopt Kubernetes in production environments. So that's one thing that we're actively working on.
Another thing is that we want to beta test the KEP process: essentially take features that may already be in Kubernetes and run them through this process. Let's say we wanted to redo something that's well known in the ecosystem through the KEP process: what are the things that we would learn by trying to force it through that process?
The more of those use cases we can get in, and the more people throwing rocks at this and at the templates and things, the better, because we want to make this a really easily usable thing. Basically, we want a SIG to be able to take this template, rapidly fill it out, and begin acting on it as quickly as possible. Administrivia doesn't benefit anybody, so we're really trying to make this as light as it is responsible to do.
Another thing that we are working on is providing arbitration and consultation services for SIGs that have, maybe, architectural concerns about something they want to do in an upcoming release, or maybe they want to change something that has implications for another SIG. The steering committee is not a body that should be making technical arbitration decisions; that's really something that the architecture SIG is sort of tasked to do.
Of course, if things don't get the sort of answers that they want and maybe there's a deadlock there, then that would go to the steering committee. The steering committee is the ultimate authority in terms of decisions in the project, but we want to try and avoid having things escalate up to the steering committee as much as possible. So we want to get good at providing solid guidance as a community to our SIGs as they detangle some of these architectural dependencies.
So if you are a SIG and you know that one of these things is on your docket to deal with in an upcoming release, just add yourself to the agenda that's linked in the community notes. It's also available at bit.ly/sig-architecture, and that will help us also define our architectural scope, so that is incredibly important as we move forward.
Lastly, we encourage people to join us at our meetings. It's a community effort. One of the known anti-patterns in architecture is if it becomes sort of an ivory tower organization where we are dictating from on high, and that is not in any way what SIG Architecture is about or will be.
And lastly, we're gonna have a deep dive at KubeCon, and I would love to see and meet people there. I'll be there, I believe Brian Grant will be as well, and Joe Beda, who's also deep in the weeds on architecture, I thought was going to be there too. Actually, Joe's not gonna be participating because he has a block at that time, but other people will be as well, so yeah, join us; we really want to have you there. Are there any questions, now that I've rambled on?
We have started a new work group, the Cluster API work group, that is hoping to create an API to describe Kubernetes clusters in a standardized manner. This group is meeting on Wednesdays at 10:00 a.m., I think, and there's some work in progress in the kube-deploy repo. This is basically an API for describing Kubernetes clusters, but the implementation is left to the community and the broader ecosystem.
Then we have a kubeadm adoption working group; we're aiming to get more people to adopt kubeadm, so we can have more standardization at the Kubernetes bootstrapping / cluster creation level as well. Stabilization tasks are also important and constantly ongoing: e2e testing, for example, and getting more test coverage across the board is important. Then, on the kubeadm side, there are new features and such in 1.9.
It is totally possible with kubeadm as well, but it is a bit harder than we want it to be. So yeah, and then, lastly, we're working with the cloud provider refactoring group. Basically, right now our cloud provider code is tightly coupled to Kubernetes and built into the Kubernetes tree. Eventually, we want to split these cloud provider integrations out so that anyone can write them, not just the few, seven or so, that are in core, and kubeadm is going to...
I'm also talking with Paris about getting a SIG update scheduled for KubeCon as well. I know that SIG Architecture is doing a deep dive; we have the sort of other track that we're planning on doing, which is the SIG update, which Lucas and I are planning on doing, assuming we can get a slot. So hopefully Paris got my email yesterday.
Hi everybody, I'm your v1.9 CI signal lead. I'm supposed to be reading the tea leaves of TestGrid and determining whether or not we have a release that's ready to go out the door. Something that I know is still on the floor, and has been talked about in past meetings, is what makes a release-blocking job; we're still working on that.
But one of the things is that a blocking job should be actively made to go green if it ever goes red, and so I documented what it is that I do all day when I pretend like I'm the CI signal lead and have some power over whether or not the tests pass. Essentially, that involves nagging people who own jobs, or nagging people who own individual test cases.
I do this nagging in the form of filing GitHub issues and notifying the appropriate SIG, as defined either by the text in the test case or the sig owner field in a job configuration. A number of the jobs in the release-master-blocking dashboard did not have owners, most notably most of the jobs that were running on GCE, and also most of the jobs that were running on GKE. A new SIG has popped into existence; it just had an inaugural meeting this morning.
It's called SIG GCP. I am trying to live in a brave new world where every cloud provider seems to be getting its own SIG, like SIG AWS, SIG GCP, SIG Azure, SIG OpenStack, and I would trust that if a cluster cannot come up at all within that cloud provider's cloud, I should probably go chase the SIG that's responsible for that cloud so they can help me understand why that particular job is failing.
So this isn't necessarily about me chasing SIG GCP for every single test case that fails, because most of the test cases hopefully have the name of the SIG that wrote that test in them; for example, I'm chasing SIG Apps right now to explain to me why a job doesn't run to completion. Flakes are still happening perpetually, and I was chasing SIG Network a little while ago to understand why some of the service discovery tests, or anyway why their test cases, were failing, and they worked on it. It's great, I didn't have to involve any other SIG.
So that's my general concern: that it just becomes a catch-all for general functionality that has gone awry, and it becomes SIG Triage rather than SIG GCP.

That said, we can certainly put some more people and effort into SIG GCP; I think that is entirely appropriate and we will staff appropriately for that. Okay, thank you.
Please open an issue so we can amend that doc, to make sure that the next lucky individual who fills this role does just as well as I am. Most notably, I'm going to try to make the effort where, if I find a test case that's not owned by an individual SIG, I'm gonna work with the SIG who owns the offending job to figure out who owns that test case going forward, so I don't have to bug the person who owns the job.
K
Similarly,
there'll
be
some
times
where
I'm
gonna
have
to
bug
both
the
test
case
owner
and
the
job
owner.
If
it
seems
like
that
test
case
is
mysteriously
failing
just
for
that
one
job.
This
has
often
come
up
for
the
scalability
jobs.
The
scalability
correctness.
Job
has
a
number
of
tests
that
fail
just
for
that
job,
not
for
the
rest
of
the
jobs
and
so
we're
trying
to
triage
those
issues
and
agree
yeah
that.
That's awesome, thanks. I did have a question, which, I'll say, I'll throw upon the steering committee waters for the moment, but there has been some discussion off and on about whether the pattern of every provider getting a separate SIG is what we're doing generally. I'll just say this is sort of bringing up the point that there is a tension there. I don't know, we can just table this one for some future point, but, yeah.
I thought I would ask it again here. There was a really great update by the cloud provider working group at this morning's SIG GCP meeting; I learned a lot.
I think one of the challenges we're going to face down the road, as we look to extract cloud providers out of the Kubernetes tree and put everybody on the same level playing field, is: how can we ensure that all the cloud providers are behaving responsibly in the context of a Kubernetes deployment? Making sure that the cloud providers have some level of parity starts to sound a lot like the word conformance, which is something I know SIG Architecture owns today. But maybe that's something the working group, you know, exits with when everything moves out of tree, or maybe it makes more sense to look at the working group becoming a full-fledged SIG; I can't tell.
So, just a quick thought, and we can come back to it, but we were talking with Lucas yesterday at the cloud provider meeting about having some mechanism for CI test suites to run on other cloud providers and post back to a TestGrid dashboard, so that we can ensure that we are not inadvertently breaking other cloud providers. And I agree, there is a lot of overlap with the conformance tests, and maybe that's the starting point: running the conformance suites and posting results back to TestGrid.
So something else to think about here is the model that cloud providers use. Imagine, as we make more of this stuff pluggable, volume providers, things like that: if every cloud provider gets a SIG, does that mean every volume provider gets a SIG, and every other thing like that gets a SIG? The model that's used here may end up setting some precedent.
Well, yeah, I think it's a little different for volume providers: they're not providing the Kubernetes API, so it's not something an end user gets access to and expects their workload to run on, whereas a cloud provider, or a distribution of Kubernetes on prem, for example, is more something that a user interacts with. So I think that's a helpful distinction.
So yeah, as we talked about yesterday in the cloud provider refactoring group, we're actively looking for volunteers from different clouds to start running e2e tests and posting them to TestGrid. That would be amazingly useful for our group, and I'm happy to mentor people or provide pointers on how to post these results to TestGrid. It's kind of non-obvious right now, but we want to provide more documentation around such things.
I think, just to elaborate on that: although posting your results there is useful information, there needs to be more process involved for alerting as well. So as we start to do more federated testing, we need to think about the load that will be incurred and what it means to signify a full release of Kubernetes with these other providers in play.
There is a testing-ops channel, but, like, maybe every SIG should have their own notification channel, possibly? I think you could also do this if you signed up your appropriate SIG's team for the test failures or whatever alerts you're interested in; I mean, you could maybe filter that in your email somehow. Speaking of that, maybe this is a ContribEx thing, actually: I'd love to see a working demonstration of how you can most effectively use email filters to approximate some level of usefulness.
I had proposed that next week we have a larger conversation on how we're doing test triage, and, I guess, triage in general. Do we think we don't want to do that, or do we address our concerns for the short term?
I'm not gonna say no; I'm unavailable next week, but I would love to come back and see what the community has come to a consensus on. I just put forward the way I, as one individual, do it; I would love to understand what and how everybody else faces this, or what, tribally, we collectively are doing today. There's still value in that. Okay.