From YouTube: Kubernetes Community Meeting 20180823
Description
The Kubernetes community meeting is intended to provide a holistic overview of community activities, critical release information, and governance updates. It also provides a forum for discussion of project-level concerns that might need a wider audience than a single special interest group (SIG).
https://contributor.kubernetes.io/events/community-meeting/
A
So, all right, hi everyone. Today is August 23rd. My name is Paris, I work at Google as a community manager, and today we are going to do our usual community program. First things first, we always start with a demo. Then we go into release updates, followed by Aaron with Graph of the Week to show us some cool data bits, and then we have a KEP that Tim Allclair is going to share with us; for those that don't know, a KEP is a Kubernetes Enhancement Proposal. We will then go into some SIG updates; today we have OpenStack, Storage, and Apps, and then we'll finish with some quick announcements and shout-outs.

First things first, we do have a code of conduct that we abide by; please be excellent to each other. If anyone needs anything at all during this meeting, please feel free to ping me privately in the Zoom chat, or DM me on Slack if that's your preferred route, and we can get that sorted for you. But first things first, let's kick it off and go right into our demo. Today we have Keycloak.
B
Hi, my name is Stian Thorgersen; I'm the project lead on the Keycloak team. Today I'm going to be showing Keycloak running on Minikube, plus a couple of applications and a service that have been secured with Keycloak. For those who don't know, Keycloak is an open source identity and access management solution built primarily for web applications and REST services. So the first thing I'm going to do is set a couple of environment variables so that I can get the host names.
B
So that I can access everything from my browser, I'm going to be using an Ingress to get everything exposed. I'm creating the Keycloak deployment first of all, then creating an Ingress for Keycloak, and then I'm going to do the same for my back-end service, which is a simple Node.js service, and finally my front-end, which is a simple HTML5 front-end. Next, let's make sure that everything deployed nicely onto Kubernetes.
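Roughly what that check looks like, as a minimal sketch rather than the demo's actual commands: it assumes the official kubernetes Python client, a kubeconfig already pointing at the Minikube cluster, and that the demo deployed into the default namespace.

```python
# Minimal sketch: verify that the Keycloak, back-end, and front-end pods came up.
from kubernetes import client, config

config.load_kube_config()      # reuse the local kubeconfig (Minikube context assumed)
core = client.CoreV1Api()

for pod in core.list_namespaced_pod("default").items:
    ready = all(c.ready for c in (pod.status.container_statuses or []))
    print(f"{pod.metadata.name}: phase={pod.status.phase} ready={ready}")
```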
B
I could have scripted this up together with the image if I wanted to, but I wanted to do this live during the demo so that you can see what configuration looks like in Keycloak. I have a pre-prepared JSON file that contains all the bits and pieces that I want inside my realm. It's created this realm called demo, and inside that it's created a client, which is required so that the small front-end application is allowed to obtain a login via Keycloak, all of which happens over OpenID Connect. I've also created one single user so I can log in to the application, and a couple of roles that the application uses to secure different endpoints: I have the admin role and I have the user role, and at the moment my user only has the user role. We can take a quick look at the back-end service; it's a very simple service.
B
It's Keycloak that's actually displaying this login screen, not the application itself, which means that the credentials I'm providing here are accessible only to Keycloak and not to the applications. The application also doesn't have to worry about providing these screens or dealing with authenticating the user. When I'm returned to the HTML5 front-end application, the application now has two tokens.
B
So I can now invoke this public endpoint, of course, which wasn't secured, but I can also invoke the secured endpoint, which requires a token to be sent along with the request. At the moment, though, I cannot invoke the admin endpoint, so I can show how easily I can add the admin role to the user to be able to invoke the admin endpoint as well.
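For reference, a minimal sketch (not the demo's actual front-end code) of what those calls amount to; the service URL and the access_token value are placeholders, with the token coming from the OpenID Connect login flow described above.

```python
import requests

service = "http://backend.example.local"           # hypothetical back-end host
access_token = "<access token obtained from Keycloak>"

# Public endpoint: no credentials required.
print(requests.get(f"{service}/public").status_code)

# Secured endpoint: the Keycloak access token is sent as a bearer token.
resp = requests.get(
    f"{service}/secured",
    headers={"Authorization": f"Bearer {access_token}"},
)
print(resp.status_code)   # 401/403 without the required role, 200 with it
```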
B
The next thing I wanted to show is how easy it is to enable login with external identity providers, or your corporate identity providers over SAML or OpenID Connect, and also any social networks. I'm going to use GitHub in this case. I've already created some configuration in GitHub that gives me a client ID and a client secret, which allows Keycloak to log in via GitHub.
B
So all I need to do on the Keycloak side is provide the client ID and secret for that particular identity provider, which is GitHub here, and now I can log out from my application, and when I log back in again I can see that I have the option to log in via GitHub, without having had to modify the application at all.
B
Actually, since I have a couple of minutes, I can quickly show a couple of things to give you a sense of what kind of features are available in Keycloak. Not only can you have as many different identity providers as you want configured for the same realm, you can also pull in users from external user stores. These could be LDAP stores, or we can pull in from Kerberos as well; obviously LDAP covers Active Directory. You can also write your own custom providers to be able to pull in users from, say, a relational database.
B
You are allowed to completely change the authentication flows, so you can have custom steps in there if you want to. You can obviously configure password policies, we have support for two-factor authentication, and we have lots of options around how the tokens are built, with client scopes and protocol mappers that let you define exactly what you want to have in the tokens; the same goes for identity providers and user federation.
B
We have support primarily for Java-based adapters and Node.js, but you can use any SAML libraries or any OpenID Connect libraries, and you can mix and match between the two as well. So you can have some clients using SAML and others using OpenID Connect. We are also soon introducing a Go-based proxy adapter, which you can sit in front of any type of application to secure it.
B
So we have some overlap, and it behaves slightly differently, but we have what we call the authorization services, which is based on a protocol that allows you to set up fine-grained permissions for your resources and centrally manage access to them.
E
So I just want to give the shout-out that we're into that phase. People need to be applying rigor, obviously, associated with this; we need things like documentation and test cases coming in as well. The other major aspect of this is that the reason we're chasing these things down is to get stabilization, and I have some slight worries right now. It's getting better as the week progresses, but our CI status is a bit unhealthy.
E
Looking at last week, we did get our 1.12 branch CI up and running, but even before that our master branch health was looking a bit worrying. So there are a number of clusters of issues for which we're reaching out to a number of SIGs; I've got the list of potential parties that we need some help from. I'll just mention that Cluster Lifecycle is last on that list because, even though a number of these are upgrade failures, they don't look like Cluster Lifecycle upgrade failures.
E
It's easy, when you glance at such a screen, to think, oh, this is an upgrade issue, Cluster Lifecycle will be on it, but it's actually often more than that, so we would appreciate additional eyes on some of these things. And then, I'm not sure if Maciek is on the line, but for the patch releases the only thing to mention is that 1.10.7 just came out this week; that would be the update.
C
Hi everybody, Aaron Crickenberger here, your friendly GitHub label guy, steering committee guy, testing guy, ContribEx guy, beard guy. Yes, the t-shirt says "I have no idea what I'm doing," but thankfully I have graphs to sort of help you understand what I think is happening. Today I want to talk about our automation's GitHub API token usage. As some of you may know, the token you get for the GitHub API gives you five thousand requests per hour.
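As a point of reference, this is the quota GitHub reports on its /rate_limit endpoint; a minimal sketch of checking it (the GITHUB_TOKEN environment variable is a placeholder for a personal access token, not anything from the project's tooling):

```python
import os
import requests

# Ask the GitHub API how much of the 5,000 requests/hour core quota remains.
resp = requests.get(
    "https://api.github.com/rate_limit",
    headers={"Authorization": f"token {os.environ['GITHUB_TOKEN']}"},
)
core = resp.json()["resources"]["core"]
print(f"remaining {core['remaining']} of {core['limit']}, resets at {core['reset']}")
```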
C
That is a limit that we've come close to hitting a number of times. Initially, when we used nothing but mungegithub, the thing that sweeps around and does things to whatever it finds, we dealt with this by having mungegithub keep an in-memory cache. That contributed to mungegithub being a real pain operationally, because it took forever to warm this cache up on restart. Once we got that in place, we then had to sort of tune it.
C
We started creeping back up to that wall, so we eventually decided, you know what, the idea of having a proxy for GitHub, or a cache for GitHub, was pretty cool; we just hated having it in memory. So thanks to Cole Wagner for working on this thing called ghproxy, which I'll link here. This is, I think, in theory, something anybody could use: ghproxy is just a reverse proxy in front of a cache that is a little more GitHub-specific.
C
If you want to find out all of the fun lessons that Cole learned while implementing this, I suggest you read the code and the README, but today, since this is Graph of the Week, now that I've told you the story, let me show you the hero graphs. I'm going to show two graphs; let's see if I can get this to go side by side. On the left side of the screen you can see graphs for our GitHub cache.
C
Wow, this is maybe not going to be as cool and awesome as it sounds, but I want to point out that the graph here shows that we turned on the GitHub cache about midway through May, certainly by June, and you can see fun stuff like how often we are hitting versus missing the cache, how many tokens we think we have saved cumulatively over time since we started the cache up, and how efficient we're being. This is good; metrics are awesome, you can see your success.
C
You can see over time that it was kind of up here, often poking up into the yellow band, sometimes even maxing out in the red area, which is super bad, and then you can see that around about the end of May, after we turned it on, got used to what the cache is supposed to look like, and hooked everything up, our GitHub token usage went down significantly.
C
I know this looks noisy, but the yellow line here is the rolling average for k8s-merge-robot, which is mungegithub, which today really is just the submit queue for kubernetes/kubernetes, and the green line represents the rolling average for Prow. You can see that both lines went down as soon as we turned on the cache.
C
That's super cool. Another way you can see success: back over here, GitHub token usage got completely maxed out, and guess what, that's generally what happens when we come out of code freeze and there's a huge stampede of pull requests that need to be merged. That's the spike you're seeing here, and that's why I'm showing all the graphs at the same time. As we were churning through all of that, our token usage was maxing out.
C
With the cache in place, the next time we had a similar peak of pull requests come through, we churned through them, but we didn't blow through our token usage, and so, as a result, we were able to churn through the backlog significantly faster. This is awesome. Hopefully this is the last time you're going to see graphs like these in any informative manner. We do still have a desire to move away from mungegithub and towards Prow's implementation of the submit queue, which is called Tide.
C
I don't have anything to show you about that today, but rest assured we're going to have similar metrics to show that Tide is doing things successfully. I've been turning on Tide for pretty much every single repository out there as a result of sweeping through all the automation, and I gave you a heads-up about this last week, so folks in kubernetes-incubator and kubernetes, look for me to start coming to you about all of that. I'm going to stop sharing; any questions?
A
All right, so next we actually do have a KEP, and again, for those who joined late, a KEP is a Kubernetes Enhancement Proposal; you can find those in the kubernetes/community repo, for now at least. Let's see, Tim, please take it away for your KEP. Also, for those following along, the KEP link is in the agenda today if you would like to read further. Okay, go ahead, Tim.
F
You need to know which features are supported by different runtimes so that you can do that validation at the control plane level, and you don't have to wait for something to be scheduled to the node before you say, actually, I can't run this. So those are the two primary goals of RuntimeClass: the first is to support multiple runtimes, and the second is to provide a mechanism for surfacing the information and the properties about those runtimes up to the control plane.
F
That's been given to the runtime by the configuring user, so the administrator, whoever sets up the nodes, and it just maps to a specific configuration to run containers with. This is then referred to from the pod spec, where we are going to have a runtimeClassName field that refers to the RuntimeClass object itself and lets the user, whoever is deploying the pod, select which runtime to actually use. This layer of indirection will probably come in handy more in the future.
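A minimal sketch of that shape, with field names hedged against the alpha design (the exact API group, version, and spec fields may differ from what ships); the gvisor/runsc names are purely illustrative:

```python
# A RuntimeClass-style object names a handler that the node admin has configured,
# and a pod selects it by name through runtimeClassName in its spec.
runtime_class = {
    "apiVersion": "node.k8s.io/v1alpha1",     # assumed alpha group/version
    "kind": "RuntimeClass",
    "metadata": {"name": "gvisor"},            # hypothetical runtime name
    "spec": {"runtimeHandler": "runsc"},       # maps to node-level runtime config
}

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "sandboxed-pod"},
    "spec": {
        "runtimeClassName": "gvisor",          # pick the runtime by reference
        "containers": [{"name": "app", "image": "nginx"}],
    },
}
```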
F
Yes, we want to add a lot more to this, and here are a lot of the ideas that we've thought about, but we're trying to keep it very simple and very scoped down for the initial implementation, so that we can iterate and put more thought into each one of these. A few to call out: I already mentioned pod overhead. Right now, resources in Kubernetes are only associated with the containers themselves, and there are no resources associated with the pod itself.
H
Perfect, okay, thank you. Great, well, this will be a short update, so hopefully we can catch up on a little bit of time for the rest of the meeting. My name is Chris Hoge and I am one of the co-chairs for SIG OpenStack, along with David Lyle and Robert Morris, and here's a quick update of some of the work that we've done in the last cycle.
H
So
cig
OpenStack
has
a
bunch
of
different
things
that
we've
been
working
on
are
one
of
our
primary
things
that
we
have
is
the
cloud
provider
OpenStack,
which
is
both
it's
the
cloud
provider
is
has
both
in
in
tree
instance,
and
an
external
instance
over
the
last
few
months,
we've
after
you
know
doing
a
bunch
of
work
on
on
getting
in
a
test
grid
and
some
other
things.
We've
added
a
bit
of
conformance
testing.
We've
done
a
bunch
of
you
know.
H
It's
mostly
just
been
bug,
fixes
and
a
little
bit
of
performance
enhancements,
and
so
you
know
things
like
adding
the
cluster
name
to
the
load,
balancer
descriptions
so
that
people
can
actually
identify
where
their
load
balancers
are
lots
of
bugs
fixes
and
improvements.
As
you
know,
making
sure
that
we
are
syncing
bug
fixes
with
the
entry
provider,
now
it's
important
to
note
that
the
that
our
entry
provider
is
deprecated.
H
There's also a pretty exciting feature that improves the integration between Keystone and Kubernetes, as well as block volume support for the Cinder volume plugin. One of the big things that we've been working on is improving the documentation; we've put a lot of effort into documenting everything that we do.
H
We're also working with SIG Cloud Provider to kind of make documentation consistent across all of the providers. And finally, we've begun the transition to become a SIG Cloud Provider working group, so this may be the last update you receive from SIG OpenStack as we wind the group down and become a subgroup within SIG Cloud Provider. That moves us on to future work. One of the things we're really excited about is that Magnum is in review for conformance testing for Kubernetes certification; this is something that is near and dear to my heart.
H
I run this certification program for the OpenStack Foundation, so it's exciting that one of our projects gets to participate in the Kubernetes equivalent of that. We should hopefully be looking for Magnum to be a community certified installer for the upcoming release. We're still working on our two autoscaling drivers, one based on OpenStack Heat and one based on OpenStack Senlin. Storage driver consolidation is also in the works, as well as a new project building a Barbican driver for key management.
H
Again, as I mentioned before, we have a plan to start removing the in-tree code for the 1.13 release, and we're going to continue to work with SIG Cloud Provider, with short-term goals of building generic docs for the in-tree and external providers and also building provider-specific docs that will be used as a model for the other cloud providers to build their docs off of. And so this is, you know, kind of exciting, to see all the collaboration that we have.
H
That's with SIG Cloud Provider, but also SIG Docs, who have been fantastic to work with, as well as SIG Cluster API, which we've also been active participants in. To sort of wrap things up, if you are interested in getting involved with any of the work that we're doing, our group meets at a number of different events: September 10th through 14th we're going to be at the OpenStack PTG in Denver.
I
Right, thanks Paris. My name is Saad Ali; I'm a lead of SIG Storage. Today I'm going to give you a quick update of what we've been working on for the 1.12 release and then answer any questions you might have at the end. Among the big items that we have for 1.12 is topology-aware volume scheduling; this has been a multi-quarter effort. The basic idea here is that different types of volumes may not be equally available to all the nodes within a cluster.
I
Previously, Kubernetes had hard-coded logic to handle this for two volume plugins, the GCE persistent disk and the Amazon EBS volumes, to understand the concept of zones somewhat. It was a pretty poor solution and it didn't scale to arbitrary volume plugins. In the last quarter we actually implemented topology support in both Kubernetes and on the CSI side, and this quarter we're implementing topology support in the in-tree volume plugins, using the functionality that was exposed last quarter.
I
This is for the GCE PD one, the AWS one, as well as the Azure one, and then we're also adding support to the bridging logic between Kubernetes and CSI to enable this end-to-end for CSI volume plugins. Some of the new use cases that this is going to support are the ability to have volumes provisioned in a smarter way, where the scheduler can help influence where a volume is provisioned. Today, once you create a PVC, it kind of gets randomly assigned to a zone.
I
What we realized is that it's a common enough operation that it is good to abstract it within the Kubernetes API, but we were at a weird place within the wider Kubernetes community, where the recommendation from SIG Architecture is that there shouldn't really be any more core API objects going into the API; everything should be added on as a CRD. There are a number of reasons behind this that we can elaborate on, but here's what that meant.
I
This is one of the first Kubernetes storage features that couldn't be part of the core, and we had to figure out how to make that work; luckily it's worked out very nicely. We have an external snapshot controller along with CRDs, and that's going to be released as alpha at the end of this quarter, and then we're going to make a minimal set of changes to the Kubernetes core to enable this functionality, namely adding a data source to the PVC object.
I
This will allow you, when you provision a new PVC, to say you want it to be pre-populated with a specific snapshot. In the future we plan to expand that to be a more generic hook that would allow you to pre-populate with any type of data: it could be another PV that we clone, it could be a Docker image, it could be a GitHub repo. We want to support arbitrary external populators paired with external provisioners, kind of working dynamically together.
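A minimal sketch of what that data source hook looks like on a claim, with the snapshot API group assumed from the alpha CRD-based design and all names purely illustrative:

```python
# A PVC that asks to be pre-populated from an existing VolumeSnapshot via the
# dataSource field described above.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "restored-claim"},          # hypothetical name
    "spec": {
        "storageClassName": "standard",
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "10Gi"}},
        "dataSource": {
            "apiGroup": "snapshot.storage.k8s.io",   # assumed snapshot CRD group
            "kind": "VolumeSnapshot",
            "name": "my-snapshot",                    # hypothetical snapshot name
        },
    },
}
```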
I
The things that we're focusing on this quarter for CSI are supporting ephemeral volumes. I think remote persistent volumes work fairly well with CSI at the moment, but local ephemeral volumes, so things like a Secret volume, or a ConfigMap volume, or maybe a Kerberos token injection volume, don't really work very well with the Kubernetes CSI implementation that we have at the moment. So we have a number of projects to make that easier.
I
Also this quarter we want to nail down some of the basic functionality for how you connect to a CSI driver. We had the kubelet device registration mechanism introduced as alpha last quarter; we want to get that pushed to beta, because CSI relies on it, and we want to make sure that it goes GA before, or along with, CSI. Then we're also adding a couple of new API objects that will help with some of the functionality mentioned above.
I
The CSI driver registry is the idea of introducing an object to represent a CSI driver within the Kubernetes API. One of the benefits of this will be ease of discoverability of which drivers are actually installed on your cluster, so you can do something like "kubectl get csidrivers" and get a list of the drivers that are installed, but it also serves as a mechanism by which an arbitrary CSI driver can configure how Kubernetes interacts with it.
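A minimal sketch of that discoverability from the API side, assuming the registry object lands as a CRD; the csi.storage.k8s.io group, v1alpha1 version, and csidrivers plural are assumptions about the alpha shape, not confirmed names:

```python
# List installed CSI driver objects, roughly what "kubectl get csidrivers" would show.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

drivers = custom.list_cluster_custom_object(
    group="csi.storage.k8s.io", version="v1alpha1", plural="csidrivers"
)
for d in drivers.get("items", []):
    print(d["metadata"]["name"])
```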
I
I talked about these ephemeral volumes above. Today, Kubernetes does an attach operation on all volumes regardless of the volume plugin, but you can imagine that an ephemeral volume could have a configuration that says, I don't need an attach, please skip that operation for me, when it registers itself with Kubernetes. So that's in the works. And then there's a node info API object: we want node-specific information for CSI drivers, things like what the driver thinks the node's name is, what the topology key prefixes are, things like that, which are very node-specific.
I
We want to be able to capture that information, but we don't want to keep extending the Node object, which is getting very, very large, so we're thinking about introducing a secondary object to capture it. All of those discussions are ongoing in various KEPs; please reach out to me if you're interested in them. We're also trying to come up with reusable libraries for some subsets of CSI drivers. For example, iSCSI is the basis for a lot of CSI drivers, and the mounting logic can get fairly complicated.
I
So what we're trying to do is abstract away that mounting logic into a library that other CSI drivers can then import and build on top of; they can implement things like the provisioners, which tend to be very custom, but reuse that core driver mounting library, and we hope to do that for other common protocols as well. We also have a big effort underway this quarter to add conformance testing for storage to the Kubernetes conformance suite.
I
Michelle has been leading that effort, and if you're interested I will put you in touch with her. And then, finally, block volume support: you just heard that SIG OpenStack was implementing block volume support for their volume plugin; we're going to move that functionality from alpha to beta. It's been in alpha for a few quarters.
J
All right, can you see it? All right, fantastic. So what is SIG Apps? For those of you who might be unfamiliar with SIG Apps, we cover deploying, developing, and operating applications in Kubernetes; we're really looking at it from the application developer and application operator point of view. We have a few areas we're working on right now. One of them is an application CRD and controller. We have the workloads API under SIG Apps, there's an application called Kompose that came out of the old incubator process, and the examples space.
J
We also used to have Helm, but now it's a full CNCF project; I'll touch a little bit on that, just the parts that have touched Kubernetes since our last update. Those are kind of the areas we're handling these days. The SIG's charter is one of the things that we're actually working on right now; it's coming along.
J
For example, if you're going to deploy a site like the classic example, WordPress, you've probably got a Deployment, you're running a database that's probably running as a StatefulSet, you've got a Service exposed, maybe load balancers; you've got a bunch of stuff going on that encapsulates your application, and you want to have a view of it. Now, you may have many different tools that can deploy it; maybe you deploy some applications with kubectl.
J
Maybe
you
deploy
some
of
them
or
something
else
you
know
you
want
to
be
able
to
to
sort
and
filter
and
do
stuff
with
them,
and
so
one
of
the
thing
things
that
we
worked
on
and
actually
it
happened
under
the
appdev
working
group,
but
the
documentation
is
owned
by
cig.
Apps
is
a
set
of
recommended
labels
to
to
use
to
specify
here's
the
details.
You
know,
here's
your
application,
here's
what
manages
it
and
those
were
merged
into
the
documentation.
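For concreteness, a sketch of that shared label set applied to a hypothetical WordPress Deployment; the app.kubernetes.io/* keys are the recommended labels being described, while the values here are purely illustrative:

```python
# Recommended common labels, so different tools can recognize the same application.
labels = {
    "app.kubernetes.io/name": "wordpress",            # the application's name
    "app.kubernetes.io/instance": "wordpress-prod",   # this particular installation
    "app.kubernetes.io/version": "4.9.8",
    "app.kubernetes.io/component": "web",
    "app.kubernetes.io/part-of": "my-blog",
    "app.kubernetes.io/managed-by": "helm",           # the tool managing it
}

deployment_metadata = {"name": "wordpress", "labels": labels}
```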
J
In fact, that's also been merged into Helm's documentation, and upcoming releases of Helm will use this as well, so tools are starting to adopt a common set of labels and we have interoperability with each other. Another thing that we're working on is the application CRD and controller. This is an actual custom resource that can describe an application; the code, which you can see in the kubernetes-sigs repo, provides a cross-tool way to describe an application.
J
It's another one of those things that complements the labels and lets you describe: here's an application, here are details about it, here's where you can get the icon for it if you're going to display it in a user interface, here's where you can find more information on the maintainers of the application if they need to be contacted. Details like that, which are useful both in a listing and at runtime, can be grouped in this, and again, the goal is to provide cross-tooling interoperability.
J
The next thing we have is the workloads API, and there are three things we recently discussed. We're looking at things like lifecycle hooks and how we are going to handle them, because if you look at deployments as they originally came out of Red Hat's OpenShift and into Kubernetes, they had lifecycle hooks, and it's something that we do find useful today; for example, Helm has its own version of lifecycle hooks baked into it. So we're looking at how we do more with that.
J
We're looking at pod disruption budgets and deployments, because there are actually some issues in there, and then jobs with deterministic pod names. These are a few of the things that we're looking at, trying to come up with a strategy to tackle them. We talked about these just in the last meetings, but the workloads API is getting dedicated time.
J
We're trying to schedule that now; it's a new thing, scheduled time every other week where, if there's something to talk about with the workloads API, we're doing that, and then on the alternating weeks we're talking about developer tooling and the things that go along with it. So our demos will reflect this too, because we have demos every week, but this is what we're looking at with the workloads API right now.
J
Then there's Kompose. We technically own Kompose, which, for those of you who don't know, converts Docker Compose files to Kubernetes objects. From the application survey we did not long ago, earlier this year, you'll see it has some minor usage; it's not in heavy usage, especially with some of the other things that Docker has been doing lately. SIG Apps technically owns this, but it's mostly on autopilot and we don't do much with it.
J
It's rarely a topic of conversation; I had to go look it up. It is still actively developed and it has had releases in the last few months that continue to add minor bug fixes and changes, so it's actively being worked on. Oh, I've got a typo there: it's version 1.16, not 0.16, that was released. So it's actively being developed, but there isn't a whole lot of conversation about it these days.
J
Helm has moved over to the helm org on GitHub, so Kubernetes folks don't have to worry about that anymore, but charts is still using the Prow and Tide automation, because it turned out that's quite useful when you've got many owners of many different charts, and so we are continuing to use that automation, like so many projects are starting to do. But that's what's going on with Helm, and that's my quick update. Are there any questions? Do we have time for questions?
J
Yeah, I'm happy to talk about that at another point, because Helm's not really under Kubernetes anymore, so we can take that offline. If you want to catch me in a side channel, I am very happy to talk about it, but since we don't have much time, and this is not a simple topic, I don't know that I can really squeeze it in here.
A
Quickly, we always do shout-outs; our shout-outs are pulled from the #shoutouts Slack channel, so please feel free to use it to give a quick thank-you for work that you've seen that is above and beyond, or just naturally awesome. The first one is a shout-out to Jorge for his video on how to use Discuss; that's a new communication platform that we've set up that's attached to the Kubernetes website. Feel free to watch that video; it tells you how to do things like read it from your email client and things along those lines.
A
So, thanks to Jorge for that. Next, a shout-out for improving the contributor experience with the triage systems created (the PR link is in the agenda), which reduce the manual assignment of pull requests; awesome stuff. Chris Taub also wanted to shout out neolit123 and nikhita, who have been doing great contributor-experience work as of late, and then, last, Nikhil also wanted to give a shout-out.
C
Next up, progress on automating all the things I talked about last week: all the labels are in all the places, all the repos are in sigs.yaml (that's how Dims got to do his awesome thing), there is one repo left that doesn't have an OWNERS file at the root, and the only orgs that don't use Tide for merge automation are kubernetes-incubator and kubernetes. I will keep poking individuals to get that done, and then, finally, there's the question of whether or not kubernetes/kubernetes should move off of the submit queue to Tide.
A
All right, and then next up is the contributor summit that's happening in Seattle, for those who are interested in attending. There will be a new contributor workshop as well as content for experienced contributors. There is a registration process; in order to attend, you have to add the co-located event to your registration. That's December 9th and 10th, so please sign up for that.
A
Also, a steering committee election announcement went out on the 21st or the 22nd, depending on where you are in the world, on the kubernetes-dev mailing list. Please check out all the details; everything you need for nominating, etc., is in that email, with a lot of detail. The next deadline that everybody needs to be aware of is September 14th; that's when nominations are due, as well as exception forms for eligible voters. So please definitely work that into your schedule and look at the details.
A
All right, and then the last one is about the contributor summit in Shanghai. They're not doing a large contributor summit; however, they are doing a social at the end of the new contributor track that's being presented at Shanghai, with the venue and further details to be announced. As with Seattle, in order to go to the event, it is a co-located event on the registration for Shanghai, so please look out for that detail. All right, that wraps our show for today; same time, same place next week, and happy Thursday.