From YouTube: Kubernetes SIG Storage 20180412
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 12 April 2018
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.9sewzrvo0mzb
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
Chat Log:
09:59:09 From hekumar : https://github.com/container-storage-interface
09:59:25 From IanC : +1 and Thank you to everyone who worked on CVEs. LOTS of work.
All right, today is April 12, 2018. This is the meeting of the Kubernetes Storage Special Interest Group. We have a lot on the agenda today, so let's go ahead and get started. First up we have a presentation from Tim Allclair and the security team. Unfortunately, Jordan has a family emergency, so he's not able to be with us today.
Thanks a lot to everyone who was active on developing those fixes. I don't think we need to go into too much detail on the specific timeline here. I will note that our release infrastructure is not set up well to deal with security releases right now, and so, even though we have this private security repo, we still need to build the releases out of the public repo, which meant that on March 12th, the actual release day, there were some...
One comment about the security repo: we had a series of regressions after the fix, and after it merged we lost access to the repo and some of the history, which kind of caused some problems. I don't know how we could address that going forward. I know that there's a need to limit exposure to who sees that repo. Mm-hmm.
So what I'm hearing is maybe two separate pieces there. One is you'd like to be able to see the conversation history on the discussions that were happening in private. Is that right? Yes, and a little more. What we're hoping to talk about today is how we might have prevented some of these regressions with better test coverage.
Previously, the subpath feature had very minimal test cases to begin with, so we added almost a hundred more test cases, and even though we added so many new test cases, we still ended up having regressions at the end. So I want to go into more detail about each regression, see where we were missing test coverage, and consider how, in the future, we can make sure all our new features and bug fixes also have adequate test coverage.
Oh, and the other kind of funny thing to note about atomic volumes is that using subpath with atomic volumes was already not great, even before the fix, because those subpath mounts didn't actually get any of the API updates from the secrets or config maps anyway. So that was already sort of an omen.
All right, so the next three I would consider less serious, because either they're kind of rare race conditions or there are workarounds that are possible. So this one is that subpath mounts don't work if you mount a socket file. I think for the most part, most users will do something like host path to the Docker socket, or something that's usually done with hostPath volumes.
So, let's see, the next regression was related to reconstruction. Because reconstruction is already a really small window of time to trigger, I think this one was not as high priority, but still important to fix, because if you hit this condition it will leave behind stale mounts. For this issue we do have test cases to test reconstruction, but they did not test it with subpath volumes.
Let's see, so the remaining regressions are things we still have not fixed yet. These are actually the hard problems. Luckily for us, most of the issues that we did fix were really easy to fix; the remaining issues we haven't fixed because they're difficult to fix.
So, as a consequence of that, if you had a nested volume inside an atomic volume and that path did not exist, we couldn't create the directory and mount things to it afterwards. So, to get around that, the atomic writer code actually walks through all the volume mounts in the container spec and tries to recreate the directories for the mount points of the nested mounts.
So if you do that it works, but if you actually want to mount a file, that's not going to work, because when we try to bind mount a file to a directory, the bind mount is going to fail because its source and target are not the same type. This is a really hard issue to fix, because we basically need to, not guess, but predict what type of mount the other volumes are going to be.
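The type-mismatch failure described here can be sketched outside of Kubernetes: a bind mount requires the target to already exist with the same type (file vs. directory) as the source. A minimal, hypothetical Python helper that creates a matching placeholder target (the function name is illustrative, not kubelet code):

```python
import os

def create_bind_target(source: str, target: str) -> None:
    """Create a placeholder at `target` whose type (file vs. directory)
    matches `source`, so a later bind mount does not fail with a
    source/target type mismatch."""
    if os.path.isdir(source):
        # Directory source: the target must be a directory.
        os.makedirs(target, exist_ok=True)
    elif os.path.isfile(source):
        # File source: the target must be an existing regular file.
        os.makedirs(os.path.dirname(target), exist_ok=True)
        with open(target, "a"):
            pass  # create an empty file if it does not exist yet
    else:
        # Sockets, devices, etc. are exactly the hard cases discussed above.
        raise ValueError(f"unsupported source type: {source!r}")
```

The hard part the speaker points to is that the kubelet has to make this decision before the source volume has been set up, so it must predict the type rather than simply inspect it as this sketch does.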
The next issue is reconstruction and host path. Before, we never needed to care about reconstruction for host path, because host path was just a direct pass-through of the given path to the container runtime. Now that we do these intermediate bind mounts for subpath, cleanup of host path actually matters and the kubelet needs to be made aware of it. I think fixing this will either require...
Reconstruction addresses a narrow window: the window that it addresses is if a pod gets force-deleted while the kubelet is down. Reconstruction is a way for the kubelet volume manager to read the mount state from the disk and kind of figure out what leftover mounts are still there. So originally with host path, because we never managed mounts for host path, it didn't matter, but now that we are creating...
If you use subpath with host paths, we are now creating a bind mount for the subpath, so now it's important that we actually clean up the subpath mounts again. I would consider this lower priority, because the reconstruction window is small and also you could just not use subpath with host paths as a workaround. And then this last issue is that host path subpaths don't work in a containerized kubelet.
This is actually a major functional issue, but it also didn't work completely properly before the subpath fix; it was already kind of broken. So that's why I also consider this sort of low priority, and you can always work around this by not using subpath.
The major common theme is that we have a lot of missing test coverage for things, but it might be part of a broader issue, in that we just have so many features and different behaviors and all these different volume plugins that the combinations of all of the features interacting together across different volume types are just impossible to really think about and cover all of them.
But I guess maybe one major area of coverage that we are missing in general is the containerized kubelet environment; that's just not there. So, going to action items: I came up with a few, but I think I'd like to open it up to the SIG to also think about how we can improve the process here.
I guess some immediate actions that we probably need to take: we need to find owners for the remaining bugs; I would consider that high priority. The other high-priority action item is that we need to find an owner for the containerized kubelet work. And then maybe some longer-term items are to look into the e2e testing framework.
Questions on the testing: it's one of those things where we could go in and over-engineer it, try to figure out all the test touchpoints that exist, and guess and speculate where things are going to go. But there are these unknown unknowns, right? We don't really know what storage touchpoints are going to break with a fix, or with a bug, without stepping in it first. Yeah.
I agree, I agree. That's the general issue: we just have so many combinations that we can't possibly think of and test them all. But is there anything we can do to make things easier in any way? For example, the subpath feature originally only had test cases for emptyDir and hostPath volumes. There were no test cases for any of the other volume types.
Sorry, I agree with that. I think there are some basic combinations where, you know, a mount feature is paired with all, or a representative set of, volume types. And then I'm wondering if we could have kind of a broad representative set of volumes, and then a narrower set of classes of volumes: here's an atomic writer, here's an ephemeral, here's a persistent, here's a block, or whatever, and have that small set that we dive deeper into with more nuanced combinations. Yeah, I think...
Definitely. Those are kind of the three items I have here. Going forward, for any bug fixes or features, we have to make sure there is adequate test coverage, but then there's also the question: is there anything we can do to also prevent things?
All right, cool, thank you. One piece I'm not sure we covered, sorry if I missed this earlier: there was a decision kind of early on to make the atomic writer volumes read-only, and we sort of decided that was the intended behavior of those volume types and made that change, and I agree with that decision for kind of the long-term direction.
Yeah, so I think going forward we need to think about this. We're kind of always thinking about backwards compatibility, but I think sometimes we can get caught up in a security fix, fixing the vulnerabilities, and it's important to remember that we need to take backwards compatibility almost more seriously in the case of a security incident, because users don't have the same freedom to just delay the upgrade in that case.
Just one quick thing on the previous section. One of the action items was to audit the e2e tests. I'm not super familiar with the current functionality, but that's something kind of interesting to me to look at, so that I can also get familiar with the functionality. You can probably put my name there and then I'll reach out on Slack to get some help. Yeah.
Ah, I see that, yep. So this is a proposal that Jordan Liggitt and I wrote. In 1.10, we had a new API called the TokenRequest API go to alpha; it's actually linked from the summary. Basically, what the API allows is for clients of the Kubernetes API to request tokens from the API server. These tokens are very similar to the current service account tokens, but they have some notable improvements, in that they are time-bound, meaning they expire, or they can expire, rapidly.
Some of the motivation for this work was to replace the current mechanism of distribution of service account tokens. It's problematic from a security perspective because they are extremely difficult to rotate. I am not sure if anybody's rotating these rapidly, or at all, in the wild. I've actually tried to rotate the current service account tokens before, and I filed a bunch of bugs that resulted, so it's definitely not well tested. They also have some scalability issues.
Previously, secret gets were a large portion of the API server traffic. I think that has been reduced with the new work in the secret manager, but the service account token secrets actually take up a ton of space in etcd. We duplicate the CA certificate per namespace, or rather per service account secret, and it's a tremendous portion of the total storage of the API server.
So those are the main issues that we're trying to solve with a different form of distributing service account tokens. This proposal proposes a new service account token projection, which will source tokens from this new TokenRequest API. The goal is to maintain backwards compatibility with the current service account token volumes, or the service account token secrets, so that we can transfer over transparently and phase them out over a few releases.
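As a sketch of what the proposed projection could look like on a pod, here is an illustrative manifest fragment built as a Python dict. The field names (`serviceAccountToken`, `audience`, `expirationSeconds`, `path`) and the config map name are assumptions based on the proposal as discussed here, not a confirmed final API:

```python
def projected_token_volume(volume_name: str, audience: str,
                           expiration_seconds: int) -> dict:
    """Build a projected volume that sources a short-lived token from the
    TokenRequest API and the CA cert from a config map (illustrative)."""
    return {
        "name": volume_name,
        "projected": {
            "sources": [
                {
                    # Token sourced from the new TokenRequest API.
                    "serviceAccountToken": {
                        "audience": audience,
                        "expirationSeconds": expiration_seconds,
                        "path": "token",
                    }
                },
                {
                    # CA bundle delivered via the existing config map
                    # projection ("kube-root-ca.crt" is a placeholder name).
                    "configMap": {
                        "name": "kube-root-ca.crt",
                        "items": [{"key": "ca.crt", "path": "ca.crt"}],
                    }
                },
            ]
        },
    }
```

The point of combining both sources in one projected volume is exactly the backwards compatibility goal above: a container sees a `token` and a `ca.crt` file at the same mount path, just as with today's secret volumes.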
The part that's a little confusing to me here is: we already have a secret volume and a downward API volume and a config map volume. This new volume source is going to be injecting a token; that part makes sense to me, but it sounds like it's also going to be injecting a secret, downward API, or config map? No.
We are using the config map volume source, the config map projection, to inject the CA cert. So if you see right here, this is meant to be a fully compatible replacement for the service account secret volumes that we have right now. The service account token projection is responsible for injecting the token, and a config map projection will be responsible for injecting the CA cert, and that will just use the existing config map volume.
Correct, okay, with one slight modification. I think the current config map projection is useful today and it gets us halfway there. Since the config map projection uses a local object reference, it cannot cross namespace boundaries. There's a very long-outstanding issue to solve authorization for cross-namespace config map and secret references. I think that it is a generally useful feature, but I think it's going to take some work to solve generally.
Hence why it's been such a long-standing issue. So I think that potentially this proposal could be broken up into two separate changes: one, as highlighted in this paragraph, is to support cross-namespace config map references generally, and the other is to actually implement the service account token projection. Does that answer your question? Yep.
The token is basically completely separate. So this is the CA cert that we sign all our certificates in the cluster with; this CA signs the API server's certificate. Yeah, so I think it might be good to give a little background on the current service account token secret volume. What we put in it is a token and a CA cert. The token allows a client of the API server to authenticate to the API server, and the CA cert allows the client to verify the authenticity of the server.
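For background, the contents of today's service account token secret are just base64-encoded fields, and a small sketch shows what a client actually reads out of one. The secret shape below mirrors what the API server returns for a secret object, but the values in the test are made up:

```python
import base64

def decode_sa_token_secret(secret: dict) -> dict:
    """Decode the base64-encoded `token` and `ca.crt` fields of a
    service account token secret, returning their plaintext values."""
    data = secret["data"]
    return {
        "token": base64.b64decode(data["token"]).decode("utf-8"),
        "ca.crt": base64.b64decode(data["ca.crt"]).decode("utf-8"),
    }
```

The client uses the decoded `token` as a bearer credential when talking to the API server, and `ca.crt` to verify the server's certificate, exactly the two roles described above.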
So presently, there's a mount-unsupported implementation in utils that is used especially during unit testing, but also for any unsupported platform, and presently none of the methods in there return errors. So anyone using those may get an indication that things are working fine for them, because they're not returning errors. What this change does is return errors appropriately for any unsupported platform.
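The pattern being fixed can be sketched in Python rather than the Go of the actual utils package, purely for illustration: stub implementations for unsupported platforms should fail loudly instead of silently succeeding. Class and method names here are hypothetical, not the real API:

```python
class UnsupportedMounter:
    """Stub mounter for platforms where mounting is not implemented.

    Before the change discussed above, stubs like this returned success
    silently, so callers believed a mount had happened when nothing did.
    Raising an error makes the gap visible to the caller."""

    def mount(self, source: str, target: str, fstype: str) -> None:
        raise NotImplementedError("mount is not supported on this platform")

    def unmount(self, target: str) -> None:
        raise NotImplementedError("unmount is not supported on this platform")
```

Any caller that previously assumed success now has to handle the error explicitly, which is why the plan below is to merge early and see who screams.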
So yeah, this sounds like a good idea to me. What scares me is that there might be things that we may potentially break, even though this is the right thing to do. So what I'd like to do is get this merged early in the quarter, see who screams, and go forward from there. Jan, do you have any bandwidth to take a look at this? Is Jan on the line? Yeah, sure.
Just to add, for those who use mount propagation: if you don't use mount propagation or don't care about it, then you can ignore this. If you explicitly set mount propagation, again, nothing changes for you. But if you depend on the rslave mount propagation that was the default in 1.10, then we are going to change it back to private in 1.11 or so, because it caused regressions. So just stay alert, and you will see in the release notes that we switched back to private. Oh, thanks a lot.
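For reference, mount propagation is requested per volume mount on the container. An illustrative fragment as a Python dict; the `mountPropagation` field and the `HostToContainer`/`Bidirectional` values reflect the 1.10 API as discussed here, so verify against the documentation for your release:

```python
from typing import Optional

def volume_mount(name: str, mount_path: str,
                 propagation: Optional[str] = None) -> dict:
    """Build a container volumeMount dict. `propagation` is one of
    None (private, the safe default), "HostToContainer" (rslave),
    or "Bidirectional" (rshared)."""
    allowed = {None, "HostToContainer", "Bidirectional"}
    if propagation not in allowed:
        raise ValueError(f"unknown mount propagation: {propagation!r}")
    mount = {"name": name, "mountPath": mount_path}
    if propagation is not None:
        # Omitting the field entirely gives private propagation.
        mount["mountPropagation"] = propagation
    return mount
```

The announced change only affects mounts that omit the field: explicitly requesting `HostToContainer` keeps rslave behavior, while unset mounts fall back to private.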
Yeah, about the external storage: in this Excel sheet I added myself as an owner of one of the open ideas, but my question was also related to that. There used to be a process to add new projects into kubernetes-incubator, you know, the way node-problem-detector, node-feature-discovery, etcetera came up, but this document now says that that's deprecated and we have to use subprojects, and it looks like this needs to go through the SIG Storage team to add new or relevant projects. I was not able to find the process for that.
As I understand it, the way going forward from the old process is to send out kind of a proposal for the project, and then the SIG signs off on it, given enough information about ownership and maintenance, and then there's a kubernetes-sigs repository where we move specific projects for incubation now. If this is something that you want to get in right away, and if it's something that's super critical, we're still using that external-storage repository to incubate things, but the direction is to move stuff under SIG Storage.
We can start a one-off for next week. I mean, it's not without precedent. So if you feel like the design is something big, you know, if it needs a full hour, you could schedule something and just send the invite to the SIG. Okay.
Just in case it gets moved to an alternate location: it's tentatively at a location in Mountain View. I do have the budget to host this, so for purposes of booking air travel for those who are traveling, or getting a hotel, I think it's solid; just get something in the Mountain View / Palo Alto area and it should be fine. I would also like to invite anybody else who wants to help out with hosting meals.
Traditionally, for those who haven't gone before, we've held a dinner, and I don't have the budget to host the dinner this time, so I'm reaching out to see if any other organizations might want to do it. Worst case, we all just go and self-host our own dinners; we'll pick a venue for that later. So I'm hoping to get the details with an actual address out by end of day tomorrow, maybe it'll even be out later today, but it'll be out soon. Cool.
And so keep an eye out for the document from Steve; you'll be able to start RSVP'ing in there. And then, if you're interested in helping sponsor this event (these events can get pretty expensive), please put your name or organisation down here and Steve will reach out to you and help sort that out. All right.
She sent out, I think, a three-week recurring meeting invite, so everyone should have received that; it starts next Thursday, I believe. The other thing she wanted to bring up, Saad, was that she's going to plan on starting the SIG Storage end-to-end testing back up again in the second week of May; just another FYI from her.
That would be awesome. I think, given everything that Michelle and Tim were talking about today, this would be a very valuable effort for the SIG. In terms of the SIG onboarding, it sounds like there are three recurring meetings; that's what it's set up for right now. Yeah. And so how is that going to work? Is the idea that anybody who's interested in learning how to onboard onto the SIG can just attend these meetings? Yep, it's completely open.
Cool, all right, looking forward to that. If you are new to the SIG and you've been wondering how to get involved, this will be a great opportunity to start to learn. I think that's all we have for today; luckily we made it with two minutes of time to spare. We'll reconvene in two weeks and hopefully go over the planning spreadsheet again. If you're working on a feature, please go ahead and update the comment section to give a status update, for folks who are interested in figuring out what's going on.
So the CSI spec is hosted at github.com/container-storage-interface. This organization is independent of Kubernetes and independent of any particular cluster orchestration system. Inside it there's a repository called spec, and there is a spec.md file which basically has the entire specification inside it. If you're interested in contributing to it or making changes, please open up issues or pull requests against that. Sounds good, thank you. And, if you're interested in getting involved with that community, there's a community repo that lists the meetings for CSI. Cool, thanks. All right, I think that's it.