From YouTube: Kubernetes SIG Storage 20180329
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 29 March 2018
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.gdgu1pfbpinu
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
Chat Log:
09:26:53 From erinboyd : mkimura
09:40:13 From Dinesh Israni : nah, just dial in to see what they are planning to add
09:44:50 From brad childs : https://docs.google.com/spreadsheets/d/1Gb88gpWMjCsROM31rXdylv0ibvCupLi-KYT_Tez5_2Y/edit?usp=sharing
09:44:58 From John Griffith : Thanks!
09:55:32 From hayley : +1 to 15th to 16th
A: All right, today is March 29, 2018. This is the meeting of the Kubernetes Storage Special Interest Group. Today we have a few things on the agenda. First up is Paris; she's going to give us an update from ContribEx. After that we're going to do planning for the storage SIG. So I'm going to hand it over to Paris to take it from here.
B: Some of you have actually already heard this, because you might be core contributors in other SIGs, but what we're doing right now is going to all of our SIGs and working groups to try to communicate a little bit better. All of you are contributors, so all of this information is relevant to you, and all the work that we do is relevant to you. So we would love to capture feedback, and also let you know how we communicate out certain changes that relate to workflows and the general contribution process.
B: You can expect us probably once a quarter, hopefully not with this much information; since this was our first time, we loaded it with a lot of info, so hopefully next time this is only three to five minutes instead of ten. So, really quickly: one of the main goals of Contributor Experience this year is "if it's not automated, it better be documented." We would also very much like to see a smoother path for contributors across repos; right now a lot of repos have different workflows and things like that.
B: So those are our goals this year, and you'll see that in what I'm about to say for the rest. Something that we've been working hard on is label definitions and bots that help your productivity, like fejta-bot, which is the stale-issue bot. We'd really like your feedback on a lot of this, because some people are using labels differently, and they're also using bot commands differently. You can see some of the threads linked in the agenda.
B: There are a couple of links in here asking you for feedback, so just give us some feedback when you have some time, if you care to weigh in. Another thing that we're working on is issue triage and issue management. We've come up with some issue triage guidelines, and we're curious to get feedback on whether we should apply these new triage guidelines across Kubernetes repos. The link, again, is within this document; if you could give us some feedback on that, that would be awesome.
B: So how do you actually find out about a lot of these changes? That's one of the things we've been hearing from some of the repos, especially as we apply things across repos: "Hey, how did I hear about this?" or "How were we supposed to hear about this?" We actually outlined this in our charter.
B: Changes go out to the Contributor Experience mailing list, to the SIG leads list, and to kubernetes-dev, so you'll see all three of those at some point before changes actually happen. We're also announcing changes in the announcement section of the community meeting on Thursday. So that's really how you can find out about us and about the changes that we would like everyone to adhere to. Speaking of charters...
B: We actually have the first edition of our charter. I'm in the middle of a PR right now to fix it; the things we're fixing are that we're using the word "chair" instead of "lead" now, based again on the steering committee's stated governance, and we're also using "technical leads" and "subproject owners." I've also linked to a project guide, if you want to see some of the projects that we're working on as well.
B: Speaking of the new contributor guide from the very beginning: we do have one, and we'd love for you to review it, make comments, and poke holes in it. That's the only way we'll ever get a very good contributor guide: if we keep it nice and tight. The developer guide portion is coming; it's underway right now, and we would love to have your comments on what's currently missing, what we could do better, and what new docs we need. Again, the developer guide issue link is in the doc as well.
B: Next thing is: how is your group growing your current contributors? Things like burnout are real; we're releasing quarterly, so we do want to make sure that we have a nice, healthy contributor ladder going, and we've created some programs with a couple of things in mind. One is your time: time always seems to be the number one reason folks don't officially mentor, even though most of you, I know, are unofficially mentoring. But these are programs that can help you, and we would love to get you involved in them.
B: One of them is a sort of mentors-on-demand feature called Meet Our Contributors, once a month. It's a live stream, very similar to our office hours, except it's for contributor questions, and it's geared toward new and current contributors. Our next one is this coming Wednesday, and we have two time zones for that. We take questions on Slack and Twitter; we'd love to have you.
B: We do have two slots left for each time zone next Wednesday. Questions run from "How did you get into Kubernetes?" all the way to "Why is this test flaking?" - so it can be across the gamut, and no, you don't have to know all the answers; there are four to six other people on the call. Another one I want to highlight is group mentoring, which takes the concept of peer mentoring.
B: So we get a group of people together that have a shared goal. The test we're currently running is members-to-reviewers for three different SIGs, and it's working out really well. One of the lessons learned is that we can't span eight time zones; that just doesn't work, especially in mentoring.
B: So we're definitely going to take some of the lessons that we've learned and spawn those off into the next cohort. Erin Boyd has actually stepped up for SIG Storage, and we're going to be running a cohort here within the next couple of weeks, where she'll be a mentor for individuals from new-contributor status to membership, and then hopefully, possibly, even see them through to the next step. If you're interested in this: ideally, this is going to be self-service, because humans involved in the process don't necessarily scale mentoring.
B: So we're working out some really cool automation to make this self-service. Next is the Buddy Program, which is inspired by GopherCon. GopherCon has it where new folks coming to the conference who want a tour and things like that get assigned a buddy. I'm taking this one step further: you get a one-time, one-hour commitment with someone a level higher than you, and you can pair-program with them.
B: You can do code reviews; it's your hour with them. And again, we're hoping to get that automated and very self-service as well. A few more announcements: Slack is definitely a hot topic for us as of late, as in a lot of open source communities. I just want to mention that you should definitely start pinning important documents to your channel, because we're onboarding people like crazy right now.
B: We onboarded 2,000 people in the last 30 days just on Slack, and now we're up to 34,000 people. This is huge. So pin things like your charter, how new people can get involved, your meeting agenda, etc. We do have Slack guidelines now, so take a look, and also please join the slack-admins channel if you need anything at all. The last thing - and then I'm done, I swear - is user office hours. We're always looking for more volunteers. These get great user questions.
B: It's a great way to hear from the community about where some of the pain points are. There is a link in the doc; please volunteer for that. And that's it. So again, if you need anything from Contributor Experience, please reach out, or if you have any feedback for us - Slack, our mailing list, carrier pigeon, owl - however you need to get it to us, get it to us. Thank you so much.
A: We'll move on to the next item: planning for the Kubernetes 1.11 release. The purpose of the planning session today is to come up with a list of features that we want to work on as a SIG for this next quarter, and preliminarily assign who's going to work on them. We can finalize these at the next meeting, which is going to be on April 12th, I believe. So let's jump into the planning doc; it's linked in the agenda.
A: So we would need to update the CSI external provisioner to handle this change; someone on our side has agreed to work on that, and I can help shepherd that design. There are also going to be changes required to the existing in-tree volume plugin provisioners to enable this. We're going to leave that unassigned at the moment, and then we can pick it up in subsequent quarters - or, if folks are interested, let me know and we can add it to this quarter. Next up is the local...
A: Okay, so we'll leave that unassigned for now; if folks are interested in picking it up, let me know. Moving on, the next item is local volume dynamic provisioning, which is what I referenced earlier. The idea here is that currently local volumes require you to provision PVs manually ahead of time, or using some script, and we want local volumes to be dynamically provisioned, like GCE PDs or Amazon EBS volumes.
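To illustrate the gap being discussed, here is a minimal Python sketch of the static approach: a discovery loop that turns each disk found under a directory into a pre-created PV record, the way the "provision ahead of time with a script" workflow does today. This is illustrative only; the function name and record fields are assumptions, not the actual Kubernetes API, and the real local-volume provisioner is Go code.

```python
import os

def discover_local_pvs(discovery_dir, node_name, storage_class="local-storage"):
    """Turn each disk mounted under discovery_dir into a PV-like record.

    This mimics static provisioning: every entry in the directory becomes
    one pre-created local volume, pinned via node affinity to the node
    that owns the disk, because local volumes only work on their node.
    """
    pvs = []
    for entry in sorted(os.listdir(discovery_dir)):
        path = os.path.join(discovery_dir, entry)
        pvs.append({
            "name": "local-pv-%s-%s" % (node_name, entry),
            "storageClassName": storage_class,
            "local": {"path": path},
            "nodeAffinity": node_name,  # usable only on the owning node
        })
    return pvs
```

Dynamic provisioning would replace this ahead-of-time scan with on-demand creation when a claim arrives, which is what the planned work enables.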
I: So I think I'll have someone helping me out as well there, because the design has already mostly been worked out. The idea is: we are already doing the resizing of mounted volumes, because XFS requires us to do it, so all we have to do is add the hooks into the code to support it. We have a design; now we need to figure out a few things, and then we should be good to go.
I: The whole resizing feature has already been alpha for two quarters, and we're trying to take resizing to beta. Because we are already resizing mounted volumes, we want this to go along with it: there's another item for volume expansion going to beta, so this will go to beta as well. Otherwise, we'd have to create a different feature flag just for online resizing, just for that bit.
I: The admission control just does some sanity checks to make sure that you only resize PVCs whose storage class permits resizing; not every PVC can be resized. But the admission control gets out of the way after that: the actual resizing is handled by a dedicated controller. The admission control just registers the sanity checks. Okay.
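As a rough sketch of the sanity check being described, assuming a simplified StorageClass shape (the real admission plugin is Go code inside the API server; `allowVolumeExpansion` is the field the feature actually uses, the rest is illustrative):

```python
def validate_pvc_resize(old_size_gib, new_size_gib, storage_class):
    """Admission-style sanity check for a PVC resize request.

    Only expansion is allowed, and only when the PVC's StorageClass
    opts in; past this gate, a dedicated controller does the work.
    """
    if new_size_gib < old_size_gib:
        return (False, "shrinking a volume is not supported")
    if not storage_class.get("allowVolumeExpansion", False):
        return (False, "StorageClass does not permit volume expansion")
    return (True, "")
```

The design choice here mirrors what is said above: admission only gates the request, so the controller that performs the resize never has to re-litigate policy.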
I: Yeah, so I added that item, but maybe we'll find an owner later on.
A: Next up is volume snapshotting in-tree. This is the snapshotting work that's been going on. For the next quarter, the remaining work is closing on the design for restore, and introducing snapshotting in CSI. On introducing snapshotting in CSI, Jing is already working on that, and it's progressing. And then finally, consider moving volume snapshots in-tree. We can track these as three separate items or a single item.
L: Today, Kubernetes has no way to monitor PVs. If, for example, volumes are deleted mistakenly or become unhealthy, Kubernetes will not know that, and that will lead to data loss, so it's necessary to monitor PVs. I have written a proposal for that and am working on a prototype; I'll send it out when it's ready.

A: Awesome.
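A toy sketch of the kind of check such a monitor might run: flag any PV whose backing storage has disappeared. The function name and record shape are made up for illustration; the actual proposal defines its own metrics and conditions.

```python
import os

def check_pv_health(pvs):
    """Return a name -> status map, flagging PVs whose backing path is gone.

    Surfacing this signal (e.g. as a metric or a PV condition) is the
    point of the monitoring work; recovery is a separate, orthogonal
    concern, as discussed below.
    """
    return {
        pv["name"]: "Healthy" if os.path.exists(pv["path"]) else "Unhealthy"
        for pv in pvs
    }
```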
A
Ok,
I
think
there
was
a
big
discussion
around
this
at
the
last
face-to-face
and
there
was
confusion
about
exactly
what
the
scope
of
this
feature
should
be.
It
might
be
an
interesting
topping
for
the
topic
for
the
next
face
to
face,
but
yeah.
Let's,
let's
take
a
look
at
this
and
review
the
designs
it
looks
like
Michelle
has
already
signed
up
as
a
reviewer
yeah.
K: Detecting that the storage is in a bad state and then actually recovering are orthogonal; the recovery part is really separate. So I think probably the bulk of this is just getting the metrics in place to show that PVs are in a bad state, and then the actual monitoring and recovery of that, anybody can handle however they want.
A: That would actually be huge. Next up is preparing CSI for GA in Q3. As many of you are aware, CSI was introduced as alpha in 1.9 at the end of Q4 last year, and then it was moved to beta this quarter, Q1, in the 1.10 release. What we're planning to do is keep it in beta for one quarter, so for the 1.11 release in Q2 it's going to remain in beta, and we're going to target moving CSI to GA - to stable - in Q3, in 1.12.
A
In
the
mean
time
for
this
quarter
we
need
to
drive
towards
all
the
loose
ends
that
we
have
the
two
dues
that
are
outstanding
for
CSI,
there's
a
separate
document
that
we,
where
we
meet
regularly
and
keep
track
of
all
the
work
that
needs
to
be
done
there.
So
we're
going
to
continue
to
drive
towards
a
ga
in
q3,
so
for
this
quarter
it
remains
in
beta.
But
there's
still
a
lot
of
work
left
to
be
done.
I
can
help
lead
this
and
Vlad.
Would
you
be
okay,
being
the
reviewer
on
this
stuff?
A
Next
up
is
the
migration
story
for
entry
plugins
to
CSI,
so
eventually,
as
CSI
becomes
stable
and
GA,
we
have
two
motivations
for
moving
entry
volume
plugins
out
of
tree.
One
is
to
not
have
two
different
places
where
we're
maintaining
the
same
same
code,
and
the
second
is
that
there's
a
big
push
to
get
cloud
provider
code
out
of
the
kubernetes
core.
A
So
today
the
entry
volume
plugins
like
GCPD
Amazon
AWS
cinder,
rely
on
cloud
provider
code
that
is
vendored
into
the
core
of
kubernetes,
and
we
want
to
to
decouple
that
so
there's
big
motivators
for
getting
the
entry
plugins
out
of
tree.
That
said,
the
entry
volume
plugins
expose
themselves
in
the
kubernetes
api
and
therefore
the
deprecation
policy
for
the
kubernetes
8
gie
applies.
A: All right, next up: last quarter Jing made a heroic effort to try to fix the volume reconstruction code. For those of you who are familiar with how the core mounting and unmounting logic works, you'll know that there is an edge case where, if the kubelet crashes and restarts, it loses its in-memory state, and the pod that was referencing the volume may no longer be available on the API server.
A: The kubelet then has no way to recover that state, and there is a bunch of code - the volume reconstruction code in the kubelet volume manager - that attempts to check for orphaned volume mounts and try to unmount them cleanly. This code has been a source of a lot of bugs in the past, and so Jing attempted to fix a lot of those bugs this last quarter, but it's still a very complicated bit of code, and the fact is that the mount path alone does not contain enough information.
A: Apparently, there is a new kubelet checkpointing mechanism that allows certain in-tree data structures to be periodically checkpointed to disk and then recovered at a later point. The purpose of this item would be to explore replacing the existing volume reconstruction code with that, with the hope that it would make this code more robust and stable. For this quarter, this would basically be exploratory and design.
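The checkpointing idea can be sketched like this: periodically persist the in-memory volume state to a file, and rebuild from it after a restart instead of inferring everything from mount paths. This is a minimal illustration (JSON to a single file, atomic rename); the real kubelet checkpoint manager has its own format and API.

```python
import json
import os
import tempfile

def checkpoint_state(state, path):
    """Atomically persist in-memory state: write a temp file, then rename.

    The rename ensures a crash mid-write never leaves a torn checkpoint,
    which matters for exactly the crash-and-restart case described above.
    """
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path))
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)

def recover_state(path):
    """Rebuild state after a restart; an empty dict means no checkpoint yet."""
    if not os.path.exists(path):
        return {}
    with open(path) as f:
        return json.load(f)
```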
A: Right, cool. Thanks, Vlad, Jan, and Brad. Next up is moving the GCE cloud provider disk API to auto-generated code. The cloud provider code that currently exists in-tree, which the GCE PD volume plugin uses, is a manual wrapper shell around the Go client code that the Google Cloud Platform team ships. The previous quarter there was a big effort to auto-generate that wrapper instead of writing it manually. The auto-generated code is complete, but the Kubernetes GCE PD volume plugin still references the old code, and we're going to need someone from our team to help update that. Jing from my team has agreed to work on it, but if anybody else is interested, I'm happy to let you work on it as well, or to have you help review it.
A: If a socket appears in a directory that is being watched, the kubelet will automatically probe it with a specified gRPC call to determine what that socket is responsible for, and then register it internally, so that internal components like CSI and device plugins can use it to communicate with the plugin. So this is a big outstanding piece of work for CSI, and it looks like, Vlad, you've already signed up for it, so I'm happy to let you lead that.
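A simplified sketch of that registration flow: scan the watched plugin directory, and for each socket not seen before, hand it to a probe callback (standing in for the gRPC handshake) and record the registration. Directory polling and the names here are assumptions for illustration; the real watcher uses filesystem notifications and a defined gRPC registration API.

```python
import os

def scan_plugin_dir(plugin_dir, registered, probe):
    """Register any new plugin sockets found under plugin_dir.

    `registered` maps socket path -> whatever the probe reported
    (e.g. plugin type and name); already-known sockets are skipped,
    so repeated scans are cheap and idempotent.
    """
    for entry in sorted(os.listdir(plugin_dir)):
        path = os.path.join(plugin_dir, entry)
        if entry.endswith(".sock") and path not in registered:
            # In the real kubelet this is a gRPC call over the socket.
            registered[path] = probe(path)
    return registered
```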
A: I'm going to mute everyone; please unmute yourself if you're going to talk. Okay, so that concludes the items that we have planned. Feel free to add items to this over the next two weeks. We're going to review this at the next meeting, nail down the priorities, convert them into feature issues in the Kubernetes features repo, and then we'll get cracking on 1.11. So thank you for that. We have 20 minutes left in this meeting; next up, I'm going to hand it off to Brad to discuss the external storage repo.
K: So we have, I don't know, about 15 projects in the external storage repo. Some of them are pretty active, some of them are not too active but still useful, and then some, I think, are dead. The first step to breaking them up is to identify the owners for them, and then I'll go create different repositories for each one. This spreadsheet is the first step of that.
K: If a project sees no maintenance over a certain period, we'll probably just delete it, but we do need to identify owners for these projects before we can break them out, and if no one has identified ownership, the project is going to get axed. So I'll leave this open; I don't think it's worth the time right now to step through each one of these. I'll give everybody two more weeks to come and sign up for stuff, but please sign up: if you don't, you probably won't see the project get moved.
K: And I think we should stick with the steering committee's advice and identify ownership for each one of them. We probably also want to keep track of all SIG Storage projects in a registry, I don't know. We haven't really done that in the past, because we haven't had individual projects, but I've noticed that other SIGs tend to keep a project list. Yeah.
K: One other piece: there are a couple of different phases a project can be in. You can have something that's incubating, which could still be in the kubernetes-incubator organization. You have the core project stuff that's actually going to bundle and ship with Kubernetes, so that would go into a sub-repo of the kubernetes org. And then there's the kubernetes-sigs org, where we're supposed to put projects that are kind of undefined - not really sure if they're incubating, or incubating on their way to being core projects.
A
Okay,
so
if
any
of
these
projects
looks
familiar
to
you
and
something
that
you've
worked
on,
please
come
in
and
update
ownership
information
here
and
then
we'll
proceed
with
the
next
step
of
figuring
out
where
they
should
be
moved
all
right.
Thank
you,
hey
Brad.
Do
you
have
the
link
to
that
spreadsheet.
D: I feel like these problems are serious enough that we need to find an owner for containerized kubelet overall, and for integrating it with the volume subsystem, because some of these issues that we've hit are core, fundamental architecture and design issues that aren't going to be easily solved.
K: Okay. Do you have a development resource that you can assign to this, if one of us takes the lead as the SME and brings them up to speed? I don't want us to just hold all the information, is what I'm getting at. We've done this for OpenShift - we are SMEs on it - but I don't want to necessarily take full ownership upstream; I'd really like to split it with someone in the community, I think.
A
Michelle
might
be
able
to
help
review
some
of
this
stuff,
but
we're
pretty
constrained
on
resources
on
our
side
and
GC
and
GK.
Don't
actually
do
containerize
cubelet,
so
we'd
really
like
someone
else
to
step
up
and
take
ownership
of
this.
Otherwise,
what
gonna
be
an
area?
That's
gonna
continue
to
suffer.
That's.
D: It was always just that you gave it a global mount path or something, and we assumed that you had already globally mounted it somewhere else. But because of the subpath changes, now we are actually doing mounts for host path, and now we kind of have this hole where a host path volume doesn't go through reconstruction. So now, any time that event hits, you're going to be left with these stale subpath mounts for host path.
D: Really, the only fix I can think of is to actually modify the host path volume to do a bind mount into /var/lib/kubelet, but I feel like that's also a pretty big change. We're changing something that has the potential to impact everything - more than just the subpath feature. So, yeah.
A
So,
let's
next
steps
are
to
LAN,
could
you
put
together
an
RSVP
or
a
document
like
we
usually
have
for
the
agenda
and
have
a
RSVP
list
at
the
top
and
folks,
if
you
plan
to
attend,
please
fill
that
out
as
soon
as
possible,
because
it'll
help
the
organizers
get
an
estimate
for
how
many
people
are
coming
planning.
An
event
like
this
is
a
massive
effort
and
the
earlier
that
we
could
get
a
head
count.
The
better
yeah.
A: That's all. All right, thank you for driving that, and thanks for hosting, Steve.