Description
Join Xing and Jay as we explore how CSI drivers and Kubernetes controllers deal with idempotency and resource leaking for volumes and snapshots.
- 00:00:01 Welcome to TGIK with Xing! Episode 169
- 00:01:00 News of the week
- 00:10:00 CSI controller service RPCs
- 00:15:00 CreateVolume should be idempotent, and so should most other things
- 00:17:00 CreateVolume takes a name and returns an ID
- 00:24:00 Volume UIDs and handles: AWS, vSphere
- 00:27:00 DeleteVolume succeeds even if a volume isn't there
- 00:36:00 AWS EBS CSI driver idempotency support and parameter mismatches
- 00:51:00 Demo of deleting PVs, deletion timestamps, vSAN leaked storage example
A: Hi everybody, it's Jay. It's another TGIK episode. Today is October the first, and I have my friend Xing. She's a CSI maintainer, so she's deep in this stuff, the storage stuff, and she's gonna show us some cool stuff today. So we'll give people a minute to roll in and make sure the stream is running. Okay, and then, yeah, cool, it's working. All right!
A: So Martin's here, and let me see here, there we go. Yeah, Martin is here: Martin Borgman, hello, it's good to see you from the Netherlands. We see you every week, I'm so glad. And I'm having a beer right now, because I don't have to do the show today; Xing's doing the show, so I can just watch her do stuff. What's up, Balazs from Hungary, and we've got folks from Finland, Jukka. I just feel like I'm becoming friends with everybody.
A: We had one of our friends from Red Hat join us, Massimiliano Giovagnoli from Italy, cool. So good, we got everybody in here. And Ricardo's here, okay, Ricardo's here, now we can start. So what's going on? Let's see, news. Let's start out with news of the week.
A: It's been a long week for me, we had a lot going on. I've been doing a bunch of stuff, hanging out with customers, hanging out with my team in different parts of the country. I'm ready to learn about some random upstream stuff going on. Let's see, we've got policy, gRPC retries, okay, Linkerd 2.11, and I think they have some network policy stuff that they put in here too. So this is kind of related to one of our recent shows.
A: Let's see here: retries for HTTP requests, a container startup ordering workaround, smaller images. So Linkerd has always been lightweight and easy to use compared to, you know, whatever other service meshes; that's always been their claim to fame, and I guess maybe it's even smaller now. Automatic mutual TLS — I mean, I think that's been around for a while. And, conversely, you can set a default authorization policy. So this seems to be, yeah...
A: These are all existing features, but now they have some new things in Linkerd, Server and ServerAuthorization, which allow fine-grained policies to be applied. So yeah, they have, like, pod-level policies, I guess. Who else is joining in? Okay, Martin's drinking whiskey, okay, so everybody's having fun right now, cool. Balazs asks: can you bring up the message pop-up window?
A: I know it's annoying when I keep these on all the time, so, okay: "which together allow fine-grained policies to be applied across arbitrary sets of pods, so a Server can select across admin ports on all pods in a namespace, and a ServerAuthorization can allow a health check connection from the kubelet." Okay, so they're adding some notion of specific network-policy-type stuff that's component-specific in Linkerd. What else is going on? KubeCon.
A: Oh okay, well, Waleed, you're from Saudi Arabia, cool. Ago, you're from Pakistan, cool! So is anybody excited about anything? I know my friend Arun made a tool called k8snetlook; I'm excited about that, and he's gonna be presenting that over at KubeCon. k8snetlook... where is it, here?
A: k8snetlook on GitHub, so this is, he's got, yeah, this is my friend Arun. We used to work together, and well, we work together again now because he just came to VMware. He's going to be presenting this tool. I'll add this to the show notes if folks want to play with it; it's a way to debug pod networking in real time. So he's giving a KubeCon talk. I don't know, are you seeing anything interesting in the storage space going on at KubeCon that you know about?
A: There was a TGIK episode on this where Joe went over all this stuff, and I was talking to Lori the other day about doing another TGIK, a new one on the same thing; I think someone's going to do that. So what else is going on? Building and sustaining open source communities. This is important; I feel like this is important at work and in open source. Let's see all these different things — these are all the different Linux Foundation conferences.
A: There's OSPOCon Europe, EnvoyCon, okay. So Envoy is what Contour and all those other things are based off of; I think that's what Istio and maybe even Linkerd is based off of, I don't know. So we got a lot of other conferences, check those out. Velero.
A: Okay, so Velero 1.7.0 is out, okay, so congratulations to the Velero team. That's our backup and restore solution for K8s; it used to be called Ark. So the latest version is out. I don't know if they have a changelog... yeah, they have the releases. So let's see what happened in Velero.
B: Yeah, I'm actually the tech lead for the Velero plugin for vSphere.
B: Well, so we are going to do a release of 1.3 soon, so yeah, cool. For that, mostly testing is what's left.
B: So it basically backs up the vSphere volumes by taking a snapshot.
B: Velero basically takes care of backing up the Kubernetes metadata.
B: Well, because it has to store this in a different place, right? So it's not just "we'll take a snapshot."
A: All right, okay, so I guess there's a lot of stuff in here: they went over to distroless images, so it's safe for now. And now we're gonna get into one more news thing, and then Xing's gonna take over, and I'm just gonna hang out, drink a beer, and watch her teach me a bunch of CSI stuff. I'm excited; this is an easy day for me.
A: A k8s-node flag, a command line option, on Falco. Falco is like a security tool, right? What is Falco? It's some kind of security tool for cloud native security, a threat detection project for Kubernetes. This is from... who is it, Sysdig? I don't know, it's one of these; I forgot the company behind it.
A: One of these security companies. Anyways, a k8s-node command line option, which allows filtering by node name when requesting pod metadata. Typically it should be set... okay. So these are the new Falco 0.30.0 features. All right.
A: Okay, so this is just a performance improvement, and then a proposal for a plug-in system, so, okay, Falco plug-ins, okay. And then they've got a new Falco release schedule. I guess Falco is 100% open source; I thought it was like a closed source thing, but it's an open source thing. I guess they have a whole community and it's not directly associated with the company, so that's cool. I think it's like a continuous monitoring solution, anyway.
A: So the new Falco's out. If you're a security person, you probably know more about it than I do. Let me check back on the stream and see what's going on. What's up, Vlad?
A: "When are you going to do another show?" Me and Vlad are supposed to do a show one of these days. "Falco rules filter the syscalls from containers" — krk4, what's up. And Carlos is here. Carlos, what's up, I had fun with you last week.
A: Well, Lee's excited about security day. What's Web3, Carlos? And then we're going to take it away with Xing. Go ahead and let me know, and then we're gonna go. So Xing is gonna show us the CSI stuff really quickly. She's gonna be sort of digging into some of the API server and kube-controller-manager parts and some of the CSI driver stuff, and how all this gives us idempotency and so on and so forth. There's a lot of idiosyncrasies in terms of how resources are managed.
B: So I will talk about idempotency in CSI. Let's first look at the CSI spec, at how the CSI spec defines idempotency. So here it talks about idempotency requirements; this is under timeouts. If a CSI call times out, the CO — the container orchestrator — may retry. In that case, the CSI driver needs to make sure that it will continue where it left off when the call gets retried.
B: Actually, let's look at idempotency for CreateVolume. Yeah, look at this one. So for CreateVolume...
B: So if the volume already exists, then it should just return OK with this CreateVolumeResponse. Think about it: when CreateVolume is called for the second time, if the volume is already there, then that second call is almost like a "get volume" call, because you're supposed to get the same volume back.
B: It's only calls like ListVolumes where we usually don't say they have to be idempotent, because, you know, they don't change anything every time you call them.
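The contract Xing describes — CreateVolume keyed by the CO-supplied name, returning a driver-generated volume ID, with a retry behaving like a "get" — can be sketched roughly like this. This is a toy model, not code from any real driver; the class and names are illustrative.

```python
# Toy sketch of an idempotent CreateVolume: the CO passes a *name*, the
# driver returns a *volume ID* it generates. A retry with identical
# parameters returns the same volume; the same name with incompatible
# parameters is an error (CSI's ALREADY_EXISTS case, shown later for AWS).

class AlreadyExists(Exception):
    """Same name requested again, but with incompatible parameters."""

class ToyDriver:
    def __init__(self):
        self._by_name = {}  # name -> (volume_id, capacity_bytes)
        self._counter = 0

    def create_volume(self, name, capacity_bytes):
        if name in self._by_name:
            vol_id, existing = self._by_name[name]
            if existing != capacity_bytes:
                raise AlreadyExists(name)
            return vol_id  # idempotent retry: hand the same volume back
        self._counter += 1
        vol_id = "vol-%d" % self._counter
        self._by_name[name] = (vol_id, capacity_bytes)
        return vol_id
```

Calling `create_volume` twice with the same name and size yields the same volume ID, which is exactly why a timed-out-and-retried call does not leak a second volume.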
A: I don't think it's... yeah, so I just asked if it's CSI related, but I just wanted to make sure. And hi again, Alex. So go ahead and keep going. You may not want to share this screen, because it'll show that recursive... anyways, yeah, so go ahead, keep going.
B: Sorry, yeah — no, no, go ahead and ask questions. Most of the calls, right — CreateVolume, DeleteVolume, CreateSnapshot, DeleteSnapshot, ControllerPublishVolume, ControllerUnpublishVolume — all of those are idempotent. So one thing I want to mention is, right, if you look here, it says the input is a name. And maybe I should let you go...
B: Let's go to the next one, yeah. So this is a CreateVolumeRequest, right. In the CreateVolumeRequest, this is the CO-provided name. And when the CSI driver gives a response, it actually returns an ID. It's here, right: this volume will be in the CreateVolumeResponse, and it gives a volume ID back. So these two are actually not, well, not necessarily the same.
B: Okay, so if you look here, right, this is the volume name prefix. By default it is "pvc", and this is followed by a UID. You can actually configure this: if your driver cannot take a volume name that long, then you can truncate it. But by default, pvc-&lt;uid&gt; is the volume name that the CSI driver will get in the CreateVolumeRequest.
B: And then, you know, after the CSI driver has created a volume, it will return a volume ID, and that may or may not be the same as this name, right; it depends on the driver. Most of the time I see they are different; I actually have not seen a driver use the same volume name and volume ID. And then, once you get that ID, in subsequent volume operations like DeleteVolume, you always pass that volume ID. That's the unique identifier.
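The pvc-&lt;uid&gt; naming above can be sketched with a small helper. This mirrors how the external-provisioner builds the CreateVolume name from the PVC, but the function and its parameters are illustrative, not the provisioner's real flag names.

```python
# Hypothetical helper mimicking the external-provisioner's naming scheme:
# "<prefix>-<pvc uid>" with "pvc" as the default prefix, plus optional
# truncation for backends that cannot accept long volume names.

def make_volume_name(pvc_uid, prefix="pvc", max_len=None):
    name = "%s-%s" % (prefix, pvc_uid)
    if max_len is not None and len(name) > max_len:
        name = name[:max_len]
    return name
```

The driver then maps this name to its own volume ID in the CreateVolumeResponse; the two are usually different, as Xing notes.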
B: Oh, you mean a name and an ID? Yeah, so this is the input, right. This is the input that the CSI driver will get at CreateVolume time, and when it is done, it will give you a volume ID that is actually from the storage system. So I'm going to show you how that looks, right. So...
B: If you look here, right, this is a PVC; I have this one created. So this is the PVC name, and this PVC of course has a UID, right. So the input to the CreateVolume request call, that name, will be "pvc-" plus the UID of this PVC, and then, I think, that's this one.
A: Well, Waleed has an interesting comment here. He said...
B: Oh, 75? The name, or which field? Because for the volume name right now the default actually should just be 40... maybe you are talking about a different field.
B: Yeah, and I don't know which field you're talking about. I know that we actually had some problems with the name, with the length requirements; we actually increased that.
B: I guess you're talking about OpenShift. I know that in CSI we actually... actually, you can check this out. I think, see, that's...
B: Look at here, right. If you look at this one: increased from 192 to 256 — this is the node ID field. So I think there are some other places where we also relaxed the size limitation like that.
B: Yeah, so, I don't know which one you're talking about; maybe that's not a CSI limitation. That maybe sounds like datastore... that's probably something else. Sorry, I'm not sure exactly which one you're talking about, but if it's CSI specific, you're welcome to bring that up at the CSI community meeting.
B: This is, yeah, right, a cluster running on vSphere, so I have this one. I just wanted to show you the volume ID part. So this is that part.
B: All right, so if you look here now, the volume handle. This is the unique volume ID returned by the CSI driver — of course your driver may return something different, I'm just showing you this example — but this volumeHandle field is the one we're looking for. So this volume, I can show you, okay...
B: Here, yeah, so if we... but...
B: And so if I look here, right, I should refresh this one. I just want to show you there's one volume created here, right. So you see, this is the list here in the UI; it shows this one volume. So when I created this PVC, right — this is my PVC demo, right.
B: So when I'm making this call — when the user requested this PVC — and then the external provisioner makes the CreateVolume call multiple times with the same request, it should still just create this one item here: one volume, yeah.
B: Yeah, yeah, that's it, it's here. So Waleed has...
B: Oh, so actually there's a way now. I think this is actually a relatively new feature, right. So if you look here, if you select this... Of course, normally you want to delete that from Kubernetes, right? You don't want to delete it from here. But I actually have another example I'm going to show you later; in that case I actually have to go clean up.
B: So let's say, for example, for some reason you have leaked volumes, which I'm going to, you know, look at later. In that case you want to use this to clean up. That is, if you don't have a PV API object in Kubernetes anymore that is associated with this volume, then you have to manually clean it up, so you can actually come here and, you know, click this and delete it.
B: Okay, all right. So then for... oh, it's the dog, okay. It's this one, it's back, it's here! So let's see, let's go back to this idempotency thing. That's the create, and then for delete it's the same thing, right: this must be idempotent.
B: If the volume ID does not exist — if you can't find it — then the CSI driver should just return OK; it's already deleted. You're not supposed to return a NOT_FOUND error in this case, so that's actually very important, yeah. So that's this one. Let's take a look at a few drivers and see what they do.
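The DeleteVolume rule just stated — unknown ID means "already deleted", answer OK — is small enough to sketch directly. Again a toy model, not real driver code:

```python
# Idempotent DeleteVolume sketch: if the volume ID is unknown, treat it as
# already deleted and return OK rather than NOT_FOUND, so a delete retried
# after a timeout still converges instead of erroring forever.

def delete_volume(backend_volumes, volume_id):
    backend_volumes.pop(volume_id, None)  # no error if it is already gone
    return "OK"
```

Deleting the same volume twice returns OK both times, which is what lets the external provisioner retry deletes safely.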
B: If it, you know, times out — you don't want a situation where you were trying to create one volume but ended up having 10, and then you don't have a Kubernetes API object associated with them, then you know... oh.
B: Okay, so yeah, CreateVolume. This is the GCP PD driver. So first it gets the input parameters, like the capacity; it gets the name. And then, based on this name, I think it will try to get a volume key, basically based on the replication...
B: Yeah, and then it gets a volume ID here, and then you see here it tries to acquire a lock. So basically, if there is already a CreateVolume operation going on with the same volume ID, then, you know, it's going to return, saying: okay, it's already in progress. So it's not going to allow two operations to go on at once. And then also, if you look here, right, it's going to validate the disk. It's going to say, okay, let's look at this...
B: ...if this disk already exists, and then it checks the input parameters to see if there is any incompatibility — that is, whether there's an existing disk that was created with some different parameters, so that's incompatible, yeah. So those are the things it's checking, to make sure that it's actually idempotent.
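The per-volume "one operation at a time" guard described for the GCP PD driver can be modeled with a small lock set. This is an illustrative sketch in Python (the real driver is Go); in the real driver the refused caller answers with a gRPC ABORTED "operation pending" error.

```python
import threading

# Sketch of a per-key in-flight guard: while one CreateVolume for a given
# volume key is running, a second call for the same key is refused instead
# of spawning a duplicate operation.

class InFlightLocks:
    def __init__(self):
        self._mu = threading.Lock()
        self._busy = set()

    def try_acquire(self, key):
        with self._mu:
            if key in self._busy:
                return False  # caller should answer ABORTED / "pending"
            self._busy.add(key)
            return True

    def release(self, key):
        with self._mu:
            self._busy.discard(key)
```

Combined with the "existing disk with different parameters" compatibility check, this is what makes the create path safe to retry.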
B: So that's what's inside their create. And then let's say, DeleteVolume... This one is actually quite different; I need to open another window.
B: First, you'll see it's very different. So initially this is implemented using a map, an in-memory map. It basically saves the volume ID along with the task info, to keep track of that and make sure that it's not going to, you know, go create another task for the same volume ID. But then I think we found some problems, because this solves the problem when the call is made multiple times by the external provisioner; however, if the CSI driver crashes in the middle of CreateVolume, or if the driver cannot communicate with vSphere for some reason for a long time, then this one does not work.
B: So that's why, actually, right now, this is something kind of ongoing. This is very new, so it's still in flux.
B: Yeah, this is... it's not released yet; this is going to be an alpha feature in the coming 2.4 release, yeah. So this just uses a different approach: it basically persists that info in a datastore. So whenever it gets a request, it's going to check to see if it's already there — it retrieves this from the store — and if it's already created successfully, it returns this volume ID here, right.
B: So that's the whole idea here. If it's still ongoing, then, you know, it will just retrieve this task information. And if it's a new request, it will make this call — the CNS API call to create the volume — and after that, it's going to retrieve that information and then either return a failure, right, as a status error, or return success, and then update the information in the store. So it's going to be a little different from the previous...
B: ...you know, the map approach, yeah. So this is still ongoing; I think this is merged, but I think there's still some bug fixing going on. It's being tested, yeah.
A: Yeah, that might be interesting. I think it might be worth having them side by side, maybe, Xing — like having one on one side and one on the other, so we can... yeah. This is really cool. I've never looked into this code in detail, so this is cool, I'm appreciating it. AWS CSI, okay, cool, right.
B: So if you look at... okay, I think it's this one, controller, right. So you see here, they actually recently — this is July — made a change to "ensure CreateVolume idempotency utilizing latest Go SDK."
Yeah,
so
if
you
look
at
this
right
to
just
check
and
see
what
what
changes
they
made
here,
and
so
it
actually
added
this
added
this
arrow
basically
to
check
if
the
parameters
mismatch,
so
let's
actually
see
yeah.
So,
basically,
when
it,
when
crit
warning
returns
an
error,
it
checks
to
see
if,
if
it's
this
mismatch
or
basically,
if
making
us
making
a
call,
we
can
create
a
volume
call
with
the
same
name
but
different
set
of
parameters.
So
that's
a
mismatch.
Then
you
return.
B: So that's actually similar, right. You remember we were actually looking at some code earlier — if it's existing, then, yeah, right, it's similar here. Validate... this is the Google one.
A: Well, he's saying — I think Carlos is saying — why not store it in etcd. So I think, yeah, yeah.
B: Yeah, yeah, so that's better, but I think there is one slight problem — maybe something we need to look into in the future — which is whether it's actually better for a CSI driver to depend on Kubernetes or not. Because if you look at the CSI spec... okay, I think this screen is too small, I'll go back to full screen. So the CSI spec, right — supposedly... let's actually go to the beginning of it, right.
B: It actually says that — I don't know where exactly, somewhere it says that supposedly... oh, okay, here is the objective, right: a storage vendor writes one plug-in, and that should work across a number of container orchestration systems. But now, I think because Kubernetes is, you know, very, very dominant, it's mainly Kubernetes now, and that's why we're not thinking about that very much. But it actually, you know, should be: you just write the driver once.
B: So, okay, I will talk about the second part, which is, you know, avoiding leaking resources. We actually ran into several bugs in Kubernetes about this. So this is a bug fix; this one was actually fixed quite some time ago, 2019. This is about fixing the external provisioner, because what happened before this fix is, when CreateVolume times out...
B: Basically, the CSI driver takes a long time to create a volume, right, and the call times out, and then we ended up having those volumes created on the storage system, but they are not really recognized by the external provisioner. They are kind of lost, so there's a...
B: The driver will also need to do something, and then the CO also needs to do something, to make sure that there is nothing that leaks. So yeah, you see that he actually made this change, so now there are three statuses. If it's "finished", then we know that the driver is completely done — it could have failed, it could have succeeded, but it's finished.
B: "No change" means a temporary error, and then "in background" means that it's still ongoing. So we can actually look at what the transient errors are, versus a final error, yeah. So if you look here, this is CreateVolume, right. So you do CreateVolume, and then, if there's an error, we check if it's a final error; if it's a final error, then we return "finished". Otherwise we return "in background", so it's still ongoing. So here there are a few error codes: Canceled, where the client application cancelled the call, or DeadlineExceeded, a timeout — actually this is the one that I see most, where the CSI driver times out while creating the volume — or Unavailable, a server shutdown, or ResourceExhausted, a temporary resource problem. All of those, plus the CSI-spec-defined error code Aborted, which is "operation pending for volume" — all of these will say it's still ongoing, so return "in background".
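The transient-versus-final classification Xing walks through can be sketched as a small lookup. The code names below are the gRPC-style codes she lists; this is an illustrative model of the provisioner's logic, not its actual Go source.

```python
# On these codes the external provisioner assumes the driver may still be
# working (or a retry may succeed), so the volume may exist on the storage
# system even though no PV exists yet -- the operation is "in background".

TRANSIENT = {
    "Canceled",           # client application cancelled the call
    "DeadlineExceeded",   # timeout -- the case seen most often
    "Unavailable",        # server shutting down / unreachable
    "ResourceExhausted",  # temporary resource problem
    "Aborted",            # CSI: an operation is already pending for the volume
}

def provisioning_state(error_code):
    if error_code is None:
        return "finished"  # success is also final
    return "finished" if error_code not in TRANSIENT else "in background"
```

Anything else (InvalidArgument, NotFound, and so on) is a final error: the driver is definitively done, just unsuccessfully.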
B: And then, after that, right, we still discovered situations where we have leaked volumes. So this is an issue actually discovered by a customer. They found out that, depending on, you know, how you delete the PVC — whether you delete the PVC first or whether you delete the PV first — you get different results. I was actually surprised when I found...
B: ...this one. So I can just show you what this one looks like, right. So I have this...
B: Yeah, there is a default; you can actually set that. The external provisioner actually has the timeout, the operation timeout, yeah. I see this one, right. So in the external provisioner you can actually set this; there's a default here, 10 seconds, but then you can change it if you want.
B: So I showed earlier that I have this volume here, right. If you look at the reclaim policy here, it says Delete. Delete means: when I delete the PVC, it should just go ahead and delete the PV and also the volume associated with it, right. So I can actually go ahead and do that.
B: Yes, it will retry — yes, it should retry forever, yeah. So if you look here, right, I deleted that; because I deleted the PVC, you see that it's deleted, and then you can go check here.
A: I don't think you can always recover the data. I think there are different policies for different volumes, right? There are some policies you can set where volumes will not actually get deleted from the underlying storage, but they'll get deleted from the Kubernetes API as PVs. But then I think there are other volume controllers that will actually... and I think it's normal: if I'm in the cloud and I delete a volume, I think it's normal that I lose the persistent disk associated with that volume.
B: This is the Kubernetes side, right; Kubernetes has the PV controller, and also...
B: Right, so it's hanging because there's a finalizer there, so it's not going to be deleted, but the deletion timestamp will be added to the PV. So the PV right now is actually still there; if you get it, it's still there, right. So now let's go ahead and delete the PVC.
B: So this guy's still here — oh yeah, so this is a leak, right. Basically, what happens is: even though the reclaim policy is Delete, if you try to delete the PV first, then you run into this problem. Your PV is deleted, but the volume is actually not; it's still there, right, outside of your Kubernetes.
B: Yeah, yeah, so this is a bug that we're actually trying to fix.
B: That's what the KEP addresses: honor PV reclaim policy. So I will just jump into how we are going to fix this. The way to fix it is, well, we will be adding a feature gate, because this behavior has been there since the beginning, right, so we don't know if anyone is actually depending on this behavior for some reason. So we don't want to just, you know, change it without a feature gate.
A: Keep going, Xing.

B: That should be the same, though; I mean, it should not really matter, with this reclaim policy part, right. If you're talking about idempotency, then there could be bugs in the drivers or, you know, someplace else, but the reclaim policy should work. But in this case you see that the reclaim policy is not honored, right; so you can say it does not work because there's this bug we're trying to fix.
B: So this finalizer is actually already there, but it's just not added by default, because there's a flag there that's always false. So what we're going to do, with this feature gate: if the feature gate is enabled, then we are going to add this finalizer to the PV. So every time, after the volume is created, when we create the PV we'll add this finalizer there, and then at deletion time...
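The finalizer mechanics behind this fix are worth making concrete: an object carrying a deletionTimestamp is only actually removed once its finalizer list is empty, so the PV "hangs" in a terminating state until the volume is confirmed gone. A minimal sketch, with an illustrative finalizer name (the real one comes from the feature's implementation):

```python
# Sketch of Kubernetes finalizer semantics as used by the reclaim-policy fix.

FINALIZER = "external-provisioner.volume.kubernetes.io/finalizer"  # illustrative

def add_finalizer(obj):
    fins = obj.setdefault("metadata", {}).setdefault("finalizers", [])
    if FINALIZER not in fins:
        fins.append(FINALIZER)

def can_be_removed(obj):
    """Deletion only completes once deletionTimestamp is set AND the
    finalizer list is empty."""
    md = obj.get("metadata", {})
    return bool(md.get("deletionTimestamp")) and not md.get("finalizers")
```

With the finalizer in place, deleting the PV first can no longer strand the backing volume: the PV object survives until the controller has deleted the storage and stripped the finalizer.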
A: I think this is really cool. You showed us a problem, and it's the first time on TGIK that anybody's actually shown us a problem and then shown us the KEP that solves it, all in the same show. That's really cool — or at least the first one that I've been involved with.
B: Yeah, so we're trying to get this one into 1.23. And then there's another bug — still, yeah, it's like we have already done so much, but then there are still some cases where there is a potential leak. So this one, you know, was found and opened by Patrick here. So if we start to create a volume, we basically call the CSI CreateVolume in the external provisioner...
B: Now, before this is complete, the external provisioner is stopped. So at this time we don't have a PV yet, and then the PVC is deleted, and now the external provisioner starts again. Now we've actually lost the handle to that volume, because the creation is still going on in the background and we don't even know it exists.
B: No, this is, yeah, so this is also a bug. So Fabio is working on this; I think he's going to send me the KEP for this one as well. I think the proposal is also to add a finalizer, like before.
B: Yeah, so I think that...
B: It does; it's going to add a finalizer, similar to our previous fix, the previous problem I was showing, yeah. This one is proposing to add a finalizer on the PVC itself — the one that I showed earlier, that finalizer will be added on the PV, cool. So that one is protecting the volume, and then this one is basically... but this is still relatively new, because, you know, I think he's still doing some prototyping, yeah.
B: The provisioner, when it actually creates that PV object, right — so it will call provision, then call CreateVolume here, and then, after the volume is created, it will go ahead and create this PersistentVolume, the PV, right. But then that's the thing here: there is a possibility that something could happen in between. You know, you call CreateVolume — you could make the CSI call and it's not back yet — and then you don't really have your PV yet, so...
B: Yeah, so this one, right. I thought there was a question earlier that we were trying to answer...
B: Okay, we can maybe talk about that one later, yeah, it's fine. So that's this one. I actually want to show this: snapshots, right. Snapshot support was added later, and it's actually modeled after the PV/PVC.
B: So you can see a lot of similarities, but we also made some changes, because, you know, when we saw some problems we also made some enhancements. So one thing that is different is actually exactly this place I'd like to show you. So for the snapshotter we also have two objects: we have a VolumeSnapshot that is similar to the PVC — that's a request for a snapshot — and then we also have a VolumeSnapshotContent that is referencing a physical snapshot, just like, you know, a PV is referencing a volume on the storage system. And we also have two controllers, right. So for the volumes we have the in-tree PV/PVC controller that is handling, you know, a lot of stuff, handling the binding of the PVC.
B: Similarly, here we have a common controller: the snapshot controller is a common controller that handles the binding of the VolumeSnapshot and VolumeSnapshotContent, and then we also have a sidecar. So this is like the external provisioner. So there's one difference that I'll show here...
B: Right, so if you look here: when we get a volume snapshot request — a new VolumeSnapshot — we are going to try to create a snapshot, right. So very early on, this common snapshot controller will be creating a snapshot content — the VolumeSnapshotContent API object. So you see that the sequence is different from the external provisioner, because the provisioner actually creates the volume first, and only after the volume is created does it create the PV.
B
Yeah, and then this VolumeSnapshotContent will be... so this sidecar controller, the CSI snapshotter sidecar, is watching the VolumeSnapshotContent. This sidecar is pretty small: it only communicates with the CSI driver, so it basically only watches the VolumeSnapshotContent. It doesn't even watch the VolumeSnapshot object.
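The ordering just described can be sketched as follows. This is a simplified illustration, not the real controller code: the common controller pre-creates the content API object, and only the sidecar, which watches contents, talks to the CSI driver.

```go
package main

import "fmt"

// content stands in for a VolumeSnapshotContent API object.
type content struct {
	name   string
	handle string // empty until the CSI driver has actually cut the snapshot
}

// commonController reacts to a new VolumeSnapshot by pre-creating a content
// object early, before any call to the storage system.
func commonController(snapshotName string, contents map[string]*content) *content {
	c := &content{name: "snapcontent-for-" + snapshotName}
	contents[c.name] = c
	return c
}

// sidecar watches contents and fills in the handle by calling the driver.
func sidecar(c *content, csiCreateSnapshot func() string) {
	if c.handle == "" {
		c.handle = csiCreateSnapshot()
	}
}

func main() {
	contents := map[string]*content{}
	c := commonController("db-snap", contents)
	// The API object exists before the driver call, unlike the
	// external-provisioner, which creates the volume before the PV.
	fmt.Println(c.handle == "")
	sidecar(c, func() string { return "snap-0042" })
	fmt.Println(c.handle)
}
```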
A
B
So, meaning that this is deployed together with the CSI driver, and it watches the API object, the VolumeSnapshotContent.
A
B
Yeah. So for the snapshotter we have a common snapshot controller and then a sidecar deployed with the CSI driver. We also have a validation webhook, but for the controller part it's just those two: the sidecar and the common controller.
B
And, okay, so if I go to the create snapshot part... so this basically is watching the VolumeSnapshotContent, right, as we can see here. So this one gets a VolumeSnapshotContent API object and then there are some checks. Well, this is for the delete part: it checks whether it's going to be... actually, this is similar to the previous question. We can just look at this one; I think the earlier question was about the reclaim policy.
B
So you see, that's how we can guarantee this works for all the drivers: if the deletion policy is Retain, we're not even going to call the CSI driver, and the snapshot is not going to be deleted. You see what I'm saying? So this is the same thing, or similar, I'd say, to how this is handled with the reclaim policy for volumes. Basically, you check the policy; if it's Retain, it does not even make the call.
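The deletion-policy guard just described can be sketched like this. It's a minimal illustration, not the real sidecar code: the CSI delete call is only made when the policy is "Delete"; with "Retain" the physical snapshot is left alone, just like the reclaim policy for volumes.

```go
package main

import "fmt"

// maybeDeleteSnapshot only invokes the CSI driver's delete call when the
// content's deletion policy is "Delete". With "Retain", the driver is never
// called, so the physical snapshot survives even if the API object goes away.
func maybeDeleteSnapshot(deletionPolicy string, csiDelete func() error) (called bool, err error) {
	if deletionPolicy != "Delete" {
		return false, nil // Retain: never call the driver
	}
	return true, csiDelete()
}

func main() {
	del := func() error { return nil }
	called, _ := maybeDeleteSnapshot("Retain", del)
	fmt.Println(called) // false: driver not invoked
	called, _ = maybeDeleteSnapshot("Delete", del)
	fmt.Println(called) // true: driver invoked
}
```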
A
B
We created the content first, so we have that handle, basically have that API object, and then it comes to the sidecar, which really makes the call to the CSI driver. So that's one thing that we're doing. And there is another change that we made. Initially, when snapshot support was introduced, we thought that snapshots are normally time-sensitive: when people take a snapshot, they want it taken at that time. So we actually didn't do retries.
B
So we thought that was important, but then we got leaking snapshots, because if the call times out, a snapshot may still get created, and then we weren't really taking care of those. So we made the change, and now this is very similar to what is done in the external-provisioner, which is: when CreateSnapshot fails, we always retry.
A
So now we have some more comments rolling in: tgik.io/notes. So, Lucas, I didn't watch it, but it's good that there's VMware and Red Hat stuff going on. I don't know what this BeeGFS CSI driver is. Do you know what BeeGFS is?
B
Let's see if we have a doc; let's see if it's in the doc. I don't know which one they're talking about; maybe they can tell me if I show the doc here. So there's a CSI drivers doc, and any CSI driver can add an entry to that doc and list the driver there. So the drivers...
A
So, yeah, and you can add it to the notes too, Lucas, by the way. So feel free to just jump into tgik.io/notes and add that to the notes, and, you know, like...
B
A
B
...are in the doc. If you are a driver developer, you can actually come here, you can submit a PR and add your driver. Let's see if there are any new ones. You can see, right, there are some PRs here, yes, and there's a PR, an IBM blog; I think this is an update. But you can submit a PR to add an entry here. You see a lot of drivers here.
B
A
B
So, yeah, well, okay. So we're talking about this... actually, I just want to show you this. We actually have something similar; I remember we looked at this one in the external-provisioner. We actually made a fix here because we found there were leaking snapshots, right? So we also added this check here when we create a snapshot.
B
We also want to check whether it's a final error. If it's a timeout, the operation may still be going on in the background, so that's not a final error; in that case we will keep going. We also added an annotation.
B
So when we get a create snapshot request, before we go ahead and create the snapshot, we add this annotation, something like "volume snapshot being created", so we know that this is ongoing, and we will not remove it until we get a final error. So once we see a final error, we remove this annotation, and then we allow the snapshot to be deleted.
A
Yeah, that one. Okay, so there's a container storage interface driver for that.
B
A
B
Yeah, I think that's all I want to show here. Are there any more questions about this, or did I miss any questions?
A
I have, like, a million questions, and so I'm hoping that we can get you back here to do a deep dive into the CSI API and how it works at some point. But I mean, this is really cool, because you've kind of shown me all the stuff that I never actually think about that I need to learn about.
A
And then Carlos is saying something: he's getting a bonus in the fourth quarter, so he's gonna take us all out for donuts at the next KubeCon. So I'm excited about that; he's gonna buy me something with his bonus. Joe Thompson: "we have horror stories about GPFS."
A
That's true. Let me go deep here: don't we all, right. "Web3 CSI driver in the making", Walid, all right. Well, you all have been great, everybody's been great. Walid and Carlos and Lucas and everybody else have been jumping in and sharing stuff with us, and thanks, everybody, for welcoming Xing onto the show. And thank you so much, of course, Xing, for showing us all this stuff.
A
It's not very often that we get to go deep with an expert on things, so this is cool. "Blockchain CSI": can we make a blockchain CSI driver, Xing? Can we do a startup after this?
A
But I think we'll make it in secret, and you know where we're going to put it? We're going to put it in that... what is that town that's doing all the blockchain stuff nowadays? I forgot; there's some town with a bunch of rivers, and all the blockchain stuff is going to that town now, some town in Asia or some country, I don't know, Laos; they're doing all the bitcoin mining nowadays. "Hadoop CSI", yeah, Ricardo.
B
A
All right, cool. So I guess, yeah, we can just start closing up here, if anybody has any other questions. Carlos: "Rust". Yes, we'll write it in Rust and deploy it as WebAssembly; we're getting fancy now. So, all right, if anybody has any other questions for Xing... Walid, absolutely, anytime. And Walid, if you have any other ideas, feel free to like ping me; I'm on the Kubernetes Slack and you can find me on Twitter. So, whatever, like, if you've got something you want me to dig into on an episode...
A
Just let me know, I'm happy to do one, and hopefully we'll get Xing back again, and maybe OpenEBS in the future. Alex, I don't know, yeah, OpenEBS, I don't see why not, if we haven't done one already. What do you think about OpenEBS, Xing? Have you ever used it?
B
A
Okay, what... what sort of... what can you tell us? Why is it a CNCF project? I'm confused.
B
Yeah, well, if you look at the driver doc that I showed you earlier, right, I don't know how many... I think we have more than 100 drivers, yeah. So, yeah, we have a lot of choices.
B
Well, I think it's just, you know, different. Like, you know, you also have Rook, right? If you look at the storage projects in CNCF, you have Rook, you have Longhorn. That was also... I think it's actually also applying, but isn't...
A
Cool. I mean, we should just do a whole storage TGIK one of these days, but unfortunately today's show is over, and I guess it's time for the weekend. So, I don't know, it's been great hanging out with you, Xing. Thank you so much again for coming to join us. Carlos says, "thank you, Xing." So you are our favorite, and we are so glad to have had you today. Thank you, this is great.
A
Yeah, this is cool, all right, so let's dig into this stuff again next time. Lustre is the one from Oracle, I don't know; I don't know if there are both Lustre and... and, you know, I used to work on Gluster, and I don't know whether that name actually came from there. But you know what I used to work on? Brad Childs and I used to work on this Lustre thing... that's Hadoop, back in the day. Oh.
A
...connector, anyways. Take care, everybody; everybody's making Sun Microsystems jokes now. Gluster is Red Hat, so I guess... all right, take care and thanks, everybody. We'll see you next time. Thank you, Xing, for coming to TGIK, thanks for hanging out, bye, everybody.