From YouTube: 2017-09-25 Rook Community Meeting
Description
Community meeting recording of Rook.io project - open source File, Block, and Object Storage Services for your Cloud-Native Environments.
Visit https://rook.io/ for more details about Rook
A: I think the problem is that, let me turn off notifications, one second. The problem is that we don't differentiate. Currently we just use a local directory and we point the path at it, so it's hostPath, right, mm-hmm, hostPath, right. And the problem with it not being a volume is that you can't schedule its deletion; you can't say: okay, look, I'm done with this data, I'm going to go delete the data, right.
A: It's not about how the local PV differentiates when the admin wants to delete versus, you know, a failover. There's no entity that says deleting a pod does not imply leaving the data behind it. And right now that's the only option: we either always leave the data behind, and somehow, when it gets started on that node again, it's in a bad state, so.
E: One question I have, and I haven't looked into this: what is the challenge here? Because to me, from what I've seen from the example, and I haven't actually looked at it in detail, you create a PVC just like you create a regular one for any other volume, and then you specify, you know, the directory from that host, right. What is the challenge in getting this to work, really?
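
The local-volume approach being referred to can be sketched roughly as follows; this is an illustrative example, not from the meeting, and it assumes the k8s.io/api and k8s.io/apimachinery Go packages (the names, path, and storage class are made up). The point is that a real PersistentVolume is a first-class object with a reclaim policy, so deleting the claim can either retain or clean up the data, unlike a bare hostPath directory.

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/api/resource"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	// A PersistentVolume backed by a directory on a specific host.
    	// Node affinity (which node the path lives on) is required in
    	// practice but omitted here for brevity.
    	pv := corev1.PersistentVolume{
    		ObjectMeta: metav1.ObjectMeta{Name: "rook-osd-pv-example"}, // illustrative name
    		Spec: corev1.PersistentVolumeSpec{
    			Capacity: corev1.ResourceList{
    				corev1.ResourceStorage: resource.MustParse("100Gi"),
    			},
    			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
    			// Retain keeps the data when the claim goes away; Delete would
    			// schedule the data for cleanup, which is the distinction
    			// being discussed above.
    			PersistentVolumeReclaimPolicy: corev1.PersistentVolumeReclaimRetain,
    			PersistentVolumeSource: corev1.PersistentVolumeSource{
    				Local: &corev1.LocalVolumeSource{Path: "/mnt/disks/osd0"}, // illustrative path
    			},
    			StorageClassName: "local-storage", // illustrative class name
    		},
    	}

    	out, _ := json.MarshalIndent(pv, "", "  ")
    	fmt.Println(string(out))
    }

A PersistentVolumeClaim requesting that storage class then binds to it just like any other volume, which is the workflow described in the question above.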
D: Yeah, it's really just a matter of time and going and implementing it. It's not like a two-hour thing; it feels like on the order of days, and it could blow up even more than that, you know, it's an issue. As far as priority for 0.6 and blocking 0.6, it wasn't clear why the cleanup scenario was critical for 0.6, I guess, yeah.
A: I think what we should prioritize, in the list, in the bucket of "we need to get changes into 1.9", and I would add another item in that bucket, is just spending time understanding what we would need changed even just to improve the experience, and then making sure that we have tickets open at a minimum, feature requests or bugs open in the Kubernetes repos.
C: I'm going to do that, okay. And I definitely have some debt in terms of a lot of the big designs, like local storage and CSI and stuff, that I need to bone up on before heading there. So if there are any notes or lessons, Travis, that you have from local storage, please do share them with me, so that I can get up to speed a little.
E: Yeah, so last week I gave an update, and then we decided that we were not going to support the multi read-write, and the reason we're not supporting it is because Kubernetes doesn't have a way to ask to present the volume as a raw block device. There's a new feature, I think slated for 1.9, where you can specify the volume type to be raw or with a file system, but that's not there yet. So Kubernetes actually formats the volume and expects the volume to have a file system. So I guess...
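
The 1.9 feature being mentioned is raw block volume support (volumeMode). A minimal sketch of what that looks like from the API side, with illustrative names and assuming the k8s.io/api Go package; the feature was alpha at the time, so availability depends on the cluster version:

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	// With volumeMode: Block, Kubernetes hands the pod the raw device
    	// instead of formatting it and mounting a filesystem.
    	block := corev1.PersistentVolumeBlock

    	pvc := corev1.PersistentVolumeClaim{
    		ObjectMeta: metav1.ObjectMeta{Name: "raw-block-claim"}, // illustrative name
    		Spec: corev1.PersistentVolumeClaimSpec{
    			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
    			VolumeMode:  &block,
    			// resources.requests.storage omitted for brevity.
    		},
    	}

    	// The consuming container asks for the claim as a device, not a mount.
    	container := corev1.Container{
    		Name:  "consumer", // illustrative
    		Image: "busybox",
    		VolumeDevices: []corev1.VolumeDevice{
    			{Name: "data", DevicePath: "/dev/xvda"}, // illustrative device path
    		},
    	}

    	for _, obj := range []interface{}{pvc, container} {
    		out, _ := json.MarshalIndent(obj, "", "  ")
    		fmt.Println(string(out))
    	}
    }
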
E: Correct, we have control over that. Yes, we do control that, but Kubernetes gives you, like, a directory and says: I want you to mount this device, for this volume ID, on this directory. And you won't be able to mount it unless you format it with a file system or something.
E: Okay, so we decided to do that for 1.9, when we have the raw block feature, sorry, when it's released. So that's it for multi read-write. Also, last week, for testing, I found out that for 1.6 there were some challenges. There was some information that was not given to us that we were expecting, like pod information; we use that pod information for fencing. So me and Jared...
E: The only thing that changed a little bit: before, you know how we were storing the pod identity, meaning the namespace and the actual pod ID, or the pod name. It turns out that we actually need one more piece of information that we're now storing, and we're calling it the pod controller, or pod parent. The reason why we're storing this information is because Kubernetes, when it fails over a pod, actually deletes it and creates a new pod.
E: So the new pod has a new name and a new ID, and this is for pods that are started with a wrapper, like a Deployment or a ReplicaSet or a replication controller. So pods that are started with those actually get a new identity when they fail over. So, because they get a new identity...
E: The agent, or the Rook plugin, doesn't know whether this pod is actually a pod that was failed over, or a new, different pod that's trying to attach the same volume. So, in order to differentiate between those, we added one extra piece of information to the CRD, and that extra information is pretty much the pod parent: hey, with your pod identity, do you have a pod parent, and who is it? So we can say yeah.
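
As a rough illustration of the record being described (the type and field names below are made up for the sketch, not the actual Rook CRD schema): the attachment already carried the pod's namespace and name, and the proposal adds a reference to the pod's controller, the "pod parent", so a failed-over pod can be told apart from an unrelated pod.

    package main

    import "fmt"

    // Attachment is an illustrative stand-in for one entry in the volume
    // attachment CRD discussed here; the real schema may differ.
    type Attachment struct {
    	Node         string // node the volume is attached on
    	PodNamespace string // namespace of the pod using the volume
    	PodName      string // name of the pod using the volume
    	// PodParent identifies the controller (Deployment / ReplicaSet /
    	// replication controller) that owns the pod. A failed-over pod gets
    	// a brand-new name and UID, so the parent is what stays stable.
    	PodParent string
    }

    func main() {
    	old := Attachment{Node: "node-1", PodNamespace: "default", PodName: "db-7f6c9", PodParent: "db"}
    	replacement := Attachment{Node: "node-2", PodNamespace: "default", PodName: "db-9xk2p", PodParent: "db"}

    	// Same parent and namespace means this is the same workload failing
    	// over, not a second, unrelated writer.
    	fmt.Println("same workload:", old.PodParent == replacement.PodParent &&
    		old.PodNamespace == replacement.PodNamespace)
    }
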
E: So that's one case. The other case is, what happens when that node goes down and, you know, the Kubernetes scheduler starts to migrate the pod from the node that went down to another node. When that happens, the pod ID changes; so the name changes, the ID changes, right. And when it comes to attaching, the Rook volume plugin...
E: If we don't handle that case, the Rook volume plugin would say: hey, wait a second, you're a different pod, go away, you're not allowed to attach. So the volume will be locked; nobody will touch it. So, you know, it has terminated, right? Yeah, it has terminated, but when it gets created it gets created with a new name, it seems like.
B: It feels like whoever should get the next lock could be anyone, right? Why would I want to give anybody preference? And, in the end, you're concerned about somebody not being able to get the pod going because the volume is locked, and so maybe the thing we should be investing in is how we clean up the lock, which is why we did this whole agent thing to begin with.
C: So in our conversation last week, what I took away from it was that by far the most important thing is to prevent, you know, a multi-attach ending up with two consumers or two writers to that volume. That's the last thing we ever want to have happen, because that causes corruption, right. Then, my understanding is that it doesn't matter who the old one was, as long as we can be confident that they're gone and they're no longer writing to it. That's what's important to me. Yeah.
E: Yeah, I see, okay. So, Jared, I had your case handled. Right now, if a pod wants to attach the volume and it's not already attached, it just makes sure that the old pod has been removed. If it has been terminated, then it says: oh yeah, this is an orphaned volume, so I'm going to proceed with the attachment. But I also handled the case of checking: hey, who is your pod parent? So...
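
A rough sketch of the attach decision just described (the function and parameters are hypothetical, not the actual plugin code): allow the attach if the volume is unattached, if the previous pod is confirmed terminated (an orphaned lock), or, in the variant discussed above, if the requester shares the same pod parent as the previous attacher.

    package main

    import "fmt"

    // attachDecision is an illustrative sketch of the fencing check described
    // above; the real Rook agent/plugin logic is more involved.
    func attachDecision(alreadyAttached, oldPodTerminated, sameParent bool) (bool, string) {
    	switch {
    	case !alreadyAttached:
    		return true, "volume is unattached, proceed"
    	case oldPodTerminated:
    		return true, "previous pod is gone, treat the lock as orphaned and proceed"
    	case sameParent:
    		return true, "same controller (failover of the same workload), proceed"
    	default:
    		return false, "a different live pod holds the volume, refuse to attach"
    	}
    }

    func main() {
    	ok, why := attachDecision(true, true, false)
    	fmt.Println(ok, "-", why)
    }
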
E: The only change was adding that extra parameter, which tells you who the pod parent is, and that was added to determine whether the pod was being failed over or was a different, new pod. But, based on this conversation, we're now removing that and going back to, you know, I guess the original CRD spec without that, okay.
E: What I'm keeping in mind while I'm writing this is that in the future we are going to support multiple readers and writers. So the spec is going to be the same: we're still going to have the CRD, it will still have a list of attachments, and the implementation is still going to be the same. So it's...
A: So we could, I mean, why would we put that logic in the agent right now when it has no bearing? Like, I don't understand why we would fail; somebody could decide to use that or not. It doesn't have a lot of extensibility, or it's not useful. Why would we artificially fail at the lower layer because of it?
E: That's going to be after this plugin PR is tested and merged. I'll be focusing mostly on block, and the reason is because I just want to focus on the whole plugin and framework and make everything work, the fencing, and once I get that solid and it can merge, then adding the support for FS, you know, would be better suited for a separate PR.
A: And then 726, Jared, that's the one you're looking at for mkfs for Bluestore? Yes.
C: Yeah, that is correct, and I've had it since Friday night but I haven't touched it yet. Just to give everyone a quick summary of where we are with that: we're to the point where we can revive an OSD to have it back up and in, in the cluster, but we do experience the loss of its previous placement groups, because stuff gets wiped out again when we start up the OSD again and it tries to recreate its file system. Luckily, everything that it creates is very, very static.
C: So there's not really any dynamic snapshotting or keeping up to date with the file system that we have to do over time. It's all a very static set of known files, so I'm hoping that if we can restore those, and avoid doing a whole mkfs operation that wipes out all of its data, then the OSD can get back to up and in, and if it still has all of its previous placement groups it will be functional and ready to go again. So...
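
A sketch of the recovery idea being described, with hypothetical names (the actual set of OSD metadata files and how Rook restores them is not spelled out in the meeting): if the known static metadata files can be put back, skip the mkfs that would wipe the OSD, and only fall back to a fresh format when there is nothing to restore.

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // reviveOSD is an illustrative sketch: restore the small, static set of
    // metadata files if copies are available, and only reformat (mkfs) the
    // OSD, destroying its placement groups, when restoration is impossible.
    func reviveOSD(osdDir string, backups map[string][]byte, mkfs func(string) error) error {
    	if len(backups) == 0 {
    		// Nothing to restore: last resort, wipe and reformat.
    		return mkfs(osdDir)
    	}
    	for name, data := range backups {
    		path := filepath.Join(osdDir, name)
    		if err := os.WriteFile(path, data, 0o600); err != nil {
    			return fmt.Errorf("restoring %s: %w", path, err)
    		}
    	}
    	// With its metadata back in place the OSD can rejoin the cluster
    	// without losing its placement groups.
    	return nil
    }

    func main() {
    	dir, _ := os.MkdirTemp("", "osd")
    	backups := map[string][]byte{"example-metadata-file": []byte("...")} // hypothetical
    	err := reviveOSD(dir, backups, func(string) error { return nil })
    	fmt.Println("revived, err =", err)
    }
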
C: So I think the scenario that's being focused on now is preventing data loss. If we have, you know, OSD data devices still around, but we've lost their metadata, what do we do to get an OSD back up and running again? That has been the focus so far. If you lose every single thing in the cluster, like every single, you know, hostPath or every single local storage or whatever it may be, that is not a scenario I've been tackling right now.
A: I'm trying to figure out where we have cluster state stored right now. We have cluster state underneath the OSDs, underneath the storage nodes; there are either devices or directories, and hopefully they all go onto volumes at some point, right. And then there is config state: config files and magic files and all this other stuff that gets generated, right.
A: But because it's generated, we're saying it no longer needs to be persistent; it's not durable, right. So that's the state we're trying to arrive at. We're trying to get to the point where everything outside of the devices is generated from CRDs and state in the operator, and then any durable, long-term persistence lives on volumes.
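
The target state being described, sketched with made-up types (none of this is the actual operator code): anything outside the data devices, such as ceph.conf-style config, is regenerated from the CRD-backed spec every time a pod starts, so it never needs to be durable.

    package main

    import "fmt"

    // ClusterSpec is an illustrative stand-in for the state the operator
    // keeps in CRDs; only the actual data on devices/volumes is durable.
    type ClusterSpec struct {
    	FSID     string
    	MonHosts []string
    }

    // generateConfig rebuilds the config file content from the spec at pod
    // startup; nothing produced here needs to survive a pod or node loss.
    func generateConfig(s ClusterSpec) string {
    	cfg := "[global]\nfsid = " + s.FSID + "\nmon host ="
    	for _, h := range s.MonHosts {
    		cfg += " " + h
    	}
    	return cfg + "\n"
    }

    func main() {
    	fmt.Print(generateConfig(ClusterSpec{
    		FSID:     "00000000-0000-0000-0000-000000000000", // illustrative
    		MonHosts: []string{"10.0.0.1", "10.0.0.2"},
    	}))
    }
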
C: And I can't speak a hundred percent to whether or not that's going to be fully achievable in a quick timeframe, because I have not thought much about monitors. But the durability of monitors has been somewhat less important to me so far, at least, because you can, you know, fail over monitors and bring up a new one to replace an old one, but you cannot do that for OSDs. That's why my priority has been on the OSDs specifically so far.
C: Yeah, we have already had a few instances of users that have lost their metadata directory, or, you know, lost a config file or whatever it may be, for OSDs, and then the data that was on the OSDs is just gone at that point. So that's right, this is at least a useful scenario, Travis, for people that have already been bitten by it, this first step, this first phase, because, you know, being able to keep the entire cluster, you know, all state...
A: They can have PVCs use local volume, or use EBS, or use whatever, and then you can recreate them. This is what I was saying about stamping out the cluster earlier, because I could delete all of the pods that are running the cluster, all the OSD pods and all the mons and everything else, but the data is still there too.
D: We're making good progress. I have the pipelines for release, integration, and long-haul testing almost ready; I just have a few smaller pieces to go in before that's all set to be out there. My goal after that was to help improve coverage, but I might take a look at that project that was mentioned and see how well we do against it.
D: Okay, I think there's one more question to bring up, kind of a minor issue, but maybe an important versioning question. We talked about this last week: the CRDs. The two-word CRDs, volume attachment and object store and file system, to support both TPRs and CRDs, basically have to start with an uppercase letter and then have the rest lowercase. So the second word is lowercase, and it's kind of awkward casing, right. So, you know, it feels like the right solution...
B: That's migrating existing TPRs to CRDs. What Travis is talking about is that in the TPR spec they have some tokens that are essentially reverse camel case, so they start with a capital letter and then have all lowercase letters, and it would be more work to convert those to the proper casing for CRDs, and so that's what's under discussion.
E: I think, the last time I heard, we're going to provide a way, a document. The outcome of this task is to have a documentation page that illustrates how, if the user is migrating or upgrading to Kubernetes 1.7, they would migrate the cluster TPR, the pool TPR, and all that stuff. That was the outcome of this.
D: I think with the naming issue right now, it's about: do we care about supporting, like, 1.6 right now for these new TPRs or CRDs we're adding? And if we're assuming everybody's already on 1.7 for what we're doing in 0.6, then I think we just shouldn't worry about this legacy and we should just name the CRDs the way we want to make them.
E: One other thing is that I'm also arguing: why is this a big concern? Because users can use kubectl with any case when they want to get a list of their objects. This is just visible when the user pushes the YAML file for that TPR or CRD and it says kind, and you see the reverse camel case. But, you know, when you want to do listings using kubectl and so on, there's no problem. So...
E: It's not about documentation, it's about implementation. We will have to have two paths pretty much in our code. We have to say: oh, if you're on 1.6, this is the structure, this is how it's going to look, this is the CRD registration, sorry, the TPR registration, the way you're marshalling or unmarshalling. So we're going to have two paths; we have an if for this one.
A: Well, given that a TPR is going to become legacy, I mean, I see the point. Coding-wise, is this something that, you know, could be factored in the code in a way that you don't have to repeat it? Like, have a function that gets the name based on 1.6 versus 1.7, and have the rest be boilerplate.
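
A tiny sketch of the factoring being suggested here (the function and the exact kind strings are hypothetical, not Rook's actual names): centralize the version check so the reverse-camel-case TPR names and the properly cased CRD names each live in one place instead of being repeated throughout the code.

    package main

    import "fmt"

    // objectStoreKind returns the kind name to register, depending on whether
    // the cluster only supports TPRs (Kubernetes 1.6) or CRDs (1.7+).
    // The strings are illustrative, not the exact names used by Rook.
    func objectStoreKind(supportsCRD bool) string {
    	if supportsCRD {
    		return "ObjectStore" // CRDs allow normal camel case
    	}
    	return "Objectstore" // TPR-era reverse camel case: capital first letter only
    }

    func main() {
    	fmt.Println("1.6:", objectStoreKind(false))
    	fmt.Println("1.7:", objectStoreKind(true))
    }
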
C: So, having this support then for both TPRs and CRDs across the different versions of Kubernetes, and making them pleasing to the eye as well, is that what 819 is going to be tracking? Are we still trying to do an automated, in-place, live migration of TPRs to CRDs when somebody's upgraded their Kubernetes cluster? I don't...
A: So, whatever the way is that we are going to support the different CRDs, these are going to have different statuses around alpha and beta, and I think, you know, when we say, oh, in 0.6 the block store can become beta, I think that means that cluster and pool become beta and that they're expected to work and upgrade and do everything; everything related to the cluster and the pool is of beta quality.
A: I think Kubernetes actually got this right. I mean, if people are using StatefulSets, they know that they are in beta; when they say we're in beta, the whole feature, the controllers, whatever is required to make that work, is labeled as beta. And for us it's a similar concept: now pool and cluster will be beta.