From YouTube: 2018-04-24 Rook Community Meeting
A
Okay, the recording is started. This is the April 24th, 2018 Rook community meeting, so let's go ahead and jump right on into it. I don't think that there are any lingering 0.7 issues besides this issue that we've seen a couple of times before. I saw a few updates on this issue recently. Travis and Wayman, can you summarize those for us?
B
You know, I spent a day or two looking at it, you know, the last couple of days, trying to see it all, because this is a pretty serious data loss issue, of course. I think the summary is that detaching or unmounting the RBD volume fails, and while it's failing, the fencing has been removed, and then a new request comes in to attach. The attachment happens, thinks the volume is not formatted, and so it goes ahead and formats it again. What it really looks like is, well...
B
First, we've got to make sure it doesn't format again. We only ever want to format once, and I can't find a way around that without... well, ideally we would find the root cause of why our media is getting into this state where it won't tell us if it's formatted. But if we can't find that root cause...
B
If we found the root cause for it, I think we could, but at the same time we're getting closer to 0.8, and that may be a little late if we don't have a root cause yet. Yeah, and in 1.10, you know, with the change to blkid, it's not clear yet if it repros less often. Someone did say they had a repro in 1.10, which means blkid didn't solve it.
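A minimal shell sketch of the "only ever format once" guard described above, assuming the attach path can probe the device with `blkid`. The helper names are invented for illustration, and `probe_fs` is stubbed via `$STUB_FS_TYPE` so the sketch is self-contained:

```shell
#!/bin/sh
# Stand-in for `blkid -o value -s TYPE "$1"`, which prints the filesystem
# type (e.g. "ext4") for a formatted device and nothing for a blank one.
# Stubbed with $STUB_FS_TYPE so the sketch runs without a real device.
probe_fs() {
  printf '%s' "$STUB_FS_TYPE"
}

# Refuse to format a device that already carries a filesystem, so a racing
# attach can never wipe data written before the fencing was removed.
safe_format() {
  dev="$1"
  fs="$(probe_fs "$dev")"
  if [ -n "$fs" ]; then
    echo "refusing to format $dev: found existing filesystem '$fs'"
    return 1
  fi
  echo "formatting $dev"
  # mkfs.ext4 "$dev" would run here in the real attach path
}
```

Note the caveat from the discussion still applies: if the probe itself wrongly reports a formatted device as blank, a guard like this cannot help, which is why finding the root cause matters.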
A
Okay, and then you added that to the agenda doc, I saw, so we can talk about that when we get there, I suppose. Yeah, okay, sounds good. Then keep us informed of any further updates on the issue. Yeah, all right. So let's go ahead to 0.8. I believe from the last meeting that we had a follow-up for updating the roadmap, and this meeting is going to have a theme of me personally being fairly useless, since I've been gone for two weeks.
A
Do you want to get together with me later on today, or tomorrow, or sometime this week, to scrub the 0.8 milestone, so we don't, you know, bog down the entire community meeting with that, you know, bookkeeping effort? Yeah, definitely. Okay, so we can do that. Let me see if there are any interesting things to talk about for 0.8. Is the project board up to date?
A
Sounds good. Yeah, so let's find some time then to just scrub the milestone, so you have everything kind of up to date with what we've defined in the roadmap. Is there any particular issue in 0.8, either in the milestone or on this project board here, that anyone else on this call wants to bring attention to, for either severity or importance or risk reasons?
A
Alright, let's take a look at the next item on the agenda, which is about KubeCon. Very exciting. I think a fair amount of folks from the community will be there in Copenhagen. I will be there, Bassam will be there, and Tony will be there as well: Tony Allen, a new contributor for Rook, so I'm excited to, you know, have him meet some of the rest of the community as well.
A
Permissions to create... to create a channel, Alex? Okay, excellent, so please do follow up on that then, and then put a little link to it in the general or announcements channel, wherever people can join. Cool. And then, in addition to that, yes, so Alex, I don't think that we need a specific meetup to be scheduled, because we actually have a few. I added these to the agenda doc: we have a few scheduled meetups and Rook-specific things that would be good for everyone to meet up at.
A
If we want to do something more informal, you know, like go out for some drinks or something like that, then, you know, we can decide on that. But we do have some dedicated time slots for the community as a whole to meet together. One of the new ones that we just learned about is from the CNCF, the Cloud Native Computing Foundation.
A
They are running a little lounge thing where maintainers of all of the CNCF projects, like Prometheus, you know, linkerd, Rook, all those projects, will be there for the community to, you know, meet and come chat and stuff like that. So on Thursday at 10:30 in the morning, Bassam and I, and I think Tony, will be there as well, except Alex, of course. So that should be fun. And then, you know, this one is somewhat more...
A
You know, more salon-style. I keep seeing the word "salon" used, but I'm not entirely sure what it means; I believe it's more informal than some of these more, you know, lecture-presentation ones. Then this is the more formal talk that I'll be giving on Friday, if anyone's still around, Friday afternoon at the conference. How many attendees are there? Yes, there are four different opportunities here, four scheduled opportunities, for the Rook community to get together and talk and hopefully learn some things from each other as well. So, cool.
A
Cool. Well, we're looking forward to seeing anyone from the community who makes it that weekend, that's for sure. All right! So then, there weren't any community topics or questions that got added to the agenda, so we can skip over that. And Travis, you have a pull request that you added that you want to discuss today? Yeah.
B
There's not a big discussion point here. I just wanted to make sure everyone was aware that removing Kubernetes 1.6 support is going forward. The pull request is open, just in review. Once you get settled, Jared, if you could take a look at that, and Alexander or others, great. It removes the TPR support. It...
A
One thing that I still have pretty high confidence in (and oh, I also need to follow up on this more completely) is the survey that we had sent out a couple of weeks ago to get a sense of the Rook landscape: you know, how big deployments are, what Kubernetes version people are using, and things like that.
B
Could we maybe just talk through what the big features remaining for 0.8 are, and just get a quick status on those? I mean, it'd be nice to think about a timeline. Are we a month out? Is it longer for 0.8? So it would be good just to get a quick pulse on that. Yeah, we don't have to go through all the tickets by any means; we're going to go through and scrub them, right?
A
I'll try to keep that in mind, then. So, for multiple storage backends, the design has been merged into master, and a refactor is pretty much, you know, ready for consumption by other storage providers in a dev branch, since there is more work to be done on that before it goes to master. So, things like migration support: if you have an existing storage cluster that was created, and, you know, stuff is running with the version 1 CRD types, we have written migration logic to migrate those CRDs to...
A
...you know, the new CRDs that support multiple storage providers in the cluster. That work needs to be thoroughly tested and vetted before we put it into master, so I am purposely not rushing on getting that into master, because I want to make sure that it's going to be as minimally disruptive as possible to folks that already have deployed clusters. And then integration tests and documentation all need to be updated as well, to be cohesive with this refactor.
A
So the plan, though, the prioritization right now, is for KubeCon next week and what we want to demo and show off. What we're going to do, and we've already started doing, is that the refactor is in a, you know, dev-completed status in my (Jared's) fork, and Tony and I are going to be using that to do a couple of other storage provider...
A
...implementations. Tony is going to be working on Minio, and so we will be using that private fork... that's not private, it's public, but my personal fork, to be driving that work. And then after KubeCon, after everything settles down and we're all back in the country, we will then focus again on getting that refactor and the migration and everything to master, in a status or a state that's ready to be consumed by end users.
A
Yeah, that's pretty much it right there. And those arbitrary PVs, those kind of got incorporated into the design for supporting multiple storage backends: you know, the whole concept of a storage scope, you know, arbitrary PVs in addition to disks and directories and things like that, was incorporated into that design.
A
That may be something that, you know, doesn't necessarily make it into 0.8 if we're going to be more date-driven, but it's something that has been requested and has a lot of value. Still... the Rook API and CLI have been removed; that's in master, that's completed. That seemed to have gone fairly smoothly as well. Is that correct, Travis? Yeah.
A
Yeah, I think the work that we did to update all the documentation for all of our scenarios, so that we didn't really lose scenarios, and we have, you know, new documentation that shows people how to do that without the rookctl tool, was, you know, obviously a very good idea that helped make that more smooth. So, a bunch of work was done on the least-privilege design. I know, you know, Illya factored out the creation of CRDs to remove some permissions that the operator needs.
B
Well, OpenShift has all the security features enabled by default that you can think of: all the, you know, reduced privileges for RBAC and, anyway, a few different things. And I'm not clear yet why OpenShift has this issue with setting blockOwnerDeletion on CRDs, because it works in mainline Kubernetes, but it seems to be one of those differences between OpenShift and mainline, and...
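For reference, the field being discussed lives on the owner reference that the operator sets on child objects. A minimal sketch (the apiVersion, name, and uid here are placeholders, not taken from the meeting):

```yaml
# Hypothetical child-object metadata: the cluster object is marked as owner,
# and Kubernetes is asked to block the owner's deletion until this child
# has been garbage-collected.
metadata:
  ownerReferences:
  - apiVersion: rook.io/v1alpha1   # assumed API group/version
    kind: Cluster
    name: my-cluster               # placeholder
    uid: 0000-placeholder-uid
    blockOwnerDeletion: true
```

One candidate explanation worth checking: OpenShift enables the `OwnerReferencesPermissionEnforcement` admission plugin by default, which rejects setting `blockOwnerDeletion` unless the requester can update the owner's `finalizers` subresource; mainline Kubernetes does not enable it by default, which would match the difference described here.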
A
This is in progress; yours are in review. You have a pull request open about it, and we already talked about that. That's good. Yeah, I think I saw something like three or four pull requests that are open by Wayman around all these topics; they deserve a quick summary for all that work. Yeah, well, Minh...
D
...there's going to be some overlap, but I don't think that it's complete coverage. Well, certain things... what local volume will have given to you is quite limited. You could have the local volumes, but you still have to go through the awkward computing of labels to look up the volumes, and that's essentially what a storage class is: just look at the config maps and do the same thing for you. And...
D
I think that could be a discussion with that; I hope that's going to be the case, because with local volumes, I mean, in general, the storage class does not give you a lot of choices in terms of which device you want to match, and as you add these tricky things, like annotations, you're not able to match the devices you desire. The other thing you have available is the capacity, as we now start the storage service; that's not enough. But once we have this config map, it does have all the information about the device available.
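To make concrete why the extra device attributes matter, here is a rough shell sketch of filtering a discovery listing. Both the three-column `name size-bytes fstype` input format and the `pick_blank_devices` helper are invented for illustration; this is not the actual config-map schema:

```shell
#!/bin/sh
# Input: one device per line as "name size-bytes fstype" ("-" if blank).
# Output: names of blank devices at or above the requested size, i.e. the
# kind of filtering a storage class alone cannot express.
pick_blank_devices() {
  min_bytes="$1"
  while read -r name size fstype; do
    [ -z "$name" ] && continue
    if [ "$fstype" = "-" ] && [ "$size" -ge "$min_bytes" ]; then
      echo "$name"
    fi
  done
}

# Example listing: sda is formatted, sdc is too small, sdb qualifies.
pick_blank_devices 100000000000 <<'EOF'
sda 500107862016 ext4
sdb 500107862016 -
sdc 16013942784 -
EOF
```

Run as-is, this prints `sdb`: the only device that is both blank and large enough.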
E
I guess what I'm trying to say is that what would be helpful is trying to understand kind of what we're doing in the short term to fill in the gaps for local volume. But it also would be helpful to understand kind of the trajectory of, you know, which of this goes back upstream to local volume and which of it just goes away. Yeah, I don't feel like I...
E
...am in a better position... that's another thing that kind of weighs on my... okay, yeah. But yeah, I mean, I think what you're thinking about is, as general device discovery helps Ceph and others, maybe there's a path. If you could just maybe start a discussion with Michelle, or try to understand how this...
D
...case can be covered, because this use case never comes to a storage service, and so maybe they haven't got that part straight yet. But if we are able to address some of the Kubernetes community's needs, device discovery is one of the things the config map covers. The functionality of the device discovery being generalized in Kubernetes is possible, and...
E
I think that would be helpful to the community, and it would help us in general. I see no problem with us doing something in the interim to unblock our scenarios, mm-hm, but I would hope that there's a path, especially with the presence of local volume, to take some portion of that upstream. Yeah.
D
So the current config map, it just takes what... it's going to be a superset. Actually, it's a subset of the local volume... okay... for the operator to consume it going forward. And we have Rook using what we have here, and other Kubernetes users can use the same discovery and the same config maps, and they all work for their storage across the cluster. I don't see that there are conflicts.
A
I think the first step here sounds like kind of getting our thoughts together, you know, written down somewhere. That would be the first step: kind of figuring out where the intersection is, or what we would like to possibly push upstream into local volumes, or what local volumes isn't providing for our scenarios yet. Getting that captured is just going to be the first step. Whoever brings it up to sig-storage or, you know, drives it after that, we can discuss, but let's get it written down. Yeah.
E
I think that getting the Rook community on the same page is definitely the first step, and then any of us can carry the torch and, you know, drive it up. So we could start with that: what we're doing in the interim and how we think that goes. So, you know, the relation to local volume could be next. Okay.