From YouTube: 2019-08-13 Rook Community Meeting
A: Okay, the recording has started, and this is the August 13th, 2019 Rook community meeting. Let's go ahead and do a sync up on status for our current milestone, which is 1.1, and also any potential patch releases that we want to do. Let's bring up the 1.0 project board. There's nothing here that's in progress or in review. Is there any update on this particular issue here? I think that's been in there since last time, so I don't know if there's been big progress, or if this is an important issue.
B: There has not been progress, and I think it needs an owner; that's the problem, we haven't assigned it to anyone. If this is a thing, and it looks like several people are actually hitting it, being able to mount a volume on multiple pods can cause corruption, and that's the problem. So even if the Flex driver is not the preferred way anymore, we still need to investigate this, and yeah, it needs to be a priority.
A: What's kind of interesting here, though, Travis, is that we also have a long-lingering issue with the Flex driver implementation around a volume being locked on another node and not being able to relinquish that and mount it onto a second node. So there seems to be an issue both ways, at certain times, in certain conditions, yeah.
B: That's true, and that other one is a really hard one, because even talking to the RBD guys, it's like: well, what if that node comes back online? If we've moved on and allowed it to be mounted somewhere else, it could cause data corruption. So do you really want to release that lock? Even the CSI driver (I need to confirm this) may have that same issue: if you can't really release the RBD volume, how do you mount it to a new place without risking corruption?
C: What if this maybe came up because of slight changes that are happening with the latest, newest versions? Maybe some timing is off by a few milliseconds or something, and that's what causes the Flex volume, in our case, to kind of misbehave, basically.
A: Got you, Alex, don't worry. Okay, so before we jump to that issue, real quick: it sounds like we don't necessarily have the priority on this for 1.0. It doesn't seem like this is something that we have an assigned owner on, and it doesn't seem like there's a focus on this for 1.0, so we should consider putting this in 1.1.
D: So, I know Kubernetes has a timeout on a volume. We wouldn't necessarily have to do that, because, well, you'd have to be using RBD, I suppose. If you were using RBD with exclusive lock, then it doesn't matter, because it works with exclusive lock. The idea is that you could have the block device bounce between two nodes, and basically the lock for writing will move to whoever most recently did I/O.
D: So it might be a good failover mechanism. A lot of that machinery was put into place to be able to do live migration, so it's kind of the same as if a pod is on an instance that dies, and being able to quickly reschedule that pod on another host. That could be useful: we could have a fast recovery of an RBD volume to a different host.
B: Okay, I thought there was still some risk there with corruption. Can you clarify at what level that lock is handled in RBD, so that it can make sure there's no corruption? Because I thought there was still some debate about whether that's a good thing to allow, but I might be thinking of something else.
C: So yeah, I'll get the details and send something to you. Or, well, I'll let Maximilian post them in the ticket, and then let's see if we can reproduce and hopefully fix it. Because at least until now, I looked at it for, well, a few hours, and it doesn't seem like I'm getting behind where these issues are coming from, but yeah, I'm hoping we will.
C: I wanted to try to understand it better, but my time was kind of limited. When we had the call together, we simply tried to check which 1.0 version the problem was in, and we at least came down to the point that 1.0.3 and 1.0.4 are having this issue, and not 1.0.2, 1.0.1, and 1.0.0.
A: But yeah, so Travis, your instinct on that particular issue, that it's garbage collection and not an explicit action or a failure in logic by our operator, sounds correct to me as well. I faced an issue with this recently with another controller I was writing, where I had the owner references set incorrectly (I was setting them across namespaces) and the objects would disappear. They'd be garbage collected because they weren't valid owner references.
A: So if there's something weird about these owner references, maybe it might be related to the particular Kubernetes version that the person who opened this particular issue is on, and maybe that's why we're not seeing it in CI on master, since we're using newer Kubernetes versions like 1.14 or 1.15. But yeah, the owner references, Travis, that sounds very much like the right path to be investigating, right?
C: At one point it's really just empty, and it sounds plausible, what you're saying. I'll take a look at the output there, and if the UID has changed, then we basically have our confirmation, at least from my perspective, that this might be the problem there. And like I said, Travis, as we discussed, I'll try to build an image for them without the owner reference fix, based on 1.0.4, and if they still have the issue after that, then, well, we have the confirmation, yeah.
B: So, just to talk about the release date: last time we met, we talked about August 23rd as being our feature freeze. That's a week from this Friday, so it's coming up pretty quickly. We talked about then shipping a couple weeks later, on September 10th. It so happens I had a trip come up that second week of September, so depending on how we're looking around feature freeze and stabilization, I'd like to maybe even move up the ship date to the first week of September.
A: That's what I would think, yeah. I guess we'll have to see what state we're in when we're getting down to the code freeze state, then see what's left there and be able to make a more informed decision. But I would expect that not to be an issue if we are honoring the feature freeze timeline. Well, that's the caveat.
A: Okay, so let's go ahead and move on to the community topics and questions section. I wanted to bring up that we will be starting down the application process for graduation in the CNCF. Last year, about this time of year, we went from the sandbox to the incubation stage in the CNCF, and the last stage is full graduation.
A: So we will be starting that effort soon, and there are two main big items that we'll need to accomplish. One of them is a security disclosure process. We'll need to have a process in place for reporting, fixing, and disclosing any security issues that anyone finds, be it security researchers or community members, whoever it may be. We'll have to have a process around that, and there are some models in place from other CNCF projects that we can follow.
A
The
other
big
part
of
the
graduation
process
that
we
did
a
similar
thing
for
when
we
were
applying
for
incubation,
is
around
getting
information,
user,
testimonials
data
surveys,
etc.
Around
production
usage
of
rook
and
so
we'll
be
reaching
out
to
people.
You
know
on
slack
on
Twitter,
you
know
in
various
means
there
to
try
to
connect
with
people,
and
you
know
learn
about
how
people
are
using
rook
and
production.
You
know
what
is
some
of
the
data
or
stats
about
their
deployments.
B: Somewhat technically unrelated to graduation, but in the same timeframe: we're working through the Rook charter, clarifying the charter and governance updates. So just as an FYI, I think we have some clarification due around what it means for a storage provider to join the Rook community, and those are things that I really want to make sure we're clarifying by that graduation timeline.
B: Just as I was looking through the 1.1 issues last night and updating the roadmap PR: again, we have a lot of issues and PRs that really need attention. We're up there at 85-plus PRs, and that number seems to go up by about 10 every month these days. It seems just hard to keep up with it.
B
So
just
as
far
as
hygiene,
if
everyone
can
make
sure
we're
getting
the
right
tags
for
storage
providers
on
them
and
just
overall
the
more
help
we
get
triaging
these
things
and
commenting
on
them,
the
more
the
more
we
can
stay
on
top
of
them.
So
it's
really
just
a
general
comment
around
it
that
yep
we
have
lots
to
do
and
people
love
the
project,
so
they're
opening
issues
and
PRS
and
want
to
make
sure
everything
gets
attention
that
needs.
That's
all
for
that.
In.
B: That removed the bottleneck from who's able to merge, so now we can merge without being blocked when someone's out of office or whatever. I think the velocity has also increased in the last couple of months; we've had more and more community members opening PRs, I'd say, just in general.
A: That's huge. Storage providers can be empowered to drive that process a little bit more. We'll need approval and alignment so we have that process fully defined, but you could drive that: if there are community members you're identifying who have started gaining trust and showing value on a consistent basis, then you could definitely add them to that scope.
A: Yeah, being more explicit about assigning ownership and having accountability there and such. Yeah, gotcha. Okay, all right, thanks for bringing that up, Travis. Vishal and Samir, I think you guys are on the call. Vishal, I see you're not muted here; do you want to go ahead and speak about this item?
A: Great progress, Samir, that's great to hear. I had brought up the design one-pager in a topic later on, but it's applicable right now, so let's bring that up now. All the feedback that I had provided on the design one-pager, I believe, was addressed and incorporated.
B: I just took a couple of big features; I wanted to make sure the community is aware of these, which were merged in the last few days. A long-standing feature request we've had is basically to run Ceph on top of PVs, instead of the OSDs being tied to the nodes they're on with host path. So that is now officially in, and Alexander is going to try running Ceph on Ceph.
B: I'll let you know, yep. And the CSI driver is enabled by default now; there's no extra configuration required. The storage class just looks a little bit different for CSI than for our Flex driver. The documentation now lists both Flex and CSI as options, but the CSI driver is enabled by default. You can also disable either of the drivers with an operator setting, if you don't want to run Flex or CSI.
A: I think both of these issues here are great progress, Travis. Being able to use arbitrary PVs as a backing store has been something we've had an issue open on for a long time. We made steps on it in previous milestones, but making these more final steps is great progress. And then driving the CSI implementation to be enabled by default, I think, is a huge user experience improvement.
A: Before, if you wanted to use CSI, you had to do some extra manual stuff, but just having it as part of the default flow, I think, is fantastic. So yeah, I'm going to take that one for a test drive on my minikube cluster here and see if there are any issues or feedback on that as well before 1.1, yep.
B: The manifest itself has the object store tests failing in the multi-cluster tests. The operator seems to restart in the middle of that test for no apparent reason, as if it dies, but I can't find any previous logs for the operator. I did all sorts of log collection, and there's just nothing that points to why the operator is restarting, like whether it crashed or anything.
D: One more thing I wanted to bring up as well: as we get closer to graduation, and with 1.1 coming, it'd be good to start ramping up blog posts on the website to get a bit more content going. Does anyone have any interest or any suggestions? I don't know if you've done this in the past, like sharing a doc to gather some ideas and set up some sort of rotating schedule for blog posts.
A: Yeah, our cadence for blog posts has normally been around specific releases in the past; that's been the way we've done it so far. But we are always super welcoming and happy to have anybody who has a good topic or something that they want to share be an author on the blog and write up a post, and we'd be glad to provide feedback and publicize it.
E: So, on the topic that you and Travis were just discussing, before you finish all these items: you were discussing an issue with tests, right? Right now I'm probably in the same place. I didn't fully understand your conversation there, but what is happening with me is that the integration test that I have for my YugabyteDB operator runs fine.
E: It's just that some of the Cassandra and Ceph integration tests fail, so I'm looking at that right now. I just wanted to understand first: in the master branch, do all the integration tests succeed right now? If they don't, then how much attention should I pay to these failing tests? I haven't really touched any code outside of the YugabyteDB operator.
A: Feel free to share the particular errors that you're seeing; if you haven't already, share those on Slack real quick. I'm sure Travis probably has the particular errors he's been seeing imprinted on his brain, and he'll be able to look at yours pretty quickly and tell whether it's the same issue, or something else that we need to investigate as a parallel effort. Okay.