From YouTube: 2018-05-08 Rook Community Meeting
A
Okay, this is the Rook community meeting for May 8, 2018, and let's go ahead and get started. So I believe that in the 0.7 milestone we do not have any lingering issues now. Travis, can you speak to the issue about the agent formatting an in-use volume? That's no longer on this board.
B
Yeah, I definitely wanted to speak to that. We talked at the last meeting about whether this was really an issue in 1.10, since they updated the calls from lsblk to the new blkid ones, and the repro was supposedly on 1.10. He tried it again, they tried a bunch, and couldn't reproduce it on 1.10, so I left the ticket open. But if it's fixed in 1.10, I think there's nothing more for us to do there. Although, I think, well, I think we need to post a warning that previous to 1.10 you can run into this issue. And if it does repro on 1.10, then we'll have to look at it again, but we don't have that solidly confirmed.
A
He mentioned that he did still see it happening somewhere, which I don't know the context of. So could you mention him on that issue, just to confirm that they're not seeing it either on 1.10 and greater? Because that would definitely help me sleep better at night. And then I would agree with everything you've said so far.
A
All right, so that's 0.7 then. So in 0.8: Travis, not last week, but after the last community meeting, you and I went through and scrubbed the 0.8 milestone, to have the issues in there that are in accordance with what we had defined in the roadmap. So I believe that this list is fairly scrubbed. I don't know, I'm kind of catching back up from everything.
A
Hey Alexander, thank you for joining. So I think right now a good question to pose to the community would be: which ones from this list that we have scrubbed are at risk, or which ones are we concerned about? And then, potentially as an offline exercise, Travis, you and I can figure out more.
A
I really have liked that one. We've gotten a lot of feedback from people we've talked to in the community as well that that's something we want to do. The design for supporting multiple storage providers and storage types does have an affordance for this as well, but there's not really any in-depth design behind it. It's something that has been on the radar, and we still want to do it soon, but we don't really have anyone who's taken ownership of it specifically.
A
Right. So, Travis, I don't know if you can comment on some of the guys on your team in terms of what their priorities are in the near term. I know Wyman is working on the one pod per OSD, and would adding and removing disks and directories be a next thing for him?
B
Yeah, I think that'd be a good transition. There's also Dan, who may be a good direction; he's been looking at the ceph-volume stuff too, which isn't on the 0.8 board, but I definitely see us from Red Hat taking that one. I'm not sure if it'll make the 0.8 timeframe, but there have been a couple of requests for that feature for sure, so I'd like to.
A
And
this
is
the
rest
of
the
the
items
in
here.
There's
really
not
that
much
I'm
concerned
about
considering
how
much
of
it
is.
You
know
either
in
progress
or
in
review,
or
you
know
really.
You
know.
Good
progress
has
been
done
on
a
lot
of
them,
potentially
how
much
we're
gonna
scope
improving
the
security
model.
That
may
be
something
to
talk
about
more,
but
at
least
Travis.
You
said
that
you
have
ownership
of
that
going
forward
mm-hm.
So
that's
good.
A
Yeah, from my perspective it was a very good experience, where we got to meet up with and have some really good conversations with a fairly good showing from the Rook community. Alex did a great job too of continuing to organize and get guys together, to keep a little posse of Rook folks hanging out together during the week, so that was really nice. We had three talks about Rook topics. One of them was an intro to the project.
A
That he's running, with even more usages of storage from applications than I even thought he had. So that was really good to see all that stuff. And then we had a third talk about the portability of data and some ideas for building truly portable storage solutions in Kubernetes, which we are looking to start building in Rook as well. I think Bassam opened some issues about those, and it looks like there's some good support from people that have real-world scenarios.
A
They want those solved. I mean, we're adding more abstractions for storage on the consumption side, like databases and buckets and things like that. So all those talks went really well. They have all been published to the CNCF's YouTube channel, and I will put a link to that here in the doc, so that people can follow up and watch the talks later on; I put it in Slack as well. So, does anybody else have any thoughts from their experiences in Copenhagen this week?
C
There were some pretty good ideas coming out of people during the Meet the Maintainers session. I guess some people were talking about what they wanted as far as the testability of storage goes, things like benchmarking and stuff like that; I thought that was pretty good. I asked them to follow up or join the Slack, and I don't know if they've had a chance to do that, or if they're going to tell me something we should think about in the long term.
D
Well, KubeCon was also good; pretty good in person. It was pretty good that we had a deep dive and a project intro, and we could kind of see how well people joined: more people joined after the intro and after the deep dive, and there was a good amount of people there. When they had a question, we could go, "Hey, could you please join us?" We already have a few people that we were following up on, and so yeah, it was totally awesome.
A
All right, cool. So let's move on to the community topics and questions. The first one we have on here is that we have a new contributor to the project: Rohan, a university student who got accepted by the Google Summer of Code program to work on Rook this summer. Hi Rohan. Oh, you're on the call, right? Oh, I think I muted you; you felt my mute wrath. I'm sorry.
D
Are you sure? Maybe it's just his microphone that's off. You know, Rohan, can you try typing in the chat?
A
We're looking forward to the work that you're going to be doing this summer with adding NFS as a storage backend; I think there are some interesting things to do there. The first phase of Rohan's work is a community bonding period, which Rohan actually already got a very good head start on, like a month ago, by working on pull requests and getting his developer environment set up. So Rohan is a go-getter and has already gotten a good head start on all that.
A
Yeah, that sounds like great details, definitely. All right, so we kind of talked about the 0.8 milestone earlier above, but one specific thing that I wanted to talk about was the work that we had done to add support for multiple storage providers and storage types. We need to get that into master. As you may have noticed at KubeCon, we gave demos showing the new operators, and we have a blog post talking about the new operators and types for CockroachDB and for Minio.
A
You know, they're based off of the big refactor for supporting multiple storage types, which is not yet in master. So that is my personal highest priority: getting that to a state where it is ready to go to master. One of the big things for that is ensuring that we do not regress any existing deployments from master or previous releases, and so one aspect of that is supporting migration of the existing types. For instance, you have a v1 Ceph cluster, or Cluster type,
A
that was only supporting Ceph before, and now we have specific types for the different specific storage providers. We've written code for the operator to migrate those types, but we need to ensure that's well tested, and that we don't release something or put something into master that's going to break existing deployments. So that's a big part of the work that's left. There's also some integration testing, making sure all the docs are updated, and Helm chart support.
A
All
those
items
are
still
outstanding
and
I
am
the
owner
of
those,
but
there
may
be
some
possibility
here
for
getting
support
or
help
from
other
developers
at
the
community
as
well
Travis.
You
may
be
a
candidate
that
Tony
may
be
a
good
candidate
for
that
as
well.
Since
you've
been
elbow
deep
and
a
lot
of
that
code,
working
on
the
mini
Oh
operator,.
C
Did you say that the migration is automated, or are we going to give instructions for how people do that?
A
That is automated by the operator, because it's basically a type conversion; there's no real functionality conversion. Once you're done, at least for Ceph, you end up with the same OSD and monitor pods, but the CRD types that represent them are different. So it's really an in-place type conversion, and that can be automated fairly easily.
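The in-place conversion described here can be sketched roughly like this, as a minimal Go example. The `LegacyCluster` and `CephCluster` types and their fields are hypothetical stand-ins for illustration, not Rook's actual CRD definitions or migration code:

```go
package main

import "fmt"

// LegacyCluster is a hypothetical stand-in for the old v1 cluster CRD
// type, which only supported Ceph.
type LegacyCluster struct {
	Name        string
	Namespace   string
	MonCount    int
	DataDirPath string
}

// CephCluster is a hypothetical stand-in for the new Ceph-specific CRD
// type introduced by the multi-provider refactor.
type CephCluster struct {
	Name        string
	Namespace   string
	MonCount    int
	DataDirPath string
}

// migrate performs the in-place type conversion: the running OSD and
// monitor pods are untouched; only the CRD object that represents them
// changes type, so every field is simply carried over.
func migrate(old LegacyCluster) CephCluster {
	return CephCluster{
		Name:        old.Name,
		Namespace:   old.Namespace,
		MonCount:    old.MonCount,
		DataDirPath: old.DataDirPath,
	}
}

func main() {
	old := LegacyCluster{Name: "rook", Namespace: "rook", MonCount: 3, DataDirPath: "/var/lib/rook"}
	nc := migrate(old)
	fmt.Printf("migrated %s/%s: mons=%d\n", nc.Namespace, nc.Name, nc.MonCount)
}
```

Because no functionality changes, the operator can run a conversion like this automatically on upgrade, as mentioned above.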
C
It almost feels like, given that this is such a fundamental change and we're trying to get a 0.8 release (oh, I completely agree, we should get this into master), it almost feels like we should all get involved and try to get it into master, so that we could make the process a little easier for the release. Yeah.
A
I have here what I believe to be a fairly exhaustive list of remaining work items, and I do absolutely see opportunities for parallelism, and, you know, to have multiple assignees on them as well. So, Travis, as a resource and a human being, do you have availability to help out with that this week? Yeah.
A
That actually may be a really good way to do that, Travis, because they're not massive things that necessarily need their own issues; perhaps tracking it in the work-in-progress PR with that task list might actually be a very good way to do it. Right, yeah. Well, so we'll follow up offline on that then; that sounds fine. Sorry, this item I had here is about merge conflicts with some of the big Ceph changes that are going on right now.
A
So yeah, I will add the remaining items as work items to the work-in-progress PR, and then, you know, get some assignees and start kind of parallelizing that effort. Yeah, okay. Next up: PRs to discuss that need attention. Travis, you have this PR here about our Rook agent flexvolume driver supporting multiple clusters. I took a look at that earlier this morning, I believe, didn't I? Yep.
B
So this came out of the investigation I'm doing into getting the integration tests to run on 1.10, which in theory was going to be a simple thing. But it turns out there was a change that exposed some conflicts that we have in our integration tests, and really in production code, with the flexvolume driver. To explain what that is in a nutshell: our flexvolume driver uses an RPC channel, and there's a single name it uses for the Rook flexvolume driver's Unix socket, and it uses that same specific path for that socket. So if you have multiple Rook system namespaces, basically multiple operators running, with multiple agents running on the same node (which is what it comes down to), those agents would be starting up the same flex driver and using the same Unix socket.
B
So the way to avoid that conflict is to have a unique driver per agent, based on namespace, which is the clearest way to do that. Then we end up with a more dynamic flexvolume driver name, which, Jared, you then pointed out in the code review seems kind of odd, since you have to specify that namespace in the flex driver name. Right, for dynamic plugins you don't ever see that. But right now, in the...
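The namespace-scoped naming being discussed can be sketched like this. The `rook.io/<namespace>` driver-name format and the socket base directory here are assumptions for illustration, not the exact scheme the Rook agent ended up with:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// flexDriverName derives a per-namespace flexvolume driver name so that
// agents from different Rook system namespaces do not collide. Kubelet
// identifies flexvolume plugins by a "<vendor>/<driver>" pair; scoping
// the driver part by namespace is the idea discussed above.
func flexDriverName(namespace string) string {
	return fmt.Sprintf("rook.io/%s", namespace)
}

// agentSocketPath derives a per-namespace Unix socket path for the RPC
// channel between the flexvolume driver and its agent, so that two
// agents on the same node use distinct sockets instead of one shared
// hard-coded path. The base directory is hypothetical.
func agentSocketPath(namespace string) string {
	return filepath.Join("/var/lib/rook", namespace, "agent.sock")
}

func main() {
	// Two operators in different system namespaces no longer conflict.
	for _, ns := range []string{"rook-system", "rook-system-2"} {
		fmt.Println(flexDriverName(ns), agentSocketPath(ns))
	}
}
```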
A
We'll go ahead and continue discussion on the pull request then. Yeah, okay. So the next pull request here, which is cross-related to what we were just talking about: this was the remaining elusive test-breaking issue that was discovered by removing support for an older Kubernetes version and adding support for 1.10 in our integration tests, correct? Right, yeah. Right, so those are highly related. Yeah.