From YouTube: 2018-08-14 Rook Community Meeting
A
B
Yeah, so the first point is about adding Rook to the Rancher catalog. I kind of stumbled across the issue again after some time, and I was more or less wondering if the Rook community is still interested in adding Rook to the Rancher catalog. Personally, I don't see anything that speaks against it; I see it more as an opportunity for Rook to reach a wider audience by simply adding it to the catalog, if someone could do it.
B
A
So we don't have a very clear idea of the process yet, got it. And this is a very old issue, you know, almost a year old now. Has there been recent demand for this that anyone has heard about? Just out of curiosity; I don't think it's a bad idea to do this, but I'm curious if there has been any recent discussion.
C
A
So Alex, did you say that you would take a look to see what it would take to be added to that catalog, so we just get a better sense? Okay, yeah. If you could follow up on that, then, you know, I'm sure they don't have any problem with this, but it would be good to know that.
B
Yes, it's basically a kind of feature request to further increase the placement possibilities of the agent and discovery daemons. The user in that case only wants to run the discovery agents on certain nodes, but in the current state they would run everywhere. They want them to not run on, or especially also to run on, the nodes with taints, and the user wants to have a certain affinity with labels, like the normal placement spec we have.
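A hypothetical sketch of what such a placement section could look like, reusing the standard Kubernetes nodeAffinity and tolerations fields; the `discover` key, label key, and taint key below are illustrative assumptions for this discussion, not a confirmed Rook API:

```yaml
# Hypothetical placement spec for the discovery daemons (illustrative only).
placement:
  discover:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: role              # assumed label on the storage nodes
            operator: In
            values: ["storage-node"]
    tolerations:
    # Allow the discovery daemon to also run on tainted storage nodes.
    - key: storage-node            # assumed taint key
      operator: Exists
```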
C
B
A
Now Alex, this is something that you had seen the community talking about, like on Slack and stuff, right? There had been demand from users for this type of functionality? Yeah, okay, alright, so we can go ahead and follow up to make sure that an issue is tracking this. We talked about this yesterday, Alex; I was also under the belief that there was already an issue tracking it, but I was not able to find it either.
A
Guess his plane took off. Alright, yeah, Alex, if you can hear us, just let us know if you have more issues you need to talk about before you're taking off to Berlin. All right, he's gone, yep. Let's move back to the top of the list here. And I see a chat, that could be Alex: nothing else, have a good meeting.
A
A
C
C
I have not seen, or I'm not sure I've seen, real urgency for this to be in 0.8, so I'm wondering... it would have been nice, I guess, if Alex were still on to discuss it, but you know, is it really needed in 0.8? Maybe that's the next question, if it's the only issue that we need to get in, so that's what I think.
A
C
C
So then, 1921: I just added a reference to a comment from Fabian, who was investigating this issue. He found that in OpenShift and containerized deployments this is where it hits, and it looks like a fix or workaround is needed on the OpenShift side. So what this is now is a documentation issue, and he's going to submit a PR for it soon.
A
A
A
C
A
All right, so that's it for the 0.8 board; let's go ahead and take a look at the 0.9 milestone. You know, we're obviously pretty early on in this milestone, so let's actually take a quick look here at the roadmap, because I think we updated the roadmap since the last community meeting, I believe so. Let's talk about that real quick. So the Rook roadmap has been updated.
A
You know, there's a realistic possibility that not everything that's currently in the 0.9 milestone on the roadmap will be included in the 0.9 milestone that we ship. So hopefully some people on this call can take a look at this roadmap and get excited about some of these features that we want to be including. I think that we have also, at the same time, done a thorough job of including all of these issues from the roadmap into the milestone itself.
A
C
A
Awesome, so yeah, at this stage we're pretty much only at the point where we want to make sure we have the themes defined and the goals defined, and, from a project tracking perspective, that we have all the issues included in the milestone, and that's about it for what we have right now.
A
There's a bunch of things on this board, and I don't think we necessarily need to be talking about every single issue right now, because we're not anywhere close to converging to having everything completed and shipping this release. But if there's anything on the board here that somebody wants to bring up right now for a discussion, then the floor is absolutely open for that.
C
A
We have done a decent job of including Help Wanted in the past, but I think that might have been something that was overlooked, Bassam, recently here when we did this big milestone push. So I will follow up on that again to make sure that we've got those tags, because that's something that we want to keep a good handle on.
E
A
E
C
D
D
E
C
A
In general, then, we are very, very happy to have new contributors working on features in the project here, and we're happy to provide what support we can, probably on the dev channel, to talk about what the architecture is, or to give you the resources you need to be able to effectively write and test code for Rook. So we're always happy to do that, and thank you very much.
E
A
A
...is that Rohan, you know, has been working as part of the Google Summer of Code program all summer on adding NFS (network file system) support to Rook, and just yesterday, or last night or today, was his final project deadline and evaluation, which he has completed. So Rohan is just about all done with his Google Summer of Code effort for us, and the NFS support will be merged into master probably today, I believe.
A
C
F
C
A
C
C
Seems good? Okay, let's try this now; tell me if I start cutting out again. Okay, so yeah, there are a couple of challenges with the version of Ceph, which is tied to Rook right now. You know, the version of Ceph is embedded in the Rook image; it's Luminous, I believe 12.2.7 as of today, and there are a number of areas where it's like...
C
C
You know, in important clusters, production clusters, they want to control what version of their data path is running. And, you know, Rook being an orchestrator, if it's separated from the Ceph version, Rook can be upgraded more aggressively whenever Rook releases, but Ceph doesn't need to be updated whenever Rook is updated.
C
So the idea is, if we split or decouple the version of Ceph from Rook, the admin can make decisions about what version of Ceph they want to deploy. That's what this design doc discusses: how we can do that, basically by putting the version of Ceph, the Ceph container, in the cluster CRD.
C
It kind of reminds me of the design we had originally, actually a year ago, when the version was there, but then we put it all in the operator version since we only had one container. Now we're saying we actually have two containers with which we would support this, and, admittedly, I mean, it'll be more complex to support the versioning.
C
A
C
Yeah, I wouldn't describe it as dissenting. It's just, you know, "what about this or that", comments on the complexity, or how we can make sure we're only supporting the version upgrades that we really should. Because if you jump, like if you went from Luminous all the way up to a much later release, that's probably too big of an upgrade at once, et cetera. So yeah, what are we going to support with the upgrades, right.
A
Yeah, it definitely, immediately introduces a lot more complexity than just a single version that the operator manages, where everything's coupled together. This introduces much more complexity around mismatched versions and, you know, blocking operations or scenarios that we don't want to support, and all that sort of stuff. So it becomes much more complicated, yeah.
A
Just a quick question about what the user experience would be for, you know, invalid sorts of scenarios. Because, in general, there's not great user feedback when something goes wrong; that's just kind of inherent, because Kubernetes controllers' reconcile loop is an asynchronous approach. You know, you have to kind of go look at the logs and stuff; you don't necessarily get immediate feedback to your actions.
C
I mean, this is a place where you need to use the CRD status so that they can see what version is actually in use. For example, we could have a version field that says this is what Rook is actually running, and the version of Ceph, and they could look at that and see: oh, that's not the same as what I put in the CRD. So let's go look at the logs and see why it didn't pick it up, and yeah, there's the error.
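For example, a reported status along these lines would let the admin compare the requested version against what is actually running; this is a sketch of the idea, and the field names are assumptions, not a confirmed schema:

```yaml
# Hypothetical CRD excerpt: spec holds the desired Ceph image,
# status reports what the operator actually deployed.
spec:
  cephVersion:
    image: ceph/ceph:v12.2.8   # what the admin requested
status:
  cephVersion:
    image: ceph/ceph:v12.2.7   # actually running; on mismatch, check the operator logs
```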
A
And another question I had, too... we obviously don't want to get too into the design right now, but if I could do it really quickly: what's the general image approach? If we're going to have different versions of Ceph versus Rook, does that mean the Ceph image is completely different, like completely distinct now? Like, for all the daemons, will they all be running the Ceph image, as opposed to a Rook image?
A
C
C
C
C
They're all distinct containers, so all the daemons will run in containers that don't have Rook in them; the daemon containers would only have the Ceph image, you're right about that. At build time, the way it's designed right now, we would want to build whatever containers we need, so that at runtime we don't need to move the binaries around or copy them into the container, for example.
C
E
D
A
C
So this will be a comment on upgrades first, related to this PR with the mon changes. I wanted to get some upgrade testing, because the mons now will be changing from a replica set to a deployment, and... one per mon, right? Yeah, okay. So the only sane way to do an upgrade is if we automate it, so I have, in this PR, automated that upgrade, so the mons will be replaced appropriately.
A
C
A
A
C
A
C
A
C
A
D
C
D
Not sure it is a good thing, yes. I'm saying, if you take the changes to, you know, add more integration testing for upgrades to master first, that would be really helpful. Even in testing 0.8 to 0.9, or 0.7 to 0.8, or, you know, all the upgrade scenarios, yeah.
C
C
Exactly. I did open an issue on that, and it's linked from the Ceph versioning design discussion: 2003. It looks like, if you go up in the main content of the issue... there, I saw it, there's a link to 2003 there. So that issue says, yeah, all daemons should start with a daemon container without Rook in it.
C
That, with the init container, I'm not sure I've seen it as a prereq, but it may be a good thing to do first. Let me think about that, because with the mons, at least right now with that upgrade path, since we're changing from a replica set to a deployment, it's so disruptive that the init container probably wouldn't be much of a change there. It's just another change to the pod spec, which the upgrade would then take care of later when the pod spec changes anyway.
A
Yeah, you know, my initial reaction to a lot of them is that they're pretty good changes, I think, that are a good step towards, you know, reliability and stability for the Ceph orchestration and the Ceph support in Rook. So I think those are all really good steps.
A
A
F
F
F
A
Okay, I'm thinking, like, why would that not just be part of the build process here, in building our NFS image? I don't think it needs to be a FROM on that; it doesn't need to be something that we've already published that we're using as the FROM instruction. I think we'd just be building the whole thing that we need, the right image that we need, from, you know, this directory, okay.
F
C
A
A
C
G
Yeah, so the latest version of that pull request has a sort of simplified approach where we just have an "only manage daemons" boolean for the file system and the object store space. So I think we've sort of passed the detail discussion and are back to the sort of more fundamental question of: are folks comfortable with having this change in Rook, notably Bassam?
D
G
Thanks. It's probably, in the future, there's going to be some relation as well between whether Rook is creating pools and the sort of online managing of pg_num and that kind of thing. I know you guys have an open issue for doing a better job of guessing pg_nums, but we also have a whole bunch of new code that's going into the Ceph mgr for doing that kind of thing. So as Ceph gets smarter at doing that stuff, some of these lines might need to be reconsidered a little bit.
A
I mean, John, my opinion on that is that, you know, in general, if that's something that the Ceph platform is doing itself to keep things healthy and manage things effectively on its own, then that's a great place for it to live. You know, like, Rook doesn't do rebalancing, right? Ceph does that immediately on its own.
D
Just, you know, the other piece of context here is, I don't think there's any kind of resistance to having partially managed Ceph, say, you know, only mons, or daemons like OSDs and MDSes, and not pools. Or, you know, any kind of partial management I think would be interesting. It's just that those are, in general...
D
G
We're not really going to have a lot of cases like this. It's just the distinction between using Kubernetes to run daemons versus using, like, Rook code to manage multiple Ceph resources. From my point of view, it's not about ever more granular distinctions; it's really just that one thing.