From YouTube: 2017-10-23 Rook Community Meeting
Description
Rook Community Meeting
A: Something I still don't understand is where it makes sense for things that don't really map to objects, like metrics in general. For any sort of stats and such, you're not going to have custom resource definitions and objects; those don't fall out of the system. So if you get rid of the REST API, how do you expose things like that?
C: What you're saying is that with the CRDs we get an API that basically has a bunch of set operations, and you're asking where the corresponding get operations are, and that sort of thing. Well, you get some of that by going around the corner through Prometheus; there isn't an orthogonal get operator for those put and set operators, and maybe that's the gap this API group leaves.
B: I think you're right. There are needs to get status, or detailed status, of Ceph stuff, but the path for that should be monitoring and logging; that's where I would go for that kind of thing. CRs are truly for declarative management.
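A minimal sketch of that monitoring path, assuming the common Prometheus scrape-annotation convention; the pod name and image are hypothetical, and 9283 is the Ceph mgr Prometheus module's usual port:

```yaml
# Hypothetical pod exposing Ceph status as Prometheus metrics
# instead of through a bespoke REST API. The annotations follow
# the common Prometheus scrape-discovery convention.
apiVersion: v1
kind: Pod
metadata:
  name: rook-ceph-mgr            # hypothetical name
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9283"
spec:
  containers:
  - name: mgr
    image: rook/rook:example     # hypothetical image
    ports:
    - name: metrics
      containerPort: 9283
```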
B: I guess what I'm trying to figure out is: can we arrive at a place where we're using the CRDs, the Ceph API, and the Ceph tools, and initially nothing else? Maybe Prometheus too. But if we can get to a place where we're doing that, then we can remove the API and rookctl, and from that point onwards we can figure out whether we need to add more CRDs, improve the monitoring, or whatever else.
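As a sketch of that declarative, CRD-only model (the apiVersion follows Rook's early v1alpha1 scheme; the exact field names are illustrative, not authoritative):

```yaml
# Declarative pool management via a custom resource: the desired
# state lives in the object and the operator reconciles it, with
# no imperative REST call or rookctl invocation needed.
apiVersion: rook.io/v1alpha1   # Rook's early API group; illustrative
kind: Pool
metadata:
  name: replicapool
  namespace: rook
spec:
  replicated:
    size: 3                    # desired replica count
```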
B: And it's not just the read-only part. Our agent actually has to figure out where to drop the Flexvolume driver in a place where it can be picked up, and then the mount propagation issue, you know, shared mount propagation, is another one that gets in the way as well. But it seems like that is solved in 1.8, Steve?
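A rough sketch of the driver handoff being discussed, where the agent pod hostPath-mounts the kubelet's Flexvolume plugin directory and drops its driver binary there. The pod and image names are hypothetical; the hostPath is the stock default, and distros relocating it is exactly the problem:

```yaml
# The agent mounts the kubelet's Flexvolume plugin directory from
# the host and copies its driver binary into it so the kubelet can
# pick it up.
apiVersion: v1
kind: Pod
metadata:
  name: rook-agent               # hypothetical
spec:
  containers:
  - name: agent
    image: rook/rook:example     # hypothetical
    volumeMounts:
    - name: flexvolume
      mountPath: /flexmnt        # agent drops the driver binary here
  volumes:
  - name: flexvolume
    hostPath:
      path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
```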
B: We want that, Jared; that's important. Because if we're going to dynamically make the decision of how to mount, what to use, and how to format, even on a per-deployment or per-pod basis, we'd have to move that logic into the driver, or else we're going to keep maintaining the driver we have to drop. Every time we change a strategy, we have to go upgrade drivers.
D: I mean, CSI is coming, and CSI has all the things that do the same work our agents do with attachment, you know, map and mount, all those verbs. Yes, we could actually implement it ourselves, but I'm not sure that would be very useful.
D: There is a why. Yes, they were kind of reluctant to introduce a new in-tree plugin, because if that happened it would mean they must support an API, and that was coverage they were lacking. They were also worried about API layering and so forth.
D: [inaudible] so Flexvolume was kind of the intermediate, you know, short-term solution for this, and then we'll say, okay, we can eventually move off it, because of, you know, things like the issues we've been discussing.
A: ...until we get CSI, or until it stabilizes. Well, I don't see why, if we can identify a concrete set of fixed, targeted fixes that will help us realize our overall goal, we don't just run with that right away. If you have a concrete set of things to fix, let's take it to sig-storage. You know, I think we could make forward progress, and not just throw up our hands.
B: I'm strongly of the opinion that we need to get this story fixed on Flex, and when CSI is ready, and the community decides that they are no longer going to support Flex, then we should switch over. But I don't see why we wouldn't complete our initial goal, given our existing investment.
B: So, okay, what are those issues? It sounds like the handoff of the driver is still an issue; mount propagation is an issue, though maybe that one's already fixed or about to be fixed; the path; and then supporting kubelet in a container, which was identified as an issue even when we started this. And the state storage looks like it's still an issue.
B: Yeah, and, for example, supporting some things in the downward API would help: if the volume plugin dir, or even the kubelet base dir, were exposed in the downward API, that means you could start up the pod and not have to guess where to drop the Flexvolume driver and where to mount things. I really would like us to think about how to make the fixes in Kubernetes such that this scenario just works, like literally just works, and works well, every single time.
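For context, a sketch of what the downward API does expose today; host-level paths like the kubelet's volume plugin dir are not among the supported fields, which is the gap being described (pod and image names are hypothetical):

```yaml
# The downward API can inject pod/node metadata such as the node
# name, but not kubelet configuration like the plugin directory.
apiVersion: v1
kind: Pod
metadata:
  name: rook-agent-example       # hypothetical
spec:
  containers:
  - name: agent
    image: rook/rook:example     # hypothetical
    env:
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
```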
B: My concern is that we started with 1.8 as possibly the solution for our Flex deployments of Rook everywhere; now I'm not sure we're even on a path for 1.9 to solve this problem. I'd like to get to a point where Rook runs everywhere Kubernetes runs, in some version of Kubernetes that's not in the distant future. Is that 1.9?
B: The question is whether there are a couple of things we can do to enable the use of local storage for the data store, you know, which is not enabled by default. My biggest concern is that people start using Rook and it loses state. If we can help drive that forward in a quicker time frame, and have it apply to older versions of Kubernetes, that would be great.
B: So how do we get involved so that we can normalize on our pods, the OSDs and others, just using persistent volumes for storage, not having to make any assumptions about the host directory or really anything? We just use a persistent volume, and if you wanted local storage, then you just use the kind of persistent volume that says local storage.
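A sketch of the kind of persistent volume being described. Local PersistentVolumes were alpha at the time, behind the PersistentLocalVolumes feature gate; the field names below follow the later stable API, and the path and hostname are hypothetical:

```yaml
# A "local" PersistentVolume: a node-scoped disk exposed through
# the ordinary PV/PVC model, so OSD pods need no hostPath
# assumptions. Local PVs require node affinity.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: osd-data-node1
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1        # hypothetical device mount
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1                # hypothetical node name
```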
B: Yeah, I mean it's hard to tell, because the failure just shows that it can't mount the persistent volume, or can't bind it, and that's usually the root cause. But it's a really common thing. Right now people show up and say it's not working; I guess even Jared has asked, you know, are you on CoreOS or something, and the answer is typically yes. I'm judging mostly by people on Slack saying that they can't get it working.
F: I mean, that's one thing I'm definitely thinking should be covered in the Rook docs. The first thing people do is go through the docs, and if we put a big red mark there saying, hey, if you're on Container Linux you need to do something extra, I think we should be good. Because if a user then creates a ticket, we can say, hey, we have docs for that; have you read the docs? Please go and look at it.
F: Maybe it's possible. At least the CoreOS docs say something about mounting an overlayFS over the /usr partition, or a directory under it. If we could do something like this from the agent container, we could simply mount an overlayFS on the exact directory and then simply put our binary on it.
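The overlayFS idea could look roughly like this from a privileged helper container; the mount options are standard Linux overlayfs syntax, but all pod, image, and path names here are hypothetical:

```yaml
# A privileged container mounts a writable overlay on top of the
# read-only plugin directory so the Flexvolume binary can then be
# dropped into the merged view.
apiVersion: v1
kind: Pod
metadata:
  name: overlay-helper           # hypothetical
spec:
  containers:
  - name: helper
    image: busybox
    securityContext:
      privileged: true
    command:
    - sh
    - -c
    - >
      mount -t overlay overlay
      -o lowerdir=/host/usr/libexec/kubernetes,upperdir=/rook/upper,workdir=/rook/work
      /host/usr/libexec/kubernetes
```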
B: So that's CoreOS; there will be a similar one on Atomic, and a similar one wherever else. Yeah, my gut says that Kubernetes 1.9 should look in /etc/kubernetes/plugins as well as /var/libexec, but that should be an alternative path, just to support scenarios where the kubelet runs in a container.
B: One of the things we should capture in issues: imagine you had a heterogeneous cluster. Let's say, and I'm going to throw this out, half the nodes are running Atomic and the other half are running Rancher, I made this up, and they have different volume plugin dirs on some nodes than others. How do we handle those cases? Right now we're assuming a completely homogeneous cluster, like everything about every node is identical, and it's not.
B: So for 0.6, it sounds like these issues are around Flex; we need either some documentation or we need to make a call on whether we should move forward with tagging 0.6. On local storage, I think, Travis, you should figure out what the path for it is, but it doesn't look to me like that's a 0.6 thing. Is that fair?
F: Maybe a small thing on local storage. From my side, it should definitely be possible to move from the hostPath we have right now to local storage; a migration should be possible. Or are we saying, yeah, 0.6 works, but 0.7 is local storage and we break some stuff?
B: That's what I'm trying to figure out. I'd like to say that, if we can make the changes, ideally for 0.6 we are on local storage only, no more dirs, no more any hostPaths; we don't have any dependency on those paths, and if we need to carry that forward, we carry it forward. If we don't get there, then, you know, we call 0.6 beta and then we have to migrate from one to the other.
F: Maybe, if it's really possible, maybe have a script or something, I don't know, call a function in the operator that replaces one OSD, you move the data, call the next one, move it, or something like that. [inaudible] Yeah, the backwards compatibility works for me, but there were already 0.6 releases, so yeah.
F: Then it depends on how we go with the migration and migratability. Yeah, I think this mostly comes down to: if we're like, yeah, you can migrate, we will work on that, you can migrate even if you have to put in some manual work, then I think we can call it beta. But [inaudible].
B: What I took away is that there are issues that need to be opened with enough detail, and some offline work, sounds like looking at an overlayFS and other options, to understand where we stand with the Flexvolume; and then on the local storage issue, at least some investigative work to see which path we should take for 0.6. Both of those feed into, you know, declaring 0.6 ready, or how we move forward. So I don't know if we can make the call right now; it sounds like there's some homework to be done.