From YouTube: 2018-01-16 Rook Community Meeting
Description
No description was provided for this meeting.
A: The next phase of this is going to be, essentially, publishing: Ceph upstream will publish base images that just have Ceph and the base OS and so on. The plan is that we will just use those as base images, so the upstream side will have little clean, pristine images that we can use; then we just add Rook to them, and the toolbox and everything else can be based on those images.
A: Sitting in a separate directory doesn't really interfere, so, okay. But I talked to Sebastian, and the plan is to essentially migrate towards that approach over time. He's on point now for finishing that work, him and... I'm forgetting the other gentleman's name, but they're going to finish up that work and let us know when they have images on Docker Hub that we can use. So I think the idea is that on Docker Hub it will be ceph/daemon-base, and daemon-base will be logically equivalent to our...
A: It might... I've asked Sebastian to help with that. They might publish tags that have specific point versions of stuff, so, like, 12.2.3 or something, as tags in the repo. The problem they have, and the problem that we have, is that if something in Debian or Ubuntu or one of the base OSes gets a security update, you need to rebuild, and the Debian repositories have now published newer versions.
A: The way I'm thinking about this, and I'm open to feedback, is: imagine we're also running Gluster, managed by Rook; some of the early design work happened there. And assume for a second that a Gluster image is a hundred megs and a Ceph image is two hundred megs; let's just throw out numbers, right. We can decide to have a combined image...
A: ...that's three hundred megs, or we can decide to have two images that are a hundred and two hundred. Where I think I landed is that the scenario where people want to run multiple backends in the same Kubernetes cluster is going to be infrequent. Most people will probably choose one backend in a given Kubernetes cluster, and as such, having multiple Rook images is not going to...
A: Versioning, also: updating Gluster doesn't update Ceph, and updating Ceph doesn't mess with Gluster. So on that path we will essentially have multiple images for Rook, where the Rook binary is the same inside all of them but the base images are different. And then the operator, when it creates a cluster for a given backend, knows what the default image is for that backend.
A: So the hierarchy looks like this: there's a base OS, which is minified; then there's ceph-base, which just adds the binaries for the Ceph daemons; and then from there it forks. There are two children: there's Rook, and there's a container which adds a bunch of scripts and entry points that are used for the ceph-container project.
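The image hierarchy just described can be sketched as a Dockerfile layering; the tag and file paths below are illustrative assumptions, not the project's actual build files:

```dockerfile
# Sketch only: Rook layered directly on the upstream Ceph base image,
# as discussed above. The tag and paths are invented for illustration.
FROM ceph/daemon-base:latest-luminous
COPY rook /usr/local/bin/rook
ENTRYPOINT ["/usr/local/bin/rook"]
```

The toolbox and the other images would in turn build `FROM` the Rook image in the same way.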
A: Our entry point, our image, is rook/rook, and that doesn't change, right? How it's built is really not that important for customers. What is cool, though, is that on master now you don't actually have to build the Ceph stuff, so you don't even need the memory and the cores to build it; all of that is gone.
A: There will be an image for each backend, and, you know, at a minimum the Rook operator goes in its own image just to start things up, but we don't have to worry about that for 0.7. I'm just talking so that you guys can get an update from us. So that was one of the big architectural things that are changing. The other is the use of one OSD per pod. I think it would be good, Jared, if you did a little update on that, yeah?
B: But you know those concepts are coming to the surface now, so I've been doing some thinking about them. What I want to accomplish is getting add-or-remove at the node level into master, so that we can, you know, call that done, and it is a functional, useful piece of work, so that people can update the CRD to, you know, add a node. That's been the major request.
B: We have more requests for that than we do for being able to, you know, add or remove specific devices over time. So I really want to finish that work off, and then the very next thing I would want to do is go ahead and, you know, finish off the refactor and architectural change to have one OSD running per pod: to centralize the orchestration and centralize device discovery, all that sort of stuff. So that is the next thing I want to get into.
B: Yeah, I'll do a write-up on it; that would be the first step. In this meeting two weeks ago I took an action item for kind of a wider, broader-scope one-pager for some of these OSD-related features that we're talking about for 0.7. So there is an issue for it, this guy right here, that's just not started, because I've had competing priorities and I will continue to have competing priorities. But a big focus of that, which is very relevant to them...
B: I have not had an architectural conversation with Sebastian. He had asked me whether I had started that one-pager, and the answer was no. And he wanted some advice about how to run Rook, you know, in a dev scenario, which he went ahead and did a write-up on. So those are the only conversations that I have had with Sebastian.
A: Steve, are you tracking network...?
A: You mean from... I didn't hear the second part. I heard: this is a good early start, it gets parity, and then let's figure out how to make this better on a separate lineage. So what I'm curious about is, I mean, the flex volume was an intermediate step, and it still has a bunch of issues. It sounds like CSI; is that what we should build on going forward?
A: What I'm curious about is: what does it require to get CSI, the new CSI thing, to start working in Rook scenarios in the way that we expect, right? So what I was thinking through is, maybe, you know, maybe it looks like a set of issues that are well authored and get opened up in the CSI repo, or it's a bigger design doc that describes how we hope to accomplish that.
A: Whichever way, I think we need to get to a point where we feel like CSI is going to run well for Rook scenarios: no secrets, no mon IDs, you know, none of that stuff needs to show up for CSI. We need to get it to do that in the new project. The fact that it now lives out of tree means that we get to collaborate and contribute to that project, and that's all goodness.
A: So feel free to, you know, open up issues for each of these topics if you want, but if it makes more sense, let's create a design doc around it.
A: Personally, I would love to see even just, like, two paragraphs on the direction, just so that we are all kind of clear about it.
A: I'm starting a little design doc on this with a proposal, but I just want to bring it up here, because I don't think it's, you know, deployable in production environments as it stands today, because it is so awful. No, I understand that; I'm just bringing it up as a, you know... if somebody wanted to deploy this, they would probably be worried about how much trust we are giving to it. Yeah.
A: I think there was... balancing the least-privilege model, we can have that discussion when the design doc is out; I don't think we should have it right here, right now, but you're on the right page here. So I have a ticket on this; I don't know if it's marked as 0.7. I'd like at least the design, and design feedback, to be part of 0.7, but it doesn't...
E: We generalized it on host path, right? But this is actually the discussion on what will get rid of our use of host path, for things like that. Again, host path, right, that's one discussion. The other discussion is to leverage local volumes, so that we can specify, like, the node, you know, as local storage. Now, the second is more interesting, because that way the user doesn't have to modify the CRDs whenever they want to specify a local storage device. But that means two things need to be available.
E: Hopefully we can... One is local volumes, which is an alpha feature, and raw block devices, which is another feature. Those you need to enable via a feature gate. From what I last heard, those two are being considered for beta in 1.10. I think local volumes will get there, because we have a head start.
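For context, alpha features like these are switched on through feature gates; a hedged config fragment, assuming the Kubernetes 1.9-era gate names (`PersistentLocalVolumes` for local volumes and `BlockVolume` for raw block devices):

```
# Config fragment, not the project's deployment scripts: pass to the
# apiserver, scheduler, and kubelets as appropriate.
--feature-gates=PersistentLocalVolumes=true,BlockVolume=true
```

On minikube the equivalent would be something like `minikube start --feature-gates=PersistentLocalVolumes=true,BlockVolume=true`.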
A: So for BlueStore scenarios I see your point; for FileStore scenarios it's maybe not so important. I guess I'm trying to figure out how to get, you know... I don't want to wait for local volumes and everything else, but I'm curious how we could start removing the dependency on host path and get closer to just using Kubernetes resources. It sounds like this is a foundational piece that's on that path regardless.
A: Right, and then, but I don't... it's somewhat independent. Even running on minikube: minikube has a provisioner for persistent local volumes, right? So instead of having to mount /dev/sda1, we should use their provisioner on minikube, right? Rook should run on top of Kubernetes resources, not on top of host resources.
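A local PersistentVolume of the kind being discussed might look roughly like this; the names, path, and capacity are invented, and the shape follows the local-volume API as it stood around Kubernetes 1.10:

```yaml
# Illustrative sketch of a local PersistentVolume pinned to one node.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd0
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1
```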
A: You know, one is to represent more of the Ceph surface area, the Ceph config surface area, and potentially, if there are changes that are made by the Ceph CLI, how that interacts with Rook, right, running and managing those resources. But it became clear to me in that discussion that we do need to make a pass on the CRDs, especially before they go to beta, and I'm...
C: What would you think about the idea to kind of separate every backend from each other, so we don't have one cluster CRD? Because there are, just for Ceph, I don't know, five specific settings for a cluster today, and, I don't know, seven other specific parameters you could set, and I...
A: I suspect that having a common cluster is actually a big value, because there are a number of things that the Rook operator will do in terms of defining a scope for storage nodes and where the storage nodes should run, right: selectors, all the stuff that will be common across every backend. That's one of the cool things about having it common. But I agree with Travis: until we get a strong second backend, some of this discussion seems abstract. I'd encourage us to even look at something like NFS, or something that is really simple, as a use case.
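The trade-off being debated here, one common cluster CRD with backend-specific sections versus one CRD per backend, could be sketched like this; every field name below is illustrative, not Rook's actual CRD:

```yaml
# Illustrative only: common scheduling scope shared by all backends,
# with backend-specific knobs nested under their own key.
apiVersion: rook.io/v1alpha1
kind: Cluster
metadata:
  name: my-cluster
spec:
  nodeSelector:
    role: storage-node   # common: where storage pods may run
  ceph:
    monCount: 3          # backend-specific settings stay scoped
```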
C: I'm currently working on 1386, which is almost at the top of the list, and I would like to add 1837 to the list, because the topics are kind of close together. 1386 is about mon placement: more than one mon should, well, never be placed on the same node. And 1837 is that mons get a more stable identity per node, because, at least from my perspective right now, that's, well, almost everyone that has...
C: Yeah, it's the mons that are the biggest issue. So for that I have created a design, not a full design doc, more of a Google Doc, where we can kind of collaborate on that. I've already sent it to Travis, and I would like to get everyone together for tomorrow. (Can you put it in the issue?) Yeah. Well, and, as such, I hope everybody's interested, so that we kind of sit together tomorrow and discuss how we can change the mons to have a better, more stable identity, and how we do failover and placement better.
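In Kubernetes terms, "more than one mon should never be placed on one node" is usually expressed with pod anti-affinity; a hedged sketch (the label key and value are invented, not Rook's actual mon spec):

```yaml
# Illustrative only: require mon pods to be scheduled on distinct nodes.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: rook-ceph-mon
        topologyKey: kubernetes.io/hostname
```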
A: He wrote me too, and it sounds like he's down a path now where he built a private image, and you could use that. So if so, then I don't think we need to bother. Okay, Travis, maybe verify with him. But I saw that he's now built his own image. Yeah, I...