From YouTube: 2019-01-15 Rook Community Meeting
A
Okay, the recording has started. This is the January 15th, 2019 Rook community meeting, and we will go ahead and get started with looking at the 0.9 patch releases that we are definitely going to need to do. The organization on this board here looks like we've got a fair amount of things in the backlog that don't even necessarily have owners. Travis, I think you're more on top of what's going on with the 0.9 patch release; can you speak to that, please?
C
Yeah, we have had a few things merged since the 0.9.1 release, and we've got a couple of others pending; this one is in review. There's a backport PR; I think the build has completed, so I can merge that one right after this. And I was kind of being fairly, maybe overly, aggressive about putting things into 0.9 because, hey, users are reporting these issues and it's going to be three months or whatever before the next release. Just getting some of these fixes in there was kind of my general approach.
A
Yeah, I think we should add that, and you will follow up on that then. Awesome, thank you for your help, Sebastien, I appreciate it. Sure. Okay, so then, all right, back to the other topic. The other question I had about this board: the first one was the state of the board and whether it is reflecting the state of the issues, and the second question is, are there any very pressing issues that make us want to get the 0.9 patch release out more quickly?
A
Okay, that's good; at least we don't have something that's hot and pressing where we're just waiting around, so, awesome, that sounds great. Okay, so we'll continue following up on that patch release and targeting later this week to get 0.9.2 out. All right, so with the 1.0 release, I noticed there is a comment further down in the agenda here, and we can see very obviously here that the board has not been updated yet.
A
So we need to follow up on that and make sure that the 1.0 milestone accurately reflects the agreed-upon roadmap and scope for 1.0, and that the project board reflects the issues that are included in the milestone as well. So I don't know if there's a whole lot to talk about right now with this, unless somebody wants to bring up general 1.0 comments or discussion. Yeah.
A
Let's do that. Okay, great. Yeah, and then, I don't remember the scope of who this idea has been floated around to, but we had discussed it; I know Bassam and I had talked about it, that getting back to a quarterly release cadence would be nice. So sort of targeting the end of March, early April timeframe for a 1.0 seems like a good idea, but we can definitely have discussion about the timing for that and the requirements.
E
As far as 1.0, I think getting to a quarterly release schedule sounds good. Something that I was considering was that our last release was early December, and I think, especially since, like, the Thanksgiving holiday here in the US and then Christmas, Hanukkah, the other holidays, and New Year all kind of fall within a one-month timeframe.
A
Whether that's really strictly required, Sage, is a good question, but really the goal I have here in mind is just about getting a little bit more of a predictable cadence, and also a more frequent cadence, because, if I can remember, between 0.7 and 0.8, or 0.8 and 0.9, we went a fair amount of months without a major release going out. So there were a lot of fixes that people were waiting on that they didn't necessarily get access to unless they wanted to.
A
Okay, all right, so we will follow up on that then. All right, so let's move on to the community topics. The first one we have here is whether we could add Anton as a member. Anton did a lot of the implementation for the new EdgeFS operator. Are any of the EdgeFS folks on the call today? I don't think I see any of them. Well, yeah, we definitely, we certainly can, to be able to assign issues to Anton and, you know, have him be able to...
C
I haven't thought about this for a while, so: today, when we want to ship a release, we publish the builds to alpha, beta, or stable channels, and those release tags don't really match how we're using them or how users are using them. We're just using the version image tags, like rook v0.9 or, anyway, v0.9.1, for example, and we don't really have the concept of alpha and beta and stable; that status is in our CRD versions. So I'm not sure where, if anywhere, we're using that.
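[For illustration, a minimal sketch of the tagging just described, assuming the rook/ceph image naming used around the 0.9 releases; the deployment pins an explicit version tag, and no channel designation appears anywhere in the manifest:]

    # Fragment of the operator deployment (sketch):
    spec:
      containers:
      - name: rook-ceph-operator
        image: rook/ceph:v0.9.1   # explicit version tag (v0.9, v0.9.1, ...);
                                  # no alpha/beta/stable channel involved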
G
I thought about it; can we just chat about it separately? I do think we need to remove them, but I do think that having two channels is still a useful concept. I do think this is a bit of a legacy from whatever we were doing in the past with alpha, beta, and stable, but I think there is, you know, sort of, like with some of the releases:
G
Kubernetes doesn't, but, you know, Docker does, and I think Ubuntu has LTS releases versus, you know, non-LTS releases. So I'm thinking maybe there is a channel that is designed to be more stable versus one that gets updated more frequently, independent of the version numbers that are used. So, I mean, we can also talk about going to a single channel and removing the designation completely, but my guess is having at least two channels is useful.
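[A hypothetical sketch of the two-channel idea; these floating tags are illustrative only, not tags Rook actually published:]

    # Hypothetical channel tags, independent of version numbers:
    #   rook/ceph:stable   - updated only for vetted releases (LTS-like)
    #   rook/ceph:latest   - updated frequently, e.g. from master
    image: rook/ceph:stable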
A
And Travis, just to maybe add a little clarity about what I think you're getting at: if somebody has chosen to use, let's just call it, LTS and experimental, and someone's using the LTS channel, it may not necessarily be specifically clear to the operator what the specific version of the container images running in the cluster happens to be. Is that your concern, Travis? So the operator wouldn't necessarily be able to make an informed decision of, oh...
C
...back to what version is inside the container. Because we know what versions are inside the container, but the image names themselves have to change between releases, or else the registries will think they're up to date when they're not, things like that. Yeah, got it. Okay, yeah, we can take it offline.
A
All right. Blaine, you disagree with Rook's behavior, dot dot dot... and you might be muted, Blaine.
E
But I don't know if there are concerns with changing behavior that is legacy. I feel fairly confident that on upgrades, because of the way that Rook stores information, those disks wouldn't be lost for clusters that do upgrade, but that behavior would change, and it's a change that could be noticeable and is something that we rely on in our tests. So there would have to be some changes to the integration tests; not that that affects users, but it could be something that they find different.
A
And Blaine, I'm actually a little surprised at the repro scenario that you described there, because from my recollection, if you say to use all devices, like the setting that's true in the cluster CRD, and there are no available devices, then the OSD daemons will treat that as: oh, I can't fulfill this request; you asked for devices, but I can't do it, so I'm not going to do anything. That's what I thought the behavior would be.
A
The only time I thought an ephemeral pod directory would be used for an OSD is if you specify not to use all devices, and you also don't specify any data... sorry, any directories either; then it would just give you something so that you're not left with a completely empty cluster. So the repro scenario you mentioned, I was actually surprised at that happening.
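[For illustration, a minimal sketch of the storage selection being debated, using the 0.9-era ceph.rook.io/v1 cluster CRD fields:]

    apiVersion: ceph.rook.io/v1
    kind: CephCluster
    metadata:
      name: rook-ceph
      namespace: rook-ceph
    spec:
      dataDirHostPath: /var/lib/rook
      storage:
        useAllNodes: true
        # useAllDevices: true with no available devices -> the OSD daemons
        # cannot fulfill the request and create nothing.
        # useAllDevices: false with no devices and no directories listed ->
        # the fallback described above: an OSD in an ephemeral pod directory,
        # so the cluster is not completely empty.
        useAllDevices: false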
E
I had a thought for what we could do if we want to have the sort of Rook out-of-the-box experience be the same, where users can get a workable Rook cluster out of the box with nodes that don't have any disks: it would be to change, not the CRD definitions, but the CRD specs in the cluster examples, to have them set to use the dataDirHostPath as a directory-based OSD.
D
What about simply adding the dataDirHostPath, which is normally /var/lib/rook, to the directories, and making it valid again that you can have an OSD in the dataDirHostPath? Or, for example, making it valid to use /var/lib/rook's data on the host path, and having valid /var/lib/rook/osds directory entries in the cluster-wide directories list? Wouldn't that have already basically mitigated it? Except, I think we have some kind of check.
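[A hedged sketch of the change being floated for the checked-in examples: making the directory-based OSD explicit rather than relying on the implicit fallback. This is the suggestion under discussion, not a committed change:]

    spec:
      dataDirHostPath: /var/lib/rook
      storage:
        useAllNodes: true
        useAllDevices: false
        directories:
        - path: /var/lib/rook   # explicit directory-based OSD under dataDirHostPath

Since the checked-in examples are only read at the initial kubectl create, a change like this should not affect clusters that are already running, which is the backwards-compatibility point raised later in the meeting.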
E
So if a user were to just take that and create a Rook cluster to try it out (I don't remember what the defaults are set to), the user would get an OSD on nodes that don't have disks, which might help their out-of-the-box experience with Rook, because they would have something that was working. I think we could change this behavior and still keep the same out-of-the-box behavior by...
F
From the Ceph perspective, I think anything that creates an OSD that isn't backed by an actual device by default makes me very nervous. So I like that suggestion, where the example, you know, for the get-it-up-and-running-in-a-toy-mode case or whatever, just has an extra property that specifies the directory to do it. I wouldn't want it to do that by default.
A
That's definitely, from a lazy-developer perspective, which can be argued to be a valid viewpoint... I think that, you know, I just used the default cluster.yaml that's checked into the source tree, and what that does is it creates an OSD in a default... sorry, in an ephemeral pod directory. So you end up with an OSD without having to modify the YAML at all. I think that the behavior being discussed here would be aligned with still supporting the lazy-dev scenario.
A
You know, we've already had this separation where the checked-in manifests are simple and work for, you know, a single-node cluster, for lazy developers, and then the documentation has, you know, redundancy, erasure coding, multiple nodes, and all that sort of stuff. So I think this is all still in alignment.
A
I think that what everyone's mentioning here would still be aligned. And then also, if we make changes to the committed manifests, like the cluster.yaml, I don't immediately see a backwards-compatibility issue there, actually, because those only get used, you know, at the first kubectl create; that should not affect existing clusters out there. So I think this might all work. It requires some more thought, but I think we can maybe reach everyone's goals here.
A
I think so. Let's follow up on 2288 then. Great, thank you, Blaine. Thank you. All right, so an FYI to everybody, since we already talked about this: the KubeCon Barcelona call for proposals deadline is this Friday. So if you want to submit a talk on Rook, or whatever your heart desires, you must have those in before, I think, end of day Friday.
D
Yeah, IPv6 is stupid. Okay, no, I tried to debug it; it seems like we had some kind of corruption there, for whatever reason. So I already looked a bit into it with Sage, but I'm not sure if I really, completely got behind it already. Anastasia, yeah. It's not a lot of people. Yeah. Sorry.
F
I have here that we probably need to rethink the logging situation, to make sure that we don't ever get into a situation where we don't have logs; that's very disturbing. And then figure out how to reproduce it, or find somebody who has logs, so I can figure out what happened.
F
I need to see a log from the first time the OSD failed. Apparently, when you destroy the container, the logs go away, and something destroyed the container and tried to recreate the OSD or something; I wasn't fully following what it was. But basically, I don't have the OSD output from the first time it failed to come up, and I suspect there's a RocksDB replay error in there. I want to see what it was, but I don't have it.
A
Okay, cool. That's unfortunate, though; definitely a separate topic, yes. And then also a note to maintainers: I think, as part of the call for proposals here by Friday, we need to submit talks for the maintainers' track. I didn't read the full email, but I don't think it's any different than the intro and deep-dive format that we've been accustomed to over the last few KubeCons.
H
I just wanted to say, before we get started on this, there is someone in the CNCF that's willing to look over your CFP before it gets evaluated; I'll copy this list, in an effort to get a more diverse set of end users into the mix. So if anyone feels like they need a secondary review, the CNCF is offering that up. Oh, that's...
I
This is Jeff. Hi, guys; hi, everyone. The one that's on the screen right now is an enhancement to the object user CRD, and then John, my coworker on our team, has a proposal up for dynamic bucket provisioning. We want to use Rook-Ceph as the infrastructure for this. We have a lot of experience with Kubernetes storage, and we bring that to the table, but definitely much less experience with Ceph and Rook-Ceph together.
I
We want to follow what the community is doing and the patterns that the community is using, so we're trying to kind of marry these two concepts together with these proposals, and Travis, I know, has offered some great feedback. We also know about Crossplane and the bucket provisioning done there, and I don't have a clear understanding of how we should kind of collaborate on this yet, so Travis suggested just bringing it up at this meeting, to whoever is controlling the screen. So this is the object store enhancement, which is more minor; it's just some tweaking to the existing Ceph object store user CRD.
And then John has a brand-new proposal for bucket provisioning. But I did go through some earlier Ceph issues, Rook-Ceph issues, and I see that this has come up before and there's been some dialogue in that area in the past.
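[For context, a minimal sketch of the existing object store user CRD that the enhancement is tweaking, as it appears in the 0.9-era Rook-Ceph examples; the names here are illustrative values:]

    apiVersion: ceph.rook.io/v1
    kind: CephObjectStoreUser
    metadata:
      name: my-user
      namespace: rook-ceph
    spec:
      store: my-store              # the CephObjectStore to create the user in
      displayName: "my display name"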
A
Yeah, that's a good idea, Bassam. And I could say, just right now in the immediate term, that I took a pass through both of these proposals earlier today, and I was trying to look at them from the perspective of, you know, the user experience, first off for users that want to specifically focus on Ceph and get buckets provisioned for their usage, but also to look at it from the perspective of, you know, a general bucket provisioning solution, an integration with the problems that Crossplane is trying to solve.
A
You know, workload portability of applications. So I did not initially see any incongruence or any incompatibility between those. I believe that the approach you guys are taking should be a decent approach for the individuals that want to focus on just Ceph, but also could be aligned well with people that want to have a more general-purpose bucket provisioning scenario. So we'll talk through more details on this to make sure that we're further aligned.
F
And just a quick comment on that: I think that the goal in adopting this style was to enable, I think, what you're talking about with Crossplane, where you would have a storage class that might be linked to a Ceph cluster with RGW, or might be linked to Minio, or might be linked to, yeah, S3, an S3 region or whatever, and the experience from the application perspective would be the same.
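[A hypothetical sketch of the storage-class-based pattern just described; no such bucket API had been settled at the time, so the provisioner name and parameters here are purely illustrative:]

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: object-buckets
    provisioner: example.io/bucket    # hypothetical: could be backed by
                                      # Ceph RGW, Minio, or an S3 region
    parameters:
      objectStoreName: my-store       # hypothetical backend-specific parameter

An application would then request a bucket through a claim-style object that names only the storage class, so the application manifest stays the same regardless of which backend the class points at.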
H
And it's definitely a request from the community. Lots of people in the past have used the service broker, service catalog approach to try to do buckets, and tried to treat it as a different, large entity, and the idea is to get this working and viable, and to go back to the Kubernetes community with it as a possible overall approach to bucket provisioning, because many people I've talked to in the community feel like we need to establish best practices around this, and I think this is an excellent footprint for that.
A
At least, the lessons I learned from wiring up that Postgres dynamic provisioning demo that I gave at KubeCon, which uses, you know, the Rook CockroachDB operator and CRDs, kind of gave me a lot of clarity about how this whole thing can work, and how you can do a general provisioning approach with a specific provider implementation and have everything kind of work well together.
C
You know, it's one of those minor things that just hasn't been finished yet, and it's only more painful when we're releasing. But also, master builds really hit this issue where, if there's any random failure in the integration tests, you don't have a published build that matches; you have to rerun the build until you get a green build to succeed. Yeah.
C
But besides the integration tests, I feel like we want the published build to match what's in the branch, and if that's out of sync for any time longer than several minutes, then from that perspective it feels like it's broken to me. And we'll always have integration test issues, no matter how hard we try to fix them all, so it's a short-term thing with the integration test issues and a long-term stability thing too. So, I don't know, what are your thoughts?
G
Let's have a discussion with folks and see if we can set a bit of a plan for what we want to do, not even just for 1.0, but just what we want to do to improve the testing of all this. I'd love to even dig into how we do performance testing, and how we do more long-haul testing around these releases.