From YouTube: 2020-10-20 Rook Community Meeting
A
All right, the recording has started, and this is the October 20th, 2020 Rook community meeting; that's a lot of 20s there. We will talk about it a little bit more later on, but we are now a graduated project with the CNCF, which is super cool, super awesome, and we worked long and hard to get there. So we'll talk more about that later, but that is definitely the biggest news since the last community meeting, I'd say. Let's jump into the milestones.
A
Now, we do not have any patch releases planned for 1.3, but for 1.4 we do have a patch release planned.
A
Yeah, I think Ahmad had some documentation fixes that he was working on to streamline the end-to-end NFS operator experience a bit, but overall the code was updated and it's being actively maintained, so yeah, the NFS operator itself, I think, is in good shape. That's good.
A
Okay, cool, so no sense of urgency to get a patch release out here. You mentioned that the target date is the 5th of November, so we've got a little bit of time before that would actually be shipped out.
A
All right, 1.5: we would be targeting that first minor release, 1.5, the week before KubeCon; it's targeting November 10th, and I think there will probably be a lot more active development on that one than there is on some of the older backport releases that are in maintenance mode.
B
Yep, I mean, there's a lot of feature work in progress; I don't know what needs to be talked about here. Yeah.
A
Well, speaking of the NFS operator, this is one of the final features that Ahmad had that hadn't been merged yet. I just took a note myself to go ahead and review this one, to hopefully get it finalized and merged in so that it could be included.
B
Yeah, I think that's all I've got for the 1.5, so November 10th. What is that, three weeks out or so?
A
All right, let's keep on moving then, yep. So that's all the milestones: we'll do a 1.4 patch release in a couple of weeks, and then for 1.5 we will be targeting getting that out before KubeCon, by November 10th. So we can move ahead to the community topics section, and once again we can mention that graduation was successful.
A
It took quite a while to get the Technical Oversight Committee to all cast their votes and get it finalized. We did not get any negative votes, so nobody voted against it; it just took a while for people to actually go read the due diligence and work through an assessment for themselves. But it passed with a sufficient number of votes, the CNCF put out a press release about it, and Travis and I did a blog post about it.
A
Yeah, exactly, Smith Tower; that's where that is now.
A
Seattle, yep. So we also did a press briefing with a journalist from Container Journal, and then Travis...
A
Did you already do your other press briefing? What format was that? Was it like a live interview, or what was that?
B
Oh yeah, there's going to be another one, yeah, a video interview. Oh, it hasn't happened yet; it has not happened yet. It's in about another week.
A
Well, I guess it's not front and center, but it's a cool site, though, yeah. That's a good opportunity too. That's exciting, Travis! So yeah, good luck on that; not that you need it, but it will be super cool.
A
Basically, you know, the graduation is a long time coming, with a lot of effort across many releases from the entire community and many contributors, hundreds of contributors. So great work to everybody for investing in maturing the product to a point where it has been vetted by the foundation and the Technical Oversight Committee as mature enough and ready for broader adoption in the ecosystem.
A
So that's a really pretty impressive milestone to hit, to reach graduation, similar to other projects in the Kubernetes ecosystem. So that's great, great work; I'm really happy to have that happen. Yeah.
A
Yeah, but it's a bit of a snowball-down-the-hill sort of thing, maybe, right? We'll just keep doing what we're doing. Cool, all right. Travis, is Taro on the call today? No.
A
So yeah, that's awesome, to have some more expertise on the team and more people power to keep investing in the features, keeping pull requests merged and moving forward, and stuff like that. So that's awesome.
A
Absolutely. Alex, did you bring this up, or is this Travis's?
E
Was that from last week? I think I might have copied everything; I might have created this week's agenda and forgotten to remove it.
E
It's more or less a question of whether we should do a separate repository or not. A separate repository would, at least from my perspective, just make things easier with the setup we currently have going on. And, as I think I said last time, the argument is that it would allow us to push the Helm charts forward more easily and not make them too dependent on the Rook release, especially when looking at what people want in the ticket that I created this design doc for, where it's like: what if there were a Helm chart which brings defaults, or simple toggles for certain features, you know, kind of a wrapper. Yeah.
E
Well, as I said, Helm can in itself be released independently from the software releases. I probably should have mentioned that in the design document: Helm even has an app version, which is normally an indication of "hey, this chart should be compatible with this software version," or something like that, depending on how it's interpreted. And I think, with us being able to move faster with the Helm charts, it might even, at a certain point, be a possibility, at least in my opinion, to deprecate the default YAMLs, aside from the examples, and to just render them through the Helm charts, and have just one place to worry about updating in the end.
E
Yeah, to a certain extent we would use the Helm charts to render the manifests, so that updating the structure of the YAMLs would only need to be done in the Helm chart, and then a "make examples" or similar command would just render the Helm chart and put the output in the example files, so that the example files are always up to date, structure-wise.
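The render step described here could be wired into the build as a Makefile target. A minimal sketch, assuming hypothetical chart and example paths (Rook's actual layout, chart name, and values may differ):

```makefile
# Hypothetical target: render the Helm chart into the checked-in example
# manifests so the examples always match the chart's structure.
examples:
	helm template rook-ceph ./cluster/charts/rook-ceph \
		--namespace rook-ceph \
		> ./cluster/examples/kubernetes/ceph/operator.yaml
```

Running `make examples` before a release would then regenerate the example YAMLs from the single source of truth in the chart.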
E
Well, the thing about Helm is that for a lot of people, with Flux and other GitOps operators, it's easier to just point them at the Helm chart repository and give them the values to install. And I think, if we do it right, this might well further increase the usage of Helm.
E
If we have proper Helm charts for everything, yeah. The idea with a separate repository is really just to, you know, be able to move forward fast and even be able to release Helm charts a bit independently, maybe even for the operators which currently don't have any Helm chart available right now, yeah.
A
Which is completely under our control and, you know, part of our automated build and release process.
E
I'm not sure I mentioned it like that in the design doc; I hope I did. Oh, I just see your comments. My idea would be to basically say: hey, we're going to make kind of a cut, and for the new, improved Helm charts we would start to use a new channel, something like helm.rook.io, to just have one channel really. Because, I think we removed the logic there, but technically it's still there for old versions, like the stable and beta channels and so on. Does anyone remember that, with the Helm charts and the releases we had going on some time ago?
E
Yeah, yeah, but I think, basically, my idea would be to really make a cut and say: hey, from version 1.6 or so on, no new charts will be added to the old system, charts.rook.io, and from then on the new, improved charts will hopefully be added to the new helm.rook.io, for example, something like that.
B
Right, yeah, it all sounds good. I like Helm as kind of a focus area going forward.
E
Yeah, a few people, also from the customer side, as I said, are using GitOps and all that, and for them it's much easier to just say "hey, install this Helm chart with those values" instead of "install those ten or so YAML files." Because, if you think about it, we have the common YAML, we have the operator YAML, we have the cluster YAML, and we have a storage class, which can be two files: the block pool and the storage class.
E
So that's already five files. Let's say we also want the file system: six files; seven for object storage, plus storage classes and all that again. Well, it's not many files, but from a Helm perspective, it's just easier to do a "helm install" and, well, you're done with it, so to say.
A
Having Helm for, you know, the objects, to actually create clusters and do useful things, beyond just having a Helm chart to get the operator itself installed: that's definitely a gap, and I definitely support that. But that also sounds a little independent from having a separate repo for the Helm charts, like a dedicated source code repository. Maybe the design doc goes into details on why, but I would expect you'd be able to do that still from within the single Rook repo.
E
Yeah, it should be possible. The idea is more or less to decouple it from the normal releasing with Jenkins and all that, and to see the Helm chart really as an independent artifact out of Rook, where the Helm chart version used shouldn't matter unless, let's say, Rook 1.6 changes everything about how the CephCluster object looks; then the Helm chart wouldn't be compatible. But we'd just be able to tell the user: hey, this Helm chart is for this Rook version, and it has default values in it.
E
According to how it has been tested. And even being in a separate repo, being able to rely on, let's say, GitHub Actions and all that, without necessarily needing to go through Jenkins, would, from our perspective, make it easier to move forward with adding more Helm charts and moving them forward faster as well.
A
Cool, all right, thanks for driving that, Alex, nice. Okay, and then we also have a quick note here on KubeCon North America 2020: the recordings for our talks are due this week, on October 22nd, so two days from now. How's that going? Did you all already finish that, or where is it at?
B
Yeah, well, it's not finished, but Sebastian and I are planning on recording tomorrow, and then Alexander and Blaine will also join for the Q&A during the session. Alexander, if you feel like you really want to, speak to some slides too. We're officially recording tomorrow; that's the plan.
E
If you need me in it, well, I don't necessarily need to be talking in it, but if you need any help with recording or something, feel free to reach out to me; I'm available, but only if necessary. For recording or anything, or editing the videos together or something, I'll gladly support that process.
A
Thanks for doing that on time, with the aggressive schedule proposed by the KubeCon content committee this year: it's still like four weeks away, yeah, and then they just announced who got a talk like three weeks ago or something.
A
Yeah, maybe next week for some people too. So then there's also a call for CNCF project updates. I think that's probably of particular importance, since we recently graduated, so we'll get some focus time there. There's an email from, I can't remember who it was, but someone from the CNCF asking for recent updates from Rook, so that we can be included in that part of the keynotes.
A
Yeah, exactly, okay, cool, awesome. Thank you, Travis. And then it looks like the only other item on the agenda today is around the conversion to using v1 for CRDs, I think, right?
B
So there is still a little bit of time, but I didn't want to wait until 1.22 comes out to do this. So what this means is, and I'm hoping to get this in for the 1.5 release, our CRDs will now be defined on v1, and they'll be using the schema, so a schema will be required, basically, for all the settings. But for backward compatibility, I'm trying to make sure we get a certain flag into the schema that will ignore, what's the flag called, preserve unknown fields.
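For reference, the flag mentioned here lives in the CRD's OpenAPI schema in apiextensions v1. A minimal sketch of what such a v1 CRD could look like (names abbreviated; the real Rook definitions are much larger):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: cephclusters.ceph.rook.io
spec:
  group: ceph.rook.io
  names:
    kind: CephCluster
    listKind: CephClusterList
    plural: cephclusters
    singular: cephcluster
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              # Keep accepting fields the schema does not list,
              # for backward compatibility with existing manifests.
              x-kubernetes-preserve-unknown-fields: true
```

In apiextensions v1 the old top-level `spec.preserveUnknownFields: true` is no longer allowed; the per-field `x-kubernetes-preserve-unknown-fields` extension is the supported replacement.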
A
And what does that mean for migration or upgrade, Travis? So somebody's using 1.4, and then they upgrade to Rook 1.5, and then, well, I don't quite understand yet the way that the CRDs themselves would need to be migrated. How does that work?
B
Right, so for the CRs that already exist, CRs being instances of CRDs, the API server has an internal conversion that already converts them to v1. So if you query one of our CRs today and dump it to YAML, it'll show that it's actually v1 now, even though we were still defining everything with v1beta1; it converts it for us, basically. So this should only affect creating new CRDs or new CRs.
A
I think so, yeah. So that's good, that there's some API server operation behind the scenes that automatically updates it to v1 for us. It's at runtime, so it doesn't seem like a big impact for existing installations, necessarily.
B
Right, right. And people today who are creating the Rook CRDs, basically when you create common.yaml, will get warnings saying, hey, these have been deprecated, use v1 instead. That's why we've gotten a few bug reports from people saying, hey, we should update, and even a couple of PRs that kind of updated it, but it's a pretty big change, so we're...
A
And what is the impact of this on people that are using clusters like 1.14, 1.15, or something like that?
B
Oh, that's right, I was going to say: people who are using 1.15 or older will need to create our CRDs from the v1beta1 definitions. So we'll have a separate folder under our example manifests where people can create the CRDs from there instead, since the v1 definitions won't work on those older versions; so we'll still support it.
A
Does the Helm chart content need to have both the v1beta1 CRD definitions and the v1 ones, with Helm at runtime deciding, okay, this is Kubernetes 1.15 or less, so I'm going to use the v1beta1 CRDs?
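A common way to express that kind of runtime switch in a Helm template, sketched here as an illustration rather than as what the Rook chart actually does, is to branch on the cluster's Kubernetes version via Helm's built-in `.Capabilities` object:

```yaml
# Hypothetical template fragment: serve the v1 CRD on Kubernetes >= 1.16,
# and fall back to v1beta1 on older clusters.
{{- if semverCompare ">=1.16.0-0" .Capabilities.KubeVersion.Version }}
apiVersion: apiextensions.k8s.io/v1
{{- else }}
apiVersion: apiextensions.k8s.io/v1beta1
{{- end }}
kind: CustomResourceDefinition
metadata:
  name: cephclusters.ceph.rook.io
```

`semverCompare` comes from the Sprig function library that Helm templates include by default.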
A
Cool, all right. Well, people can add comments on the PR there, 6424, linked in the agenda doc. All right, that looks like that's the end of the agenda document. Does anybody have any other topics for discussion that they'd like to bring up in this forum right now?
D
Hey Jared, I have a question; can you hear me okay? (Yep, can hear you great.) So let me know if I'm in the wrong place with this question, but I kind of wanted to go over the Ceph crash collector reconciliation loop, because I had a couple of questions about that.
D
Cool, yeah. So it looks like it's looping through the namespaces that have nodes with Ceph deployments on them, right, to make sure that those have crash collector pods. But the thing that I'm confused about is: if something happens to fail within that loop, it reports the error, then returns, so that means the other namespaces don't get reconciled.
D
Yeah, that's a little confusing, because I've only deployed it with one namespace, right, so you never really run into those issues. The other thing I found was, if you disable the crash collector on any Ceph cluster within any namespace, the code disables it on the Ceph cluster that requested the reconciliation, not necessarily that Ceph cluster.
D
So I just wasn't sure if that was part of the logic or not.
D
Yeah, I just wanted to know: if I were adding something to that loop and I ran into an error, should I return, or should I just continue in the loop? Because it looks like the current way is returning, but it seems like it would be better to report the error and then continue.
C
And just so you know, we have a community meeting more focused on work, code, and questions like this every day. I'm going to send that to you, so that if you have any questions you can join that daily meeting as well, which is actually daily, so you won't have to wait a full week, or actually two weeks, to get onto the meeting we are in right now.
A
Oh, sorry, Travis, yeah. Those technically focused questions are not out of place here at all; it's just only once every two weeks, so getting help on a quicker cadence is definitely useful. But yeah, thanks for picking that up today, Renault; appreciate that.
A
If
no
further
topics
that
we
can
go
ahead
and
adjourn
and
good
luck,
recording
the
talk,
travis
and
sebastian
tomorrow
and
I'll
see
you
all
online.