From YouTube: 2022-05-03 Rook Community Meeting
A: All right, recording started, and this is the May 3rd, 2022 Rook community meeting, so we can hop on into our milestones here. We got a couple of patch releases out just last week, so we can focus on what's next. For 1.8.10, is there anything driving this one, or do we not actually have anything scheduled for that one?
B: No, nothing. I didn't put a date on a potential release there, just because I don't know that we've even had any backports since 1.8.9. It's been over a week already, but yeah, I think things are pretty stable there for now, and we're focusing on 1.9. I'm sure we'll do a 1.8.10 at some point, there's just no driver for it yet.
A: Yeah, and then we did a 1.9.2 that was just very recently, wasn't it?
A: I wonder if, here in the agenda section, we should have something like a "latest releases since the last community meeting" item, with links to the release notes directly in here, so that we have the most recent thing easily accessible.
B: Yeah, that's a good idea. Even under the milestone checkup, we can always put the previous release.
A: And I'll correct the annoying auto-capitalization of the "b". All right, anything you want to highlight from the release notes, anything special to mention in 1.9.2?
B: Yeah, the minimum supported Kubernetes version was updated to 1.17. 1.16 just has some limitations that weren't worth supporting, and nobody pushed back on that update, so we were good there. And then, yeah, some other things: Ceph CSI was updated, and other bug fixes.
A: Yep, that sounds like a good one. Are there any blocking issues, or issues that folks need some resolution on?
B: Oh, looks like we need to clean up the board a bit there. There's that one in the blocking column which is merged, yep, so if you want to drag it over to done. Okay, basically, there may still be something else to watch for here, which is that the admission controller is now enabled in a lot of clusters, as long as cert-manager is installed in the cluster, and that's what this bug is about.
B
So
I
think
we,
I
think,
there's
still
follow-up
there
with
operator
stability
with
the
emission
controller,
but
at
least
increasing
the
memory
limit,
hopefully
will
help
that
the
most
in
the
meantime.
A: Yes, okay, yeah. It looks like for some reason I'm not signed in here, and I'd have to do two-factor auth and whatnot to get in. If you can drag that, it would possibly be easier. Maybe on this Chrome profile I'm just not signed in.
A: Okay, so that is in, yeah, that would be needed for 1.9.3. In your opinion, does that accelerate the need for a 1.9.3, or is the normal release cadence okay? It's not like we need to push this out right away.
A: I guess we can move ahead into the community topics section then. It looks like we've got some updates: the recording did get finished and submitted for the intro and deep dive talk, so thank you for doing that, Travis and Blaine, definitely appreciate that. Any notes or feedback or experiences to share on that one?
A: I'll be tuned in, yeah, I'll be sure to share that if I see any mentions there. Okay, and then any other questions about KubeCon, or anybody that's going to be there that wants to shout out real quick? Still no plans for any Red Hat folks there, right? And Richard, you're not making the long trek to Valencia, I assume?
A: Okay, no. I saw a lot of discussion just today and yesterday, I guess, on the node loss issues. That's been something that has been unresolved for, let's just say, a while, so I was excited to see some movement on this. There's now a bit of a proposal here as well. Is that possible or feasible right now, or what's the resolution or next steps for this one?
B: Not yet. I mean, I saw some responses, but I haven't had a chance to really think about them or respond yet. So yeah, I'm really curious, or interested, in getting some movement here, because this is our oldest issue. It's a Kubernetes-wide issue, it's not unique to us, but having some sort of mitigation here... people need something to improve this situation. Even if it's... I mean, it's never going to be 100% automated, because you really can't risk data corruption with multiple writers.
B
So
that's
the
main
thing
we
got
to
make
sure
we
keep
the
data
safe
and
not
getting
corrupted
but
help
out
in
this
scenario.
So
we'll
see
yeah
I
just
put
in
the
agenda
as
fyi.
Yes,
we
we
need
to
do
something
here.
It's
been
delayed
long
enough.
That's
somebody
else's
problem,
but
it's
really
our
problem.
A: Are there any updates to some of the mechanics in Ceph itself, Travis, that could help out with this scenario? I know when we looked at this like two years ago, there were some things with, I don't know, maybe it was RBD, that would not really make it feasible to even enable this type of scenario. Have there been some movements or advancements that you're aware of that specifically could help here?
B: It could be checking for duplicates of the pods that were failed over, making sure we don't have duplicates, and then re-enabling it. So at least as a first step, I was thinking let's just blocklist those things, and then in the future we can look at unblocklisting things if we really find it safe in some scenarios, sure.
D: From my perspective, I think the biggest question here would be more or less whether it would be something Rook should do, or whether Ceph CSI should be doing it, because Ceph CSI handles the volume mapping, mounting, and so on. So, I don't know, I'm kind of seeing this as more of a CSI task, or not necessarily, say, the Ceph CSI driver part, but well.
B
Yeah
since
then,
there
was
a
comment
from
niels
from
the
csi
team
about
why
he
saw
it's
not
or
can't
be
a
csi
responsibility
and
that
rook
is
a
good
place
to
do
it.
So
yeah,
it's
a
great
question.
It
feels
like
a
csi
thing
for
sure,
but
it's
I'm
not
sure
csi
has
the
context
either.
A: Were there other notes that you wanted to add on this in the agenda, Travis, or is it just kind of, hey, this has some traction now, this is what the thinking is, and then feedback from folks as needed?
A: All right, and then another agenda item, Travis, for Ceph telemetry.
B: Yeah, so just before this meeting, Seb and I were in a meeting where someone from Ceph gave a presentation on telemetry, and it's like, wow, this is really powerful. We could do a lot with this in Rook to collect data about Rook clusters and, you know, add Rook metrics to it, so we can see what versions of Rook maybe are influencing crashes. I mean, the focus will be around Ceph metrics, of course, but I think we can add to it from Rook and make it just an integrated experience.
B
I
don't
know
if
it's
even
working
well
with
rick
right
now
or
if
there's
even
a
way
to
really
tell
if,
if
the
telemetry
enabled
through
seth
can,
if
it
can
report,
even
whatever
cluster
is
so.
This
was
just
yeah.
I
brought
up
an
hour
ago,
so
I
just
wanted
to
mention
here
that
hey
there's
something
we
can
do
here
to
really
improve
tracking
and
even
feature
development
and
see
when
we
can
deprecate
features
if
nobody's
using
them
and
things
like
that.
C: Because we know the UI is kind of heavily used with Rook as well. I mean, there have been a lot of issues with the UI over the years, and I'm sure there is a lot of traction, with people using the UI with Rook deployments too. So we would hope that they would see that message and turn on telemetry too, although we wouldn't be able to distinguish between a Rook cluster and a regular one.
B: Right, and beyond these public charts they have here, there are also some private charts for analyzing crash dumps and things I want to get access to. Not that I want to look at crash dumps from Ceph, but there's a lot more data behind the scenes than what they've got publicly, too.
A: Cool. And Travis, what did you mention you thought the experience would be for enabling this? Like, would it be just an option on a custom resource?
B: Yeah, I think it would make sense; generally, if you can configure something with Ceph, we'd probably have an option in a CR that would enable it. So probably the cluster CR in this case, yep, some sort of option there.
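As a rough illustration of the kind of cluster CR option being discussed, a sketch is below. This is hypothetical only: the `spec.telemetry` block is an assumed field name, not a Rook setting that existed at the time of this meeting; the `apiVersion` and `kind` match the existing CephCluster resource.

```yaml
# Hypothetical sketch of enabling Ceph telemetry from the CephCluster CR.
# The spec.telemetry field is an assumed name for illustration only.
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v16.2.7
  dataDirHostPath: /var/lib/rook
  telemetry:
    enabled: true   # opt in to reporting anonymized cluster metrics upstream
```

The operator would presumably react to this setting by enabling the Ceph telemetry module on the cluster and tagging reports so Rook-managed clusters can be distinguished, per the discussion above.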
A: Cool, that seems pretty interesting. I've always been a fan of this sort of data to help project decisions as well: what features are working well, what features are being adopted (not who, but what), and when things can be deprecated and removed, and stuff like that. So that's definitely useful to inform the project. I like this idea for sure.
A: Cool, we can link that there, so it's available in the agenda. Okay, cool, that's everything that was on the doc. Any other agenda items or things that folks want to bring up?
D: I'm not sure if you have talked about it, because I was a few minutes late, but who's going to KubeCon in, what was it, two weeks, I think?
A: Me, Alexander, the...
B: So Mike will be there working the Rook booth; he's about the only one from Red Hat, I think.