From YouTube: 2023-07-25 Rook Community Meeting
A: Okay, recording has begun for this Rook community meeting of July 25th, 2023. I think we have a small agenda today; we'll start with our milestone checkup.
A: For 1.12, we did just release on July 18th.
A: And yeah, I think 1.12.1 will probably release on Thursday.
A: We have, I think, a couple of PRs, yeah. The main PR that we're looking at is that the operator can panic when sensing a node under certain conditions with, I guess, more custom PVCs.
B: The issue? Sure, let me just open it up now so you can look at it for a moment.
A: Okay, yeah, those are the only things we have on the project board right now that are blocking the release. I don't think there's been anything else.
C: Sure, howdy, my name is Daniel Hicks. I currently work as an infrastructure engineer at four weeks; I'm part of the storage team, and we run the Rook operator to manage the CSI.
C: We have been chasing a bug where we get what we call zombie mounts on nodes. There is the global staging mount on each node, and you will commonly see an error like the one I put there, from the device mount refs check for the volume: it gives information about a specific PVC that it can't unmount, because there is still a mount referencing that global staging mount.
C: Besides hopping onto the node and unmounting the volume by hand. But this poses an issue, though: we still get this error inside of kubelet, because that reference is always present even after. It used to be thought that we could replicate it by bringing up two pods on the same node with a reference to the same PVC, but we've also found that it just happens with a single pod as well.
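For context, the by-hand cleanup Daniel describes comes down to finding the leftover CSI staging (globalmount) mount on the affected node and unmounting it. Below is a minimal sketch of that, assuming the default kubelet data directory /var/lib/kubelet and that a lazy unmount is acceptable; neither detail comes from the meeting, and which staging path is actually stale still has to be confirmed by hand against the PVC named in the kubelet error.

```python
#!/usr/bin/env python3
"""Sketch of the manual node-side cleanup discussed above: list the CSI
staging ("globalmount") mounts on this node, and optionally lazy-unmount
one of them once you have confirmed it is the stale one.

Assumptions (not from the meeting): the kubelet data dir is /var/lib/kubelet
and staging paths end in /globalmount. Run as root on the affected node."""
import argparse
import subprocess

KUBELET_DIR = "/var/lib/kubelet"

def staged_mounts():
    """Yield CSI staging mount points parsed from /proc/self/mountinfo."""
    with open("/proc/self/mountinfo") as f:
        for line in f:
            mount_point = line.split()[4]  # field 5 is the mount point
            if (mount_point.startswith(KUBELET_DIR + "/plugins/")
                    and mount_point.endswith("/globalmount")):
                yield mount_point

def main():
    parser = argparse.ArgumentParser(
        description="List CSI staging mounts, or lazily unmount one by hand")
    parser.add_argument("--unmount", metavar="PATH",
                        help="staging path to 'umount -l' once confirmed stale")
    args = parser.parse_args()
    if args.unmount:
        # Lazy unmount, so a hung CephFS kernel client does not block the call.
        subprocess.run(["umount", "-l", args.unmount], check=True)
    else:
        for path in staged_mounts():
            print(path)

if __name__ == "__main__":
    main()
```

List first, then pass a single confirmed-stale path via --unmount; as Daniel notes, clearing the mount itself does not necessarily clear the reference that kubelet keeps logging about.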
B: Did you try to see if maybe Madhu can add some input on the issue? He's one of the Ceph CSI people. Maybe it's something on the Ceph CSI side.
C: I've found issues inside of Ceph CSI related to this as well, and they link back to a kubelet bug; the kubelet bug was then closed due to staleness, and they closed the Ceph CSI side issue because it was a Kubernetes problem.
C: I'm currently digging through kubelet to try to find a fix for this. It was suggested that I attend to see if anyone with more knowledge around this interaction had any ideas.
B: If you could add the links to the agenda item, that would be good. My advice here would basically be: post it in the Ceph CSI Slack channel as well. Madhu Rajanna, I think... sorry, he's, let's say, the Ceph CSI person, from what I know.
B: So maybe he has some input, or is in some way able to help you further there. Besides that, I don't know if there's a CSI SIG group, but if there is, maybe it's worth trying to see if anyone there is from the kubelet team or the Kubernetes storage team, and whether they are able to revive the issue.
C: Yeah, I had reached out on the Kubernetes Slack to ask questions. I was informed that it is a very interesting bug, but that unless we had some form of support contract with Red Hat it would likely not get worked on.
B: Yeah, so I guess my advice still stands. If you try to reach out to Madhu, maybe he can help; he is at least a Ceph CSI person, so I think he has connections with the people from the CSI side. Hopefully then maybe someone from the kubelet storage team, or Kubernetes storage in general, can help you there. As I said, I would try to post it on Slack, on the Rook Slack, and hope that Madhu responds to it.
B: Yeah. Okay, and feel free to bring it up in another community meeting as well; maybe then the right person is in the right place, or maybe someone else just takes a look at it and sees the obvious, like, you know, when you're debugging something.
A: Sure, yeah, certainly Madhu is the best person to talk to about this issue.
C: I'll dig through that issue; since it's referenced there, I'll find it.
A: Yeah, and that is certainly interesting, I'll give it that. And it's curious that this hasn't been, I mean, it obviously has been reported before, but it hasn't been reported more often. I wonder if there are configurations that make this case less likely to occur.
A: By virtue of it using the kernel driver, you could potentially try working around the issue by using the user space driver. I think those user space drivers do have a little bit less throughput, so if you have bandwidth-sensitive applications they're certainly not ideal, but if this is blocking things from proceeding at all, that could be an option.
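For reference, switching CephFS volumes from the kernel client to the user-space (FUSE) client is typically done per StorageClass through the ceph-csi mounter parameter. A rough sketch using the Kubernetes Python client follows; the StorageClass name, clusterID, fsName, and secret names are illustrative placeholders, the mounter parameter should be checked against your ceph-csi version, and none of this is spelled out in the meeting.

```python
"""Sketch: a CephFS StorageClass that asks ceph-csi to use the user-space
(ceph-fuse) mounter instead of the kernel client.

All names below are placeholders; adjust to your Rook/ceph-csi deployment."""
from kubernetes import client, config

config.load_kube_config()

sc = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="rook-cephfs-fuse"),  # placeholder name
    provisioner="rook-ceph.cephfs.csi.ceph.com",             # typical Rook default
    reclaim_policy="Delete",
    parameters={
        "clusterID": "rook-ceph",                            # placeholder
        "fsName": "myfs",                                    # placeholder
        "mounter": "fuse",  # use ceph-fuse rather than the kernel client
        "csi.storage.k8s.io/provisioner-secret-name": "rook-csi-cephfs-provisioner",
        "csi.storage.k8s.io/provisioner-secret-namespace": "rook-ceph",
        "csi.storage.k8s.io/node-stage-secret-name": "rook-csi-cephfs-node",
        "csi.storage.k8s.io/node-stage-secret-namespace": "rook-ceph",
    },
)
client.StorageV1Api().create_storage_class(sc)
```

The mounter choice is captured when a volume is provisioned, so existing PVCs would likely need to be recreated against the new StorageClass to pick up the FUSE client, which is where the throughput trade-off mentioned above comes in.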
C: Yeah, we already have issues with throughput not being super great. Okay, so the main thing that causes issues is that we already have locking issues with CephFS, and so this sort of compounds it when it's not super obvious where the client blocking is, and it causes just an exponential spew of errors inside of kubelet.
A: Interesting, yeah, definitely. I can also reach out to Madhu and mention these as well; I mean, I would encourage you to as well, yeah.
A: Yeah, I think unfortunately not, but hopefully we can help get it resolved still.
B: Maybe many people hit this but, you know, don't really realize it, depending on what the effect really is in the end. Yeah, I guess I would put it like this: for example, I don't know if it's necessarily kubelet or CSI, or whoever is messing up there, depending on how you see it.
C: Okay, you'd specifically only really ever notice this if, as in our example, you have the kernel client, and it's specifically the leftover mount that's hanging things up. If you don't ever see that, then realistically, yeah, this would just be a couple of log lines that you can ignore.
A: Yeah, I think Daniel's the only person on the call that I don't recognize from our normal huddles. Certainly, if anyone has any other topics they want to bring up, or questions, feel free; this is definitely the time.
A: Sweet, sweet sound of silence. Right, well, thanks for joining us, Daniel; we really do like hearing from users. I mean, I know this is not a great issue that you're having, but it's good to see users getting engaged so that we can help you out. And no worries. I guess that concludes our July 25th meeting, and I'll follow up with Jared to figure out how to get this done.