From YouTube: 2019-07-30 Rook Community Meeting
A
All right, the recording has started. This is the July 30th, 2019 Rook community meeting, and we will go ahead and start the meeting here. First off, a look at current milestones and patch releases, starting with the 1.0 patch release. I had in the agenda to talk about this high CPU usage issue, which seems to have had a lot of activity in the past couple of days, and in particular some traction on a solution as well. So I guess we can go ahead and talk about that now.

Did it look like there was a potential fix around, you know, setting limits on the resources, especially around memory used by the OSDs? Was that actionable, though, to be patched in a release, or is it expected to be a user workaround, where they can update their manifests before they deploy? Or does anybody have a good take on that? Yeah.
B
My perspective on that is that if you set memory limits on your OSDs, and if you run a Nautilus build, it will work in your environment, I think. There are a few people still validating that it works for them, yeah. But it does look like, if you're in low-memory conditions, it just gets into that state where load goes through the roof, CPU and all that. So I don't think we need a patch release for this, assuming that that is the workaround, basically. Yeah.
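For context, a minimal sketch of what that workaround looks like in a CephCluster manifest. The values are illustrative only, not guidance from the meeting; size the OSD memory limit to your own environment:

```yaml
# Hypothetical example: capping OSD memory in the CephCluster CR. On a
# Nautilus build the OSDs adapt their memory target to the container limit.
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14.2.2      # Nautilus; illustrative tag
  resources:
    osd:
      requests:
        cpu: "1"
        memory: 2Gi
      limits:
        memory: 4Gi               # illustrative; size to your environment
```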
A
I don't see a lot of activity on this particular one. It reminds me of, you know, the general issue we've had for a long time about cross-node movement of a volume, when you lose contact with the previous node and you're not necessarily in a position where you're sure that the old client is no longer using the volume. I think that's 1507, if I remember the number correctly, which has been open for quite a long, long time. Yeah. That's a little... okay.
B
I'm actually working on that right now, enabling it by default. It isn't open yet, but both the flex driver and the CSI driver would be enabled by default for Ceph deployments. And then you can choose in your storage class, depending on whether you want CSI or flex, at least for the 1.1 release. And then we need to talk about what the path looks like to deprecate flex, but for now we're supporting both. Yeah.
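As a sketch of what that choice looks like from the user's side, the provisioner field of the StorageClass selects which driver serves a given class (parameters trimmed for brevity; a real CSI class also needs the provisioner and node secret parameters):

```yaml
# Hypothetical sketch: two block StorageClasses, one per driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block-csi
provisioner: rook-ceph.rbd.csi.ceph.com   # Ceph CSI RBD driver
parameters:
  clusterID: rook-ceph
  pool: replicapool
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block-flex
provisioner: ceph.rook.io/block           # legacy flex volume driver
parameters:
  pool: replicapool
  clusterNamespace: rook-ceph
```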
B
So there is a security scan issue in the base Ceph image that the operator uses. Our 1.0.4 release still has this issue, since we were waiting for the 14.2.2 release, which is now out with that security fix. Just looking at this issue, I couldn't tell the severity, so it didn't seem urgent to me, at least not enough to need to release again before 1.1.
A
You know, probably the biggest unresolved process item for us right now, in terms of working towards a potential CNCF graduation, is around our security disclosure process and security auditing as well. We do not have that in place, and that is probably the biggest remaining item that we need to tackle over the next couple of months or so, in a short-term timeframe.
B
Yep, he just wrote this up in our channel in the last hour: what he put together for generating client code around the CRDs for Python. So the Ceph manager has a need to use a Python client to our CRDs, and it seems clear that any of our storage providers would benefit. Just like we generate Go client code, we could generate Python client code, for consumption by whatever clients want to access it. So it seems like a valid thing to put this in the repo for code generation.
B
So what he put together is that the Python code would be generated based on the CRD validation, you know, instead of exactly the types that we have. The validation I'm talking about is in the CRD, the YAML validation, yep. So it would all be code generation based on that. I'm not sure about the upstream code generation for Python, though, based on the types, because it would be optimal for it to be based on the types, which we're already using for the Go generation.
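For anyone unfamiliar with the validation being referred to, it is the openAPIV3Schema block embedded in the CRD manifest. A trimmed, hypothetical fragment of what such a generator would consume:

```yaml
# Trimmed, hypothetical CRD fragment. A Python client generator would walk
# this openAPIV3Schema instead of the Go type definitions.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: cephclusters.ceph.rook.io
spec:
  group: ceph.rook.io
  names:
    kind: CephCluster
    plural: cephclusters
  scope: Namespaced
  version: v1
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            cephVersion:
              properties:
                image:
                  type: string
            mon:
              properties:
                count:
                  type: integer
                  minimum: 0
```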
A
It would be nice, if there is some effort here from the community around Python client generation, similar to the effort that was done for Go clients, for that to live in a common place, like, you know, upstream, for others. If there hasn't already been an effort upstream, then any sort of effort here would be a benefit to the greater ecosystem. I'd be surprised if there wasn't something already for Python as well.
C
It, like, parses through all of the Go client stuff for Kubernetes, and it generates, like, Python, and it generates a lot of stuff. I think it generates, like, Ruby and just loads and loads of stuff, and it's all based on that Go library, which sounds like it might be what we're talking about here.
A
Yeah, so in general, you know, that sounds like it would be a potentially useful thing. Maybe that could be optional, like, on a storage-provider basis: if you really want to have Python code generated in this way, then that could be opted in or opted out, whatever it may be. But in general, any effort there being done in a common way, you know, leveraging upstream efforts on this, sounds like the right approach.
A
Well, I was hoping, if either Giovanni or Dimitri were online (I don't see either of them here), we'd get an update on the Google Summer of Code project, the effort that Dimitri is mentoring and Giovanni is undertaking around the multi-homed cluster network spec. But I don't think we have either of them online, so I don't think we're going to get an update, at least from the horse's mouth.
A
Okay, all right! So, it looks like Vishal added this topic here about YugabyteDB. Vishal, are you online? Yes, I am. Excellent. Would you go ahead and speak to this? Sure.
C
So, the initial design is there, and there's an issue on the repo for it; the actual work is also in progress. We already have create and delete working, with YugabyteDB as the underlying storage engine, and we're still working on integration tests and some of the update parts. So we were just wondering, would it be reasonable to assume that we'll have time for reviewing this for 1.1, or would that be too optimistic?
A
My gut feel here is that, you know, as a beginning, alpha-level effort, that would be great to get into 1.1. I'd be thrilled to see that. And if you're already working on integration tests for it as well, to me that's a good sign that you will be able to reach, you know, something of an alpha-level quality to put out in 1.1 without significant concern.
B
I agree, if it's already in progress. I mean, just to think about the timing: we're looking at a feature freeze around August 25th or so, I think, and then targeting September 10th as a release date, potentially. So, yeah, we just want to make sure we give feedback early and often as you open the PRs. So, yep, let's go for it.
B
It's real hot in here... maybe not, okay. Yeah, I'll present this one. So I think there's a general pattern here that can be used across storage providers, so I just wanted to bring this up as interesting for the community. The fundamental challenge here is that during the upgrade process, when you're upgrading a Kubernetes cluster, you need to take a node down, so you cordon it.
B
You know, the pods are evicted, and the way you can manage how that works is with pod disruption budgets (and there are machine disruption budgets), but those core concepts don't really seem to map to what we need for storage. For example, with pod disruption budgets you need to tell it how many pods can be disrupted at a time, or what percentage, but for OSDs, at least, yeah...
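To make the mismatch concrete, here is a minimal, hypothetical pod disruption budget over OSD pods. The selector label is assumed for illustration; the point is that maxUnavailable counts pods, with no notion of nodes or failure domains:

```yaml
# Hypothetical sketch: a PDB over OSD pods. maxUnavailable is a pod count
# (or percentage), not a node count, so it maps poorly to CRUSH failure domains.
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: rook-ceph-osd
  namespace: rook-ceph
spec:
  maxUnavailable: 1            # at most one OSD pod evicted at a time
  selector:
    matchLabels:
      app: rook-ceph-osd       # label assumed for illustration
```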
B
We might have a different number of OSDs per node, and really what we want to do is control how many nodes are being disrupted at a time, because that's how CRUSH, for example, is designed: let's make sure we survive a node going down, or multiple nodes in the same zone going down, at that level. So this design here is around how we use PDBs, with the operator managing those PDBs, to get the behavior of...
B
...nodes going down, by using a canary pod and things like that. So I don't think we have time to talk through the whole design here, but I think it could be good to see what others are doing in the storage community too, as far as how to manage upgrades with storage, where we need to make sure storage is healthy before continuing with each node upgrade.
A
So, you're saying... you know, previously, in the operator logic for OSDs, we would sequentially walk through node by node to do various operations, like, you know, adding new OSDs to that node, or bringing them down, or potentially recreating them as well. So that was kind of at the operator level itself, and not really using any of the primitives in Kubernetes, such as the pod disruption budgets. What is your take on that?
A
Yeah, I think... forgetting that, yeah. No, no, I'm talking about kind of an older approach that didn't really involve any sort of cordoning or draining of nodes. So I think that may be a newer, more recent addition to the flow here, one that the old operator-based logic wasn't necessarily equipped to handle. So that could be a very good reason here that, you know, the operator level itself is insufficient to safely guide this.
B
Yeah, well, and to be clear (and maybe you're already clear on it), this is talking about upgrades happening outside of the storage operator itself. Assuming Kubernetes itself is not being upgraded, the operator already should be managing the upgrade of the storage platform; this upgrade procedure is talking about Kubernetes itself being updated. Oh.
B
So it's a different level of upgrade. We have to use these PDBs to basically work with it. It feels kind of hacky to some degree; the way we have it feels like a workaround, and it feels like we should get something into native Kubernetes, some constructs that help us with this, you know, allowing a storage platform to really be stable while you're upgrading. But anyway, this is a solution for the short term, at least until some bigger solution is worked out.
B
You know, as needed, the operator would see: oh, it looks like we're trying to upgrade, so I will remove a PDB and then put it back when that node is done. So it's managing the PDBs, and it requires some heuristic around trying to figure out when a node is being drained or cordoned, so I'm curious to see if that heuristic will actually work out.
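For reference, the cordon half of that heuristic has a clean signal: `kubectl cordon` simply sets the node's unschedulable flag, which an operator can watch. A minimal sketch of what a cordoned node looks like:

```yaml
# A cordoned node: kubectl cordon sets spec.unschedulable, and uncordon
# clears it; drain additionally evicts the pods.
apiVersion: v1
kind: Node
metadata:
  name: worker-1        # hypothetical node name
spec:
  unschedulable: true
```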
A
It uses pod disruption budgets as well in its operator implementation, but, you know, there's not much of a... but I wonder if, you know, this could be useful there as well. So at least there's potential that this could be a general storage... you know, have general applicability to the wider storage ecosystem. Right, exactly.
C
Hey Travis, hi everyone. No, Lonnie, I think John and I are monitoring, and we're working with Jeff, and whenever he has a question it's a priority, but I think he's got the initial rounds figured out. So it's really just availability on our part, but we're not doing anything.
A
I may indeed, okay. Well, if that's everything that we have for this week, then we can go ahead and adjourn. All righty, something just came in the chat. Oh, Sharon... yeah, Erin. Thank you, Erin. All right, everybody, we will talk again in two weeks. Thank you. Sounds great. Thanks, talk to you later.