From YouTube: 2018-11-20 Rook Community Meeting
A: Okay, the recording is started. This is the November 20th, 2018 Rook community meeting, and we will go ahead and start with milestone check-ins, since it looks like Yannis still doesn't have audio yet to do his Cassandra demo. So, for 0.8, there's nothing I'm aware of, no patch releases expected, right?
C: The problem was that I had one user saying that just running the snippet I posted fixed it for him, but I also had a user saying that it didn't work for him, and only after I gave him a manually patched 0.8 version, which is the patch I've put up, did it begin to work for him too.
A: It sounds like, with the very small number of affected people, we have been able to get them unstuck with either the script or kind of a personal patched build. I'm okay with not doing a full release for it, and with it going into 0.9, which is the next milestone coming up and which upgrades will need to be done for anyway.
B: In general, though, I put a note in the agenda about what the endgame looks like for 0.9, because we've still got a lot of things on the board. Last night I took a few things out of 0.9 that obviously nobody had picked up and that were on the safe side anyway. I think we need to go through and triage.
A: Cool, let's stick with 0.9 for right now, and then that will be the first topic in the community section. Alright, thanks, Annie! Okay, so let me ask a few more questions about some of the issues we already have out here; I've got a couple in the queue. I got some questions about the CSI plugin: what is the status of that? Is it the pull request that's open?
B: It looks like the person who's been driving that isn't on the call today. At this point it's a documentation PR, like "here's how you run it," but it really is a separate thing right now; it's not integrated like we wanted it to be. It looks like he hasn't had the bandwidth to do the next step of that, so I think we'll need to live with documentation for 0.9.
C: As far as I know that's needed, but it's kind of getting on my nerves slowly, because the integration tests are failing, I think, if you run them all at once. Well, if you run the normal make test they fail, but if I just run the file system test, wait for a few seconds, and then run the next one, I think it works fine. I think somebody filed something about that and just didn't really get back to it.
D: Okay, what I wanted to tell you is that, for the integration tests that are failing, I hit some of these myself, and maybe the reason is that some functions the integration tests use have a timeout and don't wait for things to be deleted. So if something has not been deleted yet, then the next test tries to create it, it may return an error, and it crashes. That might be the reason; I don't know, that's what I have in mind.
C: Now that you mention it, I think we even have another issue on the roadmap, which is about making the tests parallel. Do you guys see a problem if we already go ahead and split every integration test? At least in the PR I have there, I split all the integration tests into their own namespaces, because I think some integration tests are still using the same namespace, and possibly with that I could reduce the issue I'm actually having.
C: So do we see it as a viable solution that, unless we really require a test which deletes something and really waits for it to be deleted, we kind of trigger a goroutine running in the background? Let's say the test runs through and then there is just a deferred cleanup, which runs as a goroutine or something, so it is simply not blocking the thread anymore.
B: We're basically using Kubernetes to do that for us, where we say delete but don't wait for it to go away, and so in the background it is supposed to clean them up. Theoretically, since it's in independent namespaces, I was hoping that would work, but it does seem to lead to some random issues, yeah.
C: I think the problem really is the timeout, like Yannis said. The question is kind of where the timeout is; it's loose in my case there, because I figure, yeah, I also use a second namespace for that, so it comes down to the amount of time you want to allow a test. But it also seems flaky, which I'm always mentioning when I talk about this here.
A: So I was just scrolling through the list of 0.9 milestone issues here, and there's not really a whole lot besides what we've kind of already been focusing on and talking about that I think is a complete milestone blocker. Really, I think we're getting close to the place where we could start pulling some of these issues out of this milestone and start paring it down for the endgame here, right?
B: Yeah, if I were to summarize what I want to see, then: the ceph-volume integration, and then upgrades. I need to make a pass through the upgrade documentation, and I think with a small change there it'll simplify the upgrade paths. Those are kind of the two things that I see as critical. There are some other smaller things too, but that's what's on my mind.
B: Yeah, one question, thinking about 0.9: when we changed the Ceph types from alpha to beta, there was some conversion effort, and you spent quite a bit of time on that, right? So when we do the same thing to declare it stable, it's going to take some amount of time to do that same conversion, right?
A: Yeah, a couple of things. The effort now is much less than it has been in the past, for a couple of reasons. One is that we've been through this before; we have code that knows how to do that and we've got the patterns in place. That's one thing. But the other thing is that CRDs, at least on more recent versions of Kubernetes, support in-place migration.
A: I thought 1.13 was migration with webhooks, where you could specify some conversions; that configuration was, I think, 1.12, but I would have to follow up on that, right.
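For reference, a minimal sketch of what a webhook-based CRD conversion configuration looks like, assuming a Kubernetes version where the Webhook strategy is available (it appeared around 1.13 behind a feature gate). The conversion service name, namespace, and the exact version pairing are illustrative, not the actual Rook configuration:

```yaml
# Sketch only: a CRD that serves two versions and converts between them
# via a webhook. Service names and version details are hypothetical.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: cephclusters.ceph.rook.io
spec:
  group: ceph.rook.io
  names:
    kind: CephCluster
    plural: cephclusters
  scope: Namespaced
  versions:
    - name: v1beta1
      served: true
      storage: false
    - name: v1
      served: true
      storage: true        # only one version is persisted
  conversion:
    strategy: Webhook      # default is "None" (no field changes on conversion)
    webhookClientConfig:
      service:
        namespace: rook-ceph-system   # hypothetical
        name: rook-crd-conversion     # hypothetical
        path: /convert
```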
A: So let's go ahead and move on to Yannis's update and his demo for Cassandra; it's great work that he's been doing there. Yannis, which of your accounts do I need to share to, or maybe you could just take over the sharing yourself, because I stopped my sharing.
D: Is it better like this? Yes? Okay, so let's keep it like this. So what we're going to do next is look at the user interface that a user of the Cassandra operator sees. We're going to use kubectl describe on the Cassandra cluster that we will create, and we take the last 40 lines because we can't fit it all, and what really interests us is seeing the last lines.
D: So we get that running, and next we create the Cassandra cluster. Let's take a look at what we need to create. This is pretty standard stuff: a namespace, a role, a service account, and a role binding, I think, for the RBAC configuration. And this is the main object, the Cassandra cluster. What we need to specify is the version of Cassandra, which will be used to pull the right image. There is a new field called mode; this can be either Cassandra or Scylla.
D: If you choose Cassandra, it will give you a Cassandra cluster; if you choose Scylla, it will give you a Scylla cluster. Then the datacenter is the datacenter we will create, with its name, and we specify one rack with three members. Each member has five gigabytes of storage on local disk, one CPU, and two gigabytes of memory. So this is our configuration.
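For reference, a minimal sketch of what such a cluster manifest might look like, following the fields Yannis describes (version, mode, datacenter, a rack with members, storage, CPU and memory). The API group, kind, and exact field names are assumptions and may not match the schema in the pull request:

```yaml
# Illustrative sketch of the Cassandra cluster CR described in the demo.
# Field names follow the spoken description and may differ from the real CRD.
apiVersion: cassandra.rook.io/v1alpha1
kind: Cluster
metadata:
  name: rook-cassandra
  namespace: rook-cassandra
spec:
  version: 3.11.3          # image tag to pull (hypothetical value)
  mode: cassandra          # "cassandra" or "scylla"
  datacenter:
    name: us-east-1
    racks:
      - name: us-east-1a
        members: 3
        storage:
          volumeClaimTemplates:
            - spec:
                resources:
                  requests:
                    storage: 5Gi   # local disk per member
        resources:
          requests:
            cpu: 1
            memory: 2Gi
```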
D: And what we see on the left screen is that the operator immediately updated the status with the new rack, and it says it has one member with zero members ready. Also, below we see the events, which aid the user in debugging the operator; maybe something went wrong and the operator will inform the user through events. We see that the first member was created and now the second, and if we do a kubectl get all we see some of the inner workings of the operator.
D: These show things about what's happening in this resource. So, the thing about scaling down in Cassandra is that it's not fast, and it has two stages. First, you have to make sure that the Cassandra cluster has redistributed that member's data and removed it from the ring, and then you have to scale down the StatefulSet.
D: I mean, I think most databases need to do something similar to this, and the preStop hook that Kubernetes provides is not good enough, because it's best effort. So what we do here in the operator, essentially, is set a label on the cluster IP service of the member; the member sees this, decommissions itself, and then sets the label to another value. The controller sees this and scales down the StatefulSet, and the controller also does the cleanup.
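A rough sketch of that decommission handshake on the member's Service, assuming hypothetical label keys and values (the real operator may use different names; the pod-name selector label is the standard one StatefulSets apply):

```yaml
# Sketch of the label-based handshake described above. The label key and its
# values are illustrative, not necessarily what the operator actually uses.
apiVersion: v1
kind: Service
metadata:
  name: rook-cassandra-us-east-1a-2        # ClusterIP service of the member to remove (hypothetical name)
  namespace: rook-cassandra
  labels:
    cassandra.rook.io/decommission: "true" # controller sets this to request removal;
                                           # the member flips it to "done" after the
                                           # Cassandra decommission finishes, and only
                                           # then is the StatefulSet scaled down
spec:
  selector:
    statefulset.kubernetes.io/pod-name: rook-cassandra-us-east-1a-2
  ports:
    - name: cql
      port: 9042
```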
D: If we do a kubectl get now, we no longer see the member service of the decommissioned member. And to prove that the member was cleanly decommissioned, we do a kubectl exec on the first member and run nodetool status; nodetool status gives us the state of the cluster. It comes with Cassandra and is a CLI tool to communicate with the Cassandra cluster. And as we see, we have two members, so we decommissioned it successfully.
D: So I think those are aligned, yeah. And this new rack's name will be us-east-1b, and it will have one member, because I can't fit any more on my PC. So after we write this down, we save it, and we can see here in the status that we have the new rack with one member and it's not ready yet; the event on the object also informs us that the new rack was created and will have one member. And if we wait a little bit, we will see that it becomes ready.
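Continuing the earlier sketch (same hypothetical field names), scaling out in this demo amounts to appending a second, smaller rack to the racks list:

```yaml
# Continuing the illustrative sketch above: adding a second, smaller rack.
apiVersion: cassandra.rook.io/v1alpha1
kind: Cluster
metadata:
  name: rook-cassandra
  namespace: rook-cassandra
spec:
  version: 3.11.3
  mode: cassandra
  datacenter:
    name: us-east-1
    racks:
      - name: us-east-1a
        members: 3
      - name: us-east-1b   # new rack added in the demo
        members: 1         # one member, limited by the demo machine
```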
D: So this was the demo for this time. We can see that the Cassandra cluster can scale up and scale down, and can grow however you want; you can add racks for more resilience, to be more fault tolerant. For the next steps, we're thinking about upgrades between Cassandra versions, we're also thinking about backups and restores, and a big point of interest is running with local NVMe, because the performance would be very much better. And that was the demo, thank you.
A: Awesome. You know, having a picture of what's going on in the cluster, what the racks are, how many members they have, and whether they're ready or not, and then also the events kind of logging what's going on and what the operator is doing over time, that's a great model to use. I think this is great work, man.
D: So the creation and the scale-up is pretty much ready. I mean, I have the integration tests running, I have everything running; we should do another round of code review, but other than that everything is working. It should be pretty much ready to merge, I think.
D: Yeah, I'd like that very much, if that's possible. And about Cassandra and the integration tests: Cassandra is kind of resource hungry. For example, if I give it less than 2 gigabytes it will get OOM-killed, so that is kind of restrictive in what I can do in integration tests; for example, I can't really do a scale-up integration test, or scale-down, or things like that.
D: Yeah, that'd be great, I agree. And it would be really great if someone could give me a hand with the integration framework. I don't know if the framework is going to change soon, so maybe it doesn't matter, but I was having some issues, and I'd like to have a one-on-one with someone who could maybe give me a few tips on what to do here, what's best, and how to move forward. I don't think we should discuss it here, because it'd take a long time and I don't want to waste the whole meeting.
A: Everything we've seen so far is looking very good, and we had already done one of the earlier review passes on it too. So I think this has been really good work; I'm happy to see this progress, and I think we all want to take the next steps to get it into master. This is great. Alright, so let's go ahead and move on to the next item, EdgeFS. Anton, are you still on? Is anyone else there with you, or just you, Anton?
A: That's where we want it to go, to be merged into Rook's master, so having that pull request against master is what we want to happen anyway; that makes a lot of sense. Okay, and then, yeah, we can take a look at any of the integration test failures as well and try to sort out whether something is affected by the code that you've added or whether it's flakiness or intermittent issues; we can definitely follow up on that.
F: So my new pull request will have different commits but the same code; I've already reworked it.
B: Looking at the integration tests, I do see that they succeeded on one of the Kubernetes versions, and usually that's an indicator that there's flakiness going on in the other builds. So anyway, once it's rebased on master we can look at it and see if there's anything we can do about that, or if it's just known issues, yeah.
A: I think, as a community, it would be really amazing to get both Cassandra and EdgeFS into 0.9, so we should do whatever we can to expedite getting those pull requests integrated, with feedback incorporated and all that, before 0.9 — so in the next two weeks. That sounds great to me. Awesome, Anton; this is great for both of these improvements, I love it. Okay, Sebastian, are you on the line for the update on the orchestrator face-to-face meeting in Berlin? Yes?
G: It is independent of Rook; it is supposed to support Rook, ceph-ansible, and DeepSea to deploy a Ceph cluster. In detail, it's basically a Ceph manager module that controls the Rook operator using custom resource definitions. Right now, what's working is that you can get a list of services from the orchestrator module, you can filter pods, for example for RADOS gateways, and you can also create services by creating the CRDs, the Rook CRDs.
G: I've created a pull request, a documentation pull request, that basically covers the result of that meeting, and it is supposed to end up in the official Ceph documentation. The outcome is basically that the orchestrator is, first off, focusing on day-two operations like adding OSDs and replacing disks.
G: Installation of a Ceph cluster is something for the future. It turned out that the Ceph dashboard needs to restart some services, like the NFS service, in order to actually control the deployment of the NFS gateway, so that's something we need to support from within the operator now, not just from within the orchestrator module. Then we decided that we now want to have two community meetings in the Ceph community calendar: the first one is Mondays at 4:00 p.m. UTC, and the second one is Wednesdays at 9:00 a.m.
B: And I think we're going to get to ceph-volume — there's a couple of questions around that integration that we'll cover later in the agenda, or is it next? Almost next, so yeah. One thing to maybe point out about the orchestrator interface is that there's a Rook module that drives it; it's a way for the dashboard to drive the operator, really. So it gives us a nice UI that, under the covers, will work with the operator to apply the desired state.
C: To increase it, like in the Makefile where it's calling the actual go test — because I think there's no timeout set on anything there — you could add that somewhere, either when go test is invoked, or as a general job default, or at least somewhere; adding a test timeout flag with, I don't know, 60 minutes would solve the problem that I had with timeouts.
A: Also, I think this very much speaks to the fact that, as we add more storage solutions like Cassandra, and with Nexenta's EdgeFS coming down the pipeline here, doing the work to separate what we can and parallelize the things that can be run concurrently makes a lot of sense, especially if you want to have quick turnarounds on pull requests and integration builds and stuff.
A: So, you know, with — I think it's Greg, is it? — who's going to lead that effort of doing the integration CI with the CNCF: is he a resource that would also be able to make structural, foundational improvements like that, to parallelize tests and stuff, and not just do a straight port?
B: That's a good question. I was hoping to get through the port first and then go one step at a time, just to be on the new hardware, right. So, yeah, I'm not sure how much he would get into it if we have to refactor things, but if it's just a matter of how we trigger the tests — run this suite, and then that suite, and that suite in parallel — then I could see it.
B: To summarize this topic on ceph-volume: ceph-volume will allow us to provision OSDs on raw devices, or on a partition, or on LVM devices that have already been created. Where my PR stands right now, I'm only doing it on raw devices, and I want to get it working at least for partitions as well for 0.9. So the question I have here is really about the post-0.9 issues, because I've got to focus on getting the rest of it solid first.
B: But the question here is: if somebody has already created their LVM devices and volume groups, in the CRD we want to allow them to specify that they put the WAL and DB on this LV and the data on that LV, or, if it's Filestore, where they put the journal. So it's a very prescriptive set of settings that we'll need in the CRD.
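As a rough sketch of what such a prescriptive per-OSD layout could look like in the cluster CRD — every field name below is hypothetical, since this is exactly the part of the schema that is still being designed and mocked up:

```yaml
# Hypothetical sketch only: one possible shape for mapping pre-created LVs
# to OSD roles in the cluster CRD. None of these field names are final.
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  storage:
    nodes:
      - name: node-a
        devices:
          - name: vg-data/osd0-data              # hypothetical: data LV
            config:
              metadataDevice: vg-fast/osd0-db    # hypothetical: Bluestore WAL/DB LV
          - name: sdb1                           # or a plain partition / raw device
```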
A: I'm not sure I fully follow, and we can follow up more offline, but my gut feeling on this is that, in the general Rook types, capturing the physical or logical layout of the devices in a cluster is a general thing that can be applicable, because you're basically defining the storage substrate of the cluster, right? And I don't quite yet see how a little config-map-like set of key-value properties maps onto that logical or physical layout.
A: If you could mock up the two options so we can compare and contrast them, that would be helpful, I think. Okay, cool, thank you. Alright, so that's everything we had on the agenda here. I think the big themes we have identified and want to proceed on: we're in the last two weeks here for 0.9, and we would love to get that out before KubeCon on December 10th, 11th, and 12th.
A: The next community meeting will be December 4th, and by then we, as the maintainer team, will have done a pass over the issues in the milestone to remove the ones we don't think are critical. Anyone from the community who has opinions on issues they think are important for 0.9, please weigh in on those. And we would love to see Cassandra and EdgeFS included in 0.9 as well; that makes the release all that much more impressive and exciting. So I think that's a summary of all the discussions today.
E: Jared, I have a question; it might be a really easy one. Yannis mentioned that Cassandra was resource heavy, and Ceph, when it rebalances, uses a lot more resources than normal. So how does Rook make sure that these pods don't conflict with other pods on the Kubernetes cluster?
A: Yeah, it's a good question, Brian. What I would say is that that is not something that has been given a lot of specific attention in the testing we've done; rebalancing a cluster has mostly been tested in smaller scenarios and not large data scenarios. So there hasn't really been a lot of attention on resource management and contention, beyond throttling in the sense of doing it one OSD at a time and letting it settle down before moving on to the next one.
A: Like, there's not a specific resource limit on the pods, but basically, by doing one OSD at a time, letting that rebalance itself, and then moving to the next one, it's not a more broadly scoped operation with lots and lots of placement groups moving around the cluster at one time and using a whole bunch of resources at one time.
B: Because the challenge with Kubernetes resource limits, really, is that there are two of them: there are the scheduling limits, which we can use, but then there are the runtime limits which, if you go over them, Kubernetes will just kill your pod, and that's not healthy for the OSDs either, yeah. So that's where we need...
A: Right, the OOM killer does.
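For context, the two kinds of limits being discussed are the standard Kubernetes requests and limits on a pod; the sketch below is generic Kubernetes, with illustrative values rather than anything Rook actually sets for OSD pods today:

```yaml
# Generic Kubernetes example of the two kinds of limits discussed above;
# values are illustrative, not recommendations for OSD pods.
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
    - name: osd
      image: ceph/ceph:v13   # illustrative image
      resources:
        requests:            # used only for scheduling decisions
          cpu: "500m"
          memory: 1Gi
        limits:              # enforced at runtime: exceeding the memory limit
          cpu: "1"           # gets the container OOM-killed; CPU over the limit
          memory: 2Gi        # is throttled instead of killed
```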
D: Lately I have been trying to get Prometheus working for monitoring with the Cassandra operator, and I took a look at what Ceph has done, and I see you put some annotations on some services for Prometheus to pick them up. So I don't know, is that still the way to do it? I mean, I searched a little online, and it seems that you need to add something to your Prometheus config in order for those annotations to get picked up.
C: Yeah, so there are essentially two modes for Prometheus, to quickly go into it today. You can reuse something like the Prometheus Operator and let the config be built through what the Prometheus Operator offers you, for example via ServiceMonitors. Or, if you run a plain Prometheus server, there's a default config, basically the one the prometheus.io docs describe, which scrapes based on the annotations directly; that's the second mode, where you can specify the port, the path, and everything yourself, basically.
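For reference, a sketch of both approaches mentioned here. The service name, port, and label values are hypothetical choices for a Cassandra metrics endpoint, and the prometheus.io/* annotations only take effect if the running Prometheus has a scrape config that honors them (as noted above):

```yaml
# Mode 1: conventional prometheus.io annotations on the metrics Service.
apiVersion: v1
kind: Service
metadata:
  name: rook-cassandra-metrics          # hypothetical metrics service
  namespace: rook-cassandra
  labels:
    app: rook-cassandra
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9180"          # hypothetical exporter port
    prometheus.io/path: "/metrics"
spec:
  selector:
    app: rook-cassandra
  ports:
    - name: metrics
      port: 9180
---
# Mode 2: with the Prometheus Operator, a ServiceMonitor selects the service
# by label and the operator generates the scrape config for you.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: rook-cassandra
  namespace: rook-cassandra
spec:
  selector:
    matchLabels:
      app: rook-cassandra
  endpoints:
    - port: metrics
      path: /metrics
```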