From YouTube: SIG - Performance and scale 2022-02-10
Description
Meeting Notes:
https://docs.google.com/document/d/1d_b2o05FfBG37VwlC2Z1ZArnT9-_AEJoQTe7iKaQZ6I/edit#heading=h.tybh
A: All right, welcome to SIG Scale, everybody. It's February 10th; notes are in the chat, please add yourselves. Okay, today, a few items. We're going to start as we have in the last few meetings: let's just go over the performance periodic job results again, really quick. So the only change here in these results is that this test — this change — has merged. This was basically establishing the relationship between the range vector and the Prometheus scrape interval, and it's really focusing on scraping at a specific time, right near the end of the test, when we're going to get the most accurate data. So let's look at some of the recent ones.
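The range-vector/scrape-interval relationship mentioned above follows from how PromQL range vectors work: a `rate()` over a window shorter than two scrape intervals may contain fewer than two samples and return nothing. A minimal sketch of one common rule of thumb (the factor of four is an assumption, not something stated in the meeting):

```python
def range_window(scrape_interval_s: int, min_samples: int = 4) -> str:
    """Pick a PromQL range-vector window wide enough to hold at least
    min_samples scrapes, so rate()/increase() always have data to work with.

    The 4x default is a common rule of thumb, assumed here for illustration.
    """
    return f"{scrape_interval_s * min_samples}s"

# e.g. with a 30s scrape interval, query something like rate(metric[120s])
print(range_window(30))  # → 120s
```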
A: Yeah, so pretty similar — looks great, counts look good. Oh, one change: what I actually did with that PR is that I pulled the primer tests out of outputting results. We don't really need them, so there's only going to be one set of results again, and it's just the test.
A: Okay, yeah, looks pretty good. Okay, so from last time, let's see — I created an issue. This is the high node count, this number right here.
A: This is an issue that will track that investigation. I haven't had a chance to look at this yet, but just so you guys know, this is where we're tracking it. All right, let's move on to PRs then.
A
So
we
have
the
there's:
a
change
for
making
the
performance
pre-submit,
making
a
performance
precedent
job
so
not
just
a
periodic
job.
I
tagged
daniel
marcel.
I
tagged
you
and
david.
I
I
don't
know
I
don't.
I
don't
really
know
like
what
to
do
with
this
change.
Like
I
don't
know
much
about
what
I'm
doing
here,
I
kind
of
just
copied
from
other
tests
seemed
pretty
reasonable,
but
yeah.
I
definitely
if
you
have
any
advice
on
this.
Let
me
know
I
have
not
used
this.
I'm
not
very
familiar
with
this.
B: Hey — oh, can you guys hear me? Yes? Okay. So I'm really distracted because there's a customer issue I'm getting pinged on, but: you want to add this as a pre-submit, as long as you make it optional and not run by default — I forget the exact settings for that. Yeah, `optional: true`, yeah.
B: Then all you have to do is type a comment, `/test` plus the name of the presubmit, and you'll get it when you want it. Do you just need that merged?
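The settings B is recalling look like the standard Prow presubmit fields. A hedged sketch of what such a job entry might look like — the job name and repo are placeholders, not the actual contents of the PR being discussed:

```yaml
presubmits:
  kubevirt/kubevirt:
  - name: pull-kubevirt-performance   # placeholder name
    always_run: false                 # not triggered on every PR
    optional: true                    # result does not block merging
    # triggered on demand with a "/test pull-kubevirt-performance" comment
```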
A: Yeah, I was hoping — because what I did was copy the test that we have, this command and some of the flags, and then I copied some of the flags from another optional test; that's where I got some of the stuff from. So I don't know. I mean, we could just merge it, and I can try to test it on a PR, but it might just be trial and error. I don't know how to —
B
Don't
know
probably
what
we
have
to
do
so
that
global
timeout
of
four
hours
is
probably
a
little
extreme,
but
it
doesn't
really
matter,
but
that
was
for
our
entire
intent
test
suite
or
whatever.
C: Yeah, some things that I don't know — for example, the bazel one; I have no idea what that label is doing. And the other one, I think, is some authentication thing, if I remember correctly. Maybe you don't need this label here, yeah.
A: If no one knows — I mean, I think we almost just merge this. There are some things that are worrisome, like this one: I don't know if this means it's going to be running in parallel with the other stuff, but I think maybe some trial and error will tell us what's good here, if there's a way to test this. To me it seems reasonable — I think it might work, and we'll get some feedback, I guess, if it doesn't.
B: Let me — do you have the PR in the notes? Yeah, let me ping somebody. Is that the one right there? Let me get that through for you.
B
And
let's
let
somebody
else
do
that
looks
good
to
me
just
all
right.
You
can
move
on
to
something
else.
If
you
want
and
I'll
try
to
get
that
through,
for
you.
A
Okay,
thanks
yeah,
and
I
think
so
like
this
would
like
marcelo.
I
think
you
comments
down
there,
but
we
what
we
do
is
we'd
run
we'd
run
like
I
think
I
have
yeah.
We
run
this
test,
pull
and
tensing
performance,
and
this
would
just
run
it
for
us,
which
is
something
like
I
really
wanted
to
do
for
the
other
pr
here,
which
is
the
threshold
count,
so
the
second
pr
is
so
we
talked
about
this
last
meeting
right
like
we.
A: So what I did was add a way to relate two different metrics in the audit tool.
A: So basically I call it a ratio metric, and the ratio is the limit — the maximum multiple that we allow one metric to be relative to the other. So, two times: the patch virtual machine count must be less than two times the create pods count to pass the threshold. It's basically just a relationship between the two.
A: So it's pretty easy. We talked about ten-to-one last time; it's two-to-one on the create pods count, and then I added the other threshold values based on seconds, which was already supported. So that's all this change does — it gives us a way to compare. But one thing is I wanted to run that test; I wanted to see how this did in the —
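The ratio threshold described above can be sketched as a simple check. The 2:1 ratio and the two metrics come from the discussion; the function name and structure are hypothetical, not the audit tool's actual API:

```python
def check_ratio_threshold(numerator: int, denominator: int, ratio: float) -> bool:
    """Pass if numerator stays under ratio * denominator.

    Example from the discussion: patch-virtual-machine API call count
    must be less than two times the create-pods count.
    """
    return numerator < ratio * denominator

# 150 patch calls against 100 pod creates passes a 2:1 threshold; 250 fails.
assert check_ratio_threshold(numerator=150, denominator=100, ratio=2.0)
assert not check_ratio_threshold(numerator=250, denominator=100, ratio=2.0)
```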
A: Yeah, so what I wanted to do was this — I wanted to do this to test it, but it doesn't exist yet. So —
A: But yeah, I think this one's ready. Marcelo, it looks like you reviewed. Dave, if you've got comments on this one, let me know, so we can get it reviewed.
A: Yeah, that's a good question.
A
I
don't
know
what
the
all
the
labels
are,
but
I,
my
guess,
is
what
you'd
have
to
do
is
to
tie
into
you
have
to
create
the
label,
and
then
you
probably
have
to
tie
into
the
bot
that
that
triggers
this.
I
don't
know
who
to
ask
for
that,
though,
but
yeah
that
would
be
interesting
to
see.
C
Yeah,
I
think
kubernetes
doing
that
so
in
the
rcicg
system.
Okay,
it
doesn't
need
to
be
right
now,
but
we
can
keep
in
mind.
You
know
it
would
be
nice
to
have
like
a
label
that
it's
more
generic
to
trigger
that,
and
anyone
that
thinks
that
a
pr
it's
it
should
you
know,
have
some
performance
impact.
A
Yeah
yeah
that
would
be
nice,
okay,
cool,
okay,
let's
go
to
defining
pressure
is
what
I
call
it
so
this
what
I
wanted
to
do
is
so
I
I
we
have
this.
We
have.
I
have
this
this
dock
here.
I've
already
talked
about
a
few
times
outlined
some
of
our
tests.
I
wanted
to.
A
I
wanted
to
review
this
because
marcel
you
brought
this
up
previously,
and
I
kind
of
want
to
just
do
spend
a
few
minutes
talking
about
this,
just
to
get
a
sense
of
like
how
we
can
define
pressure
because,
what's
important
is
like
you
know,
I
talk
about
in
the
in
this
in
this
in
this
document.
It's
like
you
know
what
is
what
a
test
means
like
we're.
A
You
know
what
what
are
we
testing
when
we,
you
know
based
on
based
on
even
the
tests,
we're
doing
like
study,
state
or
or
whatever
burst
test
like
what?
What
is
it
like
that?
You
know
what
is
this:
the
pressure
that
we're
putting
on
the
cluster
and
how
does
that
affect
scale
and
performance,
and
so
on
and
and
so
there's
this
presentation
marcel,
you
were
the
one
that
made
me
aware
of
this.
This
was
you
showed
this.
A
You
had
a
link
to
this
in
one
of
your
your
documents,
and
this
was
I
found
this
interesting.
I've
read
this
through
once
or
twice,
and
I,
like
the
way
sort
of
it
defines
defines
scale
availability
because
it's
sort
of
I
think
you
know,
people
use
the
term
right
like
nodes
like
how
many
nodes
you
scale
to
and
and
it's
like
it's
a
little
bit
more
than
that,
and
this
describes
it
really
well.
A
So
that's
you
know
kind
of
like
when
we
talk
about
what
the
study
say
test
all
these
things
like
make
up
what
we
call
like
the
scalability
of
the
system
and
kind
of
the
way
that
I've
been
characterizing.
This,
like
is
just
by
when
I
read
this,
is
like
it's
the
sum
of
pressures.
A
The
sum
of
all
pressure
is
really
what
the
scalability
of
a
system
is
and
there's
a
lot
of
interesting
graphs
in
here
like
oh
and
then
like
this,
like
when
they
talk
about
how
the
relationship
between
the
number
of
pods
per
node
and
the
number
of
nodes
so
like
two
variables
and
that
you'd
think
that
you
know
maybe
there's
a
relationship
kind
of
like
a
and
they
have
it
like
a
square
here
like
where
there
you
know,
maybe
like
there's,
you
know
where
what's
the
right
place
or
the
number
of
nodes,
the
number
of
pods
that
you're
safe
and
you
know
it's
not
always
the
relay,
it's
not
exactly
the
relationship,
you
think,
and
so
they
have
some
interesting
graphs
like
here's
one
like
if
you
have
the
number
of
back
end
services
and
the
number
of
services.
A
Here's
the
relationship
like
rough
numbers
like
here's
like
if
you're,
if
you
have
250
back-end
services,
the
number
of
services
that
you
can
have
it's
fairly
low
down
here,
you
know
so
it's
yeah.
I
mean
your
5k
services,
125
vacuums.
This
is
no
bueno,
so
we
have
to
be
somewhere.
I
don't
know
under
here
like
a
hundred
services
and
maybe
200
backing.
A: What's fascinating is that we can tell from those, based on what phase we're in, who is doing work — and that's important. We can tell: okay, we're taking long from here to here; here's our rate of churn, here's our number of nodes, here's our number of services, and so on. We can try to eliminate some variables, or maybe build an equation of what pressure would look like, and then have an idea of why something is stuck in the pending phase.
A: Yeah, so here are some of the others — there are lots of really good graphs in here, like the different effects that different pressure variables can have. Kind of an interesting picture.
A
Some
limits,
and
then
these
are
these
are
good
ones.
These
are
some
of
these
were
pretty
good,
so
this
is
like
they
started
to
quantify
a
few
things.
So
here's
this
was
cool,
so
1300
nodes
at
under
110
pods
per
node
is
about
the
limits
here.
That
seems
to
be
the
the
green
area,
the
good
area
and
then
at
5k
nodes,
which
I
think
is
what
they're
advertised
as
the
limit
of
this
their
scale
right.
A
It's
that's
defined
as
30
pounds
per
node,
so
I
mean
this
basically
is
a
good,
some
good
summary
of
like
why
it's
so
important
to
define
what
you're
testing
and
why
saying
you
can
scale
the
5k
nodes
is
misleading,
because
a
workload
could
may
require
you
to
have
more
pots
per
node
like
it's.
Not
it's,
not
it's
not
perfectly
clear.
Like
okay,
you
know
what
does
it
mean
when
you
scale
the
5k?
C: About that — one of my interpretations, if you can scroll back — yeah. So they say their far limit is 5k nodes; however, for a normal cluster, a normal configuration, I would say it's up to 1300, because that's where we can have a normal number of pods per node.
A: The point is, it's interesting that you want to stay in this region, but it's also not safe to say, "I have this many nodes, so Kubernetes should scale to this level and I don't need to worry about pods per node." It's not true — you do need to worry about your pods per node and the other pressures as well. That's what's critical.
A: Services per namespace — kind of interesting. Yeah, namespaces.
A: Yeah, I've experienced this. This is why it was so interesting when I read it, Marcelo — it's something I've done internally in testing. And actually it's not this one — well, no, it is, sorry. Is this the one?
A
It's
the
we
have
a
number
of
namespaces
certain
number
of
namespaces
and
we
have
a
lot
of
objects
in
the
namespace,
but
it's
usually
we're
kind
of
packing,
some
of
them
into
a
single
namespace
and,
and
that
could
have
an
effect
like
too
many
too
many
objects
for
namespace
means
now.
Controllers
need
to
spend
more
time
going
through
and
locating
those
objects.
So
this
is
like
a
this
is
a
behavior.
That's
that
I've
seen.
Even
though
it's
for
services
there's,
I
think,
there's
one
other
another
metric
in
here
somewhere.
A
It
says
it
talks
about
how
how
this
can
affect
controllers.
A
Yeah
this
one's
interesting,
a
minute
ahead
and,
having
observed
this,
it's
definitely
I
don't
know
what
the
limit
is
like.
I've
been
able
to
quite
to
relate
the
limit,
because
this
is
like
services
for
new
space.
I've
seen
this
like
when
internally
it's
like,
we
have
it's
so
many
other
objects.
It's
not
just
services.
It's
it's!
It's
really
any
object.
A
Patreon,
this
is
another
big
one.
This
was
the
one
that
we
want
to
definitely
want
to
target
in
that
steady
state
test
right.
This
is
like
this
is
like
we,
we
create
100
vmis.
A
We
start
deleting
some
and
recreating
some
right
so,
like
the
pod
turn
rates,
pod
creates
updates
deletes
per
second
that
churn
rate
is,
is
20
per
second,
so
yeah
I
mean
it
would
be
interesting
to
see
with
all
the
phase
transitions
that
we
have,
what
the
how
they
get
affected.
Based
on
the
turn.
This
is
something
that
one
of
my
colleagues
should
be
talking
about
for
the
keyword
summit.
A
We
have
a
bunch
of
good
pictures
that
show
that
actually
show
this
like
how
how
the
churn
actually
affects
shows
up
in
the
phase
transition
times,
which
is
kind
of
neat,
and
then
it's
qps
limit
and
throughput.
A
And
then
nodes
versus
configs
or
secrets.
So
again
I
mean
it's
like
this
is
a
relationship
between
configs
for
node
and
nodes
but
and
they're
both
they're,
both
certain
amounts
of
pressure
like
and
here's
how
they
relate
to
each
other,
but
it'd
be
interesting
to
see,
like
other
things,
how
they
relate
to
nodes,
and
you
know
so
it's
so
kind
of
hard
to
find.
Like
you
know
it
seems
like
they're
like
we
can
relate.
You
know
individual
variables,
but
it'd
be
interesting
to
see
like
some
of
the
others
like
how
they
compared
to
nodes.
A
Like
you
know,
the
churn
to
the
number
of
nodes
would
be
interesting
and
there's
lots
of
things
that
we
can
do
namespaces
pods
for
namespace
yeah.
I
mean
another
really
interesting
one.
That's
this
is
the
one
that
I
was
thinking
of
so
having.
If
you
have
a
single
name
space-
and
you
know
we
have
a
few
thousand
vms
in
it-
that
can
affect
our
ability
to
scale
right
so
pawns
for
name,
space
3k
and
then
that
can
affect
the
number
of
namespaces
the
sweet
spots,
3k
and
50.
A
So
I
kind
of
where
I
wanted
to
go
with
this,
though,
is
that
it
would
be
interesting.
I
like,
like,
I
was
saying
at
the
start
like
it
would
be
interesting
to
define
some
of
these
some
of
these
variables
and
like
there's
a
ton
of
them
and
that's
kind
of
what
I'd
like
to
go
with,
like
eventually
like
as
the
goal
for
when
we
write
this,
we
write
some
sort
of
tests
that
we
can
hand
off.
A
Like
you
know,
what's
the
number
we
want
to
get
back
and
kind
of
the
way
that
I
think
about
it?
It's
like
it's
like
the
summary
of
pressure
like,
so
what
would
make
up
pressure
like
we
just
went
through
a
bunch
of
things
like
like
number
of
nodes
like
we
need
to
know
these
things
like
number
of
nodes.
A
You
know
pods
for
node.
All
these
things
number
of
objects.
A: Agreed, yeah. Basically it's one-to-one, but it makes sense — in our context it's pods. Yeah, I agree: objects per namespace, churn rate. I think verbs too — I thought that was one of them in here, isn't it?
A: Oh, here we go, this is it. Okay: for deletes through the garbage collector, only a throughput of 10 per second can currently be achieved, as each delete uses two API calls. So these are actually API calls — pod creates, this is counted in API calls.
C: Well, when I have the PR for that first churn test, I will give more than that.
A: Yeah — what are some of the others? As an exercise, can we name all the variables that we would add into pressure? I think I have a bunch of the big ones here: nodes, VMIs per node, objects per namespace, number of namespaces.
A: You want to quantify pressure, and it's just the sum of all these things, I think. Yeah, something like that. VMIs and — I don't know, it's basically every API object.
A: Yeah, makes sense — so every Kubernetes object in the cluster. Okay, that limits the scope. So then not just "objects": VMIs per node, VMIs per namespace. Well, it could be more than VMIs — but it's the VMIs per node.
A
That
would
be
w1,
so
this
would
be
keyword,
objects
per
game,
space.
A: And then how would that work in reverse? If we knew the current work rate and we knew the density, then we could solve for the number of API objects in the cluster — nodes, for example. That equation would be — I'll write it out — something like: the scalability limit on the number of nodes equals the density plus the work rate, and then —
A
Rates
be
the
sum
of
number
of
objects,
plus
density.
Maybe
something
like
that.
I
don't
know
it's
a
very
loose
algebra
but
kind
of
that's
what
I'm
thinking.
So
we
can
kind
of
go
that
direction
like
if
we
start,
if
we
design
or
with
these
principles
in
mind
like
taking
a
view
of
the
current
state
of
things,
and
maybe
we
can
that'll
give
us
a
more
accurate
picture
of
you
know
what
the
work
rate
the
limit.
What
that
would
be?
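The "loose algebra" above might be made concrete as a toy pressure score. Every variable name and weight here is hypothetical — this is only a placeholder for the sum-of-pressures shape, since the real relationships are nonlinear, as the graphs being discussed show:

```python
def pressure_score(nodes: float, vmis_per_node: float,
                   objects_per_namespace: float, churn_per_s: float,
                   weights=(1.0, 1.0, 1.0, 1.0)) -> float:
    """Toy 'sum of pressures': a weighted sum of the variables discussed.

    A real model would capture the nonlinear trade-offs (e.g. the
    nodes-vs-pods-per-node square); this is just the idea's skeleton.
    """
    w_nodes, w_density, w_objects, w_churn = weights
    return (w_nodes * nodes
            + w_density * vmis_per_node
            + w_objects * objects_per_namespace
            + w_churn * churn_per_s)

# With equal weights, 100 nodes, 10 VMIs/node, 50 objects/ns, 20 churn/s:
print(pressure_score(100, 10, 50, 20))  # → 180.0
```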
C: It looks like it's failing, but it's not really failing — it's the cleanup after the execution. If you scroll — yeah.
C: So this is regarding `make cluster-down` — or maybe the cluster cleanup, I don't remember the command, but anyway. It's failing to delete KubeVirt from the cluster, and I need to debug that. I don't know what's failing — it's not reporting anything — but the experiment is running and —
A: It's 650 on the other one; this one's only down to 12, and patch — this one doesn't have patch nodes, but the other periodic job does. That's kind of interesting.
C: `make cluster-down` — and there is another job running there, but it's actually failing, by the way, yeah. If you go to the cluster you can see — plus there's the density one; you can see the red ones.
A: What's in this test — what are you doing here?
A
C
It's
running
before
it
was
running,
but
now
it's
not
running
anymore.
It's
very
curious
that,
but
anyway,
is
it's
actually
creating
200,
300
and
400.
C
Oh
by
the
way-
yes
so
those
jobs,
so
it's
something
that
we
need
to
discuss.
I
need.
I
need
the
help
to
think
about
that.
So
you
know
those
jobs
cannot
run
collocate
with
other
jobs
in
and
if
we
want
to,
you
know
to
run
it
for
a
pr.
For
example,
we
need
to
make
to
create
some
logic
that
a
job
indentifies
that
another
job
is
running
in
weight.
You
know
something
like
that.
C
I
I
was
discussing
with
daniel
healer
with
the
guy
that's
responsible
for
the
ci.
The
point
is:
we
cannot
control
it
via
pro
because
the
pro
is
actually
you
know
only.
C
We
can
define
one
pro
job
this
maximum
concurrence,
but
if
we
have
many
pro
jobs,
do
you
all
of
them
will
access
the
cluster
and
maybe
access
the
cluster
at
the
same
time,
so
I
was
thinking
things
that
I
was.
I
was
thinking.
Maybe
before
run
the
test
to
see
if
the
test
namespace
is
created,
you
know
if
any
wait
until
it
disappears.
Something
like
that
anyway,
somebody
need
to
think
about
that.
You
know.
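The wait-until-the-namespace-disappears idea could be sketched as a small polling loop. The check function is injected and everything here is a placeholder — in practice the check might shell out to `kubectl get namespace <name>`:

```python
import time

def wait_for_namespace_gone(namespace_exists, timeout_s: float = 3600,
                            poll_s: float = 30,
                            clock=time.monotonic, sleep=time.sleep) -> None:
    """Block until namespace_exists() returns False, or raise on timeout.

    The existence check, clock and sleep are injected so the loop can be
    exercised without a cluster; a real check would query the API server.
    """
    deadline = clock() + timeout_s
    while namespace_exists():
        if clock() >= deadline:
            raise TimeoutError("test namespace still present; "
                               "another job may be running")
        sleep(poll_s)
```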
A
I
thought
we
could
use
the
the
serial
header
like
to
make
sure
we're
not
running
in
the
same
time
as
anyone
else.
A: Yeah, okay — that could be the case, though; then we're basically going to race. And someone else could run their test at the same time as us even after a check, right? So there's no guarantee. But this is in the performance cluster, though, where we have this problem?
C
Yeah,
this
is
the
performance
cluster,
because
we
don't
want
to
make
to
run
tests
at
the
same
time
and
they
also
interfere.
You
know
make
it
they
interfere
each
other,
so
they're,
it's
the
same
cluster
they're,
not
it's
not
like
the
ci
that
creates
a
whole
new
cluster
it
they
will
create
the
same
thing.
The
leads
vms,
you
know,
jobs
can
really
doesn't,
cannot
run
the
same
cluster
at
the
same
time
also
impacts
the
performance
right
now.
C
Those
jobs
that
I'm
running
there
is
fine,
because
I
define
different
no
time
for
the
you
know
for
for
them,
because
we
have
to,
as
you
saw
we
have.
We
have
two
kind
of
jobs,
one
that
one
runs,
this
rates
100
vms,
another
one
that
creates
more
varied
range,
so
one
runs
like
in
the
morning
and
another
one
at
night,
but
if
we
want
to
enable
it
for
our
pr,
we
need
to
make
sure
that
the
job
will
wait.
A
Okay,
yeah,
I
mean
that
would
be
worth
having
a
conversation
with
daniel
because
right
otherwise
like
we
don't
really
have
a
way
to.
We
don't
really
have
a
way
to
care
like
because
this
is,
if
we're
doing
performance
testing,
we
we
always
need
exclusive
access.
So
yeah
I
mean
for
any
test.
It's
not
even
just
ours
that
we're
doing
here
or
we
wouldn't
want
any
other
test
to
interfere
with
any
other
test.
So
maybe.
A: Yeah, okay — maybe start a mailing-list thread, Marcelo. Maybe this is something to put a little more focus on and get some more opinions on.
A
Oh,
it's
good.
Okay,
all
right!
Well,
we're
pretty
much
at
time.
I
think
so.
Something
to
think
about
is
continuing
to
think
about
this
pressure,
the
sum
of
pressure
whatever.
However,
we
can
define
this
and
kind
of
the
work
ahead,
something
to
keep
in
mind.