From YouTube: SIG - Performance and scale 2022-05-19
Description
Meeting Notes:
https://docs.google.com/document/d/1d_b2o05FfBG37VwlC2Z1ZArnT9-_AEJoQTe7iKaQZ6I/edit#heading=h.tybh
A: Okay, this is SIG scale; it's May 19th, 2022. The link to the notes is in the chat. I think it's just you and me, so if you can, please add yourself as an attendee. All right, we're going to go over a few things, and if you have any topics you can just add them in.

B: Sure, sure, yeah.

A: All right. Okay, the first one is the performance periodics. We have two performance jobs: a periodic job, and an optional one that we can run per PR. For a long time there were failures, and we were able to figure this out since the last meeting. There was a change.

A: Let's see, does this have what I'm looking for? Yeah, okay. There's a PR that actually increases the amount of memory the VMs consume, and I guess we were already working toward the upper limit of the amount of memory we required. Since we launch 100 VMs, the increase is multiplicative, so the last 12 VMs...

A: We had a lot of trouble with those; they were never scheduled because we didn't have enough memory. We were going to fix this by increasing the memory size, and we identified the PR that actually caused it. I think I linked it in here. It should be one of these, I think; let's see.

A: Yeah, this is it. So just for visibility, this is the change; we increased the memory. I think it's... what was it? There's a note here by...
A: Is it... that wasn't the total; I think it was something like 30 megs, or rather the difference between those two numbers, so it's about 43 megs per VMI. It's something to keep in mind. This is one of those situations where it would have been good to have a release note; we weren't aware of it, but we caught it in our job. We actually caught it because we ran out of memory.
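As a rough, illustrative back-of-the-envelope check (assuming the ~43 MB figure is per VMI and roughly additive across the density test):

    ~43 MB/VMI × 100 VMIs ≈ 4.3 GB of additional memory for the run

which is how a per-VMI increase that looks small ends up leaving the last handful of VMIs unschedulable on a cluster that was already near its memory limit.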
A: So this is something I think we'll need to keep in mind in the future. This is one of those areas where, if we had our job voting on PRs, we would have caught this and could have made the suggestion there. I doubt they ran it here, because it's optional; if it had been run, they would have seen it.

B: Are we using the default memory assignment for the VMIs, or do we use some custom values?

A: I think internally we use the default values, but I'll check, because that's an important detail. Let me just see what release this was in. I don't know if we're actually going to take it in with the 0.50 release we're moving to... yeah, okay. So it's not; 0.53 is when this comes into effect.

A: Okay, so we have some time, but that's something we'll need to keep in mind; we just need to be aware of it. So what I'll do is maybe some tracking for this: we're going to file a card internally so we have this as a note. And just to review, let's make sure whether we actually use the defaults, because I don't remember.
A: Okay, next... oh, hold on, I didn't finish with this. So we have these jobs that were failing; after another fix they're looking much better now. The job that is still failing is in the performance cluster.

A: Where is it... it's the periodic, and then it's the 100-VMI density test. It says it's succeeding, but it's not. The VMIs are attempting to be created, but it's the namespace: the test expects the namespace to be there, and it's not. Let me see if I can find it somewhere in here.

A: There we go, yeah. So we're trying to create a bunch of VMIs using the load generator tool, and it's expecting the...

B: Were there some other errors connected to the namespace problem? I'm not sure if they're relevant, but when you scroll down there were errors in general, and at the top, somewhere, similar ones also for the namespace.

A: Yeah, they're split. All this stuff up here, before the namespace errors you saw below, is supposed to set up the cluster and some KubeVirt components, and then eventually it triggers our tests. So there could be... maybe... I think it's supposed to happen; I think we're waiting here. Yeah, there's some waiting, so it's just expecting to see this. Yeah, okay, so these are...
A: The two issues I want to go over are the two issues I created a while ago about some of our goals. I filled in a few of the items, since we've actually gotten a lot farther. So I think I wanted to add... I think it was this one... actually no, it wasn't this one.

A: It was the rate limiter, the worker metrics, and a few others here that are now filled in. Some of the remaining items are about breaking down some of the latency we have. Like I said, this is the latency between all pods being ready and being scheduled.

A: I think this one, yeah. This one involves looking at, I think, the pod's YAML: we have to basically compare when each container becomes ready, since we now have that timestamp, against when the pod was scheduled; that would be a way to do it. I don't think it's something we can do in KubeVirt; it might be something we'd have to do in our tooling, but...
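A minimal sketch of that comparison, assuming the timestamps come from the pod's status conditions (the function and the placeholder main are illustrative, not existing KubeVirt or tooling code):

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
)

// scheduledToReady returns the time between the PodScheduled and
// ContainersReady conditions turning True, using their
// LastTransitionTime stamps. It returns false if either is missing.
func scheduledToReady(pod *corev1.Pod) (time.Duration, bool) {
	var scheduled, ready *time.Time
	for i := range pod.Status.Conditions {
		c := pod.Status.Conditions[i]
		if c.Status != corev1.ConditionTrue {
			continue
		}
		t := c.LastTransitionTime.Time
		switch c.Type {
		case corev1.PodScheduled:
			scheduled = &t
		case corev1.ContainersReady:
			ready = &t
		}
	}
	if scheduled == nil || ready == nil {
		return 0, false
	}
	return ready.Sub(*scheduled), true
}

func main() {
	// In practice the pod would come from a watch/informer in the test
	// tooling, or from scraping objects after the run; this empty pod is
	// only a placeholder to keep the sketch self-contained.
	var pod corev1.Pod
	if d, ok := scheduledToReady(&pod); ok {
		fmt.Println("scheduled -> containers ready:", d)
	}
}
```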
B: The biggest problem with that is that whenever there's a controller that looks at VMIs, we basically don't have a way to check for the pods connected to that VMI. To do that we would either need to fetch them with GET requests, which is quite expensive, or somehow watch the pods and connect VMIs to pods to get this transition.
A: Yeah, it would be interesting to see, because in the controllers, for instance, we're already watching the objects, right? So I guess it would have to be when we notice this; it could be a metric or something that we emit, I don't know.

A: Like you said, the other option is that we do it outside, with a tool that's watching, or we scrape all the objects afterwards, because we do have timestamps for when these things happen. So it could be either one; we'd have to do some more exploring to see where it best fits. It would be interesting.

A: Yeah, it'd be interesting to see, because I wonder about this idea of knowing when a pod is ready. I don't know if it's really blasphemy to say that KubeVirt could report this as a metric. I don't think it would be something on the object, but it'd be something we could just report as a metric. I don't know; that would be interesting to see. We need to explore that, though.

A: Okay, and then latency between launcher pods being assigned to a node and the creation timestamp — I think that's another one. Yeah, this is another one we could find on the object, and if we're watching the pod object we can figure it out, but...
B: I feel like we kind of do that already, because we have the VMI Scheduled, or Scheduling — one of those two — and we include that transition latency.

B: So basically, when we create... oh, or is it just for virt-launcher? Because I'm still thinking that we have this one metric that goes from creation to phase for VMIs.

B: And I was thinking that all of these we could maybe include as a different phase. But I'm not sure if we want to do that, or if we want to have a separate metric for this extra info.
A: Yeah, so the way I look at all of these is that they're sort of extra, outside of KubeVirt's phases; they're a little more granular, more specialized. And really the question I ask is: can you know that there's no PVC assigned — we asked for one and it's taking a while to get there? Maybe, since we're watching the object, it's something we measure in KubeVirt; maybe it's something we measure outside of KubeVirt. But the idea is that these are things that could further help.
B: Yeah, so if that were a separate metric, I think we could have a lot of interesting stuff there. I'm thinking, because right now we have this metric that records from creation to a phase — either Running or, I think, Scheduled, and then Succeeded.

B: But then we could have a different metric that goes, let's say, to some kind of event, where the event is some abstract thing: the pods being ready, the launcher pod being assigned, or networks being assigned. That could be something interesting, and yeah, I totally support that; this would be great to have, because currently, to do that, we need custom watchers or custom tooling that would basically scrape those events and put everything together. But if we had this as a metric in KubeVirt itself, we could have it in one place and available for everyone.
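A hedged sketch of the kind of metric being described here: a single histogram keyed by an abstract event label, all measured from the VMI's creation timestamp. The metric name, label, and helper are made up for illustration; this is not an existing KubeVirt metric:

```go
package main

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// creationToEvent is a hypothetical histogram of the shape discussed:
// one metric keyed by an abstract "event" label (pod ready, launcher
// pod assigned to a node, networks assigned, ...), all measured from
// the same VMI creation timestamp.
var creationToEvent = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name: "vmi_creation_to_event_seconds", // made-up name
		Help: "Time from VMI creation to a lifecycle event.",
	},
	[]string{"event"},
)

func init() {
	prometheus.MustRegister(creationToEvent)
}

// observeEvent would be called when a controller or watcher notices
// the event for a given VMI.
func observeEvent(event string, creationTime time.Time) {
	creationToEvent.WithLabelValues(event).Observe(time.Since(creationTime).Seconds())
}

func main() {
	// Example: the launcher pod became ready 4s after the VMI was created.
	observeEvent("pod_ready", time.Now().Add(-4*time.Second))
}
```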
A: Yeah, the way I'm reading it is that we could measure the time from create... yeah, just like you said, I think it's really the same measurement, right? The time-from-create measurement looks at the creation timestamp and at the phase transitions on the object. All we're doing is changing the other object we look at.
B: You could relate this one metric that only talks about phases and this other one that is more abstract and looks at different events, but is still for the same VMI and the same starting point, the creation time. Therefore we can basically put these together, like sets, and have a holistic view of when the VMI reached certain events at certain points in its lifecycle. And the same goes when we're talking about creation.

B: I feel like the same goes for deletion: when the pod was deleted, when the networks were dropped, or something like that. There's an opposite view of the same kind; it's the same story.
A: Yeah, so for deletion we did add a metric, but the way it works is that it takes the deletion timestamp and measures the time between the deletion timestamp and when the finalizer is removed. So what it doesn't include is when Kubernetes removes the object; it covers the amount of time KubeVirt spent deleting the object.
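As a small illustration of the window being measured (not KubeVirt's actual implementation): the latency runs from the object's deletionTimestamp to the moment the finalizer is removed, so it covers KubeVirt's part of the deletion and not the final removal by Kubernetes.

```go
package main

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// deletionLatency is the window described above: from the object's
// deletionTimestamp being set until the moment the finalizer is
// removed (passed in as finalizerRemoved). It deliberately excludes
// the time Kubernetes itself takes to drop the object afterwards.
func deletionLatency(deletionTimestamp *metav1.Time, finalizerRemoved time.Time) (time.Duration, bool) {
	if deletionTimestamp == nil {
		return 0, false
	}
	return finalizerRemoved.Sub(deletionTimestamp.Time), true
}

func main() {
	deleted := metav1.Now()
	// Pretend the finalizer was removed three seconds later.
	if d, ok := deletionLatency(&deleted, deleted.Add(3*time.Second)); ok {
		fmt.Println("deletion latency:", d)
	}
}
```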
B: Yeah, definitely noted. I feel like there's always latency — that's one thing — and second, there are various other events that may happen before.

A: Yeah, like watchers could be closed, and even if you saw the tombstone object, I don't know if the timestamp is correct. So there are still challenges there, but at least we can get pretty close. The point still stands, though, which is that we could take something like the deletion timestamp or the creation timestamp, measure other things changing on the object, and get a much more granular picture.
B: Of course we can't be very precise, but having just an overall idea of when things happen is more than enough, I feel. And if somebody needs better granularity or precision, I guess they need to go to the API server, or even etcd, for when such events actually happen.
A: Yeah, that makes sense. After talking about it more, I'm more in line with this making sense as a metric in KubeVirt; that seems pretty reasonable to me. Okay, cool, all right. Let's look at a few others on here. "Monitor VMs that take a long time to reach Running" — so we have the famous transition times.

A: "Monitor VMs that take longer than an expected threshold time in a phase" — actually, we did do this; we have thresholds now. This is covered.

A: I'm going to mark that one off. Okay: CPU usage, open goroutines, GC times, VMI/pod metrics. I don't remember what I meant by this. I don't know if we're...
B: Maybe that's virt-launcher, some of the virt-launcher things? But maybe not, because I was thinking that there is...

B: I feel like there is something Prometheus-related that can give us those values — something that looks up the pods and their CPU and memory usage, that's for sure — but goroutines and GC...
A: Yeah, I'm not sure; I forget what I meant, and where the metrics for this sort of stuff end up. I know it's on the control plane, but I'm not sure if it's in virt-launcher. Okay, I'll leave this one open, since I think that's unknown. Okay: API calls made — maybe that's calls made by us — REST latency in seconds, number of HTTP requests. We do have that REST latency for VMIs and VMs.
B: One thing that still goes through my mind about this VM/VMI/pod metrics item: I think there is a default Prometheus exporter, or a Prometheus handler, that reports all of these, right? So if that's what it means, we could include that in virt-launcher and have it export all those values.

B: Basically, what we would need to do is register this Prometheus handler under /metrics, or wherever we like, and it already exposes the goroutine numbers, GC intervals, memory usage, and many more, yeah.
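A minimal sketch of the handler registration being described, using the standard client_golang promhttp handler; the path and port are placeholders, and whether or where virt-launcher would actually expose this is exactly the open question here:

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// The default registry already includes the Go runtime and process
	// collectors, so this endpoint exposes go_goroutines,
	// go_gc_duration_seconds, go_memstats_* and process_* out of the box.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8443", nil))
}
```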
A: So I'll have to look at that one. Okay, cool. So we're down to just these two remaining, which is pretty good for this issue. Okay, let's go to the next one. I think it's this one, yeah — 5878. Okay, all right. This one's about the performance test framework: something we want to use in CI and something we want to use locally.

A: Okay, so the first thing is we want a tool to create load, and we have that in the load generator. Then we want to break this down into different types of load. The two types right now are density — I think we just call it burst — and then steady state, which some people call churn. That's what it currently supports. I was looking at these again, and I feel like the others are just features on top of those two, like "slowly generate a high count"...

A: ..."determine whether the system can maintain performance" — that one sounds maybe a little bit like churn; actually, maybe it's just a long-running burst test or something — and "sudden increase and decrease in VMI counts". So I'm thinking maybe we just leave these, because I think they're opportunities to make the tool a little more flexible; to me they just sound like configuration for each of these different...
A: That makes sense. So, okay, still some configuration. Okay, performance and scale functional tests. This was the regression — there's a regression-test item for the goroutine leak we talked about a while ago. We fixed it, but we didn't really have a test we could use to verify it. I think we're getting closer now that we have...

A: I think what would happen is this would go as part of the burst test, as part of the periodic I was looking at earlier. Maybe what we do is create a new suite where we run functional tests, like in that periodic or something. I'm not sure how that would look, but I guess we'll wait. Maybe something like we would run that 100...
A: I think so. The way this worked is we would create, then delete, and we'd see a higher goroutine count. We'd do it a few times and it would just keep increasing. So I guess the way this would work is we'd have to run the burst test, delete, burst test, delete, burst test, delete. Actually, we can do it as a steady-state test.

A: We just do steady state: we create a lot, we delete a lot, it recreates automatically, we do it a few times, and we shouldn't see the goroutine leak. That would be the test. So I guess we have to think about how we do the load generator, since the load generator is just a standalone tool; we would need to...
B: Yeah, I'm now thinking: if we're talking about verifying there are no goroutine leaks, I'm wondering how we would check for those. That's some kind of a problem, I think, yeah. Let me...
A: I'm thinking about that because that's actually where we found it. Let me link this issue to this, just so it's clear what I'm talking about. So this is...

A: This is that issue, yeah. So the idea is... let's see if there's a picture in here or something... here we go. So if you look at this test... let's see... I think he says no... a fix for the goroutine leak that was potentially causing issues under high churn. Okay, let's see... oh, it looks like it's gone, yeah.
A: Okay, all right, that's fine; I remember what it looked like. It was basically that he would create and delete and... oh wait, maybe we've got a picture here. Here we go, so yeah, you can see it, right? Okay, so in each of these graphs we created 10, 20, 30, 40, whatever, up to some number, and the baseline is the purple line... oops.

A: This purple line: every time we create more, the number of goroutines keeps going up, even after we've deleted them. That's not right. So the idea is that we would use this Prometheus metric to make sure the goroutine count stays constant, but run the exact same test, yeah.
A: So the idea is we would do something like this — I think we could do it either way, burst or steady state. We create some VMs and delete some VMs, and we just check the metric — it looks like I have it here — and make sure it's not increasing. I think that will cover it.
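A rough sketch of that check, assuming the component under test exposes a Prometheus endpoint with the standard go_goroutines gauge; the URL is a placeholder, and the create/delete steps are left as a comment:

```go
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strconv"
	"strings"
)

// goroutines scrapes a Prometheus metrics endpoint and returns the
// value of the go_goroutines gauge.
func goroutines(url string) (float64, error) {
	resp, err := http.Get(url)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "go_goroutines ") {
			return strconv.ParseFloat(strings.Fields(line)[1], 64)
		}
	}
	return 0, fmt.Errorf("go_goroutines not found")
}

func main() {
	// Placeholder endpoint; the real test would point at the metrics
	// endpoint of the component being checked for leaks.
	const url = "http://localhost:8443/metrics"

	before, err := goroutines(url)
	if err != nil {
		panic(err)
	}

	// ... create and then delete a batch of VMIs here, wait for the
	// cluster to settle, then sample again ...

	after, err := goroutines(url)
	if err != nil {
		panic(err)
	}

	// The assertion sketched in the meeting: the count should come back
	// to roughly where it started instead of climbing with every iteration.
	fmt.Printf("goroutines before=%v after=%v delta=%v\n", before, after, after-before)
}
```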
B: Yeah, sounds good. I think we should go with a constant number, because "not increasing" can vary from one test to another. If we go to, say, 300, then drop to zero, and then go to 300 again, we shouldn't end up with many more goroutines — or even one more — after the second run finishes than we had after the first one. That would be ideal, I think, because if we create 10 and then 20, that might be...

B: I don't know, but I feel like there might be some differences there, yeah.
A: Yeah, you make a good point. I think maybe what we look to do is: we did something like 10, 50, 100; maybe we do this a few times, like 10, 30, 50, 80, 100 — that's our test or something, and maybe that's not enough — and then in each iteration...

A: ...and this is something we could actually apply more broadly: goroutines are going to be one thing, but we could probably apply this to a few things. I think goroutines are a good start, though; it would give us a launch pad for other sorts of tests covering other things that could be happening. But yeah, something like this, I guess; I think that would give us an idea.
B: ...the number, because I feel the reason for it is that if you create 10 and then 30, there might be some case where we create more goroutines with the 30. After some time it should be equivalent, but still, yeah.
A: Could be both; I have no idea. For the goroutine one, though, I agree with you, I think this makes sense. We should be able to measure it; the delta should be zero, and this way it's a little more controlled — we have some sort of expectation.

A: Whatever it was in try one is what it should be in try two, and we can measure that each time. I'm not sure about this one; maybe we'll have another use case at some point, but I don't know — that's another possibility we can consider, yeah.
B: Okay, makes sense to me, yeah — as the first test, kind of like with this 10, 30.

B: I feel like if we have the assumption that we should end up with the same number of goroutines regardless of how many VMIs we're starting, then it should be okay. But there might be some components that we basically can't control — like client-go, or some other packages we use — that may start extra goroutines because we created more connections or created something, right.

B: I can't think of something specific right now, but I feel like there might basically be a difference when we create 10 and then 30 — for example, the number of connections, or...
A: Okay, good, all right. Those were the two items I wanted to cover. Good, all right — I think we got through everything. Anything else from anyone?