From YouTube: SIG - Performance and scale 2021-12-09
Description
Meeting Notes: https://docs.google.com/document/d/1d_b2o05FfBG37VwlC2Z1ZArnT9-_AEJoQTe7iKaQZ6I/edit#heading=h.yg3v8z8nkdcg
A
Okay, it's December 9th. Welcome to SIG Scale. Please add yourself as an attendee; the link to the document is in the chat. Okay, so just a few topics today that I wanted to talk about. We actually merged both the tracing and the VM pool work.
A
I think we have both merged; I saw the tracing one went in, yeah, okay, good. So I wanted to talk about some of the next steps here, because there's a lot we can do with each of these. Let me list out some things.
A
So what I'm thinking: this tracing was just for virt-controller so far. We can add it anywhere, to all the controllers. That was one of them; I think it makes sense to do virt-handler's work queue and virt-launcher's work queue next. I'm not sure where else we would want to have this, though.
A
Yeah, other than that, I think it's pretty similar across these three controllers. I don't know how it looks in virt-api, but at least for these there should just be an updateStatus function call and then a sync function call as part of the execute loop in the work queue, so it should be almost the same code as virt-controller.
A
Same, okay, so yeah.
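As a rough illustration of the pattern being discussed, here is a minimal Go sketch of an execute loop that times the sync and updateStatus calls and surfaces slow keys; the type and function names are hypothetical, not KubeVirt's actual code.

```go
package main

import (
	"fmt"
	"time"
)

// Controller is a stand-in for a KubeVirt-style controller whose work
// queue hands keys to an execute loop.
type Controller struct {
	threshold time.Duration // e.g. the 1-second threshold mentioned below
}

func (c *Controller) execute(key string) error {
	start := time.Now()
	defer func() {
		if d := time.Since(start); d > c.threshold {
			// A real implementation would emit this through the shared
			// logging/tracing util so slow keys can be grepped from logs.
			fmt.Printf("slow reconcile: key=%q took %v\n", key, d)
		}
	}()

	if err := c.sync(key); err != nil {
		return err
	}
	return c.updateStatus(key)
}

func (c *Controller) sync(key string) error         { return nil } // reconcile the object
func (c *Controller) updateStatus(key string) error { return nil } // write status back

func main() {
	c := &Controller{threshold: time.Second}
	_ = c.execute("default/testvmi")
}
```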
D
I think one way, and maybe that's harder and for the future: with the tracing you could also consider this OpenTracing, and make sure it's not just the util logging stuff. It's more like Jaeger and everything, so that would help to get more depth into what's happening. With spans you can more easily see what's happening, where we spent the time, and how long every call lasted. So maybe that's for the future, but I think it would be really great to have this OpenTracing functionality in there as well.
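A minimal sketch of what span-based tracing could look like here, using OpenTelemetry (the successor to the OpenTracing API mentioned); the tracer name and span names are assumptions, and a real setup would wire a Jaeger or OTLP exporter into the provider.

```go
package main

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	// A provider with no exporter configured; a real deployment would
	// attach a Jaeger/OTLP exporter so spans show up in a UI.
	tp := sdktrace.NewTracerProvider()
	defer tp.Shutdown(context.Background())
	otel.SetTracerProvider(tp)

	tracer := otel.Tracer("virt-controller")

	// One span per execute() invocation...
	ctx, span := tracer.Start(context.Background(), "execute")
	defer span.End()

	// ...with child spans making it visible where the time went.
	_, syncSpan := tracer.Start(ctx, "sync")
	syncSpan.SetAttributes(attribute.String("key", "default/testvmi"))
	syncSpan.End()
}
```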
A
Yeah, Marcelo, you talked about this, right? You wrote a comment on the tracing PR about this.
B
Yeah, I think we can definitely discuss OpenTracing in the future, although it probably has some challenges. For a real OpenTracing implementation we need to forward contexts and define where a request starts, and things can get very complicated there, especially in Kubernetes; it doesn't even have a concept of that.
B
There are a lot of people discussing whether Kubernetes should really have that or not, because of the asynchronous behavior. For example, you can create a persistent volume as part of creating a pod, in the same flow, or you can create the persistent volume beforehand and then create a pod and attach it, and that makes it very hard.
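To make the asynchrony problem concrete, here is a tiny hedged sketch (all names illustrative) of why spans are hard to stitch together in a controller: the work queue stores only keys, so the context that triggered an enqueue is gone by the time the reconcile runs.

```go
package main

import (
	"context"
	"fmt"
)

// workItem is what a controller queue typically carries: just a key.
// To keep one trace across the async hop, a span context would have to
// be carried along with it explicitly.
type workItem struct {
	key string
}

func main() {
	queue := make(chan workItem, 1)

	// Producer side: an event handler fires with some request context,
	// but only the key survives the enqueue.
	ctx := context.Background()
	_ = ctx // the originating span, if any, is dropped here
	queue <- workItem{key: "default/testvmi"}

	// Consumer side: the reconcile starts from a fresh context, so any
	// new span begins a brand-new trace unless context was forwarded.
	item := <-queue
	fmt.Println("reconciling", item.key, "with no parent span")
}
```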
A
I wonder if any of the CI, when we gather logs, looks at any of this information specifically and picks it out. Because right now this only produces logs when something is slow; we don't have metrics around it. That would be interesting, actually, but it's not in the library right now. At least I didn't see any way to get metrics around whether we were slow. That would be interesting.
F
We do have work queue metrics, though, don't we? So we know how long things take. If we're running, for example, a density test and the work queue exceeds our threshold, then when we dump something like logs we can gain more insight; we'd be able to dump all the logs for the handlers and whatever else.
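For reference, client-go's work queue instrumentation exposes exactly this kind of data; below is a hedged sketch of a CI-style check against Prometheus. The queue name label, the threshold, and the Prometheus address are assumptions.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/prometheus/client_golang/api"
	promv1 "github.com/prometheus/client_golang/api/prometheus/v1"
)

func main() {
	client, err := api.NewClient(api.Config{Address: "http://localhost:9090"})
	if err != nil {
		panic(err)
	}
	papi := promv1.NewAPI(client)

	// 99th percentile of time keys spent waiting in the queue over the
	// last 5 minutes (client-go's workqueue_queue_duration_seconds).
	query := `histogram_quantile(0.99,
	  rate(workqueue_queue_duration_seconds_bucket{name="virt-controller"}[5m]))`

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	val, _, err := papi.Query(ctx, query, time.Now())
	if err != nil {
		panic(err)
	}
	// A CI job could compare this against the agreed threshold (e.g. 1s)
	// and dump component logs when it is exceeded.
	fmt.Println(val)
}
```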
A
I guess we've done some studies on this, you know, on the work queue lengths. I think maybe...
F
What would be helpful, maybe: let's say we get to the point where we see that we've exceeded a threshold. The next step for us, to debug and gain an understanding of why from a developer perspective, would be to understand what key it was, and whether there was something unique about this scenario versus all the other ones. If it was an outlier, that would be kind of tricky; you'd just have to parse logs to find which one took a really long time, looking at the traces. Maybe there's some tooling and stuff to help with that, but really that's grep.
A
Yeah, maybe. I guess with tracing we now have the capability, as a developer, to see this if we want to. The question is CI; I think there are some advancements we probably need to make first to get to this, but yeah, there is a path here if you see that work queues are long.
A
Okay, I think that's pretty good. It's at least a path forward, if this is something we eventually want to have around for debugging, but I think it's good for now. The point of tracing was just so that we could have the capability to look at what key, and maybe specifically what function, was slower than what we expected.
A
There's even the question of the threshold: right now we're at one second for the time. That could be slow, or pretty fast, I don't know; we'll just start with that. We need to see how long things take. There's still a lot of information here, a lot of unknowns.
A
I kind of want to see, as this matures a little bit and maybe we see more tracing in some of these work queues, what comes out of it and what information we learn. But yeah, it's a good question that we can hold on to and look at down the road.
A
Okay, the other item: the VM pool work merged. I think it merged this morning, David. I wanted to see what the next steps on this are, some of the PRs we could have after this.
F
Yeah, so the next PRs are to start fleshing out the API. The initial PR was just the default behavior: if you made a virtual machine pool, created just the virtual machine template portion of it, set how many replicas you want, and didn't set any of the other options we talked about in the design, that's the behavior you get today.
F
So we need to start layering in the different update strategies, and I forget what the other strategy parameters were, but that's it, and the code is structured in a way that it really shouldn't be that difficult. I think the main advantage of not adding those right away is that it's going to take longer to develop the testing around a lot of these features, like the more advanced tunings, than to develop the features themselves.
A
Okay, so you get the different scale-in and scale-up behaviors, and some other semantics that were in there that we could look at. I think you had the burst rate in there, the burst level, I think.
F
That's kind of an internal tuning, so it's not something you could set on the VM pool spec itself. It's similar to the VirtualMachineReplicaSet, where we set a burst count for how many VMs can be created in one reconcile loop. It doesn't really limit us much; it's just going to be the round trip of the creation of, I think, 250 virtual machines, to the point where the informer tells us that we've seen all 250.
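A minimal sketch of that burst-count pattern, capping how many VMs one reconcile pass may create before waiting for the informer cache to catch up; the names and numbers here are illustrative, not the merged implementation.

```go
package main

import "fmt"

const burstReplicas = 250 // the burst count mentioned above

// reconcile creates at most burstReplicas missing VMs per pass; the
// controller re-queues and continues once the informer has observed them.
func reconcile(desired, observed int) int {
	missing := desired - observed
	if missing > burstReplicas {
		missing = burstReplicas
	}
	for i := 0; i < missing; i++ {
		// issue one VM create against the API here
	}
	return missing
}

func main() {
	created := reconcile(1000, 0)
	fmt.Printf("created %d VMs this pass; waiting for the informer\n", created)
}
```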
A
Yep, okay, cool. So yeah, there's a bunch of stuff here, and then I think the unique Secret and ConfigMap integration.
F
In the virtual machine template there's a DataVolume templates section, and we have to create a unique DataVolume for every VM. So it's the same idea.
F
Okay, out of the things we're looking at for your adoption of this feature, is there something that stands out as a higher priority than the others?
A
I
think,
let's
see,
I
think
it's
the
the
biggest
two
is
the
scale
and
scale
up
behaviors.
I
think
particularly
the
scale
in
behavior
having
control
having
all
the
control
there.
I
think
some
of
the
ones
you
mentioned
with
labels.
I
think
that
was
like
sort
of
the
mvp
to
consider
for
that
scale
and
control,
and
then
this
and
then
this
one
here
that
the
unique
config
map
and
secret
for
each
vm
is
the
other
one.
F
Makes sense. Okay, I'll see what I can work on; it might be next year, we'll see, though.
A
And then there's also someone from NVIDIA who I think is interested in helping on this too, so I'll ask him to join. I don't think he's been to one of these meetings yet, so I was going to have him join one and we'll see. He actually talked with me at the last KubeVirt Summit, and he was the one who talked to me when we originally brought up this topic; the general idea of what he talked about was kind of a job.
A
So
I
think
bring
him
up
speed
on
this
and
then
maybe
he
can
take
one
of
these
and
help
you
out
sure
yeah.
Let
me
know.
B
Can we come back to the tracing? (Sure.) I just put the snapshot here again; this is the previous test that we saw before last meeting, you know, the work queue, just to try to see how we can leverage this tracing. So here I think it's creating four hundred, six hundred, and eight hundred VMs on three worker nodes, and then you see the queue duration, which is the time that the key stays in the queue, and the work duration.
A
Like 90? What's the percentile?
A
Okay, well, so yeah. Is your point that when we find this, we want to know what key it is, so we go look in the logs and try to find it? That's what I would expect, because we have one key here that's slow, and 10 seconds is way over the threshold of one second, so we should be able to see it, right? That's right, but we don't have tracing here.
A
Actually, we do have it here in this one. Maybe this is just something we can do for next time, now that it's merged, Marcelo: the next time you do your tests, that would actually be a good thing to keep an eye out for, to see if we can find this key.
F
Or any of the reconcile threads could conceivably get stuck. The thing that concerns me the most about virt-handler taking a long time is the communication channel between virt-handler and virt-launcher. Sometimes there's a synchronous thing that has to occur, where we have this kind of chain of commands: virt-handler wants to tell virt-launcher to do something, virt-launcher then tells libvirt to do something, and then we are waiting.
F
A trace would give us information there too. We see a deadline exceeded on our connection to virt-launcher every once in a while; we'll see that with the gRPC call, and again that's in the logs as well.
A
Yeah, let's get some tracing in these areas and see if we can find these. I'm really interested, Marcelo, in this next test, because we've seen this for a while, since you first brought these to our attention. So I'm wondering if the tracing will be able to point out this key now, especially in virt-controller.
A
Okay, that's good, all right. Well then, as we add more, maybe we'll get some more clarity on some of these, and let's try to eliminate some of them. All right, so we finished the VM pool. Okay, so the last one is the performance CI job. I wanted to just look over the data again; now, where am I going to find that...
A
Let's see if there's been a little bit more consistency in some of these.
B
I'm actually having some problems right now with the newest code. I don't know when it started happening; I thought it was something on my side, and I don't know if it was happening with the other tests. What I mean is, normally in my environment when I install KubeVirt, I use the external provider and do make cluster-sync, and it automatically installs KubeVirt.
B
However, it's not working for me anymore; it's not installing things. It's somewhere in the hack cluster-deploy script.
A
Okay, so this one is kind of interesting: 31, 39, 39, 39, 52, 58. It's interesting. I wonder, let's see if it affects any of the API call counts. Yes.
B
But it creates 100 pods, see.
F
It should. Let's see with the counts; it's not precise, because obviously...
F
I think the technical term used is increase in Prometheus, so the increase, the rate over this period of time. We're looking at a counter, at how much it increased over a period of time, and I think Prometheus interpolates, does some sort of interpolation, to get a result there, but I would expect it to be within range.
F
Let's
see,
where
are
we
actually
out
great?
We
would
expect
100
and
we
got
63.
That's
not
great,
that's
not
great
either.
I
think
we
should
investigate
that
understand
that
a
little
bit,
maybe
we
need
to
give
a
buffer
time
for
how
far
back
we
go
on
the
perf
audit.
F
Or we're missing, just really briefly: we definitely account for the time after a test, but we don't account well for the time before, perhaps, the first time we actually get a data point.
A
Yeah, I would like, like he's saying, for Prometheus to tell us with some certainty the number of pods it's seen. I would feel more comfortable if we knew that.
F
Let me show you.
F
So first we'll look at the metric; it's the perf-scale one.
F
I don't remember which ones I added. This is the specific metric that we're looking at; I'll put it in the chat.
F
This is what we're calling on the audit tool side.
F
And the %s in this command I'm posting is filled in. Where's the chat...
F
It's filled in based on the time period that we give the perf audit, and that request total, whatever we're calling it, is a counter vector.
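A hedged sketch of the kind of query being described, counting pod-create API calls over the test window with increase; the metric and labels (apiserver_request_total{verb="CREATE",resource="pods"}) are assumptions standing in for whichever counter the audit tool actually templates into its %s.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/prometheus/client_golang/api"
	promv1 "github.com/prometheus/client_golang/api/prometheus/v1"
)

func main() {
	client, err := api.NewClient(api.Config{Address: "http://localhost:9090"})
	if err != nil {
		panic(err)
	}
	papi := promv1.NewAPI(client)

	// The window (here 10m) is what the audit tool would fill into %s.
	query := fmt.Sprintf(
		`increase(apiserver_request_total{verb="CREATE",resource="pods"}[%s])`,
		"10m")

	val, _, err := papi.Query(context.Background(), query, time.Now())
	if err != nil {
		panic(err)
	}
	// increase() extrapolates from the samples inside the window, which
	// is one way an exact 100 creates can be reported as, say, 63.
	fmt.Println(val)
}
```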
A
Increase, yeah. I don't understand what you're saying; I understood increase to be something that turns the counter into a vector, applying a period of time to it, so it just grabs the...
F
Okay, maybe it's our use of rate, but I think you have to use rate in order to... maybe I'm wrong. Let me see, did my command use a rate?
F
That could cause problems if we had multiple tests running, because we might accidentally overlap with the previous test. But the fact that this density test is a one-shot thing means it shouldn't be a problem. I can add that; let's see.
A
What was the... I remember we talked about this. Remember the VMI phase count we looked at? I added this, but it doesn't work because it's after, right? I'm checking after the test is run. Is that what's going on here? I forgot why that didn't work.
F
Right, we only get the phase count of the current snapshot, that point in time; we don't get the phase count over however long the test took. Well, thinking about it, that would be kind of strange, because each VMI goes through different phases.
A
Yeah, because it's a gauge, it'll be zeros for all the... Essentially, the only way for this phase count to work would be to do it during the test. We'd have to capture the phase count right after the density test finished its creates. Here's what we'd have to do.
A
We'd have to run the audit tool right after. So we do the density test... this is, well, now I'm remembering, okay, this is the to-do that I have to do. That's good. Marcelo, do you know, in the density test, do we delete right after? Like it creates 100, then it immediately deletes?
F
Hack, slash...
A
I think it just waits to see that they are running. So wait, let's see: they're running, so we wait until they're running, and then what? Then we just run the performance audit.
F
If, in that 30 seconds, a bunch of VMs get created before the first sample is taken, then maybe we miss those, because it's between the start time...
F
Okay, let me make a timeline for us. We have the start time; we have the perf test starting; we have VMIs getting created; we have the first sample getting taken by Prometheus, and maybe a few other samples get taken; the perf test ends, and then we call perf audit. If perf audit goes back to the start time, I'm not sure which sample it starts with if the start time was between two samples, so maybe we have to go back and ensure... that doesn't make any sense to me.
A
Okay, that's fine, all right. I still think that when I do this count, wherever it is now, it should technically work, because it's after; at this point, since we don't do cleanup, when we capture the sample it should capture everything.
F
Let me explain this point: when we run a functional test, after the functional test completes, if it's our normal framework, we tear everything down, all the VMIs, before we exit. So the perf test would run and then potentially tear everything down by the time it exits, if it uses our normal framework; it might not.
F
I have very little doubt that if we left the perf test results up and running, we would get there after the perf test ran and see that all 100 VMIs were created and in a Running state. It would be really surprising to me if they weren't.
F
Look at the creates... oh wait, never mind, the VMI was being created by the test.
F
No, events are different. Create events: every time we post an event to a VMI, so that would be like the VMI has been defined, it's starting, it started, it's stopping, it stopped. Those are all events, and every time we post one of those, that's a create of the type Event.
F
...results, and if it doesn't, we're going to have to go deep into understanding Prometheus and why this result isn't what we expect.
A
Because, for instance, look at here: with 40 we have fewer VMs, and this could affect this, right? Here's more, and now our 95th percentile is going up. Maybe that's going to affect our threshold, if I understand what this actually means. But okay.
F
I
guess
differently,
so
I
feel
more
confident
about
the
histogram
than
I
do.
The
create
pods
count,
so
the
histogram
is
probably
fairly
represent
the
of
what's
happening.
Yeah.
It's
like
you
know,
averages
and
all
that,
but
it's
that
increase
operation
occurring
over
a
vector,
that's
unique,
for
the
counts
of
the
api
endpoints,
that's
something
different
and
that's
the
one.
That's
giving
us
kind
of
crazy
results.
F
I
wonder
if
there's
a
different
way
of
getting
this
data,
if
I
should
just
increase,
is
the
only
way
I
know
to
count
over
a
time
period,
though
there
was
a
different
way
to
count
like
an
absolute
way
of
counting
over
a
time
period.
I'd
be
way
more
interested
in
it
than
whatever
it's
extrapolating
data.
Somehow.
A
We could try; this would be kind of funny, but we could take a sample right before, so we know the create count, and then take one after, right? That could tell us something interesting. We just take the counts at those points in time and subtract them.
A
I
mean
it
would
give
us
all
values,
but
that
would-
and
I'm
not
sure
like
I
mean
to
do
that
we'd
have
to
we
have
to
do
when
I
guess
what
do
we
do?
We
run
audit
tool.
Yeah
I
mean.
Is
there
any
harm
in
that
like
running
audit
tool
right
beforehand?
A
Well, I could just make the audit tool do that; it could do its own calculation. So don't do an increase, just take the count at a moment in time.
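A sketch of that before/after approach, under the same assumed metric as above: query the raw counter at an instant just before the test and again just after, and subtract, sidestepping increase's extrapolation entirely. The timestamps and address are placeholders.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/prometheus/client_golang/api"
	promv1 "github.com/prometheus/client_golang/api/prometheus/v1"
	"github.com/prometheus/common/model"
)

// sample reads the pod-create counter at a single instant.
func sample(papi promv1.API, ts time.Time) float64 {
	val, _, err := papi.Query(context.Background(),
		`sum(apiserver_request_total{verb="CREATE",resource="pods"})`, ts)
	if err != nil {
		panic(err)
	}
	vec := val.(model.Vector)
	if len(vec) == 0 {
		return 0
	}
	return float64(vec[0].Value)
}

func main() {
	client, err := api.NewClient(api.Config{Address: "http://localhost:9090"})
	if err != nil {
		panic(err)
	}
	papi := promv1.NewAPI(client)

	testStart := time.Now().Add(-15 * time.Minute) // placeholder window
	testEnd := time.Now()

	// Sample a little before the start, as suggested, and right after.
	before := sample(papi, testStart.Add(-10*time.Second))
	after := sample(papi, testEnd)
	fmt.Printf("pods created during the test: %v\n", after-before)
}
```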
F
Okay, I can investigate that; that would be interesting to do, just to see. Okay, so if we are looking at a completely fresh environment and I want to see how many pod creations occurred at the start, that should always be zero. If it's not zero, that's really curious, because there's nothing in our API that creates pods; we create things like deployments and stuff like that, and virt-operator, but the one time we create a pod that I'm aware of is for a VMI.
A
Maybe for sanity, I could see value in us actually printing that value out at the start and at the end, and then the difference. Because right now, at least, this is a shared environment, right? It would just be interesting, and also, for the way we're interpreting this, it would be good to know.
A
Well, that's, I mean, the other way you suggested was that you just look at the one point in time, right, with the count, yeah.
A
Yeah, that would be the same, I think, and I think that's fine. Because otherwise, if we run the perf test beforehand, it gets the same thing, but it's probably better... My point is, I don't care either way whether we do a perf test run beforehand or we get the count right before we start the test.
A
For the time before we start the test, maybe we just subtract a little bit from the timestamp, like 10 seconds or something before it, to gather the count right before the start, and then another one right after. I think that would provide a lot of credibility in terms of reading it. So, same thing.
F
The easiest thing, to begin gathering some sort of data back, would be to add a sleep before the perf test and understand if that has any impact on our results; that would give us a clue. Making the code changes and things like that is going to be a challenge in terms of when it can get done.
A
Okay, this is something I might take; it's something I just wanted to do. I won't get to it this week, maybe next week. And then the sleep 30 would go before the start timestamp, or before the perf test? No...
F
I think it's pretty easy for us to check. We run something like this perf test locally with a small sample set, like maybe 10 virtual machines, and then inspect Prometheus directly. So get the number of pods that were created before the test ran, at a time before the test ran, and look at a sample after the test ran, and see if those make sense. If they don't make sense, then we're certainly not going to get a correct increase operation over that time period.
A
So, the other thing... Okay, that would be the other thing we could do. All right, I think that gives us a path forward on those. Okay, good. All right, we're over time. Thanks a lot, everyone; we'll see you all next week. Is next week a full working week? I think it is, right? Yeah, December; it's the last one, right, at least. Okay, all right, we'll see you guys next week at this time. Okay, all right, have a good one.